about the articles: sql

deepin title

Fixing Deepin's overly tall title bar (custom height)

Fixing Deepin's overly tall title bar (custom height)

Fixing Deepin's overly tall title bar

# if you use the default light theme
mkdir -p ~/.local/share/deepin/themes/deepin/light
# if you use the dark theme
mkdir -p ~/.local/share/deepin/themes/deepin/dark



# (use .../light instead if you are on the light theme)
cd ~/.local/share/deepin/themes/deepin/dark
deepin-editor titlebar.ini




[Active]
height=24

[Inactive]
height=24

bash tweaks, autocompletion, highlighting, case-insensitive completion

Notes on my bash setup on macOS

Bash tweaks: autocompletion, highlighting, case-insensitive completion

First, ~/.bashrc:

# pull in ~/.bash_profile
source ~/.bash_profile
# enable colored output for ls and friends
export CLICOLOR=1
export LSCOLORS=gxfxaxdxcxegedabagacad

Next, enable case-insensitive completion in ~/.inputrc:

set completion-ignore-case on
set show-all-if-ambiguous on
TAB: menu-complete

Then the bash colors (plus the git branch in the prompt):

[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
find_git_branch () {
  local dir=. head
  until [ "$dir" -ef / ]; do
    if [ -f "$dir/.git/HEAD" ]; then
      head=$(< "$dir/.git/HEAD")
      if [[ $head = ref:\ refs/heads/* ]]; then
        git_branch=" → ${head#*/*/}"
      elif [[ $head != '' ]]; then
        git_branch=" → (detached)"
      else
        git_branch=" → (unknown)"
      fi
      return
    fi
    dir="../$dir"
  done
  git_branch=''
}

PROMPT_COMMAND="find_git_branch; $PROMPT_COMMAND"
# colors used in the prompt

black=$'\[\e[1;30m\]'

red=$'\[\e[1;31m\]'

green=$'\[\e[1;32m\]'

yellow=$'\[\e[1;33m\]'

blue=$'\[\e[1;34m\]'

magenta=$'\[\e[1;35m\]'

cyan=$'\[\e[1;36m\]'

white=$'\[\e[1;37m\]'

normal=$'\[\e[m\]'

PS1="$white[$magenta\u$white$white:$cyan\w$yellow\$git_branch$white]\$ $normal \r\n $blue[->] "


# Homebrew bottles mirror (TUNA, Tsinghua)
export HOMEBREW_BOTTLE_DOMAIN=https://mirrors.tuna.tsinghua.edu.cn/homebrew-bottles
free gnu libre open source

A free lunch? Does utopia really exist? Free software and open-source software

A free lunch? Does utopia really exist? Free software and open-source software

First, a disclaimer: I'm a nobody.

This whole piece is only my personal opinion.

I am a fervent believer in, participant in, and beneficiary of Linux and open source.

If you don't have the patience to read through the material I've collected, just skim the few throwaway remarks at the end.

Once more: my views are certainly not neutral or impartial.

But to stay as accurate as possible, most of the definitions in this article are quoted from:

the GNU website

the Free Software Foundation

the Open Source Initiative

First, take a look at a few recent events:

Hongshu 🍠 (the OSChina founder) posts: be grateful for open source

t-io starts charging for its documentation

XuanXuan chat suspected of going closed source

Ads appearing inside a project spark controversy: how can open source be run sustainably and healthily?

Follow-up on in-project ads: npm bans terminal advertising

Open source projects living below the poverty line: the struggle for open source sustainability

Why is Beetl's author Xiandafu so arrogant and dismissive?

Foreigners complain about Chinese open source: will the situation turn around or get worse?

Open source is an attitude!

Why fewer and fewer open source projects use the GPL

Ant Design's Christmas "easter egg" blows up: why did an open source project spin out of control?

Netflix announces it will stop developing Hystrix

After the ZTE ban, what if MySQL and other open source projects get "closed off" too?

Unwilling to let cloud providers profit for nothing, Neo4j makes its Enterprise Edition fully closed source

Fed up with cloud vendors' freeloading, MongoDB changes its open source license

Redis modules change their license; several projects are no longer open source, drawing criticism

So frustrating! Redis's author is pressured into changing the description of the master-slave architecture

After reading all of this, you really have to ask: what exactly is open source?

Before asking what open source is, first ask: what is GNU?

GNU is an operating system that is free software—that is, it respects users' freedom.

The GNU operating system consists of GNU packages (programs specifically released by the GNU Project) as well as free software released by third parties.

The development of GNU made it possible to use a computer without software that would trample your freedom.

On the definition of open source

(Let me boldly enumerate the current situation - jump back)

Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

Free Redistribution

The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.

Source Code

The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.

Derived Works

The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.

Integrity of The Author's Source Code

The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.

No Discrimination Against Persons or Groups

The license must not discriminate against any person or group of persons.

No Discrimination Against Fields of Endeavor

The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

Distribution of License

The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.

License Must Not Be Specific to a Product

The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.

License Must Not Restrict Other Software

The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.

License Must Be Technology-Neutral

No provision of the license may be predicated on any individual technology or style of interface.

Make use of open-source software (OSS). OSS is software for which the source code is freely and publicly available, though the specific licensing agreements vary as to what one is allowed to do with that code.

What is the GNU / free software movement?

The free software movement campaigns to win for the users of computing the freedom that comes from free software. Free software puts its users in control of their own computing. Nonfree software puts its users under the power of the software's developer.

What is free software?

Free software means the users have the freedom to run, copy, distribute, study, change and improve the software.

Free software is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech", not as in "free beer".

Free software grants its users four essential freedoms:

The freedom to run the program as you wish, for any purpose (freedom 0).

The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.

The freedom to redistribute copies so you can help others (freedom 2).

The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

Common misconceptions about "free software" and "open source"

In English, the term "free software" is prone to misinterpretation: an unintended meaning, "software you can get for zero price," fits the term just as well as the intended meaning, "software which gives the user certain freedoms."

We address this problem by publishing the definition of free software, and by saying "Think of 'free speech,' not 'free beer.'"

This is not a perfect solution; it cannot completely eliminate the problem.

An unambiguous and correct term would be better, if it didn't present other problems.

On the difference between open source supporters and free software supporters

A pure open source enthusiast, one that is not at all influenced by the ideals of free software, will say, "I am surprised you were able to make the program work so well without using our development model, but you did. How can I get a copy?" This attitude will reward schemes that take away our freedom, leading to its loss.

The free software activist will say, "Your program is very attractive, but I value my freedom more. So I reject your program. I will get my work done some other way, and support a project to develop a free replacement." If we value our freedom, we can act to maintain and defend it.

What is Copyleft?

In the GNU project, our aim is to give all users the freedom to redistribute and change GNU software. If middlemen could strip off the freedom, our code might "have many users," but it would not give them freedom. So instead of putting GNU software in the public domain, we "copyleft" it. Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom.

The freedom in free software

"Free software" does not mean "noncommercial". A free program must be available for commercial use, commercial development, and commercial distribution. Commercial development of free software is no longer unusual; such free commercial software is very important. You may have paid money to get copies of free software, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies.

Let me boldly enumerate the current situation:

  1. Users believe the author owes them help with using the software, and that this help cannot be refused - see "On the definition of open source"
  2. Users believe open source authors should want nothing and must not profit - see "The freedom in free software"
  3. Authors resent users profiting from their open source software, which leads to conflict - see "On the definition of open source"
  4. Users believe the author is responsible for the software running correctly for them, and even for the economic results it produces - see "On the difference between open source supporters and free software supporters"

If you disagree, go back and look at the news items, articles and posts listed at the beginning.

Summary

Man is born free; and everywhere he is in chains.

One thinks himself the master of others, and still remains a greater slave than they.

How did this change come about?

I do not know.

What can make it legitimate? That question I think I can answer.

If I took into account only force, and the effects derived from it, I should say: "As long as a people is compelled to obey, and obeys, it does well; as soon as it can shake off the yoke, and shakes it off, it does still better;

for, regaining its liberty by the same right as took it away, either it is justified in resuming it,

or there was no justification for those who took it away."

But the social order is a sacred right which is the basis of all other rights. Nevertheless, this right does not come from nature, and must therefore be founded on conventions.

                                                                  ---------  The Social Contract (Jean-Jacques Rousseau)

The open source movement and the free software movement began as movements of faith in freedom, grounded in a contract.

How many true saints have there ever been? Doing open source is not supposed to mean feeding the hawk with your own flesh.

Neither open source licenses nor the more radical free software movement place any restriction on the author earning money.

The author carries no obligations toward users; the author simply does not restrict users.

Here is a not entirely apt analogy:

I wrote a cookbook. I cannot restrict which dishes you eat, but I have no duty to cook the dishes and feed them to you.

I may sell the cookbook; I too have to make a living.

If someone copies the cookbook, under the terms I published it with, all they need to do is credit my name.

If someone opens a restaurant serving improved versions of my recipes, under the cookbook's terms all they need to do is publish their improved recipes for everyone as well.

Every one of us enjoys the two great dishes called GNU and Linux,

and all of that rests on everyone honoring the open source licenses.

To close, a post from Hongshu 🍠 (the OSChina boss) and one from the great Linus:

"Be grateful for open source" and "Just for Fun"

Support free software - donate to the GNU project

reading list

Reading list

Reading list

Natsuo Kirino (a short excerpt from 异常: "I live carrying malice." The malice in her is aggressive and merciless, yet pitifully insecure. When that forced malice suddenly dissipates, she no longer has a shape of her own. She is me, she is you, she is every dark soul drifting outside the cruel real world.)

越界

异常

Haruki Murakami (a few short stories; read them in college)

恋しくて

Georges Perec (wildly inventive)

人生拼图版

Keigo Higashino (suddenly very popular these past few years, so I won't recommend the well-known ones; these are a few of the more macabre ones)

分身

恶意

变身

Rousseau (read these a while back while practicing recitation)

孤独漫步者的遐想

社会契约论

Osamu Dazai (when you're feeling low, it really resonates)

人间失格

Schopenhauer (currently reading)

作为意志和表象的世界

Gabriela Mistral (a poet I have always loved)

葡萄压榨机

Cai Jun (suspense novels)

天机

人间

Sigmund Freud

精神分析引论 (fun)

Karen Horney

神经症与人的成长 (a look back at my own growing up)

正统道藏 (fun)

python offline third-party packages

Notes on installing Python dependencies offline

Notes on installing Python dependencies offline

Even though it's 2019 (or "9102", as the joke goes), there are still situations with no convenient internet access or a poor connection.

emmmmmmmmm

So here's a note on how to use third-party packages offline.

It's also what I need when building desktop apps with Python.

It's dead simple:

just the following two commands.

# download
pip download -r requirement.txt -d offline_package

# install
pip install --no-index --find-links=./offline_package -r requirement.txt
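
One caveat worth noting: pip download grabs wheels that match the machine it runs on, so run the download step on a box with the same OS, architecture, and Python version as the offline target (or pin them explicitly with pip download's --platform / --python-version / --only-binary=:all: options).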
python threads thread-pool processes process-pool

Messing around with thread pools and process pools in Python

Messing around with thread pools and process pools in Python

A scrappy little exercise with thread pools and process pools

  • really just to try out how threads, processes, locks, and queues are used

So I hacked together this odd little thing.

Well... at least now I know how to use them.

CPU-bound work goes onto the slow queue and into the process pool;

IO-bound work goes onto the fast queue and into the thread pool.

Dropping this into an asyncio app to handle non-async libraries should also work (see the sketch at the end of this post).

emmmmm

import logging
from concurrent.futures import ThreadPoolExecutor
import threading
from queue import Queue
from utils import route



class MsgProtcl:
    def __init__(self,id,msg):
        self.id = id
        self.msg = msg
    def __repr__(self):
        return f"id:{self.id}msg:{self.msg}"

class MsgEventPool():
    def __init__(self):
        self.__msgQueue = Queue()
        self.__pool = ThreadPoolExecutor(max_workers=3, thread_name_prefix='fish_event')
        self.__isRun = False
        self.__router = {}

    def addMsg(self,msg):
        self.__msgQueue.put(msg)


    def start(self):
        if not self.__isRun:
            self.__isRun = True
            t1 = threading.Thread(target=self.__getmsg)
            t1.setName("msgThread")
            t1.setDaemon(True)  # run as a daemon thread (default is False) so the main thread doesn't wait for it on exit
            t1.start()

    def __poolrun(self,msgObj:MsgProtcl):
        logging.debug(threading.current_thread())
        """
        在这里插入路由req和res
        """
        route.init(msgObj.id,msgObj.msg)

    def __getmsg(self):
        logging.debug(threading.current_thread())
        while True:
            self.__pool.submit(self.__poolrun,self.__msgQueue.get())


msgEvent = MsgEventPool()

Adding a callback to a pool task:

def parse(obj):
    # obj is the Future; result() returns the task's return value (or re-raises its exception)
    res = obj.result()

# t is an executor (e.g. a ThreadPoolExecutor) and get(url) is the task being submitted
t.submit(get, url).add_done_callback(parse)
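
On the asyncio point above: a minimal sketch (not this post's code) of wrapping blocking, non-async libraries with executors, sending IO-bound calls to a thread pool and CPU-bound calls to a process pool. fetch and crunch are hypothetical stand-ins.

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def fetch(url):   # stand-in for blocking IO (an HTTP client, a DB driver, ...)
    time.sleep(0.1)
    return f"body of {url}"

def crunch(n):    # stand-in for CPU-heavy work
    return sum(i * i for i in range(n))

io_pool = ThreadPoolExecutor(max_workers=8)
cpu_pool = ProcessPoolExecutor(max_workers=2)

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor returns an awaitable, so blocking calls don't stall the event loop
    page = await loop.run_in_executor(io_pool, fetch, "http://example.com")
    total = await loop.run_in_executor(cpu_pool, crunch, 1_000_000)
    print(page, total)

if __name__ == "__main__":
    asyncio.run(main())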
python rpc xml xmlrpc

Simple use of Python's built-in xmlrpc module

Python's built-in xmlrpc module

Using SQLite from multiple threads and processes is a bit of a hassle, and I don't need much speed or performance,

so I just put xmlrpc in front of it.

Server:

from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler
import sqlite3
def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d

class SQL():
    def __init__(self):
        self.__conn = sqlite3.connect("cash.db")

    def query(self,sql,arg,size=0):
        self.__conn.row_factory = dict_factory
        cur= self.__conn.cursor()
        cur.execute(sql,arg)
        if size !=0:
            rs = cur.fetchmany(size)
        else:
            rs =  cur.fetchall()
        self.__conn.commit()
        return rs

    def update(self,sql,arg):
        self.__conn.row_factory = dict_factory
        cur= self.__conn.cursor()
        cur.execute(sql,arg)
        result = cur.rowcount
        self.__conn.commit()
        return result


class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)
if __name__ == '__main__':

    with SimpleXMLRPCServer(('localhost', 8887),
                        requestHandler=RequestHandler,logRequests=True) as server:
        server.register_introspection_functions()
        server.register_instance(SQL())
        server.serve_forever()

Client:

import xmlrpc.client
import time
import logging
class Db():
    def __init__(self):
        self.conn = xmlrpc.client.ServerProxy('http://localhost:8887')
    def query(self, sql, args, size=0):
        start = time.perf_counter()  # time.clock() was removed in Python 3.8
        rs = self.conn.query(sql, args, size)
        s = time.perf_counter() - start
        if s > 0.005:
            logging.warning(f"SQL:{sql},args:{args},size:{size},run:{s}s")
        return rs
    def update(self, sql, args):
        start = time.perf_counter()
        rs = self.conn.update(sql, args)
        s = time.perf_counter() - start
        if s > 0.005:
            logging.warning(f"SQL:{sql},args:{args},run:{s}s")
        return rs
s = Db()
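
A quick usage sketch of the client above (the users table and its columns are made up, purely for illustration):

rows = s.query("SELECT * FROM users WHERE name = ?", ["fish"])             # list of dicts, thanks to dict_factory
first = s.query("SELECT * FROM users", [], size=1)                         # fetchmany(1) on the server side
changed = s.update("UPDATE users SET name = ? WHERE id = ?", ["fish", 1])  # returns rowcount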

orm

import datetime
import logging
import time
import uuid
from utils.sqliteClient import s

def log(sql, args=()):
    logging.info('SQL: %s' % sql)

class MyDb():
    def __init__(self):
        self.pool = s

    def select(self,sql, args, size=0):
        log(sql, args)
        rs = self.pool.query(sql,args,size)
        logging.info('rows returned: %s' % len(rs))
        return rs

    def execute(self,sql, args):
        log(sql)
        affected = 0
        try:
            affected = self.pool.update(sql,args)
        except BaseException as e:
            log(e)
        return affected
db = MyDb()


def next_id():
    return '%015d%s000' % (int(time.time() * 1000), uuid.uuid4().hex)

def create_args_string(num):
    L = []
    for n in range(num):
        L.append('?')
    return ', '.join(L)

class Field(object):

    def __init__(self, name, column_type, primary_key, default):
        self.name = name
        self.column_type = column_type
        self.primary_key = primary_key
        self.default = default

    def __str__(self):
        return '<%s, %s:%s>' % (self.__class__.__name__, self.column_type, self.name)

class StringField(Field):

    def __init__(self, name=None, primary_key=False, default=None, ddl='varchar(100)'):
        super().__init__(name, ddl, primary_key, default)

class BooleanField(Field):

    def __init__(self, name=None, default=False):
        super().__init__(name, 'boolean', False, default)

class IntegerField(Field):

    def __init__(self, name=None, primary_key=False, default=0):
        super().__init__(name, 'bigint', primary_key, default)

class FloatField(Field):

    def __init__(self, name=None, primary_key=False, default=0.0):
        super().__init__(name, 'real', primary_key, default)

class TextField(Field):

    def __init__(self, name=None, default=None):
        super().__init__(name, 'text', False, default)

class DateTimeField(Field):

    def __init__(self, name=None, default=None):
        super().__init__(name, 'timestamp', False, default)

class ModelMetaclass(type):

    def __new__(cls, name, bases, attrs):
        if name=='Model':
            return type.__new__(cls, name, bases, attrs)
        tableName = attrs.get('__table__', None) or name
        logging.info('found model: %s (table: %s)' % (name, tableName))
        mappings = dict()
        fields = []
        primaryKey = None
        for k, v in attrs.items():
            if isinstance(v, Field):
                logging.info('  found mapping: %s ==> %s' % (k, v))
                mappings[k] = v
                if v.primary_key:
                    # found the primary key:
                    if primaryKey:
                        raise AttributeError('Duplicate primary key for field: %s' % k)
                    primaryKey = k
                else:
                    fields.append(k)
        if not primaryKey:
            raise AttributeError('Primary key not found.')
        for k in mappings.keys():
            attrs.pop(k)
        escaped_fields = list(map(lambda f: '`%s`' % f, fields))
        attrs['__mappings__'] = mappings # mapping of attribute names to Field columns
        attrs['__table__'] = tableName
        attrs['__primary_key__'] = primaryKey # name of the primary key attribute
        attrs['__fields__'] = fields # attribute names other than the primary key
        attrs['__select__'] = 'select `%s`, %s from `%s`' % (primaryKey, ', '.join(escaped_fields), tableName)
        attrs['__insert__'] = 'insert into `%s` (%s, `%s`) values (%s)' % (tableName, ', '.join(escaped_fields), primaryKey, create_args_string(len(escaped_fields) + 1))
        attrs['__update__'] = 'update `%s` set %s where `%s`=?' % (tableName, ', '.join(map(lambda f: '`%s`=?' % (mappings.get(f).name or f), fields)), primaryKey)
        attrs['__delete__'] = 'delete from `%s` where `%s`=?' % (tableName, primaryKey)
        return type.__new__(cls, name, bases, attrs)

class Model(dict, metaclass=ModelMetaclass):

    def __init__(self, **kw):
        super(Model, self).__init__(**kw)

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(r"'Model' object has no attribute '%s'" % key)

    def __setattr__(self, key, value):
        self[key] = value

    def getValue(self, key):
        return getattr(self, key, None)

    def getValueOrDefault(self, key):
        value = getattr(self, key, None)
        if value is None:
            field = self.__mappings__[key]
            if field.default is not None:
                value = field.default() if callable(field.default) else field.default
                logging.debug('using default value for %s: %s' % (key, str(value)))
                setattr(self, key, value)
        return value

    @classmethod
    def findAll(cls, where=None, args=None, **kw):
        ' find objects by where clause. '
        sql = [cls.__select__]
        if where:
            sql.append('where')
            sql.append(where)
        if args is None:
            args = []
        orderBy = kw.get('orderBy', None)
        if orderBy:
            sql.append('order by')
            sql.append(orderBy)
        limit = kw.get('limit', None)
        if limit is not None:
            sql.append('limit')
            if isinstance(limit, int):
                sql.append('?')
                args.append(limit)
            elif isinstance(limit, tuple) and len(limit) == 2:
                sql.append('?, ?')
                args.extend(limit)
            else:
                raise ValueError('Invalid limit value: %s' % str(limit))
        rs = db.select(' '.join(sql), args)
        return [cls(**r) for r in rs]

    @classmethod
    def query(cls, sql=None, args=()):
        rs = db.select(sql, args)
        return [cls(**r) for r in rs]

    @classmethod
    def findNumber(cls, selectField, where=None, args=None):
        ' find number by select and where. '
        sql = ['select %s _num_ from `%s`' % (selectField, cls.__table__)]
        if where:
            sql.append('where')
            sql.append(where)
        rs = db.select(' '.join(sql), args, 1)
        if len(rs) == 0:
            return None
        return rs[0]['_num_']

    @classmethod
    def find(cls, pk):
        ' find object by primary key. '
        rs = db.select('%s where `%s`=?' % (cls.__select__, cls.__primary_key__), [pk], 1)
        if len(rs) == 0:
            return None
        return cls(**rs[0])

    def save(self):
        args = list(map(self.getValueOrDefault, self.__fields__))
        args.append(self.getValueOrDefault(self.__primary_key__))
        rows = db.execute(self.__insert__, args)
        if rows != 1:
            logging.error('failed to insert record: affected rows: %s' % rows)

    def update(self):
        args = list(map(self.getValue, self.__fields__))
        args.append(self.getValue(self.__primary_key__))
        rows = db.execute(self.__update__, args)
        if rows != 1:
            logging.error('failed to update by primary key: affected rows: %s' % rows)

    def remove(self):
        args = [self.getValue(self.__primary_key__)]
        rows = db.execute(self.__delete__, args)
        if rows != 1:
            logging.error('failed to remove by primary key: affected rows: %s' % rows)

def timestamp_datetime(ts):
    if isinstance(ts, (int, float, str)):
        try:
            ts = int(ts)
        except ValueError:
            raise

        if len(str(ts)) == 13:
            ts = int(ts / 1000)
        if len(str(ts)) != 10:
            raise ValueError
    else:
        raise ValueError()

    return datetime.datetime.fromtimestamp(ts)

A simple test

from utils.orm import Model,StringField,TextField,next_id,DateTimeField,IntegerField,timestamp_datetime
from datetime import datetime

class UserBean(Model):
    __table__ = 'fish_users'
    id = StringField(primary_key=True, default=next_id, ddl="char(50)")  # pass the callable itself so each row gets a fresh id
    username = StringField(ddl="char(200)")
    name = StringField(ddl="char(200)", default="")
    passwd = StringField(ddl="char(200)")
    create_date = DateTimeField(default=lambda: datetime.now().timestamp())  # callable default, evaluated at save time
    token = StringField(default="token")
    last_login = DateTimeField(default=lambda: datetime.now().timestamp())
    is_online = IntegerField(default=0)
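
A minimal usage sketch of the Model API defined above (it assumes the fish_users table already exists with matching columns):

u = UserBean(id=next_id(), username="fish", passwd="123456")
u.save()                                                   # INSERT built from __insert__

rows = UserBean.findAll("username=?", ["fish"], orderBy="create_date", limit=10)
one = UserBean.find(u.id)                                  # SELECT by primary key
total = UserBean.findNumber("count(id)", "is_online=?", [0])

one.is_online = 1
one.update()                                               # UPDATE by primary key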

Remaining issues

Serialization of the datetime type is still problematic.

Whether to allow None is still a source of instability.

But it's already good enough to get by.

idea startup

When you have a startup idea, first ask yourself a few questions

When you have a startup idea, first ask yourself a few questions

When you have a startup idea, first ask yourself a few questions


I use this list of questions to kill startup ideas that sound good but go nowhere.

Product

1. What are you creating?

2. Who will it belong to?

3. What is the essence of the need it satisfies? When you put it in front of users, will they say "thank you, this is exactly what I wanted"?

4. Write a customer-facing tweet explaining how the product meets their need.

5. Write the headline of the blog post announcing your product. Is it surprising? Is it new? Do your target customers want to click it? Do they want to share the link? Will they still be sharing it the next day?

6. Write the first paragraph of the product announcement blog post. Include the product name, a description, the target market, the main benefit, and a call to action.

7. Which "goodness metrics" do your target customers care about? Does your product dominate every available alternative on those metrics?

Growth

1. Fill in the bottom-up market sizing formula: NUM_USERS * ACV = MARKET_SIZE. Are your numbers credible? If you are building something entirely new, find a good reference class. (A worked example follows this list.)

2. Which subset of your target customers is so constrained by the status quo that they would welcome even a flawed product?

3. List your first ten customers.

4. After the first ten, what playbook will you use to acquire customers?

5. Within 18 months you will need access to essentially unlimited cheap capital. How will you get there?
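
A worked example of the bottom-up formula, with purely illustrative numbers: 20,000 reachable customers × $2,500 annual contract value (ACV) ≈ a $50M bottom-up market size.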

Strategy

Why now? What do you believe about the world that no one else has realized?

Over a 25-year horizon, what is the most ambitious milestone your company could realistically reach?

Does your product reliably advance you toward that milestone?

What is the next reliable step toward that milestone? And the one after that? And after that?

How will you build a moat?

Meaning

What would reaching your 25-year milestone mean for the world? Is that future genuinely exciting? How many years of your life would you give up to be teleported there? If you found yourself in the counterfactual world where it never happened, would you want to go back?

How would you feel if another company were pursuing this idea instead of you? Would you join them?

Imagine standing in front of your team, your investors, your family and friends. You have failed, and they are waiting for you to speak. What would you say? Given that failure is the default, is this problem still worth working on?

Bonus

What is your company's ticker symbol?

Could it be the most important company started this year?

Original article

install linux mysql mysql8

MySQL 8.x installs a little differently; notes here

Installing MySQL 8.x on Linux

Installing MySQL 8.x differs slightly from 5.x; here are some rough notes.

my.ini

[mysqld]

port=3306

max_connections=200
max_connect_errors=10
character-set-server=utf8
default-storage-engine=INNODB
default_authentication_plugin=mysql_native_password
[mysql]
default-character-set=utf8
[client]
port=3306
default-character-set=utf8

Initialize:

./bin/mysqld --initialize

Copy the temporary root password from the output.

-- change root's password

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'new-password';

-- create a new user

CREATE USER 'free'@'%' IDENTIFIED WITH mysql_native_password BY 'free';

-- grant all privileges

GRANT ALL PRIVILEGES ON *.* TO 'free'@'%';

-- grant only some privileges

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP ON *.* TO 'free'@'%';
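
To sanity-check the new account, a small sketch assuming PyMySQL is installed and the server listens on the port configured above:

import pymysql

# connect as the 'free' user created above; mysql_native_password is the
# server default set in my.ini, so older drivers work as well
conn = pymysql.connect(host="127.0.0.1", port=3306,
                       user="free", password="free", charset="utf8mb4")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())
finally:
    conn.close()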
chinese fulltext index lucene search

Using a Lucene full-text index in OrientDB, with a Chinese analyzer

In addition to the standard FullText Index, which uses the SB-Tree index algorithm, you can also create FullText indexes using the Lucene Engine . Apache LuceneTM is a high-performance, full-featured text search engine library written entirely in Java. Check the Lucene documentation for a full overview of its capabilities.

A backup of the official documentation follows below; the database version here is 3.0.x.

# create the class

CREATE CLASS Item;

# create the property

CREATE PROPERTY Item.text STRING;

# create the index (default analyzer, English-oriented)

CREATE INDEX Item.text ON Item(text) FULLTEXT ENGINE LUCENE;

# insert data

INSERT INTO Item (text) VALUES ('My sister is coming for the holidays.');



# create the index with the Chinese analyzer instead
# (if Item.text was already created above, drop it first: DROP INDEX Item.text)

CREATE INDEX Item.text ON Item(text)
            FULLTEXT ENGINE LUCENE METADATA {
                "analyzer": "org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"
            }


# query

SELECT FROM Item WHERE SEARCH_CLASS("sister") = true

# search for documents containing both words

SELECT FROM Item WHERE SEARCH_CLASS("+sister +coming") = true

# contains sister but not coming

SELECT FROM Item WHERE SEARCH_CLASS("+sister -coming") = true

# wildcard

SELECT FROM Item WHERE SEARCH_CLASS('meet*') = true

# highlighting


SELECT FROM City WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "allowLeadingWildcard": true ,
    "lowercaseExpandedTerms": false,
    "boost": {
        "name": 2
    },
    "highlight": {
        "fields": ["name"],
        "start": "",
        "end": ""
    }
}) = true

SELECT name, $name_hl, description, $description_hl FROM City
WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "highlight": {
        "fields": ["name", "description"],
        "start": "",
        "end": ""
    }
}) = true


# search against a specific index

SELECT FROM City WHERE SEARCH_INDEX("City.name", "cas*") = true



Lucene FullText Index

In addition to the standard FullText Index, which uses the SB-Tree index algorithm, you can also create FullText indexes using the Lucene Engine . Apache LuceneTM is a high-performance, full-featured text search engine library written entirely in Java. Check the Lucene documentation for a full overview of its capabilities.

How does Lucene work?

Let's look at a sample corpus of five documents:

  • My sister is coming for the holidays.
  • The holidays are a chance for family meeting.
  • Who did your sister meet?
  • It takes an hour to make fudge.
  • My sister makes awesome fudge.

What does Lucene do? Lucene is a full text search library. Search has two principal stages: indexing and retrieval.

During indexing, each document is broken into words, and the list of documents containing each word is stored in a list called the postings list. The posting list for the word my is:

my --> 1,5

Posting lists for other terms:

fudge --> 4,5

sister --> 1,2,3,5

The index consists of all the posting lists for the words in the corpus. Indexing must be done before retrieval, and we can only retrieve documents that were indexed.

Retrieval is the process starting with a query and ending with a ranked list of documents. Say the query is "my fudge". In order to find matches for the query, we break it into the individual words, and go to the posting lists. The full list of documents containing the keywords is [1,4,5]. Note that the query is broken into words (terms) and each term is matched with the terms in the index. Lucene's default operator is OR, so it retrieves the documents that contain my OR fudge. If we want to retrieve documents that contain both my and fudge, rewrite the query: "+my +fudge".
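
A toy sketch of the indexing and retrieval idea described above, in plain Python rather than Lucene, just to make the postings-list concept concrete:

corpus = {
    1: "My sister is coming for the holidays.",
    2: "The holidays are a chance for family meeting.",
    3: "Who did your sister meet?",
    4: "It takes an hour to make fudge.",
    5: "My sister makes awesome fudge.",
}

# indexing: break each document into lowercase terms and record which
# documents contain each term (the postings list)
postings = {}
for doc_id, text in corpus.items():
    for term in text.lower().replace("?", "").replace(".", "").split():
        postings.setdefault(term, set()).add(doc_id)

print(sorted(postings["my"]))       # [1, 5]
print(sorted(postings["sister"]))   # [1, 2, 3, 5]

# retrieval with OR semantics (Lucene's default operator): union the
# postings lists of the query terms
query = "my fudge"
hits = set().union(*(postings.get(t, set()) for t in query.lower().split()))
print(sorted(hits))                 # [1, 4, 5]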

Lucene doesn't work as a LIKE operator on steroids, it works on single terms. Terms are produced analyzing the provided text, so the right analyzer should be configured. On the other side, it offers a complete query language, well documented here:

Index creation

To create an index based on Lucene

CREATE INDEX <index-name> ON <class-name> (prop-names) FULLTEXT ENGINE LUCENE [{json metadata}]

The following SQL statement will create a FullText index on the property name for the class City, using the Lucene Engine.

CREATE INDEX City.name ON City(name) FULLTEXT ENGINE LUCENE

Indexes can also be created on n-properties. For example, create an index on the properties name and description on the class City.

CREATE INDEX City.name_description ON City(name, description)
          FULLTEXT ENGINE LUCENE

When multiple properties should be indexed, define a single multi-field index over the class. A single multi-field index needs less resources, such as file handlers. Moreover, it is easy to write better Lucene queries. The default analyzer used by OrientDB when a Lucene index is created is the StandardAnalyzer. The StandardAnalyzer usually works fine with western languages, but Lucene offers analyzer for different languages and use cases.

Two minutes tutorial

Open studio or console and create a sample dataset:

CREATE CLASS Item;
CREATE PROPERTY Item.text STRING;
CREATE INDEX Item.text ON Item(text) FULLTEXT ENGINE LUCENE;
INSERT INTO Item (text) VALUES ('My sister is coming for the holidays.');
INSERT INTO Item (text) VALUES ('The holidays are a chance for family meeting.');
INSERT INTO Item (text) VALUES ('Who did your sister meet?');
INSERT INTO Item (text) VALUES ('It takes an hour to make fudge.');
INSERT INTO Item (text) VALUES ('My sister makes awesome fudge.');

Search all documents that contain sister:

SELECT FROM Item WHERE SEARCH_CLASS("sister") = true

Search all documents that contain sister AND coming:

SELECT FROM Item WHERE SEARCH_CLASS("+sister +coming") = true

Search all documents that contain sister but NOT coming:

SELECT FROM Item WHERE SEARCH_CLASS("+sister -coming") = true

Search all documents that contain the phrase sister meet:

SELECT FROM Item WHERE SEARCH_CLASS(' "sister meet" ') = true

Search all documents that contain terms starting with meet:

SELECT FROM Item WHERE SEARCH_CLASS('meet*') = true

To better understand how the query parser work, read carefully the official documentation and play with the above documents.

Customize Analyzers

In addition to the StandardAnalyzer, full text indexes can be configured to use different analyzer by the METADATA operator through CREATE INDEX.

Configure the index on City.name to use the EnglishAnalyzer:

CREATE INDEX City.name ON City(name)
            FULLTEXT ENGINE LUCENE METADATA {
                "analyzer": "org.apache.lucene.analysis.en.EnglishAnalyzer"
            }

Configure the index on City.name to use different analyzers for indexing and querying.

CREATE INDEX City.name ON City(name)
            FULLTEXT ENGINE LUCENE METADATA {
                "index": "org.apache.lucene.analysis.en.EnglishAnalyzer",
                "query": "org.apache.lucene.analysis.standard.StandardAnalyzer"
          }

EnglishAnalyzer will be used to analyze text while indexing and the StandardAnalyzer will be used to analyze query text.

A very detailed configuration, on multi-field index configuration, could be:

CREATE INDEX Song.fulltext ON Song(name, lyrics, title, author, description)
            FULLTEXT ENGINE LUCENE METADATA {
                "default": "org.apache.lucene.analysis.standard.StandardAnalyzer",
                "index": "org.apache.lucene.analysis.core.KeywordAnalyzer",
                "query": "org.apache.lucene.analysis.standard.StandardAnalyzer",
                "name_index": "org.apache.lucene.analysis.standard.StandardAnalyzer",
                "name_query": "org.apache.lucene.analysis.core.KeywordAnalyzer",
                "lyrics_index": "org.apache.lucene.analysis.en.EnglishAnalyzer",
                "title_index": "org.apache.lucene.analysis.en.EnglishAnalyzer",
                "title_query": "org.apache.lucene.analysis.en.EnglishAnalyzer",
                "author_query": "org.apache.lucene.analysis.core.KeywordAnalyzer",
                "description_index": "org.apache.lucene.analysis.standard.StandardAnalyzer",
                "description_index_stopwords": [
                  "the",
                  "is"
                ]
            }

With this configuration, the underlying Lucene index will work in a different way on each field:

  • name: indexed with StandardAnalyzer, searched with KeywordAnalyzer (it's a strange choice, but possible)
  • lyrics: indexed with EnglishAnalyzer, searched with default query analyzer StandardAnalyzer
  • title: indexed and searched with EnglishAnalyzer
  • author: indexed and searched with KeywordAnalyzer
  • description: indexed with StandardAnalyzer with a given set of stop-words that overrides the internal set

Analysis is the foundation of Lucene. By default the StandardAnalyzer removes English stop-words and punctuation and lowercases the generated terms:

The holidays are a chance for family meeting!

Would produce

  • holidays
  • are
  • chance
  • for
  • family
  • meeting

Each analyzer has its own set of stop-words and tokenizes the text in a different way. Read the full [documentation](http://lucene.apache.org/core/6_6_0/).

Query parser

It is possible to configure some behavior of the Lucene query parser. The query parser's behavior can be configured at index creation time and overridden at runtime.

Allow Leading Wildcard

Lucene by default doesn't support leading wildcard: Lucene wildcard support

It is possible to override this behavior with a dedicated flag on meta-data:

{
  "allowLeadingWildcard": true
}
CREATE INDEX City.name ON City(name)
            FULLTEXT ENGINE LUCENE METADATA {
                "allowLeadingWildcard": true
            }

Use this flag carefully, as stated in the Lucene FAQ:

Note that this can be an expensive operation: it requires scanning the list of tokens in the index in its entirety to look for those that match the pattern.

Disable lower case on terms

Lucene's QueryParser applies a lower case filter on expanded queries by default. It is possible to override this behavior with a dedicated flag on meta-data:

{
  "lowercaseExpandedTerms": false
}

It is useful when used in pair with keyword analyzer:

CREATE INDEX City.name ON City(name)
            FULLTEXT ENGINE LUCENE METADATA {
              "lowercaseExpandedTerms": false,
              "default" : "org.apache.lucene.analysis.core.KeywordAnalyzer"
            }

With lowercaseExpandedTerms set to false, these two queries will return different results:

SELECT from Person WHERE SEARCH_CLASS("NAME") = true

SELECT from Person WHERE SEARCH_CLASS("name") = true

Querying Lucene FullText Indexes

OrientDB 3.0.x introduced search functions: SEARCH_CLASS, SEARCH_FIELDS, SEARCH_INDEX, SEARCH_MORE Every function accepts as last, optional, parameter a JSON with additional configuration.

SEARCH_CLASS

The best way to use the search capabilities of OrientDB is to define a single multi-fields index and use the SEARCH_CLASS function. In case more than one full-text index is defined over a class, an error is raised on SEARCH_CLASS invocation.

Suppose to have this index

CREATE INDEX City.fulltext ON City(name, description) FULLTEXT ENGINE LUCENE 

A query that retrieve cities with the name starting with cas and description containing the word beautiful:

SELECT FROM City WHERE SEARCH_CLASS("+name:cas*  +description:beautiful") = true

The function accepts metadata JSON as second parameter:

SELECT FROM City WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "allowLeadingWildcard": true ,
    "lowercaseExpandedTerms": false,
    "boost": {
        "name": 2
    },
    "highlight": {
        "fields": ["name"],
        "start": "",
        "end": ""
    }
}) = true

The query shows query parser's configuration overrides, boost of field name with highlight. Highlight and boost will be explained later.

SEARCH_MORE

OrientDB exposes the Lucene's more like this capability with a dedicated function.

The first parameter is the array of RIDs of the elements used to calculate similarity; the second parameter is the usual metadata JSON used to tune the query behaviour.

SELECT FROM City WHERE SEARCH_MORE([#25:2, #25:3],{'minTermFreq':1, 'minDocFreq':1} ) = true

It is possible to use a query to gather RID of documents to be used to calculate similarity:

SELECT FROM City
    let $a=(SELECT @rid FROM City WHERE name = 'Rome')
    WHERE SEARCH_MORE( $a, { 'minTermFreq':1, 'minDocFreq':1} ) = true

Lucene's MLT has a lot of parameters, and all of them are exposed through the metadata JSON: http://lucene.apache.org/core/6_6_0/queries/org/apache/lucene/queries/mlt/MoreLikeThis.html

  • fieldNames: array of field's names to be used to extract content
  • maxQueryTerms
  • minDocFreq
  • maxDocFreq
  • minTermFreq
  • boost
  • boostFactor
  • maxWordLen
  • minWordLen
  • maxNumTokensParsed
  • stopWords

Query parser's runtime configuration

It is possible to override the query parser's configuration given at creation index time at runtime passing a json:

SELECT from Person WHERE SEARCH_CLASS("bob",{
        "allowLeadingWildcard": true ,
        "lowercaseExpandedTerms": false
    } ) = true

The same can be done for query analyzer, overriding the configuration given at index creation's time:

SELECT from Person WHERE SEARCH_CLASS("bob",{
        "customAnalysis": true ,
        "query": "org.apache.lucene.analysis.standard.StandardAnalyzer",
        "name_query": "org.apache.lucene.analysis.en.EnglishAnalyzer"
    } ) = true

The customAnalysis flag is mandatory to enable the runtime configuration of query analyzers. The runtime configuration is per query and it isn't stored nor reused for a subsequent query. The custom configuration can be used with all the functions.

SEARCH_INDEX

The SEARCH_INDEX function allows to execute the query on a single index. It is useful if more than one index are defined over a class.

SELECT FROM City WHERE SEARCH_INDEX("City.name", "cas*") = true

The function accepts a JSON as third parameter, as for SEARCH_CLASS.

SEARCH_FIELDS

The SEARCH_FIELDS function allows to execute the query over the index that is defined over one or more fields:

SELECT FROM City WHERE SEARCH_FIELDS(["name", "description"], "name:cas* description:beautiful") = true

The function accepts a JSON as third parameter, as for SEARCH_CLASS.

Numeric and date range queries

If the index is defined over a numeric field (INTEGER, LONG, DOUBLE) or a date field (DATE, DATETIME), the engine supports range queries. Suppose to have a City class with a multi-field Lucene index defined:

CREATE CLASS CITY EXTENDS V
CREATE PROPERTY CITY.name STRING
CREATE PROPERTY CITY.size INTEGER
CREATE INDEX City.name ON City(name,size) FULLTEXT ENGINE LUCENE

Then query using ranges:

SELECT FROM City WHERE SEARCH_CLASS('name:cas* AND size:[15000 TO 20000]') = true

Ranges can be applied to DATE/DATETIME field as well. Create a Lucene index over a property:

CREATE CLASS Article EXTENDS V
CREATE PROPERTY Article.createdAt DATETIME
CREATE INDEX Article.createdAt  ON Article(createdAt) FULLTEXT ENGINE LUCENE

Then query to retrieve articles published only in a given time range:

SELECT FROM Article WHERE SEARCH_CLASS('[201612221000 TO 201612221100]') =true

Retrieve the Score

When the lucene index is used in a query, the results set carries a context variable for each record representing the score. To display the score add $score in projections.

SELECT *,$score FROM V WHERE name LUCENE "test*"

Highlighting

OrientDB uses the Lucene's highlighter. Highlighting can be configured using the metadata JSON. The highlighted content of a field is returned in a dedicated field suffixed with _hl:

SELECT name, $name_hl, description, $description_hl FROM City
WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "highlight": {
        "fields": ["name", "description"],
        "start": "",
        "end": ""
    }
}) = true

Parameters:

  • fields: array of field names to be highlighted
  • start: start delimiter for highlighted text (default <B>)
  • end: end delimiter for highlighted text (default </B>)
  • maxNumFragments: maximum number of text fragments to highlight

Sorting

Documents retrieved by a search call are ordered by their score. It is possible to configure the way the document are sorted. Read carefully the official documentation about sorting : https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/search/Sort.html

SELECT name, description, size FROM City
WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "sort": [ { 'field': 'size', reverse:true, type:'INT' }]
}) = true

Sort over multiple fields is possible:

SELECT name, description, size FROM City
WHERE SEARCH_CLASS("+name:cas*  +description:beautiful", {
    "sort": [
        { 'field': 'size', reverse:true, type:'INT' },
        { 'field': 'name', reverse:false, type:'STRING' },
        { reverse:false, type:'DOC' },
        ]
}) = true

Sort configuration:

  • field: the field name. May be absent only if the sort type is DOC or INDEX
  • reverse: if set to true, sorts the given field in reverse order
  • type: see https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/search/SortField.Type.html

CUSTOM type is not supported

Cross class search (Enterprise Edition)

Bundled with the enterprise edition there's the SEARCH_CROSS function, which is able to search over all the Lucene indexes defined on a database.

Suppose to define two indexes:

CREATE INDEX Song.title ON Song (title,author) FULLTEXT ENGINE LUCENE METADATA
CREATE INDEX Author.name on Author(name,score) FULLTEXT ENGINE LUCENE METADATA

Searching for a term on each class implies a lot of different queries to be aggregated.

The SEARCH_CROSS function automatically performs the given query on each full-text index configured inside the database.

SELECT  EXPAND(SEARCH_CROSS('beautiful'))

The query will be executed over all the indexes configured on each field. It is possible to search over a given field of a certain class, just qualify the field names with their class name:

SELECT  EXPAND(SEARCH_CROSS('Song.title:beautiful  Author.name:bob'))

Another way is to use the metadata field _CLASS present in every index:

SELECT expand(SEARCH_CROSS('(+_CLASS:Song +title:beautiful) (+_CLASS:Author +name:bob)') )

All the options of a Lucene's query are allowed: inline boosting, phrase queries, proximity etc.

The function accepts a metadata JSON as second parameter:

SELECT EXPAND(SEARCH_CROSS('Author.name:bob Song.title:*tain', {
   "allowLeadingWildcard" : true,
   "boost": {
        "Author.name": 2.0
        }
   }
))

Highlight isn't supported yet.

Lucene Writer fine tuning (expert)

It is possible to fine tune the behaviour of the underlying Lucene's IndexWriter

CREATE INDEX City.name ON City(name)
    FULLTEXT ENGINE LUCENE METADATA {
        "directory_type": "nio",
        "use_compound_file": false,
        "ram_buffer_MB": "16",
        "max_buffered_docs": "-1",
        "max_buffered_delete_terms": "-1",
        "ram_per_thread_MB": "1024",
        "default": "org.apache.lucene.analysis.standard.StandardAnalyzer"
    }
  • directory_type: configure the access type to the Lucene index
    • nio (default): the index is opened with NIOFSDirectory
    • mmap: the index is opened with MMapDirectory
    • ram: index will be created in memory with RAMDirectory
  • use_compound_file: default is false
  • ram_buffer_MB: size of the document's buffer in MB, default value is 16 MB (which means flush when buffered docs consume approximately 16 MB RAM)
  • max_buffered_docs: size of the document's buffer in number of docs, disabled by default (because IndexWriter flushes by RAM usage by default)
  • max_buffered_delete_terms: disabled by default (because IndexWriter flushes by RAM usage by default).
  • ram_per_thread_MB: default value is 1945

For a detailed explanation of config parameters and IndexWriter behaviour

  • indexWriterConfig : https://lucene.apache.org/core/6_6_0/core/org/apache/lucene/index/IndexWriterConfig.html
  • indexWriter: https://lucene.apache.org/core/6_6_0/core/org/apache/lucene/index/IndexWriter.html

Index lifecycle

Lucene indexes are lazy. If the index is in idle mode, no reads and no writes, it will be closed. Intervals are fully configurable.

  • flushIndexInterval: flushing index interval in milliseconds, default 20000 (20 s)
  • closeAfterInterval: closing index interval in milliseconds, default 120000 (2 min)
  • firstFlushAfter: first flush time in milliseconds, default 10000 (10 s)

To configure the index lifecycle, just pass the parameters in the JSON of metadata:

CREATE INDEX City.name ON City(name) FULLTEXT ENGINE LUCENE METADATA
{
  "flushIndexInterval": 200000,
  "closeAfterInterval": 200000,
  "firstFlushAfter": 20000
}

Create index using the Java API

The FullText Index with the Lucene Engine is configurable through the Java API.

OSchema schema = databaseDocumentTx.getMetadata().getSchema();
    OClass oClass = schema.createClass("Foo");
    oClass.createProperty("name", OType.STRING);
    oClass.createIndex("City.name", "FULLTEXT", null, null, "LUCENE", new String[] { "name"});

The LUCENE operator (deprecated)

NOTE: LUCENE operator is translated to SEARCH_FIELDS function, but it doesn't support the metadata JSON

You can query the Lucene FullText Index using the custom operator LUCENE with the Query Parser Syntax from the Lucene Engine.

SELECT FROM V WHERE name LUCENE "test*"

This query searches for test, tests, tester, and so on from the property name of the class V. The query can use proximity operator ~, the required (+) and prohibit (-) operators, phrase queries, regexp queries:

SELECT FROM Article WHERE content LUCENE "(+graph -rdbms) AND +cloud"

Working with multiple fields (deprecated)

NOTE: define a single Lucene index on the class and use SEARCH_CLASS function

In addition to the standard Lucene query above, you can also query multiple fields. For example,

SELECT FROM Class WHERE [prop1, prop2] LUCENE "query"

In this case, if the word query is a plain string, the engine parses the query using MultiFieldQueryParser on each indexed field.

To execute a more complex query on each field, surround your query with parentheses, which causes the query to address specific fields.

SELECT FROM Article WHERE [content, author] LUCENE "(content:graph AND author:john)"

Here, the engine parses the query using the QueryParser

Creating a Manual Lucene Index (deprecated)

NOTE: avoid manual Lucene index

The Lucene Engine supports index creation without the need for a class.

Syntax:

CREATE INDEX <index-name> FULLTEXT ENGINE LUCENE [<key-type>] [METADATA {<metadata>}]

For example, create a manual index using the CREATE INDEX command:

CREATE INDEX Manual FULLTEXT ENGINE LUCENE STRING, STRING

Once you have created the index Manual, you can insert values in index using the INSERT INTO INDEX:... command.

INSERT INTO INDEX:Manual (key, rid) VALUES(['Enrico', 'Rome'], #5:0)

You can then query the index through SELECT...FROM INDEX:<index-name>:

SELECT FROM INDEX:Manual WHERE key LUCENE "Enrico"

Manual indexes could be created programmatically using the Java API

ODocument meta = new ODocument().field("analyzer", StandardAnalyzer.class.getName());
OIndex<?> index = databaseDocumentTx.getMetadata().getIndexManager()
     .createIndex("apiManual", OClass.INDEX_TYPE.FULLTEXT.toString(),
         new OSimpleKeyIndexDefinition(1, OType.STRING, OType.STRING), null, null, meta, OLuceneIndexFactory.LUCENE_ALGORITHM);

java nosql orientdb sql

Calling OrientDB's RESTful API

orientdb

I originally wanted to use the Python driver, but it hasn't been updated in ages,

so I fell back to the HTTP API.

    • It looks decent enough
import requests
import json
class SqlSdk():
    def __init__(self,url="http://192.168.1.91:2480",name="free",password="free",database="rpg"):
        self.url = url
        self.name = name
        self.password = password
        self.database = database
        self.auth = (self.name,self.password)
    def exec(self,sql,params=[]):
        return requests.post(f"{self.url}/command/{self.database}/sql",auth=self.auth,data=json.dumps({
            "command": sql,
            "parameters": params
        }))
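
A quick usage sketch of the wrapper above (server URL, credentials, and database come from the defaults in the class; the query is just an example, and V exists in any graph database):

sdk = SqlSdk()
resp = sdk.exec("SELECT FROM V LIMIT 2")
print(resp.status_code)           # 200 on success
print(resp.json().get("result"))  # list of records returned by the server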

Below is a copy of the official documentation:




Introduction

When it comes to query languages, SQL is the most widely recognized standard. The majority of developers have experience and are comfortable with SQL. For this reason Orient DB uses SQL as its query language and adds some extensions to enable graph functionality. There are a few differences between the standard SQL syntax and that supported by OrientDB, but for the most part, it should feel very natural. The differences are covered in the OrientDB SQL dialect section of this page.

If you are looking for the most efficient way to traverse a graph, we suggest to use the SQL-Match instead.

Many SQL commands share the WHERE condition. Keywords and class names in OrientDB SQL are case insensitive. Field names and values are case sensitive. In the following examples keywords are in uppercase but this is not strictly required.

If you are not yet familiar with SQL, we suggest you to get the course on KhanAcademy.

For example, if you have a class MyClass with a field named id, then the following SQL statements are equivalent:

SELECT FROM MyClass WHERE id = 1
select from myclass where id = 1

The following is NOT equivalent. Notice that the field name 'ID' is not the same as 'id'.

SELECT FROM MyClass WHERE ID = 1

Automatic usage of indexes

OrientDB allows you to execute queries against any field, indexed or not-indexed. The SQL engine automatically recognizes if any indexes can be used to speed up execution. You can also query any indexes directly by using INDEX:<index-name> as a target. Example:

SELECT FROM INDEX:myIndex WHERE key = 'Jay'

Extra resources

OrientDB SQL dialect

OrientDB supports SQL as a query language with some differences compared with SQL. Orient Technologies decided to avoid creating Yet-Another-Query-Language. Instead we started from familiar SQL with extensions to work with graphs. We prefer to focus on standards.

If you want to learn SQL, there are many online courses, such as:

  • Online course Introduction to Databases by Jennifer Widom from Stanford university
  • Introduction to SQL at W3 Schools
  • Beginner guide to SQL
  • SQLCourse.com
  • YouTube channel Basic SQL Training by Joey Blue

To know more, look to OrientDB SQL Syntax.

Or order any book like these

No JOINs

The most important difference between OrientDB and a Relational Database is that relationships are represented by LINKS instead of JOINs.

For this reason, the classic JOIN syntax is not supported. OrientDB uses the "dot (.) notation" to navigate LINKS. Example 1 : In SQL you might create a join such as:

SELECT *
FROM Employee A, City B
WHERE A.city = B.id
AND B.name = 'Rome'

In OrientDB, an equivalent operation would be:

SELECT * FROM Employee WHERE city.name = 'Rome'

This is much more straight forward and powerful! If you use multiple JOINs, the OrientDB SQL equivalent will be an even larger benefit. Example 2: In SQL you might create a join such as:

SELECT *
FROM Employee A, City B, Country C
WHERE A.city = B.id
AND B.country = C.id
AND C.name = 'Italy'

In OrientDB, an equivalent operation would be:

SELECT * FROM Employee WHERE city.country.name = 'Italy'

Projections

In SQL, projections are mandatory and you can use the star character * to include all of the fields. With OrientDB this type of projection is optional. Example: In SQL to select all of the columns of Customer you would write:

SELECT * FROM Customer

In OrientDB, the * is optional:

SELECT FROM Customer

See SQL projections

DISTINCT

In OrientDB v 3.0 you can use DISTINCT keyword exactly as in a relational database:

SELECT DISTINCT name FROM City

Until v 2.2, DISTINCT keyword was not allowed; there was a DISTINCT() function instead, with limited capabilities

//legacy

SELECT DISTINCT(name) FROM City

HAVING

OrientDB does not support the HAVING keyword, but with a nested query it's easy to obtain the same result. Example in SQL:

SELECT city, sum(salary) AS salary
FROM Employee
GROUP BY city
HAVING salary > 1000

This groups all of the salaries by city and extracts the result of aggregates with the total salary greater than 1,000 dollars. In OrientDB the HAVING conditions go in a select statement in the predicate:

SELECT FROM ( SELECT city, SUM(salary) AS salary FROM Employee GROUP BY city ) WHERE salary > 1000

Select from multiple targets

OrientDB allows only one class (classes are equivalent to tables in this discussion) as opposed to SQL, which allows for many tables as the target. If you want to select from 2 classes, you have to execute 2 sub queries and join them with the UNIONALL function:

SELECT FROM E, V

In OrientDB, you can accomplish this with a few variable definitions and by using the expand function to the union:

SELECT EXPAND( $c ) LET $a = ( SELECT FROM E ), $b = ( SELECT FROM V ), $c = UNIONALL( $a, $b )

parameters pg_hba.conf postgresql postgresql.auto.conf postgresql.conf template

PostgreSQL 11 parameter template - keeper grade --- digoal

PostgreSQL, parameters, template, postgresql.conf, pg_hba.conf, postgresql.auto.conf

PostgreSQL 11 parameter template - keeper grade

Author

digoal

Original post: https://github.com/digoal/blog/blob/master/201812/20181203_01.md

Date

2018-12-03

Tags

PostgreSQL, parameters, template, postgresql.conf, pg_hba.conf, postgresql.auto.conf


Background

PostgreSQL 11 postgresql.conf parameter template

# -----------------------------  
# PostgreSQL configuration file  
# -----------------------------  
#  
# This file consists of lines of the form:  
#  
#   name = value  
#  
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with  
# "#" anywhere on a line.  The complete list of parameter names and allowed  
# values can be found in the PostgreSQL documentation.  
#  
# The commented-out settings shown in this file represent the default values.  
# Re-commenting a setting is NOT sufficient to revert it to the default value;  
# you need to reload the server.  
#  
# This file is read on server startup and when the server receives a SIGHUP  
# signal.  If you edit the file on a running system, you have to SIGHUP the  
# server for the changes to take effect, run "pg_ctl reload", or execute  
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,  
# require a server shutdown and restart to take effect.  
#  
# Any parameter can also be given as a command-line option to the server, e.g.,  
# "postgres -c log_connections=on".  Some parameters can be changed at run time  
# with the "SET" SQL command.  
#  
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds  
#                MB = megabytes                     s   = seconds  
#                GB = gigabytes                     min = minutes  
#                TB = terabytes                     h   = hours  
#                                                   d   = days  


#------------------------------------------------------------------------------  
# FILE LOCATIONS  
#------------------------------------------------------------------------------  

# The default values of these variables are driven from the -D command-line  
# option or PGDATA environment variable, represented here as ConfigDir.  

#data_directory = 'ConfigDir'           # use data in another directory  
                                        # (change requires restart)  
#hba_file = 'ConfigDir/pg_hba.conf'     # host-based authentication file  
                                        # (change requires restart)  
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file  
                                        # (change requires restart)  

# If external_pid_file is not explicitly set, no extra PID file is written.  
#external_pid_file = ''                 # write an extra PID file  
                                        # (change requires restart)  


#------------------------------------------------------------------------------  
# CONNECTIONS AND AUTHENTICATION  
#------------------------------------------------------------------------------  

# - Connection Settings -  

listen_addresses = '0.0.0.0'            # what IP address(es) to listen on;  
                                        # comma-separated list of addresses;  
                                        # defaults to 'localhost'; use '*' for all  
                                        # (change requires restart)  
# set the listen addresses according to your needs
port = 1921                             # (change requires restart)  

# Suggested cap: 200 * (a quarter of physical memory in GB). E.g. if a quarter of RAM is 16 GB, keep it below 3200.
# (Assumes roughly 5 MB per connection; with a large syscache it can be considerably more.)
# [《PostgreSQL relcache在长连接应用中的内存霸占"坑"》](201607/20160709_01.md)
max_connections = 2000                  # (change requires restart)  
superuser_reserved_connections = 13      # (change requires restart)  

# create unix socket listeners in $PGDATA and /tmp
unix_socket_directories = '., /tmp'        # comma-separated list of directories  
                                        # (change requires restart)  
#unix_socket_group = ''                 # (change requires restart)  

# other than the owner and superusers, no user can connect to this instance through the /tmp unix socket
unix_socket_permissions = 0700          # begin with 0 to use octal notation     
                                        # (change requires restart)  
#bonjour = off                          # advertise server via Bonjour  
                                        # (change requires restart)  
#bonjour_name = ''                      # defaults to the computer name  
                                        # (change requires restart)  

# - TCP Keepalives -  
# see "man 7 tcp" for details  

# If idle database connections get dropped after a while, a device on the network is probably timing out idle sessions; set these keepalives so the TCP heartbeat interval shrinks to 60 seconds.
tcp_keepalives_idle = 60                # TCP_KEEPIDLE, in seconds;  
                                        # 0 selects the system default  
tcp_keepalives_interval = 10            # TCP_KEEPINTVL, in seconds;  
                                        # 0 selects the system default  
tcp_keepalives_count = 10               # TCP_KEEPCNT;  
                                        # 0 selects the system default  

# - Authentication -  

#authentication_timeout = 1min          # 1s-600s  

# md5 or scram-sha-256   # if md5 leakage is a concern, prefer scram-sha-256, but note the two are not compatible with each other.
# [《PostgreSQL 10.0 preview 安全增强 - SASL认证方法 之 scram-sha-256 安全认证机制》](201703/20170309_01.md)
#password_encryption = md5              # md5 or scram-sha-256  
#db_user_namespace = off  

# GSSAPI using Kerberos  
#krb_server_keyfile = ''  
#krb_caseins_users = off  

# - SSL -  

#ssl = off  
#ssl_ca_file = ''  
#ssl_cert_file = 'server.crt'  
#ssl_crl_file = ''  
#ssl_key_file = 'server.key'  
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers  
#ssl_prefer_server_ciphers = on  
#ssl_ecdh_curve = 'prime256v1'  
#ssl_dh_params_file = ''  
#ssl_passphrase_command = ''  
#ssl_passphrase_command_supports_reload = off  


#------------------------------------------------------------------------------  
# RESOURCE USAGE (except WAL)  
#------------------------------------------------------------------------------  

# - Memory -  

# 1/4 of host memory
shared_buffers = 24GB                  # min 128kB  
                                        # (change requires restart)  
# Without huge pages and with more than 3000 connections, keep shared_buffers at or below 48 GB.
# When shared_buffers exceeds 32 GB, use huge pages; the page size is Hugepagesize in /proc/meminfo.
huge_pages = try                # on, off, or try  
                                        # (change requires restart)  
#temp_buffers = 8MB                     # min 800kB  

# If two-phase commit is needed, set this above 0; making it as large as max_connections is suggested.
max_prepared_transactions = 2000                # zero disables the feature  
                                        # (change requires restart)  
# Caution: it is not advisable to set max_prepared_transactions nonzero unless  
# you actively intend to use prepared transactions.  

# Can be set per session. If there are many joins and aggregates and you want hash agg / hash join,
# it can be raised, but preferably not above (a quarter of RAM) / max_connections.
# (A single query may use several multiples of work_mem, depending on the nodes in its plan.)
# Ideally derive it from the workload type (AP, TP, or mixed); each uses a different formula.
work_mem = 8MB                          # min 64kB  

# min( 2GB, (1/4 of host memory) / autovacuum_max_workers )
maintenance_work_mem = 2GB              # min 1MB  
#autovacuum_work_mem = -1               # min 1MB, or -1 to use maintenance_work_mem  
#max_stack_depth = 2MB                  # min 100kB  
dynamic_shared_memory_type = posix      # the default is the first option  
                                        # supported by the operating system:  
                                        #   posix  
                                        #   sysv  
                                        #   windows  
                                        #   mmap  
                                        # use none to disable dynamic shared memory  
                                        # (change requires restart)  

# - Disk -  

# Set this if you need to cap temporary file usage,
# e.g. to stop a runaway recursive call from eating unlimited temp space.
#temp_file_limit = -1                   # limits per-process temp file space  
                                        # in kB, or -1 for no limit  

# - Kernel Resources -  

## If the database has a very large number of small files (hundreds of thousands of tables plus their indexes, all actually accessed),
# allow more file descriptors so backends don't keep opening and closing files.
## But do not exceed the ulimit -n (open files) configured at the OS level earlier.
# max_files_per_process=655360  

#max_files_per_process = 1000           # min 25  
                                        # (change requires restart)  

# - Cost-Based Vacuum Delay -  

# With very good I/O, the vacuum delay can be disabled.
vacuum_cost_delay = 0                   # 0-100 milliseconds  
#vacuum_cost_page_hit = 1               # 0-10000 credits  
#vacuum_cost_page_miss = 10             # 0-10000 credits  
#vacuum_cost_page_dirty = 20            # 0-10000 credits  

# On machines with good I/O and many CPU cores, set this higher; not needed when vacuum_cost_delay = 0.
vacuum_cost_limit = 10000                # 1-10000 credits  

# - Background Writer -  

bgwriter_delay = 10ms                   # 10-10000ms between rounds  
bgwriter_lru_maxpages = 1000            # max buffers written/round, 0 disables  
bgwriter_lru_multiplier = 10.0          # 0-10.0 multiplier on buffers scanned/round  
bgwriter_flush_after = 512kB            # measured in pages, 0 disables  

# - Asynchronous Behavior -  

effective_io_concurrency = 0            # 1-1000; 0 disables prefetching  

# WAL senders, dynamically forked user processes, parallel workers, etc. all count as worker processes, so set this large enough.
max_worker_processes = 128              # (change requires restart)  

#  For parallel index builds, set this above 1; going beyond (host cores - 2) is not advised.
max_parallel_maintenance_workers = 6    # taken from max_parallel_workers  

#  For parallel query, set this above 1; going beyond (host cores - 2) is not advised.
max_parallel_workers_per_gather = 0     # taken from max_parallel_workers  
parallel_leader_participation = on  

#  For parallel query, set this above 1; going beyond (host cores - 2) is not advised.
#  Must be smaller than max_worker_processes.
max_parallel_workers = 32               # maximum number of max_worker_processes that  
                                        # can be used in parallel operations  
#old_snapshot_threshold = -1            # 1min-60d; -1 disables; 0 is immediate  
                                        # (change requires restart)  
#backend_flush_after = 256               # measured in pages, 0 disables  


#------------------------------------------------------------------------------  
# WRITE-AHEAD LOG  
#------------------------------------------------------------------------------  

# - Settings -  

# Use replica if you need physical streaming standbys, archiving, or point-in-time recovery; use logical for logical subscriptions or logical standbys.
wal_level = replica  # minimal, replica, or logical  
                                        # (change requires restart)  
#fsync = on                             # flush data to disk for crash safety  
                                        # (turning this off can cause  
                                        # unrecoverable data corruption)  

# With two nodes set this to on; with multiple synchronous replicas, remote_write is suggested.
# If disks are slow and the workload is OLTP, consider off to cut commit latency and raise throughput (off may lose some WAL records on a crash).
synchronous_commit = off                # synchronization level;  
                                        # off, local, remote_write, remote_apply, or on  

# Run pg_test_fsync and pick the fastest method; on Linux, open_datasync is usually the fastest.
#wal_sync_method = fsync                # the default is the first option  
                                        # supported by the operating system:  
                                        #   open_datasync  
                                        #   fdatasync (default on Linux)  
                                        #   fsync  
                                        #   fsync_writethrough  
                                        #   open_sync  

# On a copy-on-write filesystem such as ZFS, turn this off. If the filesystem guarantees atomic writes of data-file blocks (when aligned), it can also be off.
# If the underlying storage guarantees atomic I/O, it can also be off.
full_page_writes = on                  # recover from partial page writes  

# Enable this when full-page writes are the I/O bottleneck.
wal_compression = on                  # enable compression of full-page writes  
#wal_log_hints = off                    # also do full page writes of non-critical updates  
                                        # (change requires restart)  
# Suggested: min( 512MB, shared_buffers/32 )
#wal_buffers = -1                       # min 32kB, -1 sets based on shared_buffers  
                                        # (change requires restart)  

# If synchronous_commit = off, wal_writer_delay can be tuned as well.
wal_writer_delay = 10ms         # 1-10000 milliseconds  
wal_writer_flush_after = 1MB            # measured in pages, 0 disables  

# If synchronous_commit = on and the workload is many concurrent small write transactions, commit_delay enables group commit and merges WAL fsync I/O.
#commit_delay = 10                       # range 0-100000, in microseconds  
# Group commit is used once the number of concurrently committing transactions exceeds commit_siblings.
#commit_siblings = 5                    # range 1-1000  

# - Checkpoints -  

#  Avoid frequent checkpoints, or the WAL will carry many full-page writes (when full_page_writes=on).
checkpoint_timeout = 30min              # range 30s-1d  

# Suggested: equal to shared_buffers, or twice that.
# Also weigh crash-recovery time: the larger this is, the longer checkpoints and crash recovery may take; the smaller, the more WAL is written while FPW is on. A COW filesystem with FPW off is preferable.
max_wal_size = 48GB  
# Suggested: half of shared_buffers.
min_wal_size = 12GB  

# With good disks, let checkpoints finish quickly so recovery reaches a consistent state fast; otherwise 0.5~0.9 is suggested.
checkpoint_completion_target = 0.1    # checkpoint target duration, 0.0 - 1.0  

# On machines with good I/O there is no need to smooth the writes; otherwise 128~256kB is suggested.
checkpoint_flush_after = 256kB          # measured in pages, 0 disables  
#checkpoint_flush_after = 0             # measured in pages, 0 disables  
#checkpoint_warning = 30s               # 0 disables  

# - Archiving -  

# Suggest turning this on from the start, because changing it requires a restart.
#archive_mode = off             # enables archiving; off, on, or always  
                                # (change requires restart)  

#  Fill in the command later, e.g. 'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'
#archive_command = ''           # command to use to archive a logfile segment  
                                # placeholders: %p = path of file to archive  
                                #               %f = file name only  
                                # e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  
#archive_timeout = 0            # force a logfile segment switch after this  
                                # number of seconds; 0 disables  


#------------------------------------------------------------------------------  
# REPLICATION  
#------------------------------------------------------------------------------  

# - Sending Servers -  

# Set these on the master and on any standby that will send replication data.  

# Number of concurrent streaming-replication connections needed; set according to actual requirements.
max_wal_senders = 10             # max number of walsender processes  
                                # (change requires restart)  

# Choose how much WAL to keep based on your situation, mainly so the primary does not remove WAL too early and break the standbys.
#wal_keep_segments = 0          # in logfile segments; 0 disables  
#wal_sender_timeout = 60s       # in milliseconds; 0 disables  


# Set how many replication slots you actually need.
# With slots, WAL not yet received downstream is kept on this node indefinitely, so watch downstream lag or the WAL directory may fill up.
# Suggested: at least max_wal_senders.
#max_replication_slots = 10     # max number of replication slots  
                                # (change requires restart)  
#track_commit_timestamp = off   # collect timestamp of transaction commit  
                                # (change requires restart)  

# - Master Server -  

# These settings are ignored on a standby server.  


# With two or more standbys, consider synchronous multi-replica mode; set according to your needs.
# [《PostgreSQL 一主多从(多副本,强同步)简明手册 - 配置、压测、监控、切换、防脑裂、修复、0丢失 - 珍藏级》](201803/20180326_01.md)
#synchronous_standby_names = '' # standby servers that provide sync rep  
                                # method to choose sync standbys, number of sync standbys,  
                                # and comma-separated list of application_name  
                                # from standby(s); '*' = all  

# Careful: this easily causes bloat and can leave VACUUM spinning, driving up I/O and CPU (especially with a small autovacuum_naptime).
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed  

# - Standby Servers -  

# These settings are ignored on a master server.  

#hot_standby = on                       # "off" disallows queries during recovery  
                                        # (change requires restart)  
#max_standby_archive_delay = 30s        # max delay before canceling queries  
                                        # when reading WAL from archive;  
                                        # -1 allows indefinite delay  
#max_standby_streaming_delay = 30s      # max delay before canceling queries  
                                        # when reading streaming WAL;  
                                        # -1 allows indefinite delay  
#wal_receiver_status_interval = 10s     # send replies at least this often  
                                        # 0 disables  

# Suggest keeping this off, so that long transactions on the standby cannot stop the primary from reclaiming dead tuples and cause bloat.
# [《PostgreSQL物理"备库"的哪些操作或配置,可能影响"主库"的性能、垃圾回收、IO波动》](201704/20170410_03.md)
#hot_standby_feedback = off             # send info from standby to prevent  
                                        # query conflicts  
#wal_receiver_timeout = 60s             # time that receiver waits for  
                                        # communication from master  
                                        # in milliseconds; 0 disables  
#wal_retrieve_retry_interval = 5s       # time to wait before retrying to  
                                        # retrieve WAL after a failed attempt  

# - Subscribers -  

# These settings are ignored on a publisher.  

# [《PostgreSQL 10.0 preview 逻辑订阅 - 原理与最佳实践》](201702/20170227_01.md)    
# Must be smaller than max_worker_processes.
#max_logical_replication_workers = 4    # taken from max_worker_processes  
                                        # (change requires restart)  
#max_sync_workers_per_subscription = 2  # taken from max_logical_replication_workers  


#------------------------------------------------------------------------------  
# QUERY TUNING  
#------------------------------------------------------------------------------  

# - Planner Method Configuration -  

#enable_bitmapscan = on  
#enable_hashagg = on  
#enable_hashjoin = on  
#enable_indexscan = on  
#enable_indexonlyscan = on  
#enable_material = on  
#enable_mergejoin = on  
#enable_nestloop = on  
#enable_parallel_append = on  
#enable_seqscan = on  
#enable_sort = on  
#enable_tidscan = on  
#enable_partitionwise_join = off  
#enable_partitionwise_aggregate = off  
#enable_parallel_hash = on  
#enable_partition_pruning = on  

# - Planner Cost Constants -  

#seq_page_cost = 1.0                    # measured on an arbitrary scale  
# With fast random I/O (e.g. SSD / NVMe SSD), there is no need to price random and sequential scans differently.
random_page_cost = 1.1                 # same scale as above  
#cpu_tuple_cost = 0.01                  # same scale as above  
#cpu_index_tuple_cost = 0.005           # same scale as above  
#cpu_operator_cost = 0.0025             # same scale as above  
#parallel_tuple_cost = 0.1              # same scale as above  
#parallel_setup_cost = 1000.0   # same scale as above  

#jit_above_cost = 100000                # perform JIT compilation if available  
                                        # and query more expensive, -1 disables  
#jit_optimize_above_cost = 500000       # optimize JITed functions if query is  
                                        # more expensive, -1 disables  
#jit_inline_above_cost = 500000         # attempt to inline operators and  
                                        # functions if query is more expensive,  
                                        # -1 disables  

#min_parallel_table_scan_size = 8MB  
#min_parallel_index_scan_size = 512kB  

# Subtract per-connection RSS, shared_buffers, and autovacuum workers; what remains is roughly the cache available to the OS.
effective_cache_size = 80GB  

# - Genetic Query Optimizer -  

#geqo = on  
#geqo_threshold = 12  
#geqo_effort = 5                        # range 1-10  
#geqo_pool_size = 0                     # selects default based on effort  
#geqo_generations = 0                   # selects default based on effort  
#geqo_selection_bias = 2.0              # range 1.5-2.0  
#geqo_seed = 0.0                        # range 0.0-1.0  

# - Other Planner Options -  

#default_statistics_target = 100        # range 1-10000  
#constraint_exclusion = partition       # on, off, or partition  
#cursor_tuple_fraction = 0.1            # range 0.0-1.0  
#from_collapse_limit = 8  
#join_collapse_limit = 8                # 1 disables collapsing of explicit  
                                        # JOIN clauses  
#force_parallel_mode = off  


#------------------------------------------------------------------------------  
# REPORTING AND LOGGING  
#------------------------------------------------------------------------------  

# - Where to Log -  

log_destination = 'csvlog'              # Valid values are combinations of  
                                        # stderr, csvlog, syslog, and eventlog,  
                                        # depending on platform.  csvlog  
                                        # requires logging_collector to be on.  

# This is used when logging to stderr:  
logging_collector = on                  # Enable capturing of stderr and csvlog  
                                        # into log files. Required to be on for  
                                        # csvlogs.  
                                        # (change requires restart)  

# These are only used if logging_collector is on:  
log_directory = 'log'                   # directory where log files are written,  
                                        # can be absolute or relative to PGDATA  
log_filename = 'postgresql-%a.log'      # log file name pattern,  
                                        # can include strftime() escapes  
#log_file_mode = 0600                   # creation mode for log files,  
                                        # begin with 0 to use octal notation  
log_truncate_on_rotation = on           # If on, an existing log file with the  
                                        # same name as the new log file will be  
                                        # truncated rather than appended to.  
                                        # But such truncation only occurs on  
                                        # time-driven rotation, not on restarts  
                                        # or size-driven rotation.  Default is  
                                        # off, meaning append to existing files  
                                        # in all cases.  
log_rotation_age = 1d                   # Automatic rotation of logfiles will  
                                        # happen after that time.  0 disables.  
log_rotation_size = 0                   # Automatic rotation of logfiles will  
                                        # happen after that much log output.  
                                        # 0 disables.  

# These are relevant when logging to syslog:  
#syslog_facility = 'LOCAL0'  
#syslog_ident = 'postgres'  
#syslog_sequence_numbers = on  
#syslog_split_messages = on  

# This is only relevant when logging to eventlog (win32):  
# (change requires restart)  
#event_source = 'PostgreSQL'  

# - When to Log -  

#client_min_messages = notice           # values in order of decreasing detail:  
                                        #   debug5  
                                        #   debug4  
                                        #   debug3  
                                        #   debug2  
                                        #   debug1  
                                        #   log  
                                        #   notice  
                                        #   warning  
                                        #   error  

#log_min_messages = warning             # values in order of decreasing detail:  
                                        #   debug5  
                                        #   debug4  
                                        #   debug3  
                                        #   debug2  
                                        #   debug1  
                                        #   info  
                                        #   notice  
                                        #   warning  
                                        #   error  
                                        #   log  
                                        #   fatal  
                                        #   panic  

#log_min_error_statement = error        # values in order of decreasing detail:  
                                        #   debug5  
                                        #   debug4  
                                        #   debug3  
                                        #   debug2  
                                        #   debug1  
                                        #   info  
                                        #   notice  
                                        #   warning  
                                        #   error  
                                        #   log  
                                        #   fatal  
                                        #   panic (effectively off)  

# Set according to the business; e.g. if anything over 5 seconds counts as a slow query, use 5s.
log_min_duration_statement = 5s        # -1 is disabled, 0 logs all statements  
                                        # and their durations, > 0 logs only  
                                        # statements running at least this number  
                                        # of milliseconds  


# - What to Log -  

#debug_print_parse = off  
#debug_print_rewritten = off  
#debug_print_plan = off  
#debug_pretty_print = on  
log_checkpoints = on   

# For short-lived connections set this to off; otherwise on is suggested.
log_connections = on  

# For short-lived connections set this to off; otherwise on is suggested.
log_disconnections = on  
#log_duration = off  
log_error_verbosity = verbose    # terse, default, or verbose messages  
#log_hostname = off  
log_line_prefix = '%m [%p] '            # special values:  
                                        #   %a = application name  
                                        #   %u = user name  
                                        #   %d = database name  
                                        #   %r = remote host and port  
                                        #   %h = remote host  
                                        #   %p = process ID  
                                        #   %t = timestamp without milliseconds  
                                        #   %m = timestamp with milliseconds  
                                        #   %n = timestamp with milliseconds (as a Unix epoch)  
                                        #   %i = command tag  
                                        #   %e = SQL state  
                                        #   %c = session ID  
                                        #   %l = session line number  
                                        #   %s = session start timestamp  
                                        #   %v = virtual transaction ID  
                                        #   %x = transaction ID (0 if none)  
                                        #   %q = stop here in non-session  
                                        #        processes  
                                        #   %% = '%'  
                                        # e.g. '<%u%%%d> '  
#log_lock_waits = off                   # log lock waits >= deadlock_timeout  

# Set to all if SQL auditing is required.
#log_statement = 'none'                 # none, ddl, mod, all  
#log_replication_commands = off  
#log_temp_files = -1                    # log temporary files equal or larger  
                                        # than the specified size in kilobytes;  
                                        # -1 disables, 0 logs all temp files  
log_timezone = 'PRC'    

#------------------------------------------------------------------------------  
# PROCESS TITLE  
#------------------------------------------------------------------------------  

#cluster_name = ''                      # added to process titles if nonempty  
                                        # (change requires restart)  
#update_process_title = on  


#------------------------------------------------------------------------------  
# STATISTICS  
#------------------------------------------------------------------------------  

# - Query and Index Statistics Collector -  

#track_activities = on  
#track_counts = on  

# Tracking I/O timing costs some performance and is off by default.
# Turn it on if you need I/O timing statistics.
# Measure the clock overhead with pg_test_timing first; if it is expensive, leave this off.
#track_io_timing = off  
#track_functions = none                 # none, pl, all  
#track_activity_query_size = 1024       # (change requires restart)  
#stats_temp_directory = 'pg_stat_tmp'  


# - Monitoring -  

#log_parser_stats = off  
#log_planner_stats = off  
#log_executor_stats = off  
#log_statement_stats = off  


#------------------------------------------------------------------------------  
# AUTOVACUUM  
#------------------------------------------------------------------------------  

#autovacuum = on                        # Enable autovacuum subprocess?  'on'  
                                        # requires track_counts to also be on.  
log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and  
                                        # their durations, > 0 logs only  
                                        # actions running at least this number  
                                        # of milliseconds.  

# With many cores and good I/O this can be higher, but note the possible memory use:
# autovacuum_max_workers * autovacuum memory (autovacuum_work_mem),
# which can be substantial, so the memory budget must allow for it.
# When DELETEs/UPDATEs are very frequent, use more workers to keep bloat in check.
autovacuum_max_workers = 8              # max number of autovacuum subprocesses  
                                        # (change requires restart)  

# Do not run it too often, or vacuum will generate a lot of WAL; and when garbage cannot be reclaimed (long transactions, hot_standby_feedback on, etc.) vacuum keeps being triggered and CPU and I/O climb.
# [《PostgreSQL垃圾回收代码分析 - why postgresql cann't reclaim tuple is HEAPTUPLE_RECENTLY_DEAD》](201505/20150503_01.md)
# [《PostgreSQL物理"备库"的哪些操作或配置,可能影响"主库"的性能、垃圾回收、IO波动》](201704/20170410_03.md)
#autovacuum_naptime = 1min              # time between autovacuum runs  
#autovacuum_vacuum_threshold = 50       # min number of row updates before  
                                        # vacuum  
#autovacuum_analyze_threshold = 50      # min number of row updates before  
                                        # analyze  
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum  
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze  

# Besides setting a large freeze age,
# watch out for freeze storms: [《PostgreSQL Freeze 风暴预测续 - 珍藏级SQL》](201804/20180411_01.md)
# Freeze behaviour can also be customised per table.
autovacuum_freeze_max_age = 1200000000  # maximum XID age before forced vacuum  
                                        # (change requires restart)  
autovacuum_multixact_freeze_max_age = 1400000000        # maximum multixact age  
                                        # before forced vacuum  
                                        # (change requires restart)  

# If the database UPDATEs very frequently, set this to 0, and preferably use SSD.
autovacuum_vacuum_cost_delay = 0ms      # default vacuum cost delay for  
                                        # autovacuum, in milliseconds;  
                                        # -1 means use vacuum_cost_delay  
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for  
                                        # autovacuum, -1 means use  
                                        # vacuum_cost_limit  


#------------------------------------------------------------------------------  
# CLIENT CONNECTION DEFAULTS  
#------------------------------------------------------------------------------  

# - Statement Behavior -  

#search_path = '"$user", public'        # schema names  
#row_security = on  
#default_tablespace = ''                # a tablespace name, '' uses the default  
#temp_tablespaces = ''                  # a list of tablespace names, '' uses  
                                        # only default tablespace  
#check_function_bodies = on  
#default_transaction_isolation = 'read committed'  
#default_transaction_read_only = off  
#default_transaction_deferrable = off  
#session_replication_role = 'origin'  

# Can guard against query storms, but a global setting is not recommended.
#statement_timeout = 0                  # in milliseconds, 0 is disabled  

# Add a timeout when running DDL.
#lock_timeout = 0                       # in milliseconds, 0 is disabled  

# Automatically terminates idle-in-transaction sessions; set according to the business.
#idle_in_transaction_session_timeout = 0        # in milliseconds, 0 is disabled  

#vacuum_freeze_min_age = 50000000  
vacuum_freeze_table_age = 1150000000  
#vacuum_multixact_freeze_min_age = 5000000  
vacuum_multixact_freeze_table_age = 1150000000  
#vacuum_cleanup_index_scale_factor = 0.1        # fraction of total number of tuples  
                                                # before index cleanup, 0 always performs  
                                                # index cleanup  
#bytea_output = 'hex'                   # hex, escape  
#xmlbinary = 'base64'  
#xmloption = 'content'  

# Caps the result set of a GIN scan; useful when you want to bound queries with a huge number of matches.
#gin_fuzzy_search_limit = 0  

# Size of the GIN index pending list.
#gin_pending_list_limit = 4MB  

# - Locale and Formatting -  

datestyle = 'iso, mdy'  
#intervalstyle = 'postgres'  
timezone = 'PRC'  
#timezone_abbreviations = 'Default'     # Select the set of available time zone  
                                        # abbreviations.  Currently, there are  
                                        #   Default  
                                        #   Australia (historical usage)  
                                        #   India  
                                        # You can create your own file in  
                                        # share/timezonesets/.  
#extra_float_digits = 0                 # min -15, max 3  
#client_encoding = sql_ascii            # actually, defaults to database  
                                        # encoding  

# These settings are initialized by initdb, but they can be changed.  
lc_messages = 'C'                       # locale for system error message  
                                        # strings  
lc_monetary = 'C'                       # locale for monetary formatting  
lc_numeric = 'C'                        # locale for number formatting  
lc_time = 'C'                           # locale for time formatting  

# default configuration for text search  
default_text_search_config = 'pg_catalog.english'  

# - Shared Library Preloading -  

# Preload whatever libraries you need; frequently used extensions such as postgis are also worth preloading.
#shared_preload_libraries = 'pg_jieba,pipelinedb'        # (change requires restart)  
#local_preload_libraries = ''  
#session_preload_libraries = ''  

# - Other Defaults -  

#dynamic_library_path = '$libdir'  

jit = off                               # allow JIT compilation  
#jit_provider = 'llvmjit'               # JIT implementation to use  

#------------------------------------------------------------------------------  
# LOCK MANAGEMENT  
#------------------------------------------------------------------------------  

#deadlock_timeout = 1s  
#max_locks_per_transaction = 64         # min 10  
                                        # (change requires restart)  
#max_pred_locks_per_transaction = 64    # min 10  
                                        # (change requires restart)  
#max_pred_locks_per_relation = -2       # negative values mean  
                                        # (max_pred_locks_per_transaction  
                                        #  / -max_pred_locks_per_relation) - 1  
#max_pred_locks_per_page = 2            # min 0  


#------------------------------------------------------------------------------  
# VERSION AND PLATFORM COMPATIBILITY  
#------------------------------------------------------------------------------  

# - Previous PostgreSQL Versions -  

#array_nulls = on  
#backslash_quote = safe_encoding        # on, off, or safe_encoding  
#default_with_oids = off  

# [《PostgreSQL 转义、UNICODE、与SQL注入》](201704/20170402_01.md)    
#escape_string_warning = on  
#lo_compat_privileges = off  
#operator_precedence_warning = off  
#quote_all_identifiers = off  
#standard_conforming_strings = on  
#synchronize_seqscans = on  

# - Other Platforms and Clients -  

#transform_null_equals = off  


#------------------------------------------------------------------------------  
# ERROR HANDLING  
#------------------------------------------------------------------------------  

#exit_on_error = off                    # terminate session on any error?  
#restart_after_crash = on               # reinitialize after backend crash?  


#------------------------------------------------------------------------------  
# CONFIG FILE INCLUDES  
#------------------------------------------------------------------------------  

# These options allow settings to be loaded from files other than the  
# default postgresql.conf.  

#include_dir = 'conf.d'                 # include files ending in '.conf' from  
                                        # directory 'conf.d'  
#include_if_exists = 'exists.conf'      # include file only if it exists  
#include = 'special.conf'               # include file  


#------------------------------------------------------------------------------  
# CUSTOMIZED OPTIONS  
#------------------------------------------------------------------------------  

# Add settings for extensions here  
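The sizing rules scattered through the comments above can be collected into a small calculator. The sketch below is illustrative only: the function name, defaults, and the choice of formulas are mine, and the results are upper bounds rather than recommendations, which is why the hand-tuned 64 GB example that follows is deliberately more conservative.

# Rough calculator for the sizing formulas quoted in the comments above.
# Illustrative only; names and defaults are not part of the template.
def suggest(mem_gb: int, cores: int, autovacuum_workers: int = 8) -> dict:
    quarter_gb = mem_gb // 4
    shared_buffers_gb = quarter_gb                        # "1/4 of host memory"
    return {
        "shared_buffers": f"{shared_buffers_gb}GB",
        "max_connections": 200 * quarter_gb,              # a cap, not a target
        "maintenance_work_mem": f"{min(2, max(quarter_gb // autovacuum_workers, 1))}GB",
        "wal_buffers": f"{min(512, shared_buffers_gb * 1024 // 32)}MB",
        "max_wal_size": f"{shared_buffers_gb * 2}GB",     # shared_buffers, or twice that
        "min_wal_size": f"{max(shared_buffers_gb // 2, 1)}GB",
        "max_parallel_workers_per_gather": max(cores - 2, 0),
        # very rough: RAM minus shared_buffers and ~2 GB of overhead
        "effective_cache_size": f"{max(mem_gb - shared_buffers_gb - 2, 1)}GB",
    }

if __name__ == "__main__":
    for name, value in suggest(mem_gb=64, cores=16).items():
        print(f"{name} = {value}")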

Example configuration for a machine with 64 GB of RAM, 16 cores, and SSD

listen_addresses = '0.0.0.0'  
port = 1921  
max_connections = 2000  
superuser_reserved_connections = 13  
unix_socket_directories = '/tmp, .'  
unix_socket_permissions = 0700  
tcp_keepalives_idle = 60  
tcp_keepalives_interval = 10  
tcp_keepalives_count = 10  
shared_buffers = 8GB  
max_prepared_transactions = 2000  
maintenance_work_mem = 1GB  
vacuum_cost_delay = 0  
bgwriter_delay = 10ms  
bgwriter_lru_maxpages = 1000  
bgwriter_lru_multiplier = 10.0  
effective_io_concurrency = 0  
max_worker_processes = 128  
max_parallel_maintenance_workers = 8  
max_parallel_workers_per_gather = 8  
max_parallel_workers = 10  
wal_level = replica  
synchronous_commit = off  
full_page_writes = on  
wal_compression = on  
wal_buffers = 64MB  
wal_writer_delay = 10ms  
checkpoint_timeout = 30min  
max_wal_size = 16GB  
min_wal_size = 4GB  
checkpoint_completion_target = 0.1  
archive_mode = on  
archive_command = '/bin/date'  
max_wal_senders = 16  
max_standby_archive_delay = 300s  
max_standby_streaming_delay = 300s  
hot_standby_feedback = off  
max_logical_replication_workers = 10  

# The following two parameters are off by default; they enable parallel joins across partitions of partitioned tables.
# [《PostgreSQL 11 preview - 分区表智能并行JOIN (已类似MPP架构,性能暴增)》](201802/20180202_02.md)
enable_partitionwise_join = on  
enable_partitionwise_aggregate = on  

enable_parallel_hash = on  
enable_partition_pruning = on  
random_page_cost = 1.1  
effective_cache_size = 48GB  

# If PostgreSQL was built with LLVM and the workload has complex SQL, enabling JIT is suggested.
# jit = on  
log_destination = 'csvlog'  
logging_collector = on  
log_directory = 'log'  
log_filename = 'postgresql-%a.log'  
log_truncate_on_rotation = on  
log_rotation_age = 1d  
log_rotation_size = 0  
log_min_duration_statement = 5s  
log_checkpoints = on  
log_connections = on  
log_disconnections = on  
log_error_verbosity = verbose     
log_line_prefix = '%m [%p] '  
log_lock_waits = on  
log_statement = 'ddl'  
track_activity_query_size = 2048  
autovacuum = on  
log_autovacuum_min_duration = 0  
autovacuum_max_workers = 8  
autovacuum_freeze_max_age = 1200000000  
autovacuum_multixact_freeze_max_age = 1400000000  
autovacuum_vacuum_cost_delay = 0ms  
# statement_timeout = 45min  
lock_timeout = 15s                                
idle_in_transaction_session_timeout = 60s  
vacuum_freeze_table_age = 1150000000  
vacuum_multixact_freeze_table_age = 1150000000  
# shared_preload_libraries = 'pg_stat_statements'  
deadlock_timeout = 1s  

pg_hba.conf database firewall configuration template

# TYPE  DATABASE        USER            ADDRESS                 METHOD  

# "local" is for Unix domain socket connections only  
local   all             all                                     trust  
# IPv4 local connections:  
host    all             all             127.0.0.1/32            trust  
# IPv6 local connections:  
host    all             all             ::1/128                 trust  
# Allow replication connections from localhost, by a user with the  
# replication privilege.  
local   replication     all                                     trust  
host    replication     all             127.0.0.1/32            trust  
host    replication     all             ::1/128                 trust  

# forbid remote connections by the superuser
host all postgres 0.0.0.0/0 reject  

# Application connection rules: which user, from where, to which database, and which authentication method (or reject).
# TYPE  DATABASE        USER            ADDRESS                 METHOD  

# If you don't want to add entries one by one, the rule below allows any user from any source to reach any database with md5 password authentication.
host all all 0.0.0.0/0 md5  
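As a quick sanity check of these rules, the snippet below (psycopg2 assumed installed; the host, database name, and credentials are placeholders, not values from this template) tries both an ordinary application role and the superuser from a remote host:

# Quick check of the pg_hba.conf rules above.
# psycopg2 assumed installed; host, dbname and credentials are placeholders.
import psycopg2

def try_connect(user, password):
    try:
        conn = psycopg2.connect(host="192.0.2.10", port=1921,
                                dbname="appdb", user=user, password=password)
        print(user, "-> connected, server version", conn.server_version)
        conn.close()
    except psycopg2.OperationalError as exc:
        print(user, "-> refused:", exc)

try_connect("app_user", "app_password")   # matches the final "host all all 0.0.0.0/0 md5" rule
try_connect("postgres", "whatever")       # hits the reject rule for the superuser first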


digoal's index of PostgreSQL articles

Get a free Alibaba Cloud RDS PostgreSQL instance or ECS virtual machine

ch341a flashrom

Working around flashrom's requirement that the image size equal the flash size

flashrom, ch341a: working around flashrom's requirement that the image size equal the flash size

I bought my AR9331 board months ago and had been stuck at flashing U-Boot ever since. Today I finally stumbled onto a fix, and it turned out to be dead simple: pad the image up to the flash size.

size = 8388608 - (size of the .bin file in bytes)



sudo dd if=/dev/zero bs=1 count=size >> xxx.bin





sudo flashrom -p ch341a_spi -l layout.txt -i a -w xxx.bin
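The same padding can be done without dd; here is a small Python equivalent (the 8388608-byte flash size and the xxx.bin name come from the commands above):

# Pad xxx.bin with zero bytes up to the flash size, mirroring the dd command above.
import os

FLASH_SIZE = 8388608          # total size of the SPI flash, in bytes
IMAGE = "xxx.bin"

pad = FLASH_SIZE - os.path.getsize(IMAGE)
if pad < 0:
    raise SystemExit("image is larger than the flash chip")
with open(IMAGE, "ab") as f:
    f.write(b"\x00" * pad)    # dd if=/dev/zero also pads with zero bytes
print(f"padded {IMAGE} by {pad} bytes")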
python,cython,linux

Packaging a Cython-built Python package (with .so / .dll files) and uploading it to PyPI

Packaging a Cython-built Python package (with .so / .dll files) and uploading it to PyPI

Because of the general mess of Python versions, the tutorials on packaging and uploading third-party packages are full of pitfalls; here are my notes from stepping on them.

Compile the Cython module into an .so file

from distutils.core import setup as cysetup
from Cython.Build import cythonize
cysetup(ext_modules=cythonize("lib.py", language_level=3))

Build command:

python ./setup.py build_ext  --inplace

Make setup.py include the .so files in the package

Create MANIFEST.in

with the following content:

recursive-include src *

With this, the files are carried along when packaging.

The setup.py used for packaging

from setuptools import setup, find_packages

setup(
    name='pythonGroupMsg',
    version='0.0.1',
    description='This is a packet that broadcasts redis multiple queues',
    url='https://github.com/zhenruyan/pythonGroupMsg',
    author='zhenruyan',
    author_email='baiyangwangzhan@hotmail.com',
    license='WTFPL',
    packages=find_packages(),
    zip_safe=False,
    platforms=["linux"],
    long_description=open('README.rst').read(),
    classifiers=[
        'Operating System :: OS Independent',
        'Intended Audience :: Developers',
        'Programming Language :: Python :: 3.7',
        'Topic :: Software Development :: Libraries'
    ],
    include_package_data=True,  # needed so the MANIFEST.in entries (the .so files) end up in the package
)

Packaging requires two extra tools:

pip install wheel
pip install twine

Build the distributions:

python setup.py sdist build
python setup.py bdist_wheel --universal

Upload:

twine upload ./dist/*
broadcast multicast queue redis redisgroupmsg

redisGroupMsg: broadcast/multicast over Redis queues

redisGroupMsg: broadcast/multicast over Redis queues

redisGroupMsg: broadcast/multicast over Redis queues

Broadcasting a message from Redis to multiple queues

I've recently been building something chat-like, and after a lot of testing this is what I ended up with.

It is roughly 13x faster than looping over the queues and sending one by one in plain Python.

pip install redisGroupMsg
from redisGroupMsg import redisMessage

r = redisMessage()

if __name__ == '__main__':
    for a in range(1, 10):
        e = "id:" + str(a)
        # add the member to the group
        # r.addGroup("test", e)
        # broadcast within the group
        # r.sendGroup("test", e)
        # remove the member from the group
        r.removeGroup("test", e)
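The speedup presumably comes from batching the sends. As an illustration only (this is not the actual redisGroupMsg implementation), the same broadcast can be done with redis-py by pushing to every list in one pipeline, i.e. one network round trip:

# Broadcast one message to many Redis lists in a single round trip.
# Illustration only -- redisGroupMsg's internals may differ.
import redis

r = redis.Redis(host="localhost", port=6379)

def broadcast(queue_names, message):
    pipe = r.pipeline(transaction=False)   # batch the commands without MULTI/EXEC
    for name in queue_names:
        pipe.rpush(name, message)
    pipe.execute()                         # all pushes go out together

broadcast([f"id:{i}" for i in range(1, 10)], b"hello group")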
java kafka linux python, publish, subscribe

Kafka publish/subscribe with partitions

Kafka publish/subscribe with partitions

Kafka, which I can barely handle

Start ZooKeeper

./zookeeper-server-start.sh ../config/zookeeper.properties

Start Kafka

./kafka-server-start.sh ../config/server.properties

Remember to set the number of partitions in the config file (a code-based alternative is sketched below).
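If you'd rather create the topic with a fixed partition count from code instead of editing server.properties, recent versions of kafka-python include an admin client that can do it; a sketch (the topic name and counts are just examples):

# Create the 'chat' topic with several partitions from code,
# as an alternative to setting num.partitions in server.properties.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([NewTopic(name="chat", num_partitions=4, replication_factor=1)])
admin.close()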

Code

Producer

from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')

if __name__ == '__main__':
    for a in range(1,2):
        producer.send('chat',partition=1,value=b'some_message_bytes')
        producer.flush()

Consumer: reading a specific partition

from kafka import KafkaConsumer, TopicPartition

if __name__ == '__main__':

    consumer = KafkaConsumer(bootstrap_servers=['localhost:9092'])
    # subscribe()/pattern matches topic names, not partitions;
    # to read a single partition, assign it explicitly
    consumer.assign([TopicPartition('chat', 1)])
    for msg in consumer:
        print(msg)

Unbelievable: RabbitMQ needs only 3 GB of RAM for 2,500 queues,

while one Kafka topic (conceptually a queue) with 2,000 partitions (used as tags) somehow needs 40 GB of storage...

Ugh.

I'm going to go try NSQ!!!

linux pub rabbitmq sub broadcast

RabbitMQ routing bindings and broadcast

RabbitMQ routing bindings and broadcast

Implementing multicast and broadcast with RabbitMQ

RabbitMQ separates exchanges (routing) from queues.

Exchanges route in three modes:

topic: pattern matching, where * matches exactly one word and # matches zero or more words

fanout: broadcast

direct: exact match

Producer code

#!/usr/bin/env python
import pika
import time

credentials = pika.PlainCredentials('guest', 'guest')

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        '127.0.0.1', 5672, '/', credentials))
    for a in range(1, 1000000):
        channel = connection.channel()
        # declare a queue and bind it to the built-in topic exchange
        channel.queue_declare(queue="chat." + str(a), durable=False)
        channel.queue_bind(exchange='amq.topic',
                           queue="chat." + str(a),
                           routing_key="chat.*")
        # publish only after the binding exists, otherwise the message is dropped;
        # note that wildcards apply to binding keys, while the routing key sent
        # here is literal and happens to match the "chat.*" binding
        channel.basic_publish(exchange='amq.topic',
                              routing_key="chat.*",
                              body='Hello World!')
        channel.close()
        time.sleep(1)
        print(" [x] Sent 'Hello World!'")

    connection.close()

Consumer code

# _*_coding:utf-8_*_
import pika
import time

credentials = pika.PlainCredentials('guest', 'guest')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    '127.0.0.1', 5672, '/', credentials))

if __name__ == '__main__':
    channel = connection.channel()
    # make sure the queue exists and is bound to the same exchange the producer uses
    channel.queue_declare(queue="chat", durable=False)
    channel.queue_bind(exchange='amq.topic', queue="chat", routing_key="chat.*")
    while True:
        # basic_get returns (None, None, None) when the queue is empty
        method_frame, header_frame, body = channel.basic_get("chat")
        if method_frame:
            print(method_frame, header_frame, body)
            channel.basic_ack(method_frame.delivery_tag)
        else:
            time.sleep(1)
            print('No message returned')
arch linux rime simplified Chinese

Making rime default to Simplified Chinese

Making rime default to Simplified Chinese

Reposted from https://github.com/ModerRAS/ModerRAS.github.io/blob/master/_posts/2018-11-07-rime%E8%AE%BE%E7%BD%AE%E4%B8%BA%E9%BB%98%E8%AE%A4%E7%AE%80%E4%BD%93.md

Preface

The rime-ibus installed on my Arch Linux defaults to Traditional Chinese, but I use Simplified Chinese every day, and having to press F4 after every input-method switch got annoying, so I looked into how to make Simplified the default.

Edit the default schema's configuration file

On my machine the configuration lives in ~/.config/ibus/rime/build/, so let's look at the files in that directory:

bopomofo.prism.bin   
bopomofo_tw.prism.bin    
cangjie5.prism.bin    
default.yaml                   
luna_pinyin_fluency.schema.yaml  
luna_pinyin.schema.yaml     
luna_pinyin_simp.schema.yaml  
luna_quanpin.schema.yaml  
stroke.schema.yaml      
terra_pinyin.schema.yaml
bopomofo.schema.yaml  
bopomofo_tw.schema.yaml  
cangjie5.schema.yaml  
luna_pinyin_fluency.prism.bin  
luna_pinyin.prism.bin            
luna_pinyin_simp.prism.bin  
luna_quanpin.prism.bin        
stroke.prism.bin          
terra_pinyin.prism.bin

I use Luna Pinyin (明月拼音), so my first guess was luna_pinyin.schema.yaml, and that turned out to be right. Scroll to the bottom and find this:

switches:
  - name: ascii_mode
    reset: 0
    states: ["中文", "西文"]
  - name: full_shape
    states: ["半角", "全角"]
  - name: simplification
    states: ["漢字", "汉字"]
  - name: ascii_punct
    states: ["。,", ".,"]

Then change it to this:

switches:
  - name: ascii_mode
    reset: 0
    states: ["中文", "西文"]
  - name: full_shape
    states: ["半角", "全角"]
  - name: simplification
    reset: 1
    states: ["漢字", "汉字"]
  - name: ascii_punct
    states: ["。,", ".,"]

All it really takes is adding a reset value so the simplification switch defaults to Simplified Chinese, and that's it.

Closing notes

So that's it. It's easy enough, but not easy to find, because most guides online have you toggle with F4 or Ctrl+`, which you'd have to redo every time the input method starts; doing it this way is less hassle.

ltsb ltsc windows wsl

Windows Server 2019 LTSB (LTSC) now supports WSL. Perfect!

windows, wsl, ltsb, ltsc

Download link

Open PowerShell as administrator.

I have to admit PowerShell is pretty impressive these days! (But I still won't use it!!!)

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Extract and install!

Rename-Item ~/Ubuntu.appx ~/Ubuntu.zip
Expand-Archive ~/Ubuntu.zip ~/Ubuntu

Set the environment variable

$userenv = [System.Environment]::GetEnvironmentVariable("Path", "User")
[System.Environment]::SetEnvironmentVariable("PATH", $userenv + ";C:\Distros\Ubuntu", "User")

And with that, Windows becomes the most usable Linux distribution!

Long live Linux!

asyncio linux python socket

An asyncio socket server in Python 3.7

An asyncio socket server in Python 3.7

Skimming the docs, I was caught off guard by how simple and practical this bit of sugar is:

import asyncio

async def client_connected(reader:asyncio.StreamReader, writer: asyncio.StreamWriter):
    e=await reader.read(10*1024*1024)
    print(e)
    writer.write(b"200 hello world")
    await writer.drain()
    writer.close()

async def main(host, port):
    srv = await asyncio.start_server(
        client_connected, host, port)
    await srv.serve_forever()

if __name__ == "__main__":
    asyncio.run(main('127.0.0.1', 8080))
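For testing, a matching client can be written with asyncio.open_connection; write_eof() is what lets the server's read() return:

# A matching test client for the server above.
import asyncio

async def client():
    reader, writer = await asyncio.open_connection('127.0.0.1', 8080)
    writer.write(b"hello server")
    await writer.drain()
    writer.write_eof()                 # signal EOF so the server's read() completes
    print(await reader.read())         # read the server's reply until it closes
    writer.close()
    await writer.wait_closed()

if __name__ == "__main__":
    asyncio.run(client())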
img linux openwrt building a VM disk image

Turning an .img file into a VM disk image

Turning an .img file into a VM disk image

pacman -S qemu
qemu-img convert -f raw LEDE-17.01.2-R7.3.3-x64-combined-squashfs.img  -O  vmdk lede.vmdk
asoc linux mariadb mysql
bash find linux rm unix
fcitx linux sogou xfce4

Fixing fcitx not activating under Xfce4

Fixing fcitx not activating under Xfce4

In ~/.xprofile:

export XIM="fcitx"
export XIM_PROGRAM="fcitx"
export XMODIFIERS="@im=fcitx"
export GTK_IM_MODULE="fcitx"
export QT_IM_MODULE="fcitx"
cache linux mount tmp tmpfs

Mounting /tmp on tmpfs

mount tmpfs /tmp -t tmpfs -o size=128m

One command:

mount tmpfs /tmp -t tmpfs -o size=128m
django linux python tornado wsgi

Running Django under Tornado via the WSGI fallback

Running Django under Tornado via the WSGI fallback

Running Django under Tornado via the WSGI fallback

import os
import sys
from tornado.options import options, define, parse_command_line
import django.core.handlers.wsgi
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi
from django.core.wsgi import get_wsgi_application
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
os.environ['DJANGO_SETTINGS_MODULE'] = "tornago.settings"

define('port', type=int, default=8000)


def main():
    parse_command_line()
    wsgi_app = tornado.wsgi.WSGIContainer(get_wsgi_application())
    tornado_app = tornado.web.Application(

        [('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),]
    )
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == '__main__':
    main()
80 linux nginx root
image img linux losetup mount

Mounting a multi-partition image the raw Linux way

Mounting a multi-partition image

Mounting a multi-partition image

  • Find a free loop device
losetup -f
  • Find the partition's start offset
If you have cfdisk:

cfdisk  ./xxx.img

If not, use fdisk:

fdisk  -l  ./xxx.img
  • Multiply the start sector by 512 and attach the image at that offset
losetup -o <start_sector * 512> /dev/loop0  xxx.img
  • Mount the loop device on the actual mount point
mount /dev/loop0  xxx
  • To unmount and detach:
umount <mount point>


losetup  -d /dev/loop0
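The "start sector × 512" arithmetic can also be read straight out of the image. A small sketch, assuming a classic MBR partition table (not GPT): it prints, for each primary partition, the byte offset you would pass to losetup -o.

# Print the byte offset of each primary partition in an MBR image,
# i.e. the value to pass to `losetup -o`. Assumes MBR, not GPT.
import struct
import sys

SECTOR = 512

def mbr_offsets(path):
    with open(path, "rb") as f:
        mbr = f.read(SECTOR)
    for i in range(4):                                    # 4 primary partition entries
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                  # partition type byte
        start_lba, n_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:
            print(f"partition {i + 1}: type=0x{ptype:02x} "
                  f"offset={start_lba * SECTOR} size={n_sectors * SECTOR}")

if __name__ == "__main__":
    mbr_offsets(sys.argv[1] if len(sys.argv) > 1 else "xxx.img")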
cron linux python schedule

schedule: a neat Python library for recurring jobs

Recurring tasks in Python

I've needed to play with scheduled tasks lately.

Here's the code:

import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
eventloop loop python run_in_executor thread pool
aosc arch deepin dmenu i3wm linux rofi ubuntu

rofi, an excellent dmenu replacement for i3wm: window switching and search

rofi, an excellent dmenu replacement for i3wm: window switching and search

rofi

https://github.com/DaveDavenport/rofi/

The result is pretty good.

It's fast, and much nicer to use than the likes of GNOME Do.

Here's the keybinding:

bindsym $mod+d exec --no-startup-id "rofi -combi-modi window,drun,run,ssh -show combi -modi combi"

And a backup of my full i3 config:

~/.config/i3/config

# i3 config file (v4)
# Please see http://i3wm.org/docs/userguide.html for a complete reference!

# Set mod key (Mod1=<Alt>, Mod4=<Super>)
set $mod Mod4

# set default desktop layout (default is tiling)
# workspace_layout tabbed <stacking|tabbed>

# Configure border style <normal|1pixel|pixel xx|none|pixel>
new_window pixel 1
new_float normal

# Hide borders
hide_edge_borders none

# change borders
bindsym $mod+u border none
bindsym $mod+y border pixel 1
bindsym $mod+n border normal

# Font for window titles. Will also be used by the bar unless a different font
# is used in the bar {} block below.
font xft:Noto Sans 10

# Use Mouse+$mod to drag floating windows
floating_modifier $mod

# start a terminal
# bindsym $mod+Return exec terminal
bindsym $mod+Return exec terminator

# kill focused window
bindsym $mod+Shift+q kill

# start program launcher
bindsym $mod+Shift+x exec --no-startup-id dmenu-frecency
bindsym $mod+d exec --no-startup-id "rofi -combi-modi window,drun,run,ssh -show combi -modi combi"
# launch categorized menu
bindsym $mod+z exec --no-startup-id morc_menu

################################################################################################
## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! ##
################################################################################################

exec --no-startup-id volumeicon
#bindsym $mod+Ctrl+m exec terminal -e 'alsamixer'
#exec --no-startup-id pulseaudio
#exec --no-startup-id pa-applet
bindsym $mod+Ctrl+m exec pavucontrol

################################################################################################

# Screen brightness controls
bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'"
bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'"

# Start Applications
bindsym $mod+Ctrl+b exec --no-startup-id terminal -e 'bmenu'
bindsym $mod+F2 exec --no-startup-id firefox
bindsym $mod+F3 exec --no-startup-id pcmanfm
# bindsym $mod+F3 exec ranger
# bindsym $mod+Shift+F3 exec gksu nautilus
bindsym $mod+F5 exec terminator -e 'htop'
bindsym $mod+t exec --no-startup-id pkill compton
bindsym $mod+Ctrl+t exec --no-startup-id compton -b
bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'"
bindsym Print exec --no-startup-id ~/.config/scrot/i3-scrot
bindsym $mod+Print exec --no-startup-id ~/.config/scrot/i3-scrot -w
bindsym Shift+Print exec --no-startup-id ~/.config/scrot/i3-scrot -s
bindsym $mod+Shift+h exec xdg-open /usr/share/doc/aosc/i3_help.pdf
bindsym $mod+Ctrl+x --release exec --no-startup-id xkill

# focus_follows_mouse no

# change focus
bindsym $mod+j focus left
bindsym $mod+k focus down
bindsym $mod+l focus up
bindsym $mod+odiaeresis focus right

# alternatively, you can use the cursor keys:
bindsym $mod+Left focus left
bindsym $mod+Down focus down
bindsym $mod+Up focus up
bindsym $mod+Right focus right

# move focused window
bindsym $mod+Shift+j move left
bindsym $mod+Shift+k move down
bindsym $mod+Shift+l move up
bindsym $mod+Shift+odiaeresis move right

# alternatively, you can use the cursor keys:
bindsym $mod+Shift+Left move left
bindsym $mod+Shift+Down move down
bindsym $mod+Shift+Up move up
bindsym $mod+Shift+Right move right

# workspace back and forth (with/without active container)
workspace_auto_back_and_forth yes
bindsym $mod+b workspace back_and_forth
bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth

# split orientation
bindsym $mod+h split h;exec notify-send 'tile horizontally'
bindsym $mod+v split v;exec notify-send 'tile vertically'
bindsym $mod+q split toggle

# toggle fullscreen mode for the focused container
bindsym $mod+f fullscreen toggle

# change container layout (stacked, tabbed, toggle split)
bindsym $mod+s layout stacking
bindsym $mod+w layout tabbed
bindsym $mod+e layout toggle split

# toggle tiling / floating
bindsym $mod+Shift+space floating toggle

# change focus between tiling / floating windows
bindsym $mod+space focus mode_toggle

# toggle sticky
bindsym $mod+Shift+s sticky toggle

# focus the parent container
bindsym $mod+a focus parent

# move the currently focused window to the scratchpad
bindsym $mod+Shift+minus move scratchpad

# Show the next scratchpad window or hide the focused scratchpad window.
# If there are multiple scratchpad windows, this command cycles through them.
bindsym $mod+minus scratchpad show

#navigate workspaces next / previous
bindsym $mod+Ctrl+Right workspace next
bindsym $mod+Ctrl+Left workspace prev

# Workspace names
# to display names or symbols instead of plain workspace numbers you can use
# something like: set $ws1 1:mail
#                 set $ws2 2:
set $ws1 1
set $ws2 2
set $ws3 3
set $ws4 4
set $ws5 5
set $ws6 6
set $ws7 7
set $ws8 8

# switch to workspace
bindsym $mod+1 workspace $ws1
bindsym $mod+2 workspace $ws2
bindsym $mod+3 workspace $ws3
bindsym $mod+4 workspace $ws4
bindsym $mod+5 workspace $ws5
bindsym $mod+6 workspace $ws6
bindsym $mod+7 workspace $ws7
bindsym $mod+8 workspace $ws8

# Move focused container to workspace
bindsym $mod+Ctrl+1 move container to workspace $ws1
bindsym $mod+Ctrl+2 move container to workspace $ws2
bindsym $mod+Ctrl+3 move container to workspace $ws3
bindsym $mod+Ctrl+4 move container to workspace $ws4
bindsym $mod+Ctrl+5 move container to workspace $ws5
bindsym $mod+Ctrl+6 move container to workspace $ws6
bindsym $mod+Ctrl+7 move container to workspace $ws7
bindsym $mod+Ctrl+8 move container to workspace $ws8

# Move to workspace with focused container
bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1
bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2
bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3
bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4
bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5
bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6
bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7
bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8

# Open applications on specific workspaces
# assign [class="Thunderbird"] $ws1
# assign [class="Pale moon"] $ws2
# assign [class="Pcmanfm"] $ws3
# assign [class="Skype"] $ws5

# Open specific applications in floating mode
for_window [title="alsamixer"] floating enable border pixel 1
for_window [class="Calamares"] floating enable border normal
for_window [class="Clipgrab"] floating enable
for_window [title="File Transfer*"] floating enable
for_window [class="Galculator"] floating enable border pixel 1
for_window [class="GParted"] floating enable border normal
for_window [title="i3_help"] floating enable sticky enable border normal
for_window [class="Lightdm-gtk-greeter-settings"] floating enable
for_window [class="Lxappearance"] floating enable sticky enable border normal
for_window [title="MuseScore: Play Panel"] floating enable
for_window [class="Nitrogen"] floating enable sticky enable border normal
for_window [class="Oblogout"] fullscreen enable
for_window [class="Pavucontrol"] floating enable
for_window [class="qt5ct"] floating enable sticky enable border normal
for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal
for_window [class="Simple-scan"] floating enable border normal
for_window [class="(?i)System-config-printer.py"] floating enable border normal
for_window [class="Skype"] floating enable border normal
for_window [class="Timeset-gui"] floating enable border normal
for_window [class="(?i)virtualbox"] floating enable border normal
for_window [class="Xfburn"] floating enable

# switch to workspace with urgent window automatically
for_window [urgent=latest] focus

# reload the configuration file
bindsym $mod+Shift+c reload

# restart i3 inplace (preserves your layout/session, can be used to upgrade i3)
bindsym $mod+Shift+r restart

# exit i3 (logs you out of your X session)
bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'"

# Set shut down, restart and locking features
bindsym $mod+0 mode "$mode_system"
set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown
mode "$mode_system" {
    bindsym l exec --no-startup-id i3exit lock, mode "default"
    bindsym s exec --no-startup-id i3exit suspend, mode "default"
    bindsym u exec --no-startup-id i3exit switch_user, mode "default"
    bindsym e exec --no-startup-id i3exit logout, mode "default"
    bindsym h exec --no-startup-id i3exit hibernate, mode "default"
    bindsym r exec --no-startup-id i3exit reboot, mode "default"
    bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default"

    # exit system mode: "Enter" or "Escape"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}

# Resize window (you can also use the mouse for that)
bindsym $mod+r mode "resize"
mode "resize" {
        # These bindings trigger as soon as you enter the resize mode
        # Pressing left will shrink the window’s width.
        # Pressing right will grow the window’s width.
        # Pressing up will shrink the window’s height.
        # Pressing down will grow the window’s height.
        bindsym j resize shrink width 5 px or 5 ppt
        bindsym k resize grow height 5 px or 5 ppt
        bindsym l resize shrink height 5 px or 5 ppt
        bindsym odiaeresis resize grow width 5 px or 5 ppt

        # same bindings, but for the arrow keys
        bindsym Left resize shrink width 5 px or 5 ppt
        bindsym Down resize grow height 5 px or 5 ppt
        bindsym Up resize shrink height 5 px or 5 ppt
        bindsym Right resize grow width 5 px or 5 ppt

        # exit resize mode: Enter or Escape
        bindsym Return mode "default"
        bindsym Escape mode "default"
}

# Lock screen
bindsym $mod+9 exec --no-startup-id i3lock -c 000000

# Autostart applications
# exec --no-startup-id /usr/lib/gnome-settings-daemon/gnome-settings-daemon-localeexec
exec --no-startup-id /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1
exec --no-startup-id nitrogen --restore; sleep 1; compton -b
exec --no-startup-id nm-applet
exec --no-startup-id xdg-user-dirs-update

#exec --no-startup-id conky -c ~/.config/conky/conky_cherubim
#exec --no-startup-id conky -c ~/.config/conky/conky_i3shortcuts
#exec --no-startup-id conky -c ~/.config/conky/conky_weather
#exec --no-startup-id conky -c ~/.config/conky/conky_rss
#exec --no-startup-id conky -c ~/.config/conky/conky_status
#exec --no-startup-id conky -c ~/.config/conky/conky_webmonitor
exec --no-startup-id ~/.config/conky/autoconky.py
exec --no-startup-id fcitx -d
exec --no-startup-id guake
# exec --no-startup-id blueman
# exec --no-startup-id xautolock -time 10 -locker blurlock


bar {
        status_command i3blocks
        position top

        colors {
            background #071E31

            focused_workspace #3685e2 #3685e2 #fafafa
            active_workspace #5294e2 #5294e2 #fafafa
            inactive_workspace #404552 #404552 #fafafa
            urgent_workspace #ff5757 #ff5757 #fafafa
        }
}




# Start i3bar to display a workspace bar (plus the system information i3status if available)
#bar {
#   status_command i3status
#   position top

## please set your primary output first. Example: 'xrandr --output eDP1 --primary'
#   tray_output primary
#   tray_output eDP1
#
#   bindsym button4 nop
#   bindsym button5 nop
#   font xft:Noto Sans 10.5
#   strip_workspace_numbers yes

#   colors {
#   background $transparent
#       background #2B2C2B
#                statusline #F9FAF9
#       separator  #454947
#
#                                  border  backgr. text
#       focused_workspace  #F9FAF9 #16A085 #2B2C2B
#       active_workspace   #595B5B #353836 #FDF6E3
#       inactive_workspace #595B5B #353836 #EEE8D5
#       urgent_workspace   #16A085 #FDF6E3 #E5201D
#   }
#}

# hide/unhide i3status bar
bindsym $mod+m bar mode toggle

# Theme colors
# class                 border  backgr. text    indic.  child_border
client.focused          #808280 #808280 #80FFF9 #FDF6E3
client.focused_inactive #434745 #434745 #16A085 #454948
client.unfocused        #434745 #434745 #16A085 #454948
client.urgent           #CB4B16 #FDF6E3 #16A085 #268BD2
client.placeholder      #000000 #0c0c0c #ffffff #000000 #0c0c0c

client.background       #2B2C2B

#############################
### settings for i3-gaps: ###
#############################

# Set inner/outer gaps
#gaps inner 10
#gaps outer -4

# Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size.
# gaps inner|outer current|all set|plus|minus <px>
# gaps inner all set 10
# gaps outer all plus 5

# Smart gaps (gaps used if only more than one container on the workspace)
#smart_gaps on

# Smart borders (draw borders around container only if it is not the only container on this workspace) 
# on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0)
#smart_borders on

# Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outer/inner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces.
#set $mode_gaps Gaps: (o) outer, (i) inner
#set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global)
#set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global)
#bindsym $mod+Shift+g mode "$mode_gaps"

#mode "$mode_gaps" {
#        bindsym o      mode "$mode_gaps_outer"
#        bindsym i      mode "$mode_gaps_inner"
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}
#mode "$mode_gaps_inner" {
#        bindsym plus  gaps inner current plus 5
#        bindsym minus gaps inner current minus 5
#        bindsym 0     gaps inner current set 0
#
#
#        bindsym Shift+plus  gaps inner all plus 5
#        bindsym Shift+minus gaps inner all minus 5
#        bindsym Shift+0     gaps inner all set 0
#
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}
#mode "$mode_gaps_outer" {
#        bindsym plus  gaps outer current plus 5
#        bindsym minus gaps outer current minus 5
#        bindsym 0     gaps outer current set 0
#
#        bindsym Shift+plus  gaps outer all plus 5
#        bindsym Shift+minus gaps outer all minus 5
#        bindsym Shift+0     gaps outer all set 0
#
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}

~/.dmenurc

#
# ~/.dmenurc
#

## define the font for dmenu to be used
DMENU_FN="Noto-10.5"

## background colour for unselected menu-items
DMENU_NB="#2B2C2B"

## textcolour for unselected menu-items
DMENU_NF="#F9FAF9"

## background colour for selected menu-items
DMENU_SB="#16A085"

## textcolour for selected menu-items
DMENU_SF="#F9FAF9"

## command for the terminal application to be used:
TERMINAL_CMD="terminal -e"

## export our variables
DMENU_OPTIONS="-fn $DMENU_FN -nb $DMENU_NB -nf $DMENU_NF -sf $DMENU_SF -sb $DMENU_SB"

~/.dmrc

[Desktop]
Language=zh_CN.utf8
Session=i3

~/.i3blocks.conf

# i3blocks config file
#
# Please see man i3blocks for a complete reference!
# The man page is also hosted at http://vivien.github.io/i3blocks
#
# List of valid properties:
#
# align
# color
# command
# full_text
# instance
# interval
# label
# min_width
# name
# separator
# separator_block_width
# short_text
# signal
# urgent

# Global properties
#
# The top properties below are applied to every block, but can be overridden.
# Each block command defaults to the script name to avoid boilerplate.
command=~/.config/blocks/$BLOCK_NAME
separator_block_width=15
markup=none


# Generic media player support
#
# This displays "ARTIST - SONG" if a music is playing.
# Supported players are: spotify, vlc, audacious, xmms2, mplayer, and others.

[bandwidth]
instance=wlp3s0;in
color=#FFD700
label=
interval=3
separator=false

[bandwidth]
instance=wlp3s0;out
color=#FFD700
label=
interval=3
separator=false

[network]
label=
instance=enp4s0f2
interval=10
separator=false

[ssid]
label=
color=#00BFFF
interval=60
separator=false

[network]
label=
color=#00ff00
instance=wlp3s0
interval=10
separator=false

[ip-address]
label=
color=#DB7093
interval=60

[mediaplayer]
#instance=spotify
label=🎵
color=#C62F2F
interval=5
signal=10

[audio]
label=
color=#87CEEB
interval=5
separator=false

[microphone]
label=
color=#87CEEB
interval=5

[packages]
label=
interval=300

[space]
label=
color=#bd93f9
interval=30

[bluetooth]
label=
color=#3365A4
interval=10

[temperature]
instance=Core
label=
color=#FFA500
interval=5

[load]
label=
color=#32CD32
interval=10
separator=false

[cpu]
label=
color=#008DF6
interval=2

[memory]
label=
color=#F0B28A
instance=mem;free
interval=30

[memory]
label=
instance=swap;total
interval=30
#[load_average]
#interval=10

# Battery indicator
#
# The battery instance defaults to 0.
#[battery]
#label=BAT
#label=⚡
#instance=1
#interval=30

[battery]
command=~/.config/blocks/battery/battery
markup=pango
interval=30


# Date Time
#
[time]
label=
command=date '+%Y-%m-%d  %H:%M:%S'
color=#1DE9B6
interval=3

[user]
label=
color=#90CAF9
interval=once

# Key indicators
#
# Add the following bindings to i3 config file:
#
# bindsym --release Caps_Lock exec pkill -SIGRTMIN+11 i3blocks
# bindsym --release Num_Lock  exec pkill -SIGRTMIN+11 i3blocks
#[keyindicator]
#instance=CAPS
#interval=once
#signal=11

#[keyindicator]
#instance=NUM
#interval=once
#signal=11
aosc arch fedora i3 i3-wm linux wm 亮度

装上i3-wm后解决亮度调节

linux屏幕亮度调节解决办法

修改grub

sudo vi /etc/default/grub

修改内容

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"

更新grub.conf

sudo update-grub

设置亮度

echo 500 > /sys/class/backlight/intel_backlight/brightness
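
Note that a plain `sudo echo 500 > .../brightness` fails, because the redirection is performed by your non-root shell; pipe through `sudo tee` instead, or write the sysfs file from a small script run as root. A minimal Python sketch of the latter (run with sudo; the path assumes the same intel_backlight device as in the command above):

# minimal sketch, run as root; path assumes an intel_backlight device as above
BRIGHTNESS = "/sys/class/backlight/intel_backlight/brightness"
MAX_BRIGHTNESS = "/sys/class/backlight/intel_backlight/max_brightness"

with open(MAX_BRIGHTNESS) as f:
    print("max brightness:", f.read().strip())   # stay at or below this value

with open(BRIGHTNESS, "w") as f:
    f.write("500")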
cookie error tomcat

解决高版本tomcat cookie的问题

解决高版本tomcat cookie的问题

解决高版本tomcat cookie的问题

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- The contents of this file will be loaded for each web application -->
<Context>

    <!-- Default set of monitored resources. If one of these changes, the    -->
    <!-- web application will be reloaded.                                   -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <WatchedResource>WEB-INF/tomcat-web.xml</WatchedResource>
    <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>

    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
    <!--
    <Manager pathname="" />
    -->
<CookieProcessor className="org.apache.tomcat.util.http.LegacyCookieProcessor"/>
</Context>
linux tidb

TiDB Binary 部署方案详解(备份)

TiDB Binary 部署方案详解


title: TiDB Binary 部署方案详解 category: deployment


TiDB Binary 部署指导

概述

一个完整的 TiDB 集群包括 PD,TiKV 以及 TiDB。启动顺序依次是 PD,TiKV 以及 TiDB。在关闭数据库服务时,请按照启动的相反顺序进行逐一关闭服务。

阅读本章前,请先确保阅读 TiDB 整体架构部署建议

本文档描述了三种场景的二进制部署方式:

TiDB 组件及默认端口

1. TiDB 数据库组件(必装)

组件 默认端口 协议 说明
ssh 22 TCP sshd 服务
TiDB 4000 TCP 应用及 DBA 工具访问通信端口
TiDB 10080 TCP TiDB 状态信息上报通信端口
TiKV 20160 TCP TiKV 通信端口
PD 2379 TCP 提供 TiDB 和 PD 通信端口
PD 2380 TCP PD 集群节点间通信端口

2. TiDB 数据库组件(选装)

组件 默认端口 协议 说明
Prometheus 9090 TCP Prometheus 服务通信端口
Pushgateway 9091 TCP TiDB, TiKV, PD 监控聚合和上报端口
Node_exporter 9100 TCP TiDB 集群每个节点的系统信息上报通信端口
Grafana 3000 TCP Web 监控服务对外服务和客户端(浏览器)访问端口
alertmanager 9093 TCP 告警服务端口

TiDB 安装前系统配置与检查

操作系统检查

配置 描述
支持平台 请查看和了解系统部署建议
文件系统 TiDB 部署环境推荐使用 ext4 文件系统
Swap 空间 TiDB 部署推荐关闭 Swap 空间
Disk Block Size 设置系统磁盘 Block 大小为 4096

网络与防火墙

配置 描述
防火墙 / 端口 请查看 TiDB 所需端口在各个节点之间是否能正常访问

操作系统参数

配置 说明
Nice Limits 系统用户 tidb 的 nice 值设置为缺省值 0
min_free_kbytes sysctl.conf 中关于 vm.min_free_kbytes 的设置需要足够高
User Open Files Limit 对数据库管理员 tidb 的 open 文件数设置为 1000000
System Open File Limits 对系统的 open 文件数设置为 1000000
User Process Limits limits.conf 配置的 tidb 用户的 nproc 为 4096
Address Space Limits limits.conf 配置的 tidb 用户空间为 unlimited
File Size Limits limits.conf 配置的 tidb 用户 fsize 为 unlimited
Disk Readahead 设置数据磁盘 readahead 至少为 4096
NTP 服务 为各个节点配置 NTP 时间同步服务
SELinux 关闭各个节点的 SELinux 服务
CPU Frequency Scaling TiDB 推荐打开 CPU 超频
Transparent Hugepages 针对 Red Hat 7+ 和 CentOS 7+ 系统, Transparent Hugepages 必须被设置为 always
I/O Scheduler 设置数据磁盘 I/O 调度器为 deadline 模式
vm.swappiness 设置 vm.swappiness = 0

注意:请联系系统管理员进行操作系统参数调整。

数据库运行用户设置

配置 说明
LANG 环境设定 设置 LANG = en_US.UTF8
TZ 时区设定 确保所有节点的时区 TZ 设置为一样的值

创建系统数据库运行账户

在 Linux 环境下,在每台安装节点上创建 tidb 作为数据库系统运行用户并设置集群节点之间的 ssh 互信访问。以下是一个示例,具体创建用户与开通 ssh 互信访问请联系系统管理员进行。

# useradd tidb
# usermod -a -G tidb tidb
# su - tidb
Last login: Tue Aug 22 12:06:23 CST 2017 on pts/2
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tidb/.ssh/id_rsa):
Created directory '/home/tidb/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tidb/.ssh/id_rsa.
Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
The key fingerprint is:
5a:00:e6:df:9e:40:25:2c:2d:e2:6e:ee:74:c6:c3:c1 tidb@t001
The key's randomart image is:
+--[ RSA 2048]----+
|    oo. .        |
|  .oo.oo         |
| . ..oo          |
|  .. o o         |
| .  E o S        |
|  oo . = .       |
| o. * . o        |
| ..o .           |
| ..              |
+-----------------+

-bash-4.2$ cd .ssh
-bash-4.2$ cat id_rsa.pub >> authorized_keys
-bash-4.2$ chmod 644 authorized_keys
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.100

下载官方 Binary

TiDB 官方提供了支持 Linux 版本的二进制安装包,官方推荐使用 Redhat 7+、CentOS 7+ 以上版本的操作系统,不推荐在 Redhat 6、CentOS 6 上部署 TiDB 集群。

操作系统:Linux ( Redhat 7+,CentOS 7+ )

执行步骤:

# 下载压缩包

wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256

# 检查文件完整性,返回 ok 则正确
sha256sum -c tidb-latest-linux-amd64.sha256

# 解开压缩包
tar -xzf tidb-latest-linux-amd64.tar.gz
cd tidb-latest-linux-amd64

单节点方式快速部署

在获取 TiDB 二进制文件包后,我们可以在单机上面,运行和测试 TiDB 集群,请按如下步骤依次启动 PD,TiKV,TiDB。

注意:以下启动各个应用程序组件实例的时候,请选择后台启动,避免前台失效后程序自动退出。

步骤一. 启动 PD:

./bin/pd-server --data-dir=pd \
                --log-file=pd.log

步骤二. 启动 TiKV:

./bin/tikv-server --pd="127.0.0.1:2379" \
                  --data-dir=tikv \
                  --log-file=tikv.log

步骤三. 启动 TiDB:

./bin/tidb-server --store=tikv \
                  --path="127.0.0.1:2379" \
                  --log-file=tidb.log

步骤四. 使用 MySQL 客户端连接 TiDB:

mysql -h 127.0.0.1 -P 4000 -u root -D test
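
TiDB speaks the MySQL wire protocol, so any MySQL driver works as well. A minimal sketch with PyMySQL (assumes `pip install pymysql`; host and port match the single-node setup above):

import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="", database="test")
with conn.cursor() as cur:
    cur.execute("SELECT tidb_version()")   # TiDB built-in that reports the server version
    print(cur.fetchone()[0])
conn.close()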

功能性测试部署

如果只是对 TiDB 进行测试,并且机器数量有限,我们可以只启动一台 PD 测试整个集群。

这里我们使用四个节点,部署一个 PD,三个 TiKV,以及一个 TiDB,各个节点以及所运行服务信息如下:

Name Host IP Services
node1 192.168.199.113 PD1, TiDB
node2 192.168.199.114 TiKV1
node3 192.168.199.115 TiKV2
node4 192.168.199.116 TiKV3

请按如下步骤依次启动 PD 集群,TiKV 集群以及 TiDB:

注意:以下启动各个应用程序组件实例的时候,请选择后台启动,避免前台失效后程序自动退出。

步骤一. 在 node1 启动 PD:

./bin/pd-server --name=pd1 \
                --data-dir=pd1 \
                --client-urls="http://192.168.199.113:2379" \
                --peer-urls="http://192.168.199.113:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380" \
                --log-file=pd.log

步骤二. 在 node2,node3,node4 启动 TiKV:

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.114:20160" \
                  --data-dir=tikv1 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.115:20160" \
                  --data-dir=tikv2 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.116:20160" \
                  --data-dir=tikv3 \
                  --log-file=tikv.log

步骤三. 在 node1 启动 TiDB:

./bin/tidb-server --store=tikv \
                  --path="192.168.199.113:2379" \
                  --log-file=tidb.log

步骤四. 使用 MySQL 客户端连接 TiDB:

mysql -h 192.168.199.113 -P 4000 -u root -D test

多节点集群模式部署

在生产环境中,我们推荐多节点部署 TiDB 集群,首先请参考部署建议。

这里我们使用六个节点,部署三个 PD,三个 TiKV,以及一个 TiDB,各个节点以及所运行服务信息如下:

Name Host IP Services
node1 192.168.199.113 PD1, TiDB
node2 192.168.199.114 PD2
node3 192.168.199.115 PD3
node4 192.168.199.116 TiKV1
node5 192.168.199.117 TiKV2
node6 192.168.199.118 TiKV3

请按如下步骤依次启动 PD 集群,TiKV 集群以及 TiDB:

步骤一 . 在 node1,node2,node3 依次启动 PD:

./bin/pd-server --name=pd1 \
                --data-dir=pd1 \
                --client-urls="http://192.168.199.113:2379" \
                --peer-urls="http://192.168.199.113:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                -L "info" \
                --log-file=pd.log

./bin/pd-server --name=pd2 \
                --data-dir=pd2 \
                --client-urls="http://192.168.199.114:2379" \
                --peer-urls="http://192.168.199.114:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                --join="http://192.168.199.113:2379" \
                -L "info" \
                --log-file=pd.log

./bin/pd-server --name=pd3 \
                --data-dir=pd3 \
                --client-urls="http://192.168.199.115:2379" \
                --peer-urls="http://192.168.199.115:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                --join="http://192.168.199.113:2379" \
                -L "info" \
                --log-file=pd.log

步骤二. 在 node4,node5,node6 启动 TiKV:

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.116:20160" \
                  --data-dir=tikv1 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.117:20160" \
                  --data-dir=tikv2 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.118:20160" \
                  --data-dir=tikv3 \
                  --log-file=tikv.log

步骤三. 在 node1 启动 TiDB:

./bin/tidb-server --store=tikv \
                  --path="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --log-file=tidb.log

步骤四. 使用 MySQL 客户端连接 TiDB:

mysql -h 192.168.199.113 -P 4000 -u root -D test

注意:在生产环境中启动 TiKV 时,建议使用 --config 参数指定配置文件路径,如果不设置这个参数,TiKV 不会读取配置文件。同样,在生产环境中部署 PD 时,也建议使用 --config 参数指定配置文件路径。

TiKV 调优参见:TiKV 性能参数调优

注意:如果使用 nohup 在生产环境中启动集群,需要将启动命令放到一个脚本文件里面执行,否则会出现因为 Shell 退出导致 nohup 启动的进程也收到异常信号退出的问题,具体参考进程异常退出。

TiDB 监控和告警环境安装

安装部署监控和告警环境的系统信息如下:

Name Host IP Services
node1 192.168.199.113 node_export, pushgateway, Prometheus, Grafana
node2 192.168.199.114 node_export
node3 192.168.199.115 node_export
node4 192.168.199.116 node_export

获取二进制包

# 下载压缩包
wget https://github.com/prometheus/prometheus/releases/download/v1.5.2/prometheus-1.5.2.linux-amd64.tar.gz
wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.2/node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
wget https://grafanarel.s3.amazonaws.com/builds/grafana-4.1.2-1486989747.linux-x64.tar.gz
wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-amd64.tar.gz

# 解开压缩包
tar -xzf prometheus-1.5.2.linux-amd64.tar.gz
tar -xzf node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
tar -xzf grafana-4.1.2-1486989747.linux-x64.tar.gz
tar -xzf pushgateway-0.3.1.linux-amd64.tar.gz

启动监控服务

在 node1,node2,node3,node4 启动 node_exporter

cd node_exporter-0.14.0-rc.2.linux-amd64

#启动 node_exporter 服务
./node_exporter --web.listen-address=":9100" \
    --log.level="info"

在 node1 启动 pushgateway:

cd pushgateway-0.3.1.linux-amd64

#启动 pushgateway 服务
./pushgateway \
    --log.level="info" \
    --web.listen-address=":9091"

在 node1 启动 Prometheus:

cd prometheus-1.5.2.linux-amd64

# 修改配置文件

vi prometheus.yml

...
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).
  external_labels:
    cluster: 'test-cluster'
    monitor: "prometheus"

scrape_configs:
  - job_name: 'overwritten-cluster'
    scrape_interval: 3s
    honor_labels: true # don't overwrite job & instance labels
    static_configs:
      - targets: ['192.168.199.113:9091']

  - job_name: "overwritten-nodes"
    honor_labels: true # don't overwrite job & instance labels
    static_configs:
    - targets:
      - '192.168.199.113:9100'
      - '192.168.199.114:9100'
      - '192.168.199.115:9100'
      - '192.168.199.116:9100'
...

# 启动 Prometheus:
./prometheus \
    --config.file="/data1/tidb/deploy/conf/prometheus.yml" \
    --web.listen-address=":9090" \
    --web.external-url="http://192.168.199.113:9090/" \
    --log.level="info" \
    --storage.local.path="/data1/tidb/deploy/data.metrics" \
    --storage.local.retention="360h0m0s"

在 node1 启动 Grafana:

cd grafana-4.1.2-1486989747.linux-x64

#编辑配置文件

vi grafana.ini

...

# The http port  to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain = 192.168.199.113

...

#启动 Grafana 服务
./grafana-server \
    --homepath="/data1/tidb/deploy/opt/grafana" \
    --config="/data1/tidb/deploy/opt/grafana/conf/grafana.ini"
10kb 500 error nginx tomcat

nginx+tomcat上传图片大于10kb就500的异常

nginx+tomcat上传图片大于10kb就500的异常

= = Uploads work perfectly fine on my local machine

but blow up on the test server

= = ugh

At first I assumed Tomcat was the problem, so I added

maxPostSize="-1" maxHttpHeaderSize="1024000"

which made no difference at all = =

After a lot of googling and some testing with curl

it turned out the problem was actually on the nginx side

Add the following inside the location block:

proxy_buffering off;
client_body_buffer_size 10240K;

client_body_buffer_size defaults to twice the operating system page size, i.e. 8k or 16k; request bodies larger than that are buffered to a temporary file on disk.

= =

deepin linux stalonetray wine 异常 托盘 无响应

deepin wineqq tim 微信 托盘异常无反应解决= =

stalonetray解决wine程序在deepin下托盘异常

上代码~~~

sudo apt install stalonetray

然后

nano ~/.stalonetrayrc

再然后~~

# background 
background "#777777"

# decorations
# 可选值: all, title, border, none
decorations none

# display # as usual
# dockapp_mode # set dockapp mode, which can be either simple (for
# e.g. OpenBox, wmaker for WindowMaker, or none
# (default). NEW in 0.8.
dockapp_mode none
# fuzzy_edges [] # enable fuzzy edges and set fuzziness level. level
# can be from 0 (disabled) to 3; this setting works
# with tinting and/or transparent and/or pixmap
# backgrounds
fuzzy_edges 0

# geometry 
geometry 1x1+0+0

# grow_gravity # 可选值有:N, S, E, W, NW, NE, SW, SE; 托盘图标的增长方式。
grow_gravity NW

# icon_gravity # 托盘图标的方向: NW, NE, SW, SE
icon_gravity NW

# icon_size # spe
icon_size 24

# log_level # controls the amount of logging output, level can
# be err (default), info, or trace (enabled only
# when stalonetray configured with --enable-debug)
# NEW in 0.8.
log_level err

# kludges kludge[,kludge] # enable specific kludges to work around
# non-conforming WMs and/or stalonetray bugs.
# NEW in 0.8. Argument is a
# comma-separated list of
# * fix_window_pos - fix tray window position on
# erroneous moves by WM
# * force_icon_size - ignore resize events on all
# icons; force their size to be equal to
# icon_size
# * use_icon_hints - use icon window hints to
# dtermine icon size

# max_geometry # maximal tray dimensions; 0 in width/height means
# no limit
max_geometry 0x0

# no_shrink [] # disables shrink-back mode
no_shrink false

# parent_bg [] # whether to use pseudo-transparency
# (looks better when reparented into smth like FvwmButtons)
parent_bg false

# pixmap_bg <path_to_xpm> # use pixmap from specified xpm file for (tiled) background
# pixmap_bg /home/user/.stalonetraybg.xpm

# scrollbars # enable/disable scrollbars; mode is either
# vertical, horizontal, all or none (default)
# NEW in 0.8.
scrollbars none

# scrollbars-size # scrollbars step in pixels; default is slot_size / 4
# scrollbars-step 8

# scrollbars-step # scrollbars step in pixels; default is slot_size / 2
# scrollbars-step 32

# slot_size # specifies size of icon slot, defaults to
# icon_size NEW in 0.8.

# skip_taskbar [] # hide tray`s window from the taskbar
skip_taskbar true

# sticky [] # make a tray`s window sticky across the
# desktops/pages
sticky true

# tint_color # set tinting color
tint_color white

# tint_level # set tinting level; level ranges from 0 (disabled)
# to 255
tint_level 0

# transparent [] # whether to use root-transparency (background
# image must be set with Esetroot or compatible utility)
transparent false

# vertical [] # whether to use vertical layout (horisontal layout
# is used by default)
vertical false

# window_layer # set the EWMH-compatible window layer; one of:
# bootom, normal, top
window_layer normal

# window_strut # enable/disable window struts for tray window (to
# avoid converting of tray window by maximized
# windows); mode defines to which screen border tray
# will be attached; it can be either top, bottom,
# left, right, none or auto (default)
window_strut auto

# window_type # set the EWMH-compatible window type; one of:
# desktop, dock, normal, toolbar, utility
window_type dock

# xsync [] # whether to operate on X server synchronously (SLOOOOW)
xsync false

= = 然而比较蛋疼

python threading timer 定时器

python 定时器 ,还是很好玩的

python起线程定时器

上代码!

from threading import Timer

##循环loop定时器
class LoopTimer(Timer):
    def __init__(self, interval, function, args=[], kwargs={}):
        Timer.__init__(self, interval, function, args, kwargs)
    def run(self):
        while True:
            self.finished.wait(self.interval)
            if self.finished.is_set():
                self.finished.set()
                break
            self.function(*self.args, **self.kwargs)
    #定时执行注解
    def delayed(seconds):
        def decorator(f):
            def wrapper(*args, **kargs):
                t = LoopTimer(seconds, f, args, kargs)
                t.start()
            return wrapper
        return decorator

#单次定时器
class OneTimer(Timer):
    def __init__(self, interval, function, args=[], kwargs={}):
        Timer.__init__(self, interval, function, args, kwargs)

注意

要用的时候

def  a():
    print("aaa")


t = LoopTimer(1,a)

t.start()

#函数要不加括号
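
The delayed helper defined inside LoopTimer can also be used as a decorator; a quick sketch using only the class above:

@LoopTimer.delayed(2)
def tick():
    print("tick")

tick()   # returns immediately; tick() then runs every 2 seconds in a background thread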
linux master password postgresql 主从 安装 配置

从入门到差点放弃,postgresql极简安装+主从配置

postgresql安装,配置主从

安装

解压后

/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

账户设置

创建用户

CREATE USER oschina WITH PASSWORD 'oschina123';
CREATE ROLE adam WITH LOGIN CREATEDB PASSWORD '654321';  -- remember to include LOGIN, otherwise the role cannot connect

改密码

ALTER ROLE davide WITH PASSWORD 'hu8jmn3';

让一个角色能够创建其他角色和新的数据库:

ALTER ROLE miriam CREATEROLE CREATEDB;

查看所有数据库

psql -l 

删除数据库

dropdb mydb

使用数据库

psql mydb

创建数据库

createdb mydb

配置主从

1.主创建同步账号

CREATE USER replica replication LOGIN CONNECTION LIMIT 3 ENCRYPTED PASSWORD 'replica';

2,postgresql.conf

wal_level = hot_standby  # 这个是设置主为wal的主机

max_wal_senders = 32 # 这个设置了可以最多有几个流复制连接,差不多有几个从,就设置几个
wal_keep_segments = 256 # 设置流复制保留的最多的xlog数目
wal_sender_timeout = 60s # 设置流复制主机发送数据的超时时间
max_connections = 100 # 这个设置要注意下,从库的max_connections必须要大于主库的

3,

pg_hba.conf

# 'all' does not match replication connections, so pg_basebackup needs its own entry
host    all             all         0.0.0.0/0       md5
host    replication     replica     0.0.0.0/0       md5

4,

pg_basebackup -F p --progress -D /data/replica -h 192.168.1.12 -p 5432 -U replica --password 

5,复制recovery.conf

6,re的内容

standby_mode = on  # 这个说明这台机器为从库
primary_conninfo = 'host=10.12.12.10 port=5432 user=replica password=replica'  # 这个说明这台机器对应主库的信息

recovery_target_timeline = 'latest' # 这个说明这个流复制同步到最新的数据

postgresql.conf

max_connections = 1000 # 一般查多于写的应用从库的最大连接数要比较大

hot_standby = on  # 说明这台机器不仅仅是用于数据归档,也用于数据查询
max_standby_streaming_delay = 30s # 数据流备份的最大延迟时间
wal_receiver_status_interval = 1s  # 多久向主报告一次从的状态,当然从每次数据复制都会向主报告状态,这里只是设置最长的间隔时间
hot_standby_feedback = on # 如果有错误的数据复制,是否向主进行反馈

测试成果

主的机器上sender进程 从的机器上receiver进程

主的机器上

select * from pg_stat_replication;
pid              | 8467       # sender的进程
usesysid         | 44673      # 复制的用户id
usename          | replica    # 复制的用户用户名
application_name | walreceiver  
client_addr      | 10.12.12.12 # 复制的客户端地址
client_hostname  |
client_port      | 55804  # 复制的客户端端口
backend_start    | 2015-05-12 07:31:16.972157+08  # 这个主从搭建的时间
backend_xmin     |
state            | streaming  # 同步状态 startup: 连接中、catchup: 同步中、streaming: 同步
sent_location    | 3/CF123560 # Master传送WAL的位置
write_location   | 3/CF123560 # Slave接收WAL的位置
flush_location   | 3/CF123560 # Slave同步到磁盘的WAL位置
replay_location  | 3/CF123560 # Slave同步到数据库的WAL位置
sync_priority    | 0  #同步Replication的优先度
                      0: 异步、1~?: 同步(数字越小优先度越高)
sync_state       | async  # 有三个值,async: 异步、sync: 同步、potential: 虽然现在是异步模式,但是有可能升级到同步模式

最后注意几个坑

  • systemd 启动的话 配置文件可能在etc下

  • hba配置文件放开ip

  • 创建用户时给的权限

  • 记得给文件夹postgres的 用户组和用户身份

ab aiohttp linux python python3.6 python3.7 torando uvloop

python3.7 和python3.6 压力测试 aiohttp-tornado-uvloop

aiohttp-tornado-uvloop 在python3.7下进行了压力测试

统一的ab

ab -n 10000 -c 1000 "http://0.0.0.0:8080/"

结果

aiohttp
    asyncio
        3.7 : 4000
        3.6 : 3300
    uvloop
        3.7 : 4300
        3.6 : 4700
tornado
    ioloop
        3.7 : 3100
        3.6 : 1300
    uvloop
        3.7 : 1700
        3.6 : 1700

aiohttp

from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

web.run_app(app)

aiohttp + uvloop

from aiohttp import web
import uvloop
import asyncio
async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()
app._set_loop(loop)

web.run_app(app)

tornado

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()

tornado + uvloop

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

import uvloop
import asyncio

from tornado.platform.asyncio import BaseAsyncIOLoop


class TornadoUvloop(BaseAsyncIOLoop):

    def initialize(self, **kwargs):
        loop = uvloop.new_event_loop()
        try:
            super(TornadoUvloop, self).initialize(
                loop, close_loop=True, **kwargs)
        except Exception:
            loop.close()
            raise

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8080)
    tornado.ioloop.IOLoop.configure(TornadoUvloop)
    tornado.ioloop.IOLoop.current().start()

依赖

aiohttp==3.3.2
async-timeout==3.0.0
attrs==18.1.0
chardet==3.0.4
idna==2.7
multidict==4.3.1
tornado==5.0.2
uvloop==0.10.2
yarl==1.2.6

看来python3.7的asyncio的速度有很大的提升

3.7.0 anaconda conda linux miniconda python

linux 下安逸的食用 python 3.7.0

论最小代价切换使用python3.7

python 3.7.0 出来了 到处都是水文

那么如何最快的速度上手体验

下载miniconda

https://mirrors.ustc.edu.cn/anaconda/miniconda/

安装

bash miniconda-xxxxxxxx-.sh

切换到虚拟环境

source ~/miniconda/bin/activate

安装python3.7.0

conda create --name python37 python=3.7

切换到3.7.0

conda activate python37
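
To double-check which interpreter the environment gives you, a trivial sketch:

import sys
print(sys.version)      # should report 3.7.x once the python37 env is active
print(sys.executable)   # points into the miniconda envs directory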
deepin linux nopassword opensuse sudo sudoer

sudo 配置密码默认超时时间

sudo 配置密码默认超时时间

I had been using NOPASSWD = =, then decided that probably wasn't a great idea

Well then

sudoers can configure how long sudo keeps your password cached after you first enter it

sudo visudo   # safely edits /etc/sudoers and checks the syntax before saving

Add this line:

Defaults        env_reset,timestamp_timeout=30


# 30 means the password is cached for 30 minutes

Works perfectly!

Tested and confirmed

asyncio eventloop python 多线程 异步

python 中的run_in_executor 和 run_on_executor 实现在异步中调用阻塞函数

python 异步是怎样练成的!

There is a fun pair of tools for this: asyncio's loop.run_in_executor and tornado's run_on_executor decorator. Both hand a blocking function off to a thread pool so it does not block the event loop.

run_on_executor 用法

## too lazy to write my own demo for this part, copied from somewhere random = =

import time
import tornado.gen
import tornado.web
from concurrent.futures import ThreadPoolExecutor
from tornado.ioloop import IOLoop
from tornado.concurrent import run_on_executor

class SleepHandler(tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(10)
    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def get(self):
        start = time.time()
        res = yield self.sleep()
        self.write("when i sleep %f s" % (time.time() - start))
        self.finish()

    @run_on_executor
    def sleep(self):
        time.sleep(5)
        return 5

run_in_executor 用法

class Index(basic):
    async def get(self):
        name =  self.request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        loop = asyncio.get_event_loop()
        print(loop.__hash__())
        c = await  loop.run_in_executor(None,fuck,"asasas")
        return self.outJson(text=c)

def fuck(data)->str:
    return data+"async!!!"
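
The handler above leans on my `basic` base class, so here is a self-contained sketch of the same idea with plain asyncio (the blocking function and pool size are made up for the demo):

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(seconds):
    time.sleep(seconds)               # a blocking call that would otherwise freeze the event loop
    return "slept %s s" % seconds

async def main():
    loop = asyncio.get_event_loop()
    # passing None would use the loop's default executor; an explicit pool works too
    result = await loop.run_in_executor(ThreadPoolExecutor(4), blocking_io, 1)
    print(result)

asyncio.get_event_loop().run_until_complete(main())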

The two don't feel all that different in practice = =

More tinkering to come!

apt tim upgrade

apt 更新时忽略某个包

apt忽略更新

When you run apt upgrade, every package with a pending update is downloaded and upgraded by default.

apt-mark hold xxx

This pins the named package at its current version, so it gets skipped on later upgrades.

apt-mark unhold xxx

Replace hold with unhold to release the pin; apt-mark showhold lists the packages currently being held.

aiohttp asyncio callback linux tornado

aiohttp 类 tornado 代码风格模式去开发 !!!完美阿!!!!

使用aiohttp进行tornado代码风格的开发

仔细读了一下 aiohttp 的文档

http://aiohttp.readthedocs.io/en/stable/index.html

= = 竟然!!!

竟然有了!!!!!

类tornado开发特色的!!! 贼开心

from aiohttp import web
class basic(web.View):
    def out(self,text):
        return web.Response(text=text)
class Index(basic):
    async def get(self):
        name = self.request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return self.out(text=text)
app = web.Application()
app.add_routes([web.view('/', Index),])
if __name__ == "__main__":
    web.run_app(app)
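
web.View dispatches on the HTTP method name, so accepting form posts only takes a post method on the view class. A small sketch under the same pattern (the /form route and the field handling are made up for illustration):

class Form(basic):
    async def post(self):
        data = await self.request.post()   # aiohttp parses the form body here
        return self.out(text="got %d fields" % len(data))

app.add_routes([web.view('/form', Form)])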
jieba search whoosh 分词 搜索

博客使用whoosh+jieba作搜索

whoosh作引擎,jieba作分词,实现搜索功能

博客一直没有搜索,本来想用es ,但是想用更硬核一点的所以选用了whoosh ,whoosh是纯py编写的

上代码

A singleton initializes whoosh only once

and the cached object is dropped after content updates

The only painful part: the blog's memory footprint jumped from around 30 MB to 226 MB!!!!!!!!!!

The whoosh examples you find online are all dead-simple demos

import os
from jieba.analyse import ChineseAnalyzer
from whoosh.qparser import MultifieldParser
from services import Singleton
from logic.articleDao import articleDao
from whoosh.index import create_in
from whoosh.fields import Schema,ID,TEXT
ana=ChineseAnalyzer()
class Search(metaclass=Singleton):
    def __init__(self):
        self.list = articleDao.listAllNoPage()
        schema = Schema(
            id=ID(stored=True, analyzer=ana),
            title=TEXT(stored=True, analyzer=ana),
            content=TEXT(stored=True, analyzer=ana),
            keyword=TEXT(stored=True, analyzer=ana),
            desc=TEXT(stored=True, analyzer=ana),)
        if not os.path.exists("index"):
            os.mkdir("index")
        ix= create_in("index",schema)
        writer = ix.writer()
        for art in self.list:
            writer.add_document(
                            id=str(art.id),
                           title=art.title,
                           content=art.content,
                           keyword=art.keyword,
                           desc=art.desc)
        writer.commit()
        self.ix= ix

    def search(self,keyword):
        searcher = self.ix.searcher()
        query = MultifieldParser(["content","title","desc","keyword"],schema=self.ix.schema).parse(keyword)
        res=searcher.search(query,limit=len(self.list))
        result = []
        for r in res:
            result.append(r.get("id"))
        searcher.close()
        return result

    @classmethod
    def clear(cls):
        cls._instances = {}
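
A quick usage sketch, assuming the project modules imported above (articleDao, Singleton) are available:

s = Search()                   # the first call builds the index from all articles
ids = s.search("python 异步")  # returns the ids of matching articles
print(ids)
Search.clear()                 # drop the cached instance, e.g. after reindexing new posts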
linux mysql tidb

TiDB 用户账户管理

几乎完全兼容mysql的tidb的用户管理


title: TiDB 用户账户管理 category: user guide


TiDB 用户账户管理

用户名和密码

TiDB 将用户账户存储在 mysql.user 系统表里面。每个账户由用户名和 host 作为标识。每个账户可以设置一个密码。

通过 MySQL 客户端连接到 TiDB 服务器,通过指定的账户和密码登陆:

shell> mysql --port 4000 --user xxx --password

使用缩写的命令行参数则是:

shell> mysql -P 4000 -u xxx -p

添加用户

添加用户有两种方式:

  • 通过标准的用户管理的 SQL 语句创建用户以及授予权限,比如 CREATE USER 和 GRANT;
  • 直接通过 INSERT、UPDATE、DELETE 操作授权表。

推荐的方式是使用第一种。第二种方式修改容易导致一些不完整的修改,因此不推荐。还有另一种可选方式是使用第三方工具的图形化界面工具。

下面的例子用 CREATE USER 和 GRANT 语句创建了四个账户:

mysql> CREATE USER 'finley'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'localhost' WITH GRANT OPTION;
mysql> CREATE USER 'finley'@'%' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'%' WITH GRANT OPTION;
mysql> CREATE USER 'admin'@'localhost' IDENTIFIED BY 'admin_pass';
mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';
mysql> CREATE USER 'dummy'@'localhost';

使用 SHOW GRANTS 可以看到为一个用户授予的权限:

mysql> SHOW GRANTS FOR 'admin'@'localhost';
+-----------------------------------------------------+
| Grants for admin@localhost                          |
+-----------------------------------------------------+
| GRANT RELOAD, PROCESS ON *.* TO 'admin'@'localhost' |
+-----------------------------------------------------+

删除用户

使用 DROP USER 语句可以删除用户,例如:

mysql> DROP USER 'jeffrey'@'localhost';

保留用户账户

TiDB 在数据库初始化时会生成一个 'root'@'%' 的默认账户。

设置资源限制

暂不支持。

设置密码

TiDB 将密码存在 mysql.user 系统数据库里面。只有拥有 CREATE USER 权限,或者拥有 mysql 数据库权限( INSERT 权限用于创建, UPDATE 权限用于更新)的用户才能够设置或修改密码。

CREATE USER 创建用户时可以通过 IDENTIFIED BY 指定密码:

CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';

为一个已存在的账户修改密码,可以通过 SET PASSWORD FOR 或者 ALTER USER 语句完成:

SET PASSWORD FOR 'root'@'%' = 'xxx';

或者

ALTER USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';

更多在官方文档 https://github.com/pingcap/docs-cn

aiozmq async asyncio python rpc zeromq zmq

aiozmq 最简单使用 ,应该可以用在tornado 同步驱动转异步操作

基于zeromq的rpc组件 ,用起来比zeromq更简单

服务端

import asyncio
from aiozmq import rpc
import random

class Handler(rpc.AttrHandler):
    @rpc.method
    def remote(self, arg1, arg2):
        asyncio.sleep(random.randint(0,9))
        return arg1 + arg2

@asyncio.coroutine
def go():
    server =  yield from rpc.serve_rpc(Handler(),
                                       bind='ipc://a')
    yield from server.wait_closed()
if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(go())
    print("DONE")

客户端

import asyncio
from aiozmq import rpc

@asyncio.coroutine
def go():
    client = yield from rpc.connect_rpc(connect='ipc://a')
    for a in range(1,50000):
        ret = yield from client.call.remote(a, 2)
        print(ret)

asyncio.get_event_loop().run_until_complete(go())

最简单的demo- - 应该还能结合 zeromq的router 进行分布式多进程 worker

cache kv python sqlite

python基于sqlite线程安全/进程安全的缓存-有趣儿的脑洞

基于内存中sqlite的缓存系统

Python disk backed cache

说是基于磁盘 实际上再linux上新建一个内存盘 把sqlite文件设置到内存盘中就可以了

安装
pip install diskcache
使用
import diskcache

# point the cache at a ramdisk/tmpfs path and it is effectively in-memory; speed is decent
cache = diskcache.Cache("/tmp")


# write an entry: expire is a number of seconds, tag works like a namespace
cache.set("key", "value", expire=60, tag="demo")


# manually evict entries whose expire time has passed
cache.cull()

# remove every entry carrying a given tag
cache.evict("demo")

号称 Thread-safe and process-safe 经过测试还不错!

blog

关于本博客

blog

技能点

  • linux

日常生活和开发环境使用 自使用至今已有五六年

  • php

启蒙语言之一 刚接触建站的时候帮我进入了编程的世界

  • python

周末语言之一 日常在npm和pypi淘宝 本站点也是基于py3开发 02180712更新 真的爽!!!

  • nodejs

周末语言之一 乐趣!

  • java

工作语言 现就职于开源中国的java工程师

联系我

admin@pkold.com

https://github.com/zhenruyan/

关于本站

开发结构

nginx(tengine) 反向代理

nodejs + pm2
 + python3  + 
  uvloop(更新python3.7后 asyncio速度已经不差与uvloop 故删除此部分) 
   +  tornado +  fastcache(后期更换为了diskcache)  
   +  mongodb(后期更换成为postgresql)
     + markdown

网站第一版就是每天早上起床后写1个小时大约两周的产物

前端极为简单  懒

rps  大约能跑到500

个人感受

python 的面向对象用起来比java的要舒服一些

尴尬的程度处于  js  和  java  之间

(默哀胎死腹中的nodejs版本和php版本)

pm2 是个好东西

博客程序迭代完善后开源

开发版本

  • 20181228 博客数据库更换成pg,进行seo优化

  • 20181123 博客运行良好,准备加一个大佬名言。从日更变周更,最后变月更,友链功能遥遥无期

  • 20180813 修复内存泄漏- - 搜索依旧不优雅 ,访问记录依旧丑陋 等稍微的修修准备把这丑代码丢出去- - (竟然真的有人不知道tornado如何下手,那我的博客多少还是有了一点价值)

  • 20180801 博客的线程模型进行了重构,数据访问层一个线程池,日志一个线程池。 下一个版本完善多进程模型

  • 20180707 博客视图层进行了重构,进行了猜想式的重构,以多线程异步的方式构建视图

  • 20180626 博客终于有了搜索

  • 20180625 加入了whoosh作搜索,但是性能低下,暂时不放在前端

  • 20180605 修复cpu长期占用100%的问题 rps再次500+

  • 20180530 完善了seo优化,前端稍微做了改动

  • 20180501 第一个简单的版本 增删改查

  • 20180524 第二个版本 完善成一个基本完成的博客

arangodb nosql

arangodb 的配置优化

线上运行arangodb要做的优化! 前提当然一定是linux

启动参数

cpu分配内存块
numactl --interleave=all 


systemd 编写services时  需要写绝对路径
ExecStart=/usr/bin/numactl --interleave=all
内存回收机制
sudo bash -c "echo madvise >/sys/kernel/mm/transparent_hugepage/enabled"
sudo bash -c "echo madvise >/sys/kernel/mm/transparent_hugepage/defrag"
内存分配
sudo bash -c "echo 2 > /proc/sys/vm/overcommit_memory"
zone_reclaim_mode(据说是缓存)
sudo bash -c "echo 0 >/proc/sys/vm/zone_reclaim_mode"
多线程最大内存?
数值=cpu核心数 x 8 x 1000
sudo bash -c "sysctl -w 'vm.max_map_count=320000'"
禁用内存池
export GLIBCXX_FORCE_NEW=1
虚拟内存
/proc/sys/vm/overcommit_ratio (100 * (max(0, (RAM - Swap Space)) / RAM)) 

sudo bash -c "echo 97 > /proc/sys/vm/overcommit_ratio"
city ip log tornado

tornado 异步日志统计获取IP地址和对应城市

异步日志统计获取IP地址和对应城市

ip转城市

直接贴代码

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]


class SearchIp(metaclass=Singleton):
    def __init__(self):
        try:
            dbpath = os.path.abspath("./utils/ip/ip2region.db")
            self.search = Ip2Region(dbpath)
        except IOError as e:
            log.error("err connect db", e)
        else:
            log.info("db connect success")

    def getSession(self):
        return self.search
    def searchCity(self,ip):
        try:
            if self.getSession().isip(ip):
                city =  self.getSession().btreeSearch(ip=ip).get("region","火星").decode("utf-8")
            else:
                city = "火星"
        except IOError as e:
            log.error(e)
            city = "火星"
            return city
        else:
            return city
    def close(self):
        self.search.close()

单例模式 然后ip转城市

异步存到mongodb

class baseHttp (tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(100)

    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def initialize(self):
        yield self.logsave()

    @run_on_executor
    def logsave(self):
        ipDao.saveLog(self.request.remote_ip, self.request.uri, self.request.headers.get("User-Agent", "鬼知道"))

似乎还凑活~~

python threadpoolexecutor tornado

python tornado ThreadPoolExecutor实现异步

tornado

#!/bin/env python
# -*- coding:utf-8 -*-
## 异步demo
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado import httpclient
import tornado.gen
from tornado import gen
from tornado.concurrent import run_on_executor
from concurrent.futures import ThreadPoolExecutor
import time
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class SleepHandler(tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(10000)
    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def get(self):
        # 假如你执行的异步会返回值被继续调用可以这样(只是为了演示),否则直接yield就行
        res = yield self.sleep()
        self.write("when i sleep %s s" % res)
        self.finish()
    @run_on_executor
    def sleep(self):
        for a in  range(1,900000):
            print(a)
        return 5
class JustNowHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("i hope just now see you")
if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[
            (r"/sleep", SleepHandler), (r"/justnow", JustNowHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
linux python queue zbus

记录一次zbus在Python上推送报Exception in thread Thread-2异常

记录一次zbus在Python上推送报Exception in thread Thread-2异常的解决办法,实际上是python的包管理确实坑了我

昨天还好好的 今天zbus 推送的时候一直报Exception in thread Thread-2

用git 回退到昨天

依旧报

神奇的。。。

修改zbus存放index目录

无效

重新创建一个venv环境

= = 成功

重新的去检查pip freeze

挑出我用的包

重新干掉venv 生成

安装

== 解决了 抓狂、、、

经过考虑应该是卸载不干净依赖的包的锅

= = 继续填坑

java linux mq python rpc zbus 队列

python使用zbus队列尝试

zbus小巧而极速的MQ, RPC实现, 支持HTTP/TCP代理,开放易扩展,多语言支撑微服务,系统总线架构

小巧而极速的MQ, RPC实现, 支持HTTP/TCP代理,开放易扩展,多语言支撑微服务,系统总线架构

最近再想做对外api服务,再纠结数据库异步驱动后

突然想起了zbus = =

这似乎是一个代价更小的方案

先试试官方demo

发布者

broker = Broker('localhost:15555') 

p = Producer(broker) 
p.declare('MyTopic') 

msg = Message()
msg.topic = 'MyTopic'
msg.body = 'hello world'

res = p.publish(msg)

消费者

broker = Broker('localhost:15555')  

def message_handler(msg, client):
    print(msg)

c = Consumer(broker, 'MyTopic')
c.message_handler = message_handler 
c.start()

消费者会一直阻塞 只要有资源 就会取到 然后调用回调

经过测试 body可以随意写 都可以序列化

那么数据入库的结构就可以这样

tornado -> httpapi -> 队列塞进去

队列 <---> 消费者 --> 数据库
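
As a sketch of the "enqueue" step above, reusing only the producer calls from the demo and serializing the payload as JSON (topic name and fields are just examples):

import json

msg = Message()
msg.topic = 'MyTopic'
msg.body = json.dumps({"ip": "1.2.3.4", "uri": "/api/demo"})
p.publish(msg)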

cpu100% 博客 性能

记录一次博客性能修复

博客突然cpu100,还好不是玄学调优

博客建立之初 rps 可以跑到 500

cpu 几乎再百分之2左右

某天突然发现一直维持再100%

测试数据库 没有效果

数据库索引 没有效果

缓存命中 没有效果

从单进程到多进程 没有效果

好吧 怀疑人生

冥思苦想很久

突然想起老大说过markdown解析非常耗费时间

给markdown 加一个缓存

rps 瞬间到500+

完美!
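
The fix was simply a cache in front of the markdown rendering. A minimal sketch of that idea with functools.lru_cache (assumes the markdown package; the real blog code may differ):

import functools
import markdown

@functools.lru_cache(maxsize=512)
def render(md_text):
    # parsing is the expensive part, so identical source text is only converted once
    return markdown.markdown(md_text)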

mongodb 多表

mongoengine 多表查询写入

mongoengine 实现多表查询写入,动态查询存储

import mongoengine as mongo

def ArticleDoyColl(coll="free"):
    class ArticleSave(mongo.Document):
        meta = {
            "collection": coll,
            "indexes": [
                "name",
            ],
        }
        name = mongo.StringField()
    return ArticleSave

if __name__ == "__main__":
    a = ArticleDoyColl("free")
    b=a.objects()
    print(b)

The target collection is picked via the 'collection' key in the meta dict, and the indexes are declared under 'indexes'.
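
A small write/read sketch under the same imports (the database name and values are made up):

mongo.connect("blog")                      # hypothetical database name

Free = ArticleDoyColl("free")
Free(name="hello").save()                  # stored in the "free" collection
print(Free.objects(name="hello").count())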

deepin linux 深度系统 蓝牙 键盘
metaclass object python 单例模式 工厂模式

python 通过元类 实现单例模式

理解Python中的元类(metaclass)以及元类实现单例模式

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]


class oneObj(metaclass=Singleton):
    def __init__(self):
        pass
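
A quick check that the metaclass really caches the instance:

a = oneObj()
b = oneObj()
print(a is b)   # True: every call returns the same cached object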

python 中 类 也是对象

由于类也是对象,python中的类只要使用class 关键字就可以动态的创建类

def creatClass():
    class classDemo():
        pass
    return classDemo

return 出来的就是一个类对象 而不是实例化后的

用type创建一个类

objs = type('objs', (), {'echo_bar': "asdsad"})
#           class name  tuple of base classes  dict of class attributes

创建元类

class Singleton(type):
    def __new__(cls, class_name, class_parents, class_attr):
        return super(Singleton, cls).__new__(cls, class_name, class_parents, class_attr)
asyncio nodejs python uvloop 事件 异步

tornado使用uvloop执行

tornado使用uvloop执行异步

asyncio 是Python3.4 之后引入的标准库的,这个包使用事件循环驱动的协程实现并发。

uvloop 是 基于libuv 代替 asyncio 内事件循环的库

livbuv 则是大名鼎鼎的nodejs使用的io库

= =

自打知道这玩意就想用上

上代码

import tornado.ioloop
import tornado.web
from .router import Url
from core.service import log
#入口文件
settings = {
    'template_path': 'views',
    'static_path': 'static',
    'static_url_prefix': '/static/',
    "cookie_secret" : "61oETzKXQAGaYdkL5gEmGeJJFuYh7EQnp2XdTP1o/Vo=",
    "xsrf_cookies": False,
}

import uvloop

from tornado.platform.asyncio import BaseAsyncIOLoop


class TornadoUvloop(BaseAsyncIOLoop):

    def initialize(self, **kwargs):
        loop = uvloop.new_event_loop()
        try:
            super(TornadoUvloop, self).initialize(
                loop, close_loop=True, **kwargs)
        except Exception:
            loop.close()
            raise

    @staticmethod
    def main(port):
        application = tornado.web.Application(
            Url,
            **settings,)
        application.listen(port)
        tornado.ioloop.IOLoop.configure(TornadoUvloop)
        tornado.ioloop.IOLoop.current().start()
arch deepin fedora linux opensuse 开发

跟风发博客-用linux作开发

五年日用加夜用linux的生活经验

deepin安装后要做的几件事

我不会上任何图,这是一个多少带点干货的博客

  • 第一件事 安装bash的自动提示和git,pv,htop 等等让命令行用的舒服的包
sudo apt install  bash-completion pv git htop nginx redis mongodb axel vim nano aria2 wget

安装后 需要一定的配置 bash开启自动补全

bash 显示配置 linux bash显示git分支
禁用utc 开启免密码sudo 内存小可以开启zram

这能让你用的时候舒服不少

  • 第二步 安装开发环境

1 nodejs 安装 点击进入

2 python 安装 点击进入

3 python 配置 点击进入

4 golang 安装 点击进入

5 ruby 安装 点击进入

6 java 安装

sudo apt install oracle-java

ide 肯定是IntelliJ 全家桶

vscode sublime atom 三件套

  • 浏览器

chrome firefox chromium opera yandex vivaldi

  • 视频

ffmpeg vlc mpv

  • 游戏

我的世界 steam

关于发行版

刚玩linux的小白几乎都会纠结这件事

然而linux不同发行版之间的差别只在于你学会前有多难受

不要带上国产非国产的有色眼睛

更不要去想着装不装逼

恩,学会后也就那样了

那么说说

deepin 最推荐的linux发行版

只需要一点点配置,当然如果上面我说的几个配置你并不认为多的话

arch 需要更多的配置 如果你的网络足够好,并且百度谷歌能力强

ubuntu/opensuse/debian/fedora/centos/

排名不分先后

都需要添加各种第三方源

如果你现在知道什么是源

相当于安装几个应用商店

关于办公 QQ office

QQ,tim,wps 在deepin上直接就有

在其他发行版也有其他方案

但是我想这中浪费时间的事情你并不喜欢

linux 还可以做什么

linux只是一个普通平凡的系统

它把一切选择权交给你

放平心态你才能享受这权力带来的乐趣

arangodb 图数据库 搜索 智能推荐

ArangoDB图数据库应用探索

arangodb数据库图搜索的应用

title: ArangoDB 图数据库 应用探索

ArangoDB 图数据库 应用探索

Not Only SQL

ArangoDB

  • 图数据库

  • K/V数据库

  • 文档数据库

  • Foxx(V8引擎)

ArangoDB 图数据库场景

行为分析 社会关系 风险控制 人脉管理

现实应用

  • 智能推荐
  • 广告投递
  • 婚恋交友
  • 猎头挖人
  • 公安破案
  • 金融风控

App应用商店关系分析

  • 用户 -安装-> APP
  • 用户 -卸载-> APP
  • APP -属于-> 公司
  • APP -属于-> 分类

App应用商店关系分析

用户集合

生成五千个随机用户数据

for user in 1..5000
    INSERT {
            "date":(DATE_NOW()+FLOOR(RAND()*100)),
            "info":CONCAT("附加信息",user)
            } IN users return NEW

  {
    "_key": "40502",
    "_id": "users/40502",
    "_rev": "_WfKeWT---C",
    "date": 1520739311982,
    "info": "附加信息3362"
  }

App应用商店关系分析

app集合

生成一万个随机APP数据

for app IN 1..10000
    INSERT {"name":CONCAT("app",app)}
    INTO apps

  {
    "_key": "64203",
    "_id": "apps/64203",
    "_rev": "_WfKkWO2-_v",
    "name": "app9922"
  }

App应用商店关系分析

分类集合

生成六个随机分类

for doc IN 1..6
    INSERT {
        "name":CONCAT("classify",doc)
            } INTO classify
    RETURN NEW


{
"_key": "64881",
"_id": "classify/64881",
"_rev": "_WfKpDZW--_",
"name": "classify2"
}

App应用商店关系分析

公司集合

生成三百个随机公司

for doc IN 1..300
    INSERT {
        "name":CONCAT("company",doc)
            } INTO company
    RETURN NEW

{
    "_key": "65022",
    "_id": "company/65022",
    "_rev": "_WfKq8fW---",
    "name": "company1"
}

App应用商店关系生成

用户安装

随机用户共三万次随机安装

FOR doc IN 1..30000
    LET edge = {_from:(FOR user IN users
                        SORT RAND()
                        LIMIT 1
                        RETURN user)[0]._id,
            _to:(FOR app IN apps
                        SORT RAND()
                        LIMIT 1
                        RETURN app)[0]._id,
            "info":CONCAT("附加信息",doc)}
    INSERT edge INTO installs
    RETURN NEW

{
    "_key": "190139",
    "_id": "installs/190139",
    "_from": "users/24833",
    "_to": "apps/50991",
    "_rev": "_WfLNQbm---",
    "info": "附加信息1"
}

App应用商店关系生成

用户卸载

随机用户共三万次随机卸载

FOR doc IN 1..30000
    LET edge = {_from:(FOR user IN users
                        SORT RAND()
                        LIMIT 1
                        RETURN user)[0]._id,
            _to:(FOR app IN apps
                        SORT RAND()
                        LIMIT 1
                        RETURN app)[0]._id,
            "info":CONCAT("附加信息",doc)}
    INSERT edge INTO unstalls
    RETURN NEW
 {
    "_key": "190139",
    "_id": "unstall/190139",
    "_from": "users/24833",
    "_to": "apps/50991",
    "_rev": "_WfLNQbm---",
    "info": "附加信息1"
}  

App应用商店关系生成

APP属于某公司

FOR app IN apps
    LET edge = {_from:app._id,
            _to:(FOR com IN company
                        SORT RAND()
                        LIMIT 1
                        RETURN com)[0]._id,
            "info":CONCAT("附加信息",app._key)}
    INSERT edge INTO belongtocompany
    RETURN NEW


App应用商店关系生成

APP属于某分类

FOR app IN apps
    LET edge = {_from:app._id,
            _to:(FOR class IN classify
                        SORT RAND()
                        LIMIT 1
                        RETURN class)[0]._id,
            "info":CONCAT("附加信息",app._key)}
    INSERT edge INTO belongtoclassify
    RETURN NEW


App应用商店关系生成分析(开发环境)

二级

测试图

App应用商店关系生成分析(开发环境)

三级

测试图

App应用商店关系

动态推荐APP

与用户相关分类APP,根据安装最多排序

//卸载的APP
LET uninstallapp=( FOR app IN unstalls
    FILTER app._from == @user
    return app)

//安装的APP
LET installapp = ( FOR app IN installs
    FILTER app._from ==@user
    return app)

//根据用户安装情况关联未安装的app并且根据安装量排序
FOR v,e,p IN 1..3 OUTBOUND @user installs ,ANY belongtoclassify
FILTER v NOT IN uninstallapp
FILTER v NOT IN installapp
FILTER v IN apps
    FILTER v._id IN installs[*]._to
    COLLECT to = v.id
        WITH COUNT INTO size
        SORT size DESC
        LIMIT 10
        RETURN {
                "app":to,
                "size":size
                }

App应用商店关系生成

动态推荐某公司

安装量最多

FOR coll IN  belongtocompany
    FOR app IN apps
        FILTER app._id == coll._from
        FOR install IN installs
            FILTER coll._from == install._to 
            COLLECT appcom = coll._to
                WITH COUNT INTO installsize
                SORT installsize DESC
RETURN {
        size:installsize,
        app:appcom}

App应用商店关系生成

动态推荐分类

用户安装app关联分类

FOR classedge IN  belongtoclassify
    FOR app IN apps
        FILTER app._id == classedge._from
        FOR class IN classify
            FILTER class._id == classedge._to 
            COLLECT classifycount = classedge._to
                WITH COUNT INTO installsize
                SORT installsize DESC
RETURN {classify:classifycount,
        size:installsize}

ArangoDB 其他应用体验

Foxx

const createRouter = require('@arangodb/foxx/router');
const indexRouter = createRouter();
indexRouter.all('/', function (req, res) {
  res.redirect('index.html');
});
module.context.use(indexRouter);
markdown python

python的markdown解析发现太过于标准,有的写法竟然会不生效,备份一份

markdown基本语法

<< 访问 Wow!Ubuntu

NOTE: This is Simplelified Chinese Edition Document of Markdown Syntax. If you are seeking for English Edition Document. Please refer to Markdown: Syntax.

声明: 这份文档派生(fork)于繁体中文版,在此基础上进行了繁体转简体工作,并进行了适当的润色。此文档用 Markdown 语法编写,你可以到这里查看它的源文件。「繁体中文版的原始文件可以查看这里 。」--By @riku

注: 本项目托管于 GitCafe上,请通过"派生"和"合并请求"来帮忙改进本项目。

Markdown 语法说明 (简体中文版) / (点击查看快速入门)


概述

宗旨

Markdown 的目标是实现「易读易写」。

可读性,无论如何,都是最重要的。一份使用 Markdown 格式撰写的文件应该可以直接以纯文本发布,并且看起来不会像是由许多标签或是格式指令所构成。Markdown 语法受到一些既有 text-to-HTML 格式的影响,包括 Setext、atx、Textile、reStructuredText、Grutatext 和 EtText,而最大灵感来源其实是纯文本电子邮件的格式。

总之, Markdown 的语法全由一些符号所组成,这些符号经过精挑细选,其作用一目了然。比如:在文字两旁加上星号,看起来就像*强调*。Markdown 的列表看起来,嗯,就是列表。Markdown 的区块引用看起来就真的像是引用一段文字,就像你曾在电子邮件中见过的那样。

兼容 HTML

Markdown 语法的目标是:成为一种适用于网络的书写语言。

Markdown 不是想要取代 HTML,甚至也没有要和它相近,它的语法种类很少,只对应 HTML 标记的一小部分。Markdown 的构想不是要使得 HTML 文档更容易书写。在我看来, HTML 已经很容易写了。Markdown 的理念是,能让文档更容易读、写和随意改。HTML 是一种发布的格式,Markdown 是一种书写的格式。就这样,Markdown 的格式语法只涵盖纯文本可以涵盖的范围。

不在 Markdown 涵盖范围之内的标签,都可以直接在文档里面用 HTML 撰写。不需要额外标注这是 HTML 或是 Markdown;只要直接加标签就可以了。

要制约的只有一些 HTML 区块元素――比如 <div>、<table>、<pre>、<p> 等标签,必须在前后加上空行与其它内容区隔开,还要求它们的开始标签与结尾标签不能用制表符或空格来缩进。Markdown 的生成器有足够智能,不会在 HTML 区块标签外加上不必要的 <p> 标签。

例子如下,在 Markdown 文件里加上一段 HTML 表格:

这是一个普通段落。

<table>
    <tr>
        <td>Foo</td>
    </tr>
</table>

这是另一个普通段落。

请注意,在 HTML 区块标签间的 Markdown 格式语法将不会被处理。比如,你在 HTML 区块内使用 Markdown 样式的*强调*会没有效果。

HTML 的区段(行内)标签如 <span>、<cite>、<del> 可以在 Markdown 的段落、列表或是标题里随意使用。依照个人习惯,甚至可以不用 Markdown 格式,而直接采用 HTML 标签来格式化。举例说明:如果比较喜欢 HTML 的 <a> 或 <img> 标签,可以直接使用这些标签,而不用 Markdown 提供的链接或是图像标签语法。

和处在 HTML 区块标签间不同,Markdown 语法在 HTML 区段标签间是有效的。

特殊字符自动转换

在 HTML 文件中,有两个字符需要特殊处理: < 和 &。< 符号用于起始标签,& 符号则用于标记 HTML 实体,如果你只是想要显示这些字符的原型,你必须要使用实体的形式,像是 &lt; 和 &amp;。

& 字符尤其让网络文档编写者受折磨,如果你要打「AT&T」 ,你必须要写成「AT&amp;T」。而网址中的 & 字符也要转换。比如你要链接到:

http://images.google.com/images?num=30&q=larry+bird

你必须要把网址转换写为:

http://images.google.com/images?num=30&amp;q=larry+bird

才能放到链接标签的 href 属性里。不用说也知道这很容易忽略,这也可能是 HTML 标准检验所检查到的错误中,数量最多的。

Markdown 让你可以自然地书写字符,需要转换的由它来处理好了。如果你使用的 & 字符是 HTML 字符实体的一部分,它会保留原状,否则它会被转换成 &amp;。

所以你如果要在文档中插入一个版权符号 ©,你可以这样写:

&copy;

Markdown 会保留它不动。而若你写:

AT&T

Markdown 就会将它转为:

AT&amp;T

类似的状况也会发生在 < 符号上,因为 Markdown 允许 兼容 HTML ,如果你是把 < 符号作为 HTML 标签的定界符使用,那 Markdown 也不会对它做任何转换,但是如果你写:

4 < 5

Markdown 将会把它转换为:

4 &lt; 5

不过需要注意的是,code 范围内,不论是行内还是区块, < 和 & 两个符号都一定会被转换成 HTML 实体,这项特性让你可以很容易地用 Markdown 写 HTML code (和 HTML 相对而言, HTML 语法中,你要把所有的 < 和 & 都转换为 HTML 实体,才能在 HTML 文件里面写出 HTML code。)


区块元素

段落和换行

一个 Markdown 段落是由一个或多个连续的文本行组成,它的前后要有一个以上的空行(空行的定义是显示上看起来像是空的,便会被视为空行。比方说,若某一行只包含空格和制表符,则该行也会被视为空行)。普通段落不该用空格或制表符来缩进。

「由一个或多个连续的文本行组成」这句话其实暗示了 Markdown 允许段落内的强迫换行(插入换行符),这个特性和其他大部分的 text-to-HTML 格式不一样(包括 Movable Type 的「Convert Line Breaks」选项),其它的格式会把每个换行符都转成 <br /> 标签。

如果你确实想要依赖 Markdown 来插入 <br /> 标签的话,在插入处先按入两个以上的空格然后回车。

的确,需要多费点事(多加空格)来产生 <br /> ,但是简单地「每个换行都转换为 <br />」的方法在 Markdown 中并不适合, Markdown 中 email 式的 区块引用 和多段落的 列表 在使用换行来排版的时候,不但更好用,还更方便阅读。

Markdown 支持两种标题的语法,类 Setext 和类 atx 形式。

类 Setext 形式是用底线的形式,利用 = (最高阶标题)和 - (第二阶标题),例如:

This is an H1
=============

This is an H2
-------------

任何数量的 = 和 - 都可以有效果。

类 Atx 形式则是在行首插入 1 到 6 个 # ,对应到标题 1 到 6 阶,例如:

# 这是 H1

## 这是 H2

###### 这是 H6

你可以选择性地「闭合」类 atx 样式的标题,这纯粹只是美观用的,若是觉得这样看起来比较舒适,你就可以在行尾加上 #,而行尾的 # 数量也不用和开头一样(行首的井字符数量决定标题的阶数):

# 这是 H1 #

## 这是 H2 ##

### 这是 H3 ######

区块引用 Blockquotes

Markdown 标记区块引用是使用类似 email 中用 > 的引用方式。如果你还熟悉在 email 信件中的引言部分,你就知道怎么在 Markdown 文件中建立一个区块引用,那会看起来像是你自己先断好行,然后在每行的最前面加上 >

> This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,
> consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.
> Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.
> 
> Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisse
> id sem consectetuer libero luctus adipiscing.

Markdown 也允许你偷懒只在整个段落的第一行最前面加上 >

> This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,
consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.
Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.

> Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisse
id sem consectetuer libero luctus adipiscing.

区块引用可以嵌套(例如:引用内的引用),只要根据层次加上不同数量的 >

> This is the first level of quoting.
>
> > This is nested blockquote.
>
> Back to the first level.

引用的区块内也可以使用其他的 Markdown 语法,包括标题、列表、代码区块等:

> ## 这是一个标题。
> 
> 1.   这是第一行列表项。
> 2.   这是第二行列表项。
> 
> 给出一些例子代码:
> 
>     return shell_exec("echo $input | $markdown_script");

任何像样的文本编辑器都能轻松地建立 email 型的引用。例如在 BBEdit 中,你可以选取文字后然后从选单中选择增加引用阶层

列表

Markdown 支持有序列表和无序列表。

无序列表使用星号、加号或是减号作为列表标记:

*   Red
*   Green
*   Blue

等同于:

+   Red
+   Green
+   Blue

也等同于:

-   Red
-   Green
-   Blue

有序列表则使用数字接着一个英文句点:

1.  Bird
2.  McHale
3.  Parish

很重要的一点是,你在列表标记上使用的数字并不会影响输出的 HTML 结果,上面的列表所产生的 HTML 标记为:

<ol>
<li>Bird</li>
<li>McHale</li>
<li>Parish</li>
</ol>

如果你的列表标记写成:

1.  Bird
1.  McHale
1.  Parish

或甚至是:

3. Bird
1. McHale
8. Parish

你都会得到完全相同的 HTML 输出。重点在于,你可以让 Markdown 文件的列表数字和输出的结果相同,或是你懒一点,你可以完全不用在意数字的正确性。

如果你使用懒惰的写法,建议第一个项目最好还是从 1. 开始,因为 Markdown 未来可能会支持有序列表的 start 属性。

列表项目标记通常是放在最左边,但是其实也可以缩进,最多 3 个空格,项目标记后面则一定要接着至少一个空格或制表符。

要让列表看起来更漂亮,你可以把内容用固定的缩进整理好:

*   Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
    Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi,
    viverra nec, fringilla in, laoreet vitae, risus.
*   Donec sit amet nisl. Aliquam semper ipsum sit amet velit.
    Suspendisse id sem consectetuer libero luctus adipiscing.

但是如果你懒,那也行:

*   Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi,
viverra nec, fringilla in, laoreet vitae, risus.
*   Donec sit amet nisl. Aliquam semper ipsum sit amet velit.
Suspendisse id sem consectetuer libero luctus adipiscing.

如果列表项目间用空行分开,在输出 HTML 时 Markdown 就会将项目内容用 <p> 标签包起来,举例来说:

*   Bird
*   Magic

会被转换为:

<ul>
<li>Bird</li>
<li>Magic</li>
</ul>

但是这个:

*   Bird

*   Magic

会被转换为:

<ul>
<li><p>Bird</p></li>
<li><p>Magic</p></li>
</ul>

列表项目可以包含多个段落,每个项目下的段落都必须缩进 4 个空格或是 1 个制表符:

1.  This is a list item with two paragraphs. Lorem ipsum dolor
    sit amet, consectetuer adipiscing elit. Aliquam hendrerit
    mi posuere lectus.

    Vestibulum enim wisi, viverra nec, fringilla in, laoreet
    vitae, risus. Donec sit amet nisl. Aliquam semper ipsum
    sit amet velit.

2.  Suspendisse id sem consectetuer libero luctus adipiscing.

如果你每行都有缩进,看起来会好很多,当然,再次地,如果你很懒惰,Markdown 也允许:

*   This is a list item with two paragraphs.

    This is the second paragraph in the list item. You're
only required to indent the first line. Lorem ipsum dolor
sit amet, consectetuer adipiscing elit.

*   Another item in the same list.

如果要在列表项目内放进引用,那 > 就需要缩进:

*   A list item with a blockquote:

    > This is a blockquote
    > inside a list item.

如果要放代码区块的话,该区块就需要缩进两次,也就是 8 个空格或是 2 个制表符:

*   一列表项包含一个代码区块:

        <代码写在这>

当然,项目列表很可能会不小心产生,像是下面这样的写法:

1986. What a great season.

换句话说,也就是在行首出现数字-句点-空白,要避免这样的状况,你可以在句点前面加上反斜杠。

1986\. What a great season.

代码区块

和程序相关的写作或是标签语言原始码通常会有已经排版好的代码区块,通常这些区块我们并不希望它以一般段落文件的方式去排版,而是照原来的样子显示,Markdown 会用 <pre><code> 标签来把代码区块包起来。

要在 Markdown 中建立代码区块很简单,只要简单地缩进 4 个空格或是 1 个制表符就可以,例如,下面的输入:

这是一个普通段落:

    这是一个代码区块。

Markdown 会转换成:

<p>这是一个普通段落:</p>

<pre><code>这是一个代码区块。
</code></pre>

这个每行一阶的缩进(4 个空格或是 1 个制表符),都会被移除,例如:

Here is an example of AppleScript:

    tell application "Foo"
        beep
    end tell

会被转换为:

<p>Here is an example of AppleScript:</p>

<pre><code>tell application "Foo"
    beep
end tell
</code></pre>

一个代码区块会一直持续到没有缩进的那一行(或是文件结尾)。

在代码区块里面, &、< 和 > 会自动转成 HTML 实体,这样的方式让你非常容易使用 Markdown 插入范例用的 HTML 原始码,只需要复制贴上,再加上缩进就可以了,剩下的 Markdown 都会帮你处理,例如:

    <div class="footer">
        &copy; 2004 Foo Corporation
    </div>

会被转换为:

<pre><code>&lt;div class="footer"&gt;
    &amp;copy; 2004 Foo Corporation
&lt;/div&gt;
</code></pre>

代码区块中,一般的 Markdown 语法不会被转换,像是星号便只是星号,这表示你可以很容易地以 Markdown 语法撰写 Markdown 语法相关的文件。

分隔线

你可以在一行中用三个以上的星号、减号、底线来建立一个分隔线,行内不能有其他东西。你也可以在星号或是减号中间插入空格。下面每种写法都可以建立分隔线:

* * *

***

*****

- - -

---------------------------------------

区段元素

Markdown 支持两种形式的链接语法: 行内式和参考式。

不管是哪一种,链接文字都是用 [方括号] 来标记。

要建立一个行内式的链接,只要在方块括号后面紧接着圆括号并插入网址链接即可,如果你还想要加上链接的 title 文字,只要在网址后面,用双引号把 title 文字包起来即可,例如:

This is [an example](http://example.com/ "Title") inline link.

[This link](http://example.net/) has no title attribute.

会产生:

<p>This is <a href="http://example.com/" title="Title">
an example</a> inline link.</p>

<p><a href="http://example.net/">This link</a> has no
title attribute.</p>

如果你是要链接到同样主机的资源,你可以使用相对路径:

See my [About](/about/) page for details.

参考式的链接是在链接文字的括号后面再接上另一个方括号,而在第二个方括号里面要填入用以辨识链接的标记:

This is [an example][id] reference-style link.

你也可以选择性地在两个方括号中间加上一个空格:

This is [an example] [id] reference-style link.

接着,在文件的任意处,你可以把这个标记的链接内容定义出来:

[id]: http://example.com/  "Optional Title Here"

链接内容定义的形式为:

  • 方括号(前面可以选择性地加上至多三个空格来缩进),里面输入链接文字
  • 接着一个冒号
  • 接着一个以上的空格或制表符
  • 接着链接的网址
  • 选择性地接着 title 内容,可以用单引号、双引号或是括弧包着

下面这三种链接的定义都是相同:

[foo]: http://example.com/  "Optional Title Here"
[foo]: http://example.com/  'Optional Title Here'
[foo]: http://example.com/  (Optional Title Here)

请注意:有一个已知的问题是 Markdown.pl 1.0.1 会忽略单引号包起来的链接 title。

链接网址也可以用尖括号包起来:

[id]: <http://example.com/>  "Optional Title Here"

你也可以把 title 属性放到下一行,也可以加一些缩进,若网址太长的话,这样会比较好看:

[id]: http://example.com/longish/path/to/resource/here
    "Optional Title Here"

网址定义只有在产生链接的时候用到,并不会直接出现在文件之中。

链接辨别标签可以有字母、数字、空白和标点符号,但是并不区分大小写,因此下面两个链接是一样的:

[link text][a]
[link text][A]

隐式链接标记功能让你可以省略指定链接标记,这种情形下,链接标记会视为等同于链接文字,要用隐式链接标记只要在链接文字后面加上一个空的方括号,如果你要让 "Google" 链接到 google.com,你可以简化成:

[Google][]

然后定义链接内容:

[Google]: http://google.com/

由于链接文字可能包含空白,所以这种简化型的标记内也许包含多个单词:

Visit [Daring Fireball][] for more information.

然后接着定义链接:

[Daring Fireball]: http://daringfireball.net/

链接的定义可以放在文件中的任何一个地方,我比较偏好直接放在链接出现段落的后面,你也可以把它放在文件最后面,就像是注解一样。

下面是一个参考式链接的范例:

I get 10 times more traffic from [Google] [1] than from
[Yahoo] [2] or [MSN] [3].

  [1]: http://google.com/        "Google"
  [2]: http://search.yahoo.com/  "Yahoo Search"
  [3]: http://search.msn.com/    "MSN Search"

如果改成用链接名称的方式写:

I get 10 times more traffic from [Google][] than from
[Yahoo][] or [MSN][].

  [google]: http://google.com/        "Google"
  [yahoo]:  http://search.yahoo.com/  "Yahoo Search"
  [msn]:    http://search.msn.com/    "MSN Search"

上面两种写法都会产生下面的 HTML。

<p>I get 10 times more traffic from <a href="http://google.com/"
title="Google">Google</a> than from
<a href="http://search.yahoo.com/" title="Yahoo Search">Yahoo</a>
or <a href="http://search.msn.com/" title="MSN Search">MSN</a>.</p>

下面是用行内式写的同样一段内容的 Markdown 文件,提供作为比较之用:

I get 10 times more traffic from [Google](http://google.com/ "Google")
than from [Yahoo](http://search.yahoo.com/ "Yahoo Search") or
[MSN](http://search.msn.com/ "MSN Search").

参考式的链接其实重点不在于它比较好写,而是它比较好读,比较一下上面的范例,使用参考式的文章本身只有 81 个字符,但是用行内形式的却会增加到 176 个字元,如果是用纯 HTML 格式来写,会有 234 个字元,在 HTML 格式中,标签比文本还要多。

使用 Markdown 的参考式链接,可以让文件更像是浏览器最后产生的结果,让你可以把一些标记相关的元数据移到段落文字之外,你就可以增加链接而不让文章的阅读感觉被打断。

强调

Markdown 使用星号(*)和底线(_)作为标记强调字词的符号,被 * 或 _ 包围的字词会被转成用 <em> 标签包围,用两个 * 或 _ 包起来的话,则会被转成 <strong>,例如:

*single asterisks*

_single underscores_

**double asterisks**

__double underscores__

会转成:

<em>single asterisks</em>

<em>single underscores</em>

<strong>double asterisks</strong>

<strong>double underscores</strong>

你可以随便用你喜欢的样式,唯一的限制是,你用什么符号开启标签,就要用什么符号结束。

强调也可以直接插在文字中间:

un*frigging*believable

但是如果你的 * 和 _ 两边都有空白的话,它们就只会被当成普通的符号。

如果要在文字前后直接插入普通的星号或底线,你可以用反斜线:

\*this text is surrounded by literal asterisks\*

代码

如果要标记一小段行内代码,你可以用反引号把它包起来(`),例如:

Use the `printf()` function.

会产生:

<p>Use the <code>printf()</code> function.</p>

如果要在代码区段内插入反引号,你可以用多个反引号来开启和结束代码区段:

``There is a literal backtick (`) here.``

这段语法会产生:

<p><code>There is a literal backtick (`) here.</code></p>

代码区段的起始和结束端都可以放入一个空白,起始端后面一个,结束端前面一个,这样你就可以在区段的一开始就插入反引号:

A single backtick in a code span: `` ` ``

A backtick-delimited string in a code span: `` `foo` ``

会产生:

<p>A single backtick in a code span: <code>`</code></p>

<p>A backtick-delimited string in a code span: <code>`foo`</code></p>

在代码区段内,& 和尖括号会被自动地转成 HTML 实体,这使得插入 HTML 原始码变得很容易,Markdown 会把下面这段:

Please don't use any `<blink>` tags.

转为:

<p>Please don't use any <code>&lt;blink&gt;</code> tags.</p>

你也可以这样写:

`&#8212;` is the decimal-encoded equivalent of `&mdash;`.

以产生:

<p><code>&amp;#8212;</code> is the decimal-encoded
equivalent of <code>&amp;mdash;</code>.</p>

图片

很明显地,要在纯文字应用中设计一个「自然」的语法来插入图片是有一定难度的。

Markdown 使用一种和链接很相似的语法来标记图片,同样也允许两种样式: 行内式和参考式。

行内式的图片语法看起来像是:

![Alt text](/path/to/img.jpg)

![Alt text](/path/to/img.jpg "Optional title")

详细叙述如下:

  • 一个惊叹号 !
  • 接着一个方括号,里面放上图片的替代文字
  • 接着一个普通括号,里面放上图片的网址,最后还可以用引号包住并加上 选择性的 'title' 文字。

参考式的图片语法则长得像这样:

![Alt text][id]

「id」是图片参考的名称,图片参考的定义方式则和连结参考一样:

[id]: url/to/image  "Optional title attribute"

到目前为止, Markdown 还没有办法指定图片的宽高,如果你需要的话,你可以使用普通的 <img> 标签。


其它

Markdown 支持以比较简短的自动链接形式来处理网址和电子邮件信箱,只要是用尖括号包起来, Markdown 就会自动把它转成链接。一般网址的链接文字就和链接地址一样,例如:

<http://example.com/>

Markdown 会转为:

<a href="http://example.com/">http://example.com/</a>

邮址的自动链接也很类似,只是 Markdown 会先做一个编码转换的过程,把文字字符转成 16 进位码的 HTML 实体,这样的格式可以糊弄一些不好的邮址收集机器人,例如:

<address@example.com>

Markdown 会转成:

<a href="&#x6D;&#x61;i&#x6C;&#x74;&#x6F;:&#x61;&#x64;&#x64;&#x72;&#x65;
&#115;&#115;&#64;&#101;&#120;&#x61;&#109;&#x70;&#x6C;e&#x2E;&#99;&#111;
&#109;">&#x61;&#x64;&#x64;&#x72;&#x65;&#115;&#115;&#64;&#101;&#120;&#x61;
&#109;&#x70;&#x6C;e&#x2E;&#99;&#111;&#109;</a>

在浏览器里面,这段字串(其实是 <a href="mailto:address@example.com">address@example.com</a>)会变成一个可以点击的「address@example.com」链接。

(这种作法虽然可以糊弄不少的机器人,但并不能全部挡下来,不过总比什么都不做好些。不管怎样,公开你的信箱终究会引来广告信件的。)

反斜杠

Markdown 可以利用反斜杠来插入一些在语法中有其它意义的符号,例如:如果你想要用星号加在文字旁边的方式来做出强调效果(但不用 <em> 标签),你可以在星号的前面加上反斜杠:

\*literal asterisks\*

Markdown 支持以下这些符号前面加上反斜杠来帮助插入普通的符号:

\   反斜线
`   反引号
*   星号
_   底线
{}  花括号
[]  方括号
()  括弧
#   井字号
+   加号
-   减号
.   英文句点
!   惊叹号

感谢

感谢 leafy7382 协助翻译,hlb、Randylien 帮忙润稿,ethantw 的汉字标准格式、CSS Reset,WM 回报文字错误。

感谢 fenprace、addv。


Markdown 免费编辑器

Windows 平台

Linux 平台

Mac 平台

在线编辑器

浏览器插件

高级应用

*** 如有更好的 Markdown 免费编辑器推荐,请到这里反馈,谢谢!

basic linux linuxvb vb vf 开源vb

有趣的好玩具gambas仿照vb的开源实现

Gambas 是一个面向对象的BASIC语言分支和一个附带的IDE

Gambas 是一个面向对象的BASIC语言分支和一个附带的IDE

可惜的是只能在unix-like系统下运行

但是不得不说是有趣儿的

linux 下编译安装

http://gambas.sourceforge.net/zh/main.html#

下载

wget https://gitlab.com/gambas/gambas/-/archive/3.11.3/gambas-3.11.3.tar.bz2

解压

tar xvf  gambas-3.11.3.tar.bz2

编译

cd  gambas-3.11.3

./configure --prefix=/home/user/basic

make -j8

make install
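编译安装完成后,可执行文件在 configure 指定的 prefix 下。下面是一个运行示意(假设 IDE 的可执行文件名为 gambas3,以实际安装结果为准):

export PATH=/home/user/basic/bin:$PATH    # 把安装目录加入 PATH(prefix 即上面的 /home/user/basic)
gambas3                                   # 启动 Gambas IDE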

gambas 自带应用商店

可以下载第三方组件,以及各种 demo

测试了一下 基本上都可以执行

办公 搜索 效率

一些存活的搜索引擎

好用的搜索引擎

一些存活的搜索引擎

Rambler.ru

Rambler.ru 是俄罗斯门户网站,也是俄罗斯三大门户网站之一 。 重点是无需科学上网。这个网站的搜索引擎是谷歌提供支持。网页搜索右下角显示由谷歌技术驱动。

https://nova.rambler.ru/

Bird.so

Bird.so 关于技术问题的搜索结果来自 google 搜索、雅虎搜索、必应搜索的聚合;经过测试,优先展示 google 搜索结果

http://bird.so/

yahoo

不用多解释,昔日巨头

https://sg.search.yahoo.com

mezw

经过几个简单的关键字搜索,发现 MEZW 搜索结果与 Google 并无太大差异。优点:国内正常访问,界面简洁

https://so.mezw.com/

avira

Avira 是世界著名的杀毒软件,中文名:小红伞,来自德国。 搜索引擎基于 ASK

https://search.avira.com/

Ecosia

Ecosia 是一个基于 Bing 和 Yahoo 的绿色搜索引擎,通过自身算法优化整合 Bing 和 Yahoo 的搜索结果,展示最优的结果

https://www.ecosia.org/

jpg linux 压缩

linux下压缩jpg

linux下压缩jpg 很简单啦

jpegoptim  -m50 xxx.jpg

-m50 表示把 JPEG 的最高质量限制到 50,也就是进行有损压缩;数值越低,压缩越狠、文件越小。
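如果要批量压缩整个目录下的 jpg,可以配合 find 使用(示意写法:--strip-all 会顺带去掉 EXIF 等元数据,按需取舍):

find . -type f -name '*.jpg' -print0 | xargs -0 jpegoptim -m50 --strip-all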

nodejs php python swoole 压力测试

有意思的 Hello World

有意思的 Hello World 三种语言的压力测试

周末突发奇想 看了下swoole

便有了以下测试

python3 aiohttp rps 2000+

nodejs rps 3000+

swoole rps 13000+

被震惊了一下

被群友提醒了一下

swoole 默认会开启和 CPU 核数相同数量的进程

开启 nodejs 的cluster

rps 下降到2800

瞬间对nodejs怀疑了人生。。。
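顺带记录一下这类压测可以怎么打,比如用 ApacheBench(示例,8080 端口只是假设,以各服务实际监听为准):

ab -n 10000 -c 100 http://127.0.0.1:8080/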

linux swap 性能

swap性能优化

swap性能优化

swappiness sysctl 参数代表了内核对于交换空间的喜好(或厌恶)程度。Swappiness 可以有 0 到 100 的值。设置这个参数为较低的值会减少内存的交换,从而提升一些系统上的响应度。

/etc/sysctl.d/90-swappiness.conf
vm.swappiness=1
vm.vfs_cache_pressure=50
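写好配置文件后重新加载并确认生效(示例,文件路径即上面创建的那个):

sudo sysctl -p /etc/sysctl.d/90-swappiness.conf   # 重新加载该配置
cat /proc/sys/vm/swappiness                       # 确认当前 swappiness 值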

优先级

如果你有多于一个交换文件或交换分区,你可以给它们各自分配一个优先级值(0 到 32767)。系统会在使用较低优先级的交换区域前优先使用较高优先级的交换区域。例如,如果你有一个较快的磁盘 (/dev/sda) 和一个较慢的磁盘 (/dev/sdb),给较快的设备分配一个更高的优先级。优先级可以在 fstab 中通过 pri 参数指定:

/dev/sda1 none swap defaults,pri=100 0 0
/dev/sdb2 none swap defaults,pri=10  0 0

或者通过 swapon 的 -p (或者 --priority) 参数:

swapon -p 100 /dev/sda1
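设置完之后可以确认各交换区的优先级(swapon --show 需要较新的 util-linux,老系统直接看 /proc/swaps 即可):

swapon --show      # 列出交换区、大小与优先级
cat /proc/swaps    # 等价的查看方式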
install linux postgresql 最佳部署

【转】PostgreSQL on Linux 最佳部署手册

PostgreSQL其实安装很简单,但是那仅仅是可用,并不是好用。

作者

digoal

日期

2016-11-21

标签

Linux , PostgreSQL , Install , 最佳部署


背景

数据库的安装一直以来都挺复杂的,特别是Oracle,现在身边都还有安装Oracle数据库赚外快的事情。

PostgreSQL其实安装很简单,但是那仅仅是可用,并不是好用。很多用户使用默认的方法安装好数据库之后,然后测试一通性能,发现性能不行就不用了。

原因不用说,多方面没有优化的结果。

PostgreSQL数据库为了适应更多的场景能使用,默认的参数都设得非常保守,通常需要优化,比如检查点,SHARED BUFFER等。

本文将介绍一下PostgreSQL on Linux的最佳部署方法,其实在我的很多文章中都有相关的内容,但是没有总结成一篇文档。

OS与硬件认证检查

目的是确认服务器与OS通过certification

Intel Xeon v3和v4的cpu,能支持的RHEL的最低版本是不一样的,

详情请见:https://access.redhat.com/support/policy/intel

Intel Xeon v3和v4的cpu,能支持的Oracle Linux 的最低版本是不一样的,

详情请见:http://linux.oracle.com/pls/apex/f?p=117:1

第一:RedHat生态系统--来自RedHat的认证列表https://access.redhat.com/ecosystem

第二:Oracle Linux 对服务器和存储的硬件认证列表 http://linux.oracle.com/pls/apex/f?p=117:1

安装常用包

# yum -y install coreutils glib2 lrzsz mpstat dstat sysstat e4fsprogs xfsprogs ntp readline-devel zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make smartmontools flex bison perl-devel perl-ExtUtils* openldap-devel jadetex  openjade bzip2

配置OS内核参数

1. sysctl

注意某些参数,根据内存大小配置(已说明)

含义详见

《DBA不可不知的操作系统内核参数》

# vi /etc/sysctl.conf

# add by digoal.zhou
fs.aio-max-nr = 1048576
fs.file-max = 76724600
kernel.core_pattern= /data01/corefiles/core_%e_%u_%t_%s.%p         
# /data01/corefiles事先建好,权限777,如果是软链接,对应的目录修改为777
kernel.sem = 4096 2147483647 2147483646 512000    
# 信号量, 用 ipcs -l 或 -u 查看,每16个进程一组,每组需要17个信号量。
kernel.shmall = 107374182      
# 所有共享内存段相加大小限制(建议内存的80%)
kernel.shmmax = 274877906944   
# 最大单个共享内存段大小(建议为内存一半), >9.2的版本已大幅降低共享内存的使用
kernel.shmmni = 819200         
# 一共能生成多少共享内存段,每个PG数据库集群至少2个共享内存段
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144       
# The default setting of the socket receive buffer in bytes.
net.core.rmem_max = 4194304          
# The maximum receive socket buffer size in bytes
net.core.wmem_default = 262144       
# The default setting (in bytes) of the socket send buffer.
net.core.wmem_max = 4194304          
# The maximum send socket buffer size in bytes.
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1    
# 开启SYN Cookies。当出现SYN等待队列溢出时,启用cookie来处理,可防范少量的SYN攻击
net.ipv4.tcp_timestamps = 1    
# 减少time_wait
net.ipv4.tcp_tw_recycle = 0    
# 如果=1则开启TCP连接中TIME-WAIT套接字的快速回收,但是NAT环境可能导致连接失败,建议服务端关闭它
net.ipv4.tcp_tw_reuse = 1      
# 开启重用。允许将TIME-WAIT套接字重新用于新的TCP连接
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 409600000       
#  系统脏页到达这个值,系统后台刷脏页调度进程 pdflush(或其他) 自动将(dirty_expire_centisecs/100)秒前的脏页刷到磁盘
vm.dirty_expire_centisecs = 3000             
#  比这个值老的脏页,将被刷到磁盘。3000表示30秒。
vm.dirty_ratio = 95                          
#  如果系统进程刷脏页太慢,使得系统脏页超过内存 95 % 时,则用户进程如果有写磁盘的操作(如fsync, fdatasync等调用),则需要主动把系统脏页刷出。
#  有效防止用户进程刷脏页,在单机多实例,并且使用CGROUP限制单实例IOPS的情况下非常有效。  
vm.dirty_writeback_centisecs = 100            
#  pdflush(或其他)后台刷脏页进程的唤醒间隔, 100表示1秒。
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0     
#  在分配内存时,允许少量over malloc, 如果设置为 1, 则认为总是有足够的内存,内存较少的测试环境可以使用 1 .  
vm.overcommit_ratio = 90     
#  当overcommit_memory = 2 时,用于参与计算允许指派的内存大小。
vm.swappiness = 0            
#  关闭交换分区
vm.zone_reclaim_mode = 0     
# 禁用 numa, 或者在vmlinux中禁止. 
net.ipv4.ip_local_port_range = 40000 65535    
# 本地自动分配的TCP, UDP端口号范围
fs.nr_open=20480000
# 单个进程允许打开的文件句柄上限

# 以下参数请注意
# vm.extra_free_kbytes = 4096000
# vm.min_free_kbytes = 2097152
# 如果是小内存机器,以上两个值不建议设置
# vm.nr_hugepages = 66536    
#  建议shared buffer设置超过64GB时 使用大页,页大小 /proc/meminfo Hugepagesize
# vm.lowmem_reserve_ratio = 1 1 1
# 对于内存大于64G时,建议设置,否则建议默认值 256 256 32

2. 生效配置

sysctl -p

配置OS资源限制

# vi /etc/security/limits.conf

# nofile超过1048576的话,一定要先将sysctl的fs.nr_open设置为更大的值,并生效后才能继续设置nofile.

* soft    nofile  1024000
* hard    nofile  1024000
* soft    nproc   unlimited
* hard    nproc   unlimited
* soft    core    unlimited
* hard    core    unlimited
* soft    memlock unlimited
* hard    memlock unlimited

最好在关注一下/etc/security/limits.d目录中的文件内容,会覆盖limits.conf的配置。

已有进程的ulimit请查看/proc/pid/limits,例如

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             11286                11286                processes 
Max open files            1024                 4096                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       11286                11286                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

如果你要启动其他进程,建议退出SHELL再进一遍,确认ulimit环境配置已生效,再启动。
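重新登录后可以先快速确认一下限制是否生效(示例):

ulimit -n    # 应显示 1024000
ulimit -c    # core 应显示 unlimited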

配置OS防火墙

(建议按业务场景设置,我这里先清掉)

iptables -F

配置范例

# 私有网段
-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -j ACCEPT

selinux

如果没有这方面的需求,建议禁用

# vi /etc/sysconfig/selinux 

SELINUX=disabled
SELINUXTYPE=targeted

关闭不必要的OS服务

chkconfig --list|grep on  
关闭不必要的,例如 
chkconfig iscsi off

部署文件系统

注意SSD对齐,延长寿命,避免写放大。

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
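分区建好后可以顺手检查一下对齐情况(parted 的 align-check 子命令,最后的 1 是分区号):

parted /dev/sda align-check optimal 1    # 输出 "1 aligned" 即表示已对齐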

格式化(如果你选择ext4的话)

mkfs.ext4 /dev/sda1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -T largefile -L u01

建议使用的ext4 mount选项

# vi /etc/fstab

LABEL=u01 /u01     ext4        defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback    0 0

# mkdir /u01
# mount -a

为什么需要data=writeback?

pic

建议pg_xlog放到独立的IOPS性能贼好的块设备中。

设置SSD盘的调度为deadline

如果不是SSD的话,还是使用CFQ,否则建议使用DEADLINE。

临时设置(比如sda盘)

echo deadline > /sys/block/sda/queue/scheduler

永久设置

编辑grub文件修改块设备调度策略

vi /boot/grub.conf

elevator=deadline

注意,如果既有机械盘,又有SSD,那么可以使用/etc/rc.local,对指定磁盘修改为对应的调度策略。
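比如在 /etc/rc.local 里按盘分别设置(示意:假设 sda 是 SSD、sdb 是机械盘,盘符以实际环境为准):

echo deadline > /sys/block/sda/queue/scheduler   # SSD 用 deadline
echo cfq > /sys/block/sdb/queue/scheduler        # 机械盘用 cfq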

关闭透明大页、numa

加上前面的默认IO调度,如下

vi /boot/grub.conf

elevator=deadline numa=off transparent_hugepage=never 

编译器

建议使用较新的编译器,安装 gcc 6.2.0 参考

《PostgreSQL clang vs gcc 编译》

如果已安装好,可以分发给不同的机器。

cd ~
tar -jxvf gcc6.2.0.tar.bz2
tar -jxvf python2.7.12.tar.bz2


# vi /etc/ld.so.conf

/home/digoal/gcc6.2.0/lib
/home/digoal/gcc6.2.0/lib64
/home/digoal/python2.7.12/lib

# ldconfig

环境变量

# vi ~/env_pg.sh

export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=$1
export PGDATA=/$2/digoal/pg_root$PGPORT
export LANG=en_US.utf8
export PGHOME=/home/digoal/pgsql9.6
export LD_LIBRARY_PATH=/home/digoal/gcc6.2.0/lib:/home/digoal/gcc6.2.0/lib64:/home/digoal/python2.7.12/lib:$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/home/digoal/gcc6.2.0/bin:/home/digoal/python2.7.12/bin:/home/digoal/cmake3.6.3/bin:$PGHOME/bin:$PATH:.
export DATE=`date +"%Y%m%d%H%M"`
export MANPATH=$PGHOME/share/man:$MANPATH
export PGHOST=$PGDATA
export PGUSER=postgres
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
unalias vi

icc, clang

如果你想使用ICC或者clang编译PostgreSQL,请参考

《[转载]用intel编译器icc编译PostgreSQL》

《PostgreSQL clang vs gcc 编译》

编译PostgreSQL

建议使用NAMED_POSIX_SEMAPHORES

src/backend/port/posix_sema.c

create sem : 
named :
mySem = sem_open(semname, O_CREAT | O_EXCL,
(mode_t) IPCProtection, (unsigned) 1);


unnamed :
/*
* PosixSemaphoreCreate
*
* Attempt to create a new unnamed semaphore.
*/
static void
PosixSemaphoreCreate(sem_t * sem)
{
if (sem_init(sem, 1, 1) < 0)
elog(FATAL, "sem_init failed: %m");
}


remove sem : 

#ifdef USE_NAMED_POSIX_SEMAPHORES
/* Got to use sem_close for named semaphores */
if (sem_close(sem) < 0)
elog(LOG, "sem_close failed: %m");
#else
/* Got to use sem_destroy for unnamed semaphores */
if (sem_destroy(sem) < 0)
elog(LOG, "sem_destroy failed: %m");
#endif

编译项

. ~/env_pg.sh 1921 u01

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" ./configure --prefix=/home/digoal/pgsql9.6
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make install-world

如果你是开发环境,需要调试,建议这样编译。

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" ./configure --prefix=/home/digoal/pgsql9.6 --enable-cassert
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make install-world

初始化数据库集群

pg_xlog建议放在IOPS最好的分区。

. ~/env_pg.sh 1921 u01
initdb -D $PGDATA -E UTF8 --locale=C -U postgres -X /u02/digoal/pg_xlog$PGPORT

配置postgresql.conf

以PostgreSQL 9.6, 512G内存主机为例

追加到文件末尾即可,重复的配置项会以末尾的作为有效值。

$ vi postgresql.conf

listen_addresses = '0.0.0.0'
port = 1921
max_connections = 5000
unix_socket_directories = '.'
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 128GB                      # 1/4 主机内存
maintenance_work_mem = 2GB                  # min( 2G, (1/4 主机内存)/autovacuum_max_workers )
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 10.0
bgwriter_flush_after = 0                    # IO很好的机器,不需要考虑平滑调度
max_worker_processes = 128
max_parallel_workers_per_gather = 0         #  如果需要使用并行查询,设置为大于1 ,不建议超过 主机cores-2
old_snapshot_threshold = -1
backend_flush_after = 0  # IO很好的机器,不需要考虑平滑调度, 否则建议128~256kB
wal_level = replica
synchronous_commit = off
full_page_writes = on   # 支持原子写超过BLOCK_SIZE的块设备,在对齐后可以关闭。或者支持cow的文件系统可以关闭。
wal_buffers = 1GB       # min( 2047MB, shared_buffers/32 ) = 512MB
wal_writer_delay = 10ms
wal_writer_flush_after = 0  # IO很好的机器,不需要考虑平滑调度, 否则建议128~256kB
checkpoint_timeout = 30min  # 不建议频繁做检查点,否则XLOG会产生很多的FULL PAGE WRITE(when full_page_writes=on)。
max_wal_size = 256GB       # 建议是SHARED BUFFER的2倍
min_wal_size = 64GB        # max_wal_size/4
checkpoint_completion_target = 0.05          # 硬盘好的情况下,可以让检查点快速结束,恢复时也可以快速达到一致状态。否则建议0.5~0.9
checkpoint_flush_after = 0                   # IO很好的机器,不需要考虑平滑调度, 否则建议128~256kB
archive_mode = on
archive_command = '/bin/date'      #  后期再修改,如  'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'
max_wal_senders = 8
random_page_cost = 1.3  # IO很好的机器,不需要考虑离散和顺序扫描的成本差异
parallel_tuple_cost = 0
parallel_setup_cost = 0
min_parallel_relation_size = 0
effective_cache_size = 300GB                          # 看着办,扣掉会话连接RSS,shared buffer, autovacuum worker, 剩下的都是OS可用的CACHE。
force_parallel_mode = off
log_destination = 'csvlog'
logging_collector = on
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_timezone = 'PRC'
vacuum_defer_cleanup_age = 0
hot_standby_feedback = off                             # 建议关闭,以免备库长事务导致 主库无法回收垃圾而膨胀。
max_standby_archive_delay = 300s
max_standby_streaming_delay = 300s
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 16                            # CPU核多,并且IO好的情况下,可多点,但是注意16*autovacuum mem,会消耗较多内存,所以内存也要有基础。  
autovacuum_naptime = 45s                               # 建议不要太高频率,否则会因为vacuum产生较多的XLOG。
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 1600000000
autovacuum_multixact_freeze_max_age = 1600000000
vacuum_freeze_table_age = 1500000000
vacuum_multixact_freeze_table_age = 1500000000
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
shared_preload_libraries='pg_stat_statements'

## 如果你的数据库有非常多小文件(比如有几十万以上的表,还有索引等,并且每张表都会被访问到时),建议FD可以设多一些,避免进程需要打开关闭文件。
## 但是不要大于前面章节系统设置的ulimit -n(open files)
max_files_per_process=655360

配置pg_hba.conf

避免不必要的访问,开放允许的访问,建议务必使用密码访问。

$ vi pg_hba.conf

host replication xx 0.0.0.0/0 md5  # 流复制

host all postgres 0.0.0.0/0 reject # 拒绝超级用户从网络登录
host all all 0.0.0.0/0 md5  # 其他用户登陆

启动数据库

pg_ctl start
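启动后可以用 psql 简单验证一下(环境变量沿用前面 env_pg.sh 里的配置):

. ~/env_pg.sh 1921 u01
psql -c "SELECT version();"    # 能返回版本号说明实例已正常运行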

好了,你的PostgreSQL数据库基本上部署好了,可以愉快的玩耍了。


arangodb linux mysql nosql 分布式

arangodb-php 使用

ArangoDB 是一个开源的分布式原生多模型数据库

ArangoDB 是一个开源的分布式原生多模型数据库 (Apache 2 license)。 其理念是: 利用一个引擎,一个 query 语法,一项数据库技术,以及多个数据 模型,来最大力度满足项目的灵活性,简化技术堆栈,简化数据库运维,降低运营成本。

  1. 多数据模型:可以灵活的使用 document, graph, key-value 或者他们的组合作为你的数据模型
  2. 方便的查询:支持类似 SQL 的查询语法 AQL,或者通过 REST 以及其他查询
  3. Ruby 和 JS 扩展:没有语言范围限制,你可以从前台到后台都使用同一种语言
  4. 高性能以及低空间占用:ArangoDB 比其他 NoSQL 都要快,同时占用的空间更小
  5. 简单易用:可以在几秒内启动并且使用,同时可以通过图形界面来管理你的 ArangoDB
  6. 开源且免费:ArangoDB 遵守 Apache 协议

arangodb-php 暂时还没有什么中文资料

arangodb-php 的示例代码也不是很清楚,这里尝试了一下 CRUD 的简单操作

/**
 * Created by PhpStorm.
 * User: free
 * Date: 17-7-28
 * Time: 下午10:05
 */
//使用方法
//$connection=new arango();
//
//$id=new ArangoDocumentHandler($connection->c);
//
//
//$data=$id->get('user', 'aaaa'); //返回的是json,可先转为数组操作


//composer require triagens/arangodb


//require 'vendor/autoload.php';

use triagens\ArangoDb\Collection as ArangoCollection;
use triagens\ArangoDb\CollectionHandler as ArangoCollectionHandler;
use triagens\ArangoDb\Connection as ArangoConnection;
use triagens\ArangoDb\ConnectionOptions as ArangoConnectionOptions;
use triagens\ArangoDb\DocumentHandler as ArangoDocumentHandler;
use triagens\ArangoDb\Document as ArangoDocument;
use triagens\ArangoDb\Exception as ArangoException;
use triagens\ArangoDb\Export as ArangoExport;
use triagens\ArangoDb\ConnectException as ArangoConnectException;
use triagens\ArangoDb\ClientException as ArangoClientException;
use triagens\ArangoDb\ServerException as ArangoServerException;
use triagens\ArangoDb\Statement as ArangoStatement;
use triagens\ArangoDb\UpdatePolicy as ArangoUpdatePolicy;

class arango
{
    public $c; // ArangoConnection 实例

    public function __construct(){
        $connectionOptions = [
            // database name
            ArangoConnectionOptions::OPTION_DATABASE => 'free',
            // server endpoint to connect to
            ArangoConnectionOptions::OPTION_ENDPOINT => 'tcp://127.0.0.1:8529',
            // authorization type to use (currently supported: 'Basic')
            ArangoConnectionOptions::OPTION_AUTH_TYPE => 'Basic',
            // user for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_USER => 'root',
            // password for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_PASSWD => 'free',
            // connection persistence on server. can use either 'Close' (one-time connections) or 'Keep-Alive' (re-used connections)
            ArangoConnectionOptions::OPTION_CONNECTION => 'Keep-Alive',
            // connect timeout in seconds
            ArangoConnectionOptions::OPTION_TIMEOUT => 3,
            // whether or not to reconnect when a keep-alive connection has timed out on server
            ArangoConnectionOptions::OPTION_RECONNECT => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_CREATE => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_UPDATE_POLICY => ArangoUpdatePolicy::LAST,
        ];


// turn on exception logging (logs to whatever PHP is configured)
        ArangoException::enableLogging();


        $this->c = new ArangoConnection($connectionOptions);
//        $connect->auth()

    }
}
mirrors pip python 国内源

python pip 改国内源

python pip 改国内源

常用的国内源,例如阿里云: http://mirrors.aliyun.com/pypi/simple/ ;下面以豆瓣源为例,配置内容如下:


[global]
index-url = http://pypi.douban.com/simple
[install]
trusted-host=pypi.douban.com

以上内容保存到 ~/.pip/pip.conf 即可生效。
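如果不想改全局配置,也可以用 -i 参数临时指定一次镜像源(示例,包名任意):

pip install -i https://mirrors.aliyun.com/pypi/simple/ requests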
kv linux nosql redis

ardb 兼容redis多种储存引擎的好玩轮子

Ardb是一个新的构建在持久化Key/Value存储实现上的NoSQL DB服务实现

Ardb是一个新的构建在持久化Key/Value存储实现上的NoSQL DB服务实现,支持list/set/sorted set/bitset/hash/table等复杂的数据结构,以Redis协议对外提供访问接口。

支持多种储存引擎

git clone https://github.com/yinqiwen/ardb

storage_engine=rocksdb make
storage_engine=leveldb make
storage_engine=lmdb make
storage_engine=wiredtiger make
storage_engine=perconaft make
storage_engine=forestdb make


make dist就可以了

rocksdb facebook基于leveldb的闪存储存引擎

点击下载

leveldb Leveldb是一个google实现的非常高效的kv数据库

点击下载

lmdb是openLDAP项目开发的嵌入式(作为一个库嵌入到宿主程序)存储引擎

点击下载

wiredtiger mongodb的储存引擎

点击下载

perconaft percona公司的轮子 他家优化的各种数据库都挺不错

点击下载

ForestDB 是一个快速的 Key-Value 存储引擎,基于层次B +树单词查找树。由 Couchbase 缓存和存储团队开发。

谁知道什么鬼!! 编译失败了一个!!!!!!
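编译成功的引擎可以直接跑起来,用 redis-cli 验证协议兼容性。下面只是示意:二进制名与端口以实际编译产物和配置文件为准,这里假设是 src/ardb-server 和 16379:

./src/ardb-server ardb.conf &    # 启动 ardb(假设配置文件为 ardb.conf)
redis-cli -p 16379 ping          # 返回 PONG 即说明可以像 redis 一样访问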

aql mysql 高级操作

arangodb-aql详细操作

arangodb-aql详细操作


下面介绍以下高级操作:

  • FOR:遍历数组的所有元素。

  • RETURN:生成查询的结果。

  • FILTER:将结果限制为与任意逻辑条件匹配的元素。

  • SORT:强制排序已生成的中间结果的数组。

  • LIMIT:将结果中的元素数减少到至多指定的数字, 可以选择跳过元素 (分页)。

  • LET:将任意值赋给变量。

  • COLLECT:按一个或多个组条件对数组进行分组。也可以计数和聚合。

  • REMOVE:从集合中移除文档。

  • UPDATE:部分更新集合中的文档。

  • REPLACE:完全替换集合中的文档。

  • INSERT:将新文档插入到集合中。

  • UPSERT:更新/替换现有文档, 或在不存在的情况下创建它。

  • WITH:指定查询中使用的集合 (仅在查询开始时)。


FOR

FOR 关键字可以是循环访问数组的所有元素。一般语法是:

FOR variableName IN expression

图遍历还有一个特殊的变体:

FOR vertexVariableName, edgeVariableName, pathVariableName IN traversalExpression

每个由表达式返回的数组元素仅访问一次。在所有情况下, 表达式都需要返回一个数组。也允许空数组。当前数组元素可用于在 variableName 指定的变量中进行进一步处理。

FOR u IN users
RETURN u

返回值

[
{
  "_key": "2427801",
  "_id": "ks/2427801",
  "_rev": "_VeWiZ2i---",
  "id": 1,
  "a": "test",
  "b": [
    "aaaaaaaaaaaaaaaaa"
  ]
}
]

这将遍历阵列用户的所有元素 (注意: 此数组由本例中名为 "users" 的集合中的所有文档组成), 并使当前数组元素在变量 u 中可用. 在本例中没有修改, 只是使用 RETURN 关键字推入结果。

注意: 当迭代基于数组时, 如下所示, 文档的顺序是未定义的, 除非使用排序语句定义了显式排序顺序。

FOR 引入的变量是可用的, 直到 FOR 所放置的范围关闭。

另一个使用静态声明的值数组循环访问的示例:

FOR year IN [ 2011, 2012, 2013 ]
RETURN { "year" : year, "isLeapYear" : year % 4 == 0 && (year % 100 != 0 || year % 400 == 0) }

也允许多个语句的嵌套。当对语句进行嵌套时, 将创建由单个语句返回的数组元素的交叉乘积。

FOR u IN users
FOR l IN locations
  RETURN { "user" : u, "location" : l }

在此示例中, 有两个数组迭代: 在数组用户上的外部迭代加上在数组位置上的内部迭代。内部数组的遍历次数与外部数组中的元素数相同。对于每个迭代, 用户和位置的当前值都可用于在变量中进行进一步的处理。


RETURN

返回语句可用于生成查询结果。必须在数据选择查询的每个块的末尾指定 RETURN 语句, 否则查询结果将是未定义的。在数据修改查询中使用主级别的返回是可选的。

RETURN expression

返回语句所返回的表达式是在返回声明所放置的块中的每个迭代中生成的。这意味着返回语句的结果始终是一个数组。这包括一个空数组, 如果没有与查询匹配的文档, 则返回一个返回值作为数组的一个元素。

要在不修改的情况下返回当前迭代数组中的所有元素, 可以使用以下简单形式:

FOR variableName IN expression
RETURN variableName

当返回允许指定表达式时, 可以执行任意计算来计算结果元素。可将返回的范围中有效的任何变量用于计算。

若要循环访问名为 users 的集合的所有文档并返回完整文档, 可以编写:

FOR u IN users
RETURN u

在 for 循环的每个迭代中, 用户集合的文档被分配给一个变量, 并在本例中未修改返回。若要只返回每个文档的一个属性, 可以使用不同的返回表达式:

FOR u IN users
RETURN u.name

或者要返回多个属性, 可以像这样构造一个对象:

FOR u IN users
RETURN { name: u.name, age: u.age }

注意: 返回将关闭当前范围并消除其中的所有局部变量。在使用子查询时要记住这一点很重要。

FOR u IN users
RETURN { [ u._id ]: u.age }

在本示例中, 每个用户的文档 _id 用作表达式来计算属性键:

[
{
  "users/9883": 32
},
{
  "users/9915": 27
},
{
  "users/10074": 69
}
]

结果中每个用户对应一个只含单个键/值对的对象。这通常不是想要的。若想得到一个把用户 id 映射到年龄的单一对象,需要把各个结果用 MERGE 合并后再 RETURN 一次:

RETURN MERGE(
  FOR u IN users
    RETURN { [ u._id ]: u.age }
)

结果:

[
{
  "users/10074": 69,
  "users/9883": 32,
  "users/9915": 27
}
]

请记住,如果键表达式多次计算出相同的值,重名的键/值对在 MERGE() 之后只会保留一个。为了避免这种情况,可以不使用动态属性名,改用静态名称,并把所有文档属性作为属性值返回:

FOR u IN users
RETURN { name: u.name, age: u.age }

结果:

[
{
  "name": "John Smith",
  "age": 32
},
{
  "name": "James Hendrix",
  "age": 69
},
{
  "name": "Katie Foster",
  "age": 27
}
]

FILTER

筛选语句可用于将结果限制为与任意逻辑条件匹配的元素。

常规语法

FILTER condition

条件必须是计算结果为 false 或 true 的条件。如果条件结果为 false, 则跳过当前元素, 因此不会进一步处理它, 也不会成为结果的一部分。如果条件为 true, 则不跳过当前元素, 并且可以进一步处理。有关可以在条件中使用的比较运算符、逻辑运算符等的列表, 请参见运算符。

FOR u IN users
FILTER u.active == true && u.age < 39
RETURN u

允许在查询中指定多个筛选语句, 即使在同一块中也是如此。如果使用了多个筛选器语句, 则它们的结果将与逻辑 and 合并, 这意味着所有筛选条件都必须为真, 才能包含元素。

FOR u IN users
FILTER u.active == true
FILTER u.age < 39
RETURN u

在上面的示例中, 用户的所有数组元素的值都为 true, 且属性的值小于 39 (包括 null), 将包括在结果中。将跳过所有其他用户元素, 而不会将其包含在返回结果中。您可以参考从集合访问数据的章节来描述不存在或 null 属性的影响。

操作顺序

请注意, 筛选语句的位置可能会影响查询的结果。测试数据中有16活动用户, 例如:

FOR u IN users
FILTER u.active == true
RETURN u

我们最多可以将结果集限制为5用户:

FOR u IN users
FILTER u.active == true
LIMIT 5
RETURN u

这可能会返回 Jim、Diego、Anthony、Michael 和 Chloe 这几个用户文档。返回哪些文档是未定义的,因为没有排序语句来保证特定顺序。如果我们再添加第二个筛选语句,只返回女性……

FOR u IN users
FILTER u.active == true
LIMIT 5
FILTER u.gender == "f"
RETURN u

它可能只返回 Chloe 这一个文档,因为 LIMIT 在第二个 FILTER 之前执行:最多只有 5 个文档到达第二个筛选块,而它们并不都满足性别条件,即使集合里活跃的女性用户超过 5 个。通过添加 SORT 块可以得到更确定的结果:

FOR u IN users
FILTER u.active == true
SORT u.age ASC
LIMIT 5
FILTER u.gender == "f"
RETURN u

这将返回用户玛丽亚和玛丽。如果按年龄降序排序, 则返回索菲亚、艾玛和麦迪逊文件。但在限制之后的筛选不是很常见, 您可能需要这样的查询:

FOR u IN users
FILTER u.active == true AND u.gender == "f"
SORT u.age ASC
LIMIT 5
RETURN u

FILTER 块放置的位置很重要,因为这一个关键字同时承担了 SQL 中 WHERE 和 HAVING 两个关键字的角色。AQL 的 FILTER 对所有中间结果一视同仁,无论它来自文档属性还是 COLLECT 之后的聚合结果。


SORT

排序语句将强制在当前块中已生成的中间结果的数组排序。排序允许指定一个或多个排序条件和方向。一般语法是:

SORT expression direction

按姓氏排序的示例查询 (按升序排列), 然后是名字 (按升序排列), 然后按 id (按降序排列):

FOR u IN users
SORT u.lastName, u.firstName, u.id DESC
RETURN u

指定方向是可选的。排序表达式的默认(隐式)方向为升序。若要显式指定排序方向,可以使用关键字 ASC (升序) 和 DESC (降序)。可以使用逗号分隔多个排序条件,在这种情况下,需要为每个表达式分别指定方向。例如

SORT doc.lastName, doc.firstName

将首先按姓氏以升序排序文档, 然后按名字以升序排列。

SORT doc.lastName DESC, doc.firstName

将首先按姓氏按降序排列文档, 然后按名字以升序排序。

SORT doc.lastName, doc.firstName DESC

将首先按姓氏以升序排序文档, 然后按名字降序排列。

注意: 当迭代基于数组时, 文档的顺序始终是未定义的, 除非使用排序定义了显式排序顺序。

请注意, 常量排序表达式可用于指示不需要特定的排序顺序。在优化过程中, AQL 优化器将对常量排序表达式进行优化, 但如果优化器不需要考虑任何特定的排序顺序, 则显式指定它们可能会启用进一步优化。这在收集语句之后尤其如此, 它应该产生一个排序结果。在收集语句后指定额外的排序空值允许 AQL 优化器完全删除收集结果的 post-sorting。


LIMIT

限制语句允许使用偏移量和计数对结果数组进行切片。它将结果中的元素数减少到最多指定的数字。采用了两种一般的限制形式:

LIMIT count
LIMIT offset, count

第一种形式只指定 count,第二种形式同时指定 offset 和 count。第一种形式等价于第二种形式中 offset 为 0 的情况。

FOR u IN users
LIMIT 5
RETURN u

上面的查询返回 users 集合的前五个文档。也可以写成 LIMIT 0, 5,结果相同。但实际返回哪些文档是相当任意的,因为没有指定明确的排序顺序,因此 LIMIT 通常应该与 SORT 搭配使用。

偏移值指定应跳过结果中的多少元素。它必须是0或更大。count 值指定在结果中最多包含多少元素

FOR u IN users
SORT u.firstName, u.lastName, u.id DESC
LIMIT 2, 5
RETURN u

在上面的示例中,先对用户文档排序,跳过前两个结果,然后返回接下来的五个用户文档。

请注意, 变量和表达式不能用于偏移和计数。在查询编译时, 它们的值必须是已知的, 这意味着您只能使用数字文本和绑定参数。

在与查询中的其他操作相关的情况下, 使用限制是有意义的。特别是在筛选器之前限制操作可以显著地更改结果, 因为这些操作是按照它们在查询中的写入顺序执行的。有关详细示例, 请参见筛选器。


LET

"LET" 语句可用于将任意值赋给变量。然后在让语句所放置的范围中引入变量。

LET variableName = expression

变量在 AQL 中是不可变的,这意味着它们不能被重新赋值:

LET a = [1, 2, 3]  // initial assignment

a = PUSH(a, 4)     // syntax error, unexpected identifier
LET a = PUSH(a, 4) // parsing error, variable 'a' is assigned multiple times
LET b = PUSH(a, 4) // allowed, result: [1, 2, 3, 4]

让语句主要用于声明复杂计算, 并避免在查询的多个部分重复计算相同的值。

FOR u IN users
LET numRecommendations = LENGTH(u.recommendations)
RETURN { 
  "user" : u, 
  "numRecommendations" : numRecommendations, 
  "isPowerUser" : numRecommendations >= 10 
}

在上面的示例中, 使用 "LET" 语句计算出建议的数量, 从而避免在 RETURN 语句中计算两次值。

LET 的另一个用例是在子查询中声明一个复杂的计算,使整个查询更具可读性。

FOR u IN users
LET friends = (
FOR f IN friends 
  FILTER u.id == f.userId
  RETURN f
)
LET memberships = (
FOR m IN memberships
  FILTER u.id == m.userId
    RETURN m
)
RETURN { 
  "user" : u, 
  "friends" : friends, 
  "numFriends" : LENGTH(friends), 
  "memberShips" : memberships 
}

COLLECT

"COLLECT" 关键字可用于按一个或多个组条件对数组进行分组。

COLLECT语句将消除当前范围内的所有局部变量。COLLECT后, 只有由COLLECT本身引入的变量是可用的。

COLLECT的一般语法是:

COLLECT variableName = expression options
COLLECT variableName = expression INTO groupsVariable options
COLLECT variableName = expression INTO groupsVariable = projectionExpression options
COLLECT variableName = expression INTO groupsVariable KEEP keepVariable options
COLLECT variableName = expression WITH COUNT INTO countVariable options
COLLECT variableName = expression AGGREGATE variableName = aggregateExpression options
COLLECT AGGREGATE variableName = aggregateExpression options
COLLECT WITH COUNT INTO countVariable options

选项在所有变体中都是可选的。

对语法进行分组

"COLLECT" 的第一个语法形式仅将结果按表达式中指定的组条件分组。为了进一步处理收集到的结果, 引入了一个新的变量 (由 variableName 指定)。此变量包含组值。

下面是一个查询, 它在美国城市中找到了不同的值, 并使它们可在可变城市中使用:

FOR u IN users
  COLLECT city = u.city
  RETURN { 
    "city" : city 
  }

第二种形式与第一种相同,但额外引入了一个变量(由 groupsVariable 指定),其中包含落入该组的所有元素。它的工作方式如下:groupsVariable 是一个数组,元素个数与组内元素个数相同;数组的每个成员都是一个 JSON 对象,把 AQL 查询中定义的每个变量的值绑定到相应的属性上。请注意,这会考虑在 COLLECT 语句之前定义的所有变量,但不包括位于顶层(任何 FOR 之前)的变量,除非 COLLECT 语句本身位于顶层,此时会采用所有变量。此外,优化器可能会把语句移出 FOR 循环以提高性能。

FOR u IN users
COLLECT city = u.city INTO groups
RETURN { 
  "city" : city, 
  "usersInCity" : groups 
}

在上面的示例中,数组 users 将按属性 city 分组。结果是一个新的文档数组,每个元素对应一个不同的 u.city 值。对于每个城市,原始数组(这里是 users)中的元素会被放进变量 groups 中,这正是 INTO 子句的作用。

"COLLECT" 还允许指定多个组条件。单个组条件可以用逗号分隔:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups
RETURN { 
  "country" : country, 
  "city" : city, 
  "usersInCity" : groups 
}

在上面的示例中, 数组用户首先按国家和城市分组, 对于每个不同的国家和城市组合, 用户将被返回。

丢弃过时的变量

第三种形式的COLLECT允许使用任意 projectionExpression 改写 groupsVariable 的内容:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups = u.name
RETURN { 
  "country" : country, 
  "city" : city, 
  "userNames" : groups 
}

在上面的例子中,projectionExpression 只有 u.name。因此,每个文档只有这个属性会被复制到 groupsVariable 中。这比把作用域内的所有变量都复制进 groupsVariable 要高效得多,而后者正是不提供 projectionExpression 时的默认行为。

下面的表达式也可以用于任意计算:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups = { 
  "name" : u.name, 
  "isActive" : u.status == "active"
}
RETURN { 
  "country" : country, 
  "city" : city, 
  "usersInCity" : groups 
}

COLLECT还提供一个可选的保留子句, 可用于控制将哪些变量复制到创建的变量中。如果未指定保留子句, 则范围中的所有变量都将作为 sub-attributes 复制到 groupsVariable 中。这是安全的, 但如果范围内有许多变量或变量包含大量数据, 则会对性能产生负面影响。

下面的示例把复制进 groupsVariable 的变量限制为仅 name。作用域中的变量 u 和 someCalculation 不会被复制进 groupsVariable,因为它们没有在 KEEP 子句中列出:

FOR u IN users
LET name = u.name
LET someCalculation = u.value1 + u.value2
COLLECT city = u.city INTO groups KEEP name 
RETURN { 
  "city" : city, 
  "userNames" : groups[*].name 
}

KEEP 只有与 INTO 搭配使用才有效。KEEP 子句中只能使用有效的变量名,并且支持指定多个变量名。

组长度计算

"COLLECT" 还提供了一个特殊的计数子句, 可用于有效地确定组成员的数量。

最简单的表单只返回使其进入collect的项的数量:

FOR u IN users
COLLECT WITH COUNT INTO length
RETURN length

上述写法等价于下面的查询,但效率更高:

RETURN LENGTH(
  FOR u IN users
    RETURN 1
)

使用 count 子句还可以有效地计算每个组中的项数:

FOR u IN users
COLLECT age = u.age WITH COUNT INTO length
RETURN { 
  "age" : age, 
  "count" : length 
}

聚合

COLLECT 语句可用于对每个组进行数据聚合。如果只需要确定组的长度,可以像前面描述的那样使用 COLLECT 的 WITH COUNT INTO 变体。

对于其他聚合, 可以对收集结果运行聚合函数:

FOR u IN users
COLLECT ageGroup = FLOOR(u.age / 5) * 5 INTO g
RETURN { 
  "ageGroup" : ageGroup,
  "minAge" : MIN(g[*].u.age),
  "maxAge" : MAX(g[*].u.age)
}

REMOVE

  • REMOVE 关键字可用于从集合中移除文档。在单台服务器上,文档删除会以全有或全无(all-or-nothing)的事务方式执行;对于分片集合,整个删除操作不是事务性的。

每个 REMOVE 操作仅限于单个集合,且集合名称不能是动态的。每个 AQL 查询中,同一个集合只允许出现一条 REMOVE 语句,并且其后不能再出现访问同一集合的读取操作、遍历操作或可以读取文档的 AQL 函数。

删除操作的语法为:

REMOVE keyExpression IN collection options
  • collection 必须是要从中删除文档的集合名称。keyExpression 必须是包含文档标识的表达式:它可以是一个字符串(此时必须是文档 key),也可以是一个文档(此时必须包含 _key 属性)。

因此, 下列查询是等效的:

FOR u IN users
  REMOVE { _key: u._key } IN users

FOR u IN users
  REMOVE u._key IN users

FOR u IN users
  REMOVE u IN users

注意:删除操作可以删除任意文档,这些文档不必与前面 FOR 语句产生的文档相同:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users

FOR u IN users
  FILTER u.active == false
  REMOVE { _key: u._key } IN backup

设置查询选项

  • OPTIONS 可用于在尝试删除不存在的文档时抑制查询错误。例如,如果待删除的文档不存在,下面的查询将会失败:
FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users

通过指定 ignoreErrors 查询选项可以抑制这些错误,让查询顺利完成:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users OPTIONS { ignoreErrors: true }

为了确保查询返回时数据已经写入磁盘,可以使用 waitForSync 查询选项:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users OPTIONS { waitForSync: true }

返回已删除的文档

已删除的文档也可以由查询返回。在这种情况下,REMOVE 语句后面必须跟一个 RETURN 语句(中间允许插入 LET 语句)。REMOVE 引入了伪值 OLD 来引用已删除的文档:

REMOVE keyExpression IN collection options RETURN OLD

下面是一个示例, 它使用名为 "已删除" 的变量来捕获被删除的 文件.对于每个已删除的文档, 将返回文档密钥。

FOR u IN users
  REMOVE u IN users 
  LET removed = OLD 
  RETURN removed._key

UPDATE

  • UPDATE 关键字可用于部分更新集合中的文档。在单台服务器上,更新会以全有或全无(all-or-nothing)的事务方式执行;对于分片集合,整个更新操作不是事务性的。

每个 UPDATE 操作仅限于单个集合,且集合名称不能是动态的。每个 AQL 查询中,同一个集合只允许出现一条 UPDATE 语句,并且其后不能再出现访问同一集合的读取操作、遍历操作或可以读取文档的 AQL 函数。系统属性 _id、_key 和 _rev 不能更新,_from 和 _to 可以。

更新操作的两个语法是:

UPDATE document IN collection options
UPDATE keyExpression WITH document IN collection options
  • collection 必须是要更新文档所在集合的名称。document 必须是一个包含待更新属性和值的文档。使用第一种语法时,document 还必须包含 _key 属性来标识要更新的文档。
FOR u IN users
  UPDATE { _key: u._key, name: CONCAT(u.firstName, " ", u.lastName) } IN users

下面的查询无效,因为它不包含 _key 属性,因此无法确定要更新的文档:

FOR u IN users
  UPDATE { name: CONCAT(u.firstName, " ", u.lastName) } IN users

使用第二种语法时,keyExpression 提供文档标识:它可以是一个字符串(此时必须是文档 key),也可以是一个文档(此时必须包含 _key 属性)。

下列查询是等效的:

FOR u IN users
  UPDATE u._key WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

FOR u IN users
  UPDATE { _key: u._key } WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

FOR u IN users
  UPDATE u WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

更新操作可以更新任意文档,这些文档不必与前面 FOR 语句产生的文档相同:

FOR i IN 1..1000
  UPDATE CONCAT('test', i) WITH { foobar: true } IN users

FOR u IN users
  FILTER u.active == false
  UPDATE u WITH { status: 'inactive' } IN backup

使用文档属性的当前值

"WITH" 子句中不支持 $this "OLD" ( 在 "更新" 之后可用)。若要访问当前属性值, 可以 通常通过 "for" 循环的变量来引用文档, 这是用来 循环访问集合:

FOR doc IN users
  UPDATE doc WITH {
    fullName: CONCAT(doc.firstName, " ", doc.lastName)
  } IN users

如果没有循环(因为只更新单个文档),就没有像上面 doc 这样的变量可以用来引用正在更新的文档:

UPDATE "users/john" WITH { ... } IN users

若要在这种情况下访问当前值, 必须检索文档 并首先存储在变量中:

LET doc = DOCUMENT("users/john")
UPDATE doc WITH {
  fullName: CONCAT(doc.firstName, " ", doc.lastName)
} IN users

可以通过这种方式修改现有属性的当前值, 要递增计数器, 例如:

UPDATE doc WITH {
  karma: doc.karma + 1
} IN users

如果属性 "karma" 还不存在, "karma" 被评估为 * 为 null 。 该表达式 "null + 1" 导致新属性 "karma" 被设置为 * 1 。 如果属性确实存在, 则它会增加 * 1 *。

当然, 数组也可以被突变:

UPDATE doc WITH {
  hobbies: PUSH(doc.hobbies, "swimming")
} IN users

如果属性 "hobbies" 还不存在, 它就会被方便地初始化 作为 "[swimming]", 否则延长。

设置查询选项

  • OPTIONS 可用于在尝试更新不存在的文档或违反唯一键约束时抑制查询错误:
FOR i IN 1..1000
  UPDATE {
    _key: CONCAT('test', i)
  } WITH {
    foobar: true
  } IN users OPTIONS { ignoreErrors: true }

更新操作只会更新 document 中指定的属性,其他属性保持不变。内部属性(如 _id、_key、_rev、_from 和 _to)不能更新,即使在 document 中指定也会被忽略。更新文档会把文档的修订号改为服务器生成的新值。

当用 null 值更新某个属性时,ArangoDB 不会把该属性从文档中删除,而是存储一个 null 值。若要在更新操作中删除属性,请把它们设置为 null 并提供 keepNull 选项:

FOR u IN users
  UPDATE u WITH {
    foobar: true,
    notNeeded: null
  } IN users OPTIONS { keepNull: false }

上述查询会从文档中删除 notNeeded 属性,并正常更新 foobar 属性。

还有一个选项 mergeObjects,用于控制当同一个对象属性同时出现在 UPDATE 查询和待更新文档中时,是否合并对象内容。

以下查询会把更新后文档的 name 属性设置为与查询中指定的值完全相同,这是因为 mergeObjects 选项被设置为 false:

FOR u IN users
  UPDATE u WITH {
    name: { first: "foo", middle: "b.", last: "baz" }
  } IN users OPTIONS { mergeObjects: false }

相反,下面的查询会把原始文档中 name 属性的内容与查询中指定的值合并:

FOR u IN users
  UPDATE u WITH {
    name: { first: "foo", middle: "b.", last: "baz" }
  } IN users OPTIONS { mergeObjects: true }
  • 只存在于待更新文档的 name 中、而查询里没有的属性会被保留;两边都存在的属性会被改写为查询中指定的值。

注:mergeObjects 的默认值为 true,因此无需显式指定。

为了确保更新查询返回时数据已经持久化,可以使用 waitForSync 查询选项:

FOR u IN users
  UPDATE u WITH {
    foobar: true
  } IN users OPTIONS { waitForSync: true }

返回修改后的文档

修改后的文档也可以由查询返回。在这种情况下, "UPDATE 语句需要遵循 "RETURN" 语句 (中间的 ' LET ' 语句 也是允许的)。这些语句可以引用 pseudo-values 的 "OLD" 和 "NEW"。 "OLD" pseudo-value 指更新前的文档修订, 以及 "NEW" 是指更新后的文档修订。

"OLD" 和 "NEW" 都将包含所有文档属性, 即使没有指定 在 update 表达式中。

UPDATE document IN collection options RETURN OLD
UPDATE document IN collection options RETURN NEW
UPDATE keyExpression WITH document IN collection options RETURN OLD
UPDATE keyExpression WITH document IN collection options RETURN NEW

下面是一个示例, 它使用名为 "previous" 的变量来捕获原始 修改前的文档。对于每个已修改的文档, 将返回文档密钥。

FOR u IN users
  UPDATE u WITH { value: "test" } 
  LET previous = OLD 
  RETURN previous._key

下面的查询使用 "NEW" pseudo-value 返回更新的文档, 没有某些系统属性:

FOR u IN users
  UPDATE u WITH { value: "test" } 
  LET updated = NEW 
  RETURN UNSET(updated, "_key", "_id", "_rev")

还可以同时返回 OLD 和 NEW:

FOR u IN users
  UPDATE u WITH { value: "test" } 
  RETURN { before: OLD, after: NEW }

REPLACE

  • REPLACE 关键字可用于完全替换集合中的文档。在单台服务器上,替换操作会以全有或全无(all-or-nothing)的事务方式执行;对于分片集合,整个替换操作不是事务性的。

每个 REPLACE 操作仅限于单个集合,且集合名称不能是动态的。每个 AQL 查询中,同一个集合只允许出现一条 REPLACE 语句,并且其后不能再出现访问同一集合的读取操作、遍历操作或可以读取文档的 AQL 函数。系统属性 _id、_key 和 _rev 不能被替换,_from 和 _to 可以。

替换操作的两个语法为:

REPLACE document IN collection options
REPLACE keyExpression WITH document IN collection options
  • collection 必须是被替换文档所在集合的名称。document 是新的替换文档。使用第一种语法时,document 还必须包含 _key 属性,以标识要替换的文档。
FOR u IN users
  REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName), status: u.status } IN users

下面的查询无效,因为它不包含 _key 属性,因此无法确定要替换的文档:

FOR u IN users
  REPLACE { name: CONCAT(u.firstName, u.lastName), status: u.status } IN users

使用第二种语法时,keyExpression 提供文档标识:它可以是一个字符串(此时必须是文档 key),也可以是一个文档(此时必须包含 _key 属性)。

下列查询是等效的:

FOR u IN users
  REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE u._key WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE { _key: u._key } WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE u WITH { name: CONCAT(u.firstName, u.lastName) } IN users

替换会完全替换现有文档,但不会修改内部属性(如 _id、_key、_from 和 _to)的值。替换文档会把文档的修订号改为服务器生成的新值。

替换操作可以替换任意文档,这些文档不必与前面 FOR 语句产生的文档相同:

FOR i IN 1..1000
  REPLACE CONCAT('test', i) WITH { foobar: true } IN users

FOR u IN users
  FILTER u.active == false
  REPLACE u WITH { status: 'inactive', name: u.name } IN backup

设置查询选项

  • OPTIONS 可用于在尝试替换不存在的文档或违反唯一键约束时抑制查询错误:
FOR i IN 1..1000
  REPLACE { _key: CONCAT('test', i) } WITH { foobar: true } IN users OPTIONS { ignoreErrors: true }

为了确保替换查询返回时数据已经持久化,可以使用 waitForSync 查询选项:

FOR i IN 1..1000
  REPLACE { _key: CONCAT('test', i) } WITH { foobar: true } IN users OPTIONS { waitForSync: true }

返回修改后的文档

修改后的文档也可以由查询返回。在这种情况下, "REPLACE" 语句后面必须有一个 "RETURN" 语句 (中间的 ' LET' 语句是 允许的, 太)。"OLD" pseudo-value 可用于引用文档修订版之前 替换, "NEW" 是指替换后的文档修订。

"OLD" 和 "NEW" 都将包含所有文档属性, 即使没有指定 在 "替换" 表达式中。

REPLACE document IN collection options RETURN OLD
REPLACE document IN collection options RETURN NEW
REPLACE keyExpression WITH document IN collection options RETURN OLD
REPLACE keyExpression WITH document IN collection options RETURN NEW

下面是一个示例, 它使用名为 "previous" 的变量返回原始 修改前的文档。对于每个被替换的文档, 文档密钥将 返回:

FOR u IN users
  REPLACE u WITH { value: "test" } 
  LET previous = OLD 
  RETURN previous._key

下面的查询使用 "NEW" pseudo-value 返回替换的 文档 (不含某些系统属性):

FOR u IN users
  REPLACE u WITH { value: "test" } 
  LET replaced = NEW 
  RETURN UNSET(replaced, '_key', '_id', '_rev')

INSERT

  • INSERT 关键字可用于将新文档插入到集合中。在单台服务器上,插入操作会以全有或全无(all-or-nothing)的事务方式执行;对于分片集合,整个插入操作不是事务性的。

每个 INSERT 操作仅限于单个集合,且集合名称不能是动态的。每个 AQL 查询中,同一个集合只允许出现一条 INSERT 语句,并且其后不能再出现访问同一集合的读取操作、遍历操作或可以读取文档的 AQL 函数。

插入操作的语法为:

INSERT document IN collection options

注:也可以用 INTO 关键字代替 IN。

  • collection 必须是要插入文档的集合名称。document 是要插入的文档,它可以包含 _key 属性,也可以不包含;如果不提供 _key,ArangoDB 会自动生成一个 _key 值。插入文档时,服务器还会自动生成文档的修订号。
FOR i IN 1..100
  INSERT { value: i } IN numbers

当插入到 edge collection(边集合)时,文档中必须指定 _from 和 _to 属性:

FOR u IN users
  FOR p IN products
    FILTER u._key == p.recommendedBy
    INSERT { _from: u._id, _to: p._id } IN recommendations

设置查询选项

  • OPTIONS 可用于在违反唯一键约束时抑制查询错误:
FOR i IN 1..1000
  INSERT {
    _key: CONCAT('test', i),
    name: "test",
    foobar: true
  } INTO users OPTIONS { ignoreErrors: true }

为了确保插入查询返回时数据已经持久化,可以使用 waitForSync 查询选项:

FOR i IN 1..1000
  INSERT {
    _key: CONCAT('test', i),
    name: "test",
    foobar: true
  } INTO users OPTIONS { waitForSync: true }

返回插入的文档

插入的文档也可以由查询返回。在这种情况下,INSERT 语句后面可以跟一个 RETURN 语句(中间允许插入 LET 语句)。要引用插入的文档,INSERT 语句引入了名为 NEW 的伪值。

"NEW" 中包含的文档将包含所有属性, 即使是自动生成的 数据库 (例如 "_id"、"_key"、"_rev")。

INSERT document IN collection options RETURN NEW

下面是一个示例, 它使用名为 "inserted" 的变量返回插入的 文件.对于每个插入的文档, 将返回文档密钥:

FOR i IN 1..100
  INSERT { value: i } INTO numbers
  LET inserted = NEW 
  RETURN inserted._key

WITH

AQL 查询可以选择以 WITH 语句开头,列出查询用到的集合。所有在 WITH 中指定的集合,连同 AQL 查询解析器自动检测到的其他集合,都会在查询开始时加读锁。

WITH managers, usersHaveManagers
FOR v, e, p IN OUTBOUND 'users/1' GRAPH 'userGraph'
  RETURN { v, e, p }

document graph key-value

arangodb 安装

利用一个引擎,一个 query 语法,一项数据库技术,以及多个数据 模型,来最大力度满足项目的灵活性

ArangoDB 是一个开源的分布式原生多模型数据库 (Apache 2 license)。 其理念是: 利用一个引擎,一个 query 语法,一项数据库技术,以及多个数据 模型,来最大力度满足项目的灵活性,简化技术堆栈,简化数据库运维,降低运营成本。

  1. 多数据模型:可以灵活的使用 document, graph, key-value 或者他们的组合作为你的数据模型
  2. 方便的查询:支持类似 SQL 的查询语法 AQL,或者通过 REST 以及其他查询
  3. Ruby 和 JS 扩展:没有语言范围限制,你可以从前台到后台都使用同一种语言
  4. 高性能以及低空间占用:ArangoDB 比其他 NoSQL 都要快,同时占用的空间更小
  5. 简单易用:可以在几秒内启动并且使用,同时可以通过图形界面来管理你的 ArangoDB
  6. 开源且免费:ArangoDB 遵守 Apache 协议

我下载的是debian的

sudo dpkg -i arangodb-xxx.deb

安装完成后运行 arangosh,提示:

Connected to ArangoDB 'http+tcp://127.0.0.1:8529' version: 3.2.0 [server], database: '_system', username: 'root'

直接访问 http://127.0.0.1:8529/ 便进入后台管理 先修改root密码
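改完密码后可以用 arangosh 在命令行确认一下连接(示例):

arangosh --server.endpoint tcp://127.0.0.1:8529 --server.username root
# 进入交互环境后执行 db._databases() 可列出所有数据库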

linux nosql 集群

avocadodb/arangodb集群

一个arangodb集群由多任务运行形成集群。

一个arangodb集群由多任务运行形成集群。 arangodb本身不会启动或监视这些任务。 因此,它需要某种监控和启动这些任务的监督者。

手工配置集群是非常简单的。

一个代理角色 两个数据节点角色 一个控制器角色

一下将讲解每个角色所需的参数

集群将由 控制器->代理->数据节点的方向进行

代理与数据节点都可以是多个

代理节点 (Agency)

要启动一个代理,首先要通过agency.activate参数激活。

代理节点数量要通过agency.size=3进行设置 当然 也可以只用1个

在初始化过程中,代理必须相互查找。 这样做至少提供一个共同的agency.endpoint。 指定agency.my-address自己的ip。

单代理节点时

在cluster下配置参数

//监听ip
server.endpoint=tcp://0.0.0.0:5001
//关闭掉密码验证
server.authentication=false 
agency.activate=true 
agency.size=1 
//代理节点
agency.endpoint=tcp://127.0.0.1:5001 
agency.supervision=true 
多代理节点配置

主代理节点配置

server.endpoint=tcp://0.0.0.0:5001
//  服务器监听节点
agency.my-address=tcp://127.0.0.1:5001
//  代理监听节点
server.authentication=false
//  密码验证关闭
agency.activate=true
agency.size=3
//   代理节点数量
agency.endpoint=tcp://127.0.0.1:5001
//   监听主代理节点的ip
agency.supervision=true

子代理节点配置

server.endpoint=tcp://0.0.0.0:5002
agency.my-address=tcp://127.0.0.1:5002
server.authentication=false
agency.activate=true
agency.size=3
agency.endpoint=tcp://127.0.0.1:5001
agency.supervision=true 

所有节点agency.endpoint指向同一个ip/port

控制器和数据节点的配置

数据节点配置

server.authentication=false
server.endpoint=tcp://0.0.0.0:8529
cluster.my-address=tcp://127.0.0.1:8529
cluster.my-local-info=db1
cluster.my-role=PRIMARY
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

控制器节点配置

server.authentication=false
server.endpoint=tcp://0.0.0.0:8531
cluster.my-address=tcp://127.0.0.1:8531
cluster.my-local-info=coord1
cluster.my-role=COORDINATOR
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

启动每个节点

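下面是把上述配置直接写成命令行参数、拉起三类节点的示意(数据目录、端口都是假设值,按实际情况调整):

# 代理节点
arangod --server.endpoint tcp://0.0.0.0:5001 --server.authentication false \
        --agency.activate true --agency.size 1 \
        --agency.endpoint tcp://127.0.0.1:5001 --agency.supervision true /data/agent1

# 数据节点
arangod --server.endpoint tcp://0.0.0.0:8529 --server.authentication false \
        --cluster.my-address tcp://127.0.0.1:8529 --cluster.my-role PRIMARY \
        --cluster.agency-endpoint tcp://127.0.0.1:5001 /data/db1

# 控制器节点
arangod --server.endpoint tcp://0.0.0.0:8531 --server.authentication false \
        --cluster.my-address tcp://127.0.0.1:8531 --cluster.my-role COORDINATOR \
        --cluster.agency-endpoint tcp://127.0.0.1:5001 /data/coord1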

javascript js nosql restful

CouchDB 安装

CouchDB 是一个开源的面向文档的数据库管理系统

CouchDB 是一个开源的面向文档的数据库管理系统,可以通过 RESTful JavaScript Object Notation (JSON) API 访问。术语 “Couch” 是 “Cluster Of Unreliable Commodity Hardware” 的首字母缩写,它反映了 CouchDB 的目标具有高度可伸缩性,提供了高可用性和高可靠性,即使运行在容易出现故障的硬件上也是如此。CouchDB 最初是用 C++ 编写的,但在 2008 年 4 月,这个项目转移到 Erlang OTP 平台进行容错测试

直接下载不知道为啥总会在编译release出错,大概是没rebar配置文件

这里直接github下拉

git clone https://github.com/apache/couchdb

安装编译环境

debian

sudo apt-get --no-install-recommends -y install \
    build-essential pkg-config erlang \
    libicu-dev libmozjs185-dev libcurl4-openssl-dev

redhat

sudo yum install autoconf autoconf-archive automake \
    curl-devel erlang-asn1 erlang-erts erlang-eunit gcc-c++ \
    erlang-os_mon erlang-xmerl erlang-erl_interface help2man \
    js-devel-1.8.5 libicu-devel libtool perl-Test-Harness

生成配置文件

./configure  --disable-docs #文档也会编译出错。。谁知道咋回事呢。。不过官方文档支持直接下载。所以可有可无这里禁用掉

make 

make release
这样就编译出来了。直接执行 rel 目录下 couchdb/bin/couchdb 即可;如果报错,一般是端口被占用,去 etc/default.ini 里修改端口即可。

运行无误后  浏览器访问 http://localhost:5984/_utils/index.html#verifyinstall

执行初次安装
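安装完成后也可以用 curl 简单确认服务在跑(示例):

curl http://127.0.0.1:5984/    # 返回 {"couchdb":"Welcome", ...} 即正常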
java leveldb linux nosql rocksdb

leveldb-rocksdb java使用

rocksdb是在leveldb上开发来的

leveldb-rocksdb在java中的demo

(arangodb储存引擎用的rocksdb,然而rocksdb是在leveldb上开发来的)

rocksdb

package net.oschina.itags.gateway.service;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BaseRocksDb {
    public final static RocksDB rocksDB() throws RocksDBException {
        // 必须先加载本地库,再创建 Options 并打开数据库
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        RocksDB db = RocksDB.open(options, "./rock");   // 数据目录 ./rock
        return db;
    }
}

leveldb

package net.oschina.itags.gateway.service;
import org.iq80.leveldb.*;
import org.iq80.leveldb.impl.Iq80DBFactory;

import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;

public class BaseLevelDb {

public static final DB db() throws IOException {
    boolean cleanup = true;
    Charset charset = Charset.forName("utf-8");
    String path = "./level";

//init
    DBFactory factory = Iq80DBFactory.factory;
    File dir = new File(path);
//如果数据不需要reload则每次重启尝试清理磁盘中path下的旧数据
    if(cleanup) {
        factory.destroy(dir,null);//清除文件夹内的所有文件
    }
    Options options = new Options().createIfMissing(true);
//重新open新的db
    DB db = factory.open(dir,options);
  return db;
}
}
arangodb nodejs nosql

arangodb-node 使用

arangodb-node 使用 nodejs

Install

With NPM

npm install arangojs

With bower

bower install arangojs

From source

git clone https://github.com/arangodb/arangojs.git
cd arangojs
npm install
npm run dist

Basic usage example

// ES2015-style
import arangojs, {Database, aql} from 'arangojs';
let db1 = arangojs(); // convenience short-hand
let db2 = new Database();
let {query, bindVars} = aql`RETURN ${Date.now()}`;

// or plain old Node-style
var arangojs = require('arangojs');
var db1 = arangojs();
var db2 = new arangojs.Database();
var aql = arangojs.aql(['RETURN ', ''], Date.now());
var query = aql.query;
var bindVars = aql.bindVars;

API

All asynchronous functions take an optional Node-style callback (or "errback") as the last argument with the following arguments:

  • err: an Error object if an error occurred, or null if no error occurred.
  • result: the function's result (if applicable).

For expected API errors, err will be an instance of ArangoError. For any other error responses (4xx/5xx status code), err will be an instance of the apropriate http-errors error type. If the response indicates success but the response body could not be parsed, err will be a SyntaxError. In all of these cases the error object will additionally have a response property containing the server response object.

If Promise is defined globally, asynchronous functions return a promise if no callback is provided.

If you want to use promises in environments that don't provide the global Promise constructor, use a promise polyfill like es6-promise or inject a ES6-compatible promise implementation like bluebird into the global scope.

Examples

// Node-style callbacks
db.createDatabase('mydb', function (err, info) {
    if (err) console.error(err.stack);
    else {
        // database created
    }
});

// Using promises with ES2015 arrow functions
db.createDatabase('mydb')
.then(info => {
    // database created
}, err => console.error(err.stack));

// Using proposed ES.next "async/await" syntax
try {
    let info = await db.createDatabase('mydb');
    // database created
} catch (err) {
    console.error(err.stack);
}

Table of Contents

Database API

new Database

new Database([config]): Database

Creates a new Database instance.

If config is a string, it will be interpreted as config.url.

Arguments

  • config: Object (optional)

An object with the following properties:

  • url: string (Default: http://localhost:8529)

    Base URL of the ArangoDB server.

    If you want to use ArangoDB with HTTP Basic authentication, you can provide the credentials as part of the URL, e.g. http://user:pass@localhost:8529.

    The driver automatically uses HTTPS if you specify an HTTPS url.

    If you need to support self-signed HTTPS certificates, you may have to add your certificates to the agentOptions, e.g.:

    agentOptions: {
      ca: [
        fs.readFileSync('.ssl/sub.class1.server.ca.pem'),
        fs.readFileSync('.ssl/ca.pem')
      ]
    }

  • databaseName: string (Default: _system)

    Name of the active database.

  • arangoVersion: number (Default: 20300)

    Value of the x-arango-version header.

  • headers: Object (optional)

    An object with additional headers to send with every request.

  • agent: Agent (optional)

    An http Agent instance to use for connections.

    By default a new http.Agent (or https.Agent) instance will be created using the agentOptions.

    This option has no effect when using the browser version of arangojs.

  • agentOptions: Object (Default: see below)

    An object with options for the agent. This will be ignored if agent is also provided.

    Default: {maxSockets: 3, keepAlive: true, keepAliveMsecs: 1000}.

    In the browser version of arangojs this option can be used to pass additional options to the underlying calls of the xhr module. The options keepAlive and keepAliveMsecs have no effect in the browser but maxSockets will still be used to limit the amount of parallel requests made by arangojs.

  • promise: Class (optional)

    The Promise implementation to use or false to disable promises entirely.

    By default the global Promise constructor will be used if available.

Manipulating databases

These functions implement the HTTP API for manipulating databases.

database.useDatabase

database.useDatabase(databaseName): this

Updates the Database instance and its connection string to use the given databaseName, then returns itself.

Arguments

  • databaseName: string

The name of the database to use.

Examples

var db = require('arangojs')();
db.useDatabase('test');
// The database instance now uses the database "test".
database.createDatabase

async database.createDatabase(databaseName, [users]): Object

Creates a new database with the given databaseName.

Arguments

  • databaseName: string

Name of the database to create.

  • users: Array<Object> (optional)

If specified, the array must contain objects with the following properties:

  • username: string

    The username of the user to create for the database.

  • passwd: string (Default: empty)

    The password of the user.

  • active: boolean (Default: true)

    Whether the user is active.

  • extra: Object (optional)

    An object containing additional user data.

Examples

var db = require('arangojs')();
db.createDatabase('mydb', [{username: 'root'}])
.then(info => {
    // the database has been created
});
database.get

async database.get(): Object

Fetches the database description for the active database from the server.

Examples

var db = require('arangojs')();
db.get()
.then(info => {
    // the database exists
});
database.listDatabases

async database.listDatabases(): Array<string>

Fetches all databases from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listDatabases()
.then(names => {
    // databases is an array of database names
});
database.listUserDatabases

async database.listUserDatabases(): Array<string>

Fetches all databases accessible to the active user from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listUserDatabases()
.then(names => {
    // databases is an array of database names
});
database.dropDatabase

async database.dropDatabase(databaseName): Object

Deletes the database with the given databaseName from the server.

var db = require('arangojs')();
db.dropDatabase('mydb')
.then(() => {
    // database "mydb" no longer exists
})
database.truncate

async database.truncate([excludeSystem]): Object

Deletes all documents in all collections in the active database.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded.

Examples

var db = require('arangojs')();

db.truncate()
.then(() => {
    // all non-system collections in this database are now empty
});

// -- or --

db.truncate(false)
.then(() => {
    // I've made a huge mistake...
});

Accessing collections

These functions implement the HTTP API for accessing collections.

database.collection

database.collection(collectionName): DocumentCollection

Returns a DocumentCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
database.edgeCollection

database.edgeCollection(collectionName): EdgeCollection

Returns an EdgeCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('potatos');
database.listCollections

async database.listCollections([excludeSystem]): Array<Object>

Fetches all collections from the database and returns an array of collection descriptions.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.listCollections()
.then(collections => {
    // collections is an array of collection descriptions
    // not including system collections
});

// -- or --

db.listCollections(false)
.then(collections => {
    // collections is an array of collection descriptions
    // including system collections
});
database.collections

async database.collections([excludeSystem]): Array<Collection>

Fetches all collections from the database and returns an array of DocumentCollection and EdgeCollection instances for the collections.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.collections()
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // not including system collections
});

// -- or --

db.collections(false)
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // including system collections
});

Accessing graphs

These functions implement the HTTP API for accessing general graphs.

database.graph

database.graph(graphName): Graph

Returns a Graph instance representing the graph with the given graph name.
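
A minimal sketch, assuming a graph named "my-graph" already exists on the server (graph.get() comes from the Graph API, which is beyond this excerpt):

var db = require('arangojs')();
var graph = db.graph('my-graph');
graph.get()
.then(info => {
    // info contains the description of the graph "my-graph"
});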

database.listGraphs

async database.listGraphs(): Array<Object>

Fetches all graphs from the database and returns an array of graph descriptions.

Examples

var db = require('arangojs')();
db.listGraphs()
.then(graphs => {
    // graphs is an array of graph descriptions
});
database.graphs

async database.graphs(): Array<Graph>

Fetches all graphs from the database and returns an array of Graph instances for the graphs.

Examples

var db = require('arangojs')();
db.graphs()
.then(graphs => {
    // graphs is an array of Graph instances
});

Transactions

This function implements the HTTP API for transactions.

database.transaction

async database.transaction(collections, action, [params,] [lockTimeout]): Object

Performs a server-side transaction and returns its return value.

Arguments

  • collections: Object

An object with the following properties:

  • read: Array<string> (optional)

    An array of names (or a single name) of collections that will be read from during the transaction.

  • write: Array<string> (optional)

    An array of names (or a single name) of collections that will be written to or read from during the transaction.

  • action: string

A string evaluating to a JavaScript function to be executed on the server.

  • params: Array<any> (optional)

Parameters that will be passed to the action function.

  • lockTimeout: number (optional)

Determines how long the database will wait while attempting to gain locks on collections used by the transaction before timing out.

If collections is an array or string, it will be treated as collections.write.

Please note that while action should be a string evaluating to a well-formed JavaScript function, it's not possible to pass in a JavaScript function directly because the function needs to be evaluated on the server and will be transmitted in plain text.

For more information on transactions, see the HTTP API documentation for transactions.

Examples

var db = require('arangojs')();
var action = String(function () {
    // This code will be executed inside ArangoDB!
    var db = require('org/arangodb').db;
    return db._query('FOR user IN _users RETURN user.user').toArray();
});
db.transaction({read: '_users'}, action)
.then(result => {
    // result contains the return value of the action
});

Queries

This function implements the HTTP API for single roundtrip AQL queries.

For collection-specific queries see simple queries.

database.query

async database.query(query, [bindVars,] [opts]): Cursor

Performs a database query using the given query and bindVars, then returns a new Cursor instance for the result list.

Arguments

  • query: string

An AQL query string or a query builder instance.

  • bindVars: Object (optional)

An object defining the variables to bind the query to.

  • opts: Object (optional)

Additional options that will be passed to the query API.

If opts.count is set to true, the cursor will have a count property set to the query result count.

If query is an object with query and bindVars properties, those will be used as the values of the respective arguments instead.

Examples

var db = require('arangojs')();
var active = true;

// Using ES2015 string templates
var aql = require('arangojs').aql;
db.query(aql`
    FOR u IN _users
    FILTER u.authData.active == ${active}
    RETURN u.user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using the query builder
var qb = require('aqb');
db.query(
    qb.for('u').in('_users')
    .filter(qb.eq('u.authData.active', '@active'))
    .return('u.user'),
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using plain arguments
db.query(
    'FOR u IN _users'
    + ' FILTER u.authData.active == @active'
    + ' RETURN u.user',
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});
aql

aql(strings, ...args): Object

Template string handler for AQL queries. Converts an ES2015 template string to an object that can be passed to database.query by converting arguments to bind variables.

Any Collection instances will automatically be converted to collection bind variables.

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
var userCollection = db.collection('_users');
var role = 'admin';
db.query(aql`
    FOR user IN ${userCollection}
    FILTER user.role == ${role}
    RETURN user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});
// -- is equivalent to --
db.query(
  'FOR user IN @@value0 FILTER user.role == @value1 RETURN user',
  {'@value0': userCollection.name, value1: role}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

Managing AQL user functions

These functions implement the HTTP API for managing AQL user functions.

database.listFunctions

async database.listFunctions(): Array<Object>

Fetches a list of all AQL user functions registered with the database.

Examples

var db = require('arangojs')();
db.listFunctions()
.then(functions => {
    // functions is a list of function descriptions
})
database.createFunction

async database.createFunction(name, code): Object

Creates an AQL user function with the given name and code if it does not already exist or replaces it if a function with the same name already existed.

Arguments

  • name: string

A valid AQL function name, e.g.: "myfuncs::accounting::calculate_vat".

  • code: string

A string evaluating to a JavaScript function (not a JavaScript function object).

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
db.createFunction(
  'ACME::ACCOUNTING::CALCULATE_VAT',
  String(function (price) {
      return price * 0.19;
  })
)
// Use the new function in an AQL query with template handler:
.then(() => db.query(aql`
    FOR product IN products
    RETURN MERGE(
      {vat: ACME::ACCOUNTING::CALCULATE_VAT(product.price)},
      product
    )
`))
.then(cursor => {
    // cursor is a cursor for the query result
});
database.dropFunction

async database.dropFunction(name, [group]): Object

Deletes the AQL user function with the given name from the database.

Arguments

  • name: string

The name of the user function to drop.

  • group: boolean (Default: false)

If set to true, all functions with a name starting with name will be deleted; otherwise only the function with the exact name will be deleted.

Examples

var db = require('arangojs')();
db.dropFunction('ACME::ACCOUNTING::CALCULATE_VAT')
.then(() => {
    // the function no longer exists
});

Arbitrary HTTP routes

database.route

database.route([path,] [headers]): Route

Returns a new Route instance for the given path (relative to the database) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The database-relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

For more information on Route instances see the Route API below.

Examples

var db = require('arangojs')();
var myFoxxService = db.route('my-foxx-service');
myFoxxService.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the result of
    // POST /_db/_system/my-foxx-service/users
    // with JSON request body '{"username": "admin", "password": "hunter2"}'
});

Cursor API

Cursor instances provide an abstraction over the HTTP API's limitations. Unless a method explicitly exhausts the cursor, the driver will only fetch as many batches from the server as necessary. Like the server-side cursors, Cursor instances are incrementally depleted as they are read from.

var db = require('arangojs')();
db.query('FOR x IN 1..100 RETURN x')
// query result list: [1, 2, 3, ..., 99, 100]
.then(cursor => {
    cursor.next()
    .then(value => {
        value === 1;
        // remaining result list: [2, 3, 4, ..., 99, 100]
    });
});

cursor.count

cursor.count: number

The total number of documents in the query result. This is only available if the count option was used.
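
A minimal sketch, assuming the count option is passed through database.query's opts argument (as documented above):

var db = require('arangojs')();
db.query('FOR x IN 1..100 RETURN x', {}, {count: true})
.then(cursor => {
    cursor.count === 100;
});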

cursor.all

async cursor.all(): Array<Object>

Exhausts the cursor, then returns an array containing all values in the cursor's remaining result list.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.all()
.then(vals => {
    // vals is an array containing the entire query result
    Array.isArray(vals);
    vals.length === 5;
    vals; // [1, 2, 3, 4, 5]
    cursor.hasNext() === false;
});

cursor.next

async cursor.next(): Object

Advances the cursor and returns the next value in the cursor's remaining result list. If the cursor has already been exhausted, returns undefined instead.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.next()
.then(val => {
    val === 1;
    // remaining result list: [2, 3, 4, 5]
    return cursor.next();
})
.then(val2 => {
    val2 === 2;
    // remaining result list: [3, 4, 5]
});

cursor.hasNext

cursor.hasNext(): boolean

Returns true if the cursor has more values or false if the cursor has been exhausted.

Examples

cursor.all() // exhausts the cursor
.then(() => {
    cursor.hasNext() === false;
});

cursor.each

async cursor.each(fn): any

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn explicitly returns false.

Returns the last return value of fn.

Equivalent to Array.prototype.forEach (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it explicitly returns false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

var results = [];
function doStuff(value) {
    var VALUE = value.toUpperCase();
    results.push(VALUE);
    return VALUE;
}
// query result list: ['a', 'b', 'c']
cursor.each(doStuff)
.then(last => {
    String(results) === 'A,B,C';
    cursor.hasNext() === false;
    last === 'C';
});

cursor.every

async cursor.every(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to false.

Returns false if fn returned a value that evaluates to false, or true otherwise.

Equivalent to Array.prototype.every (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it returns a value that evaluates to false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [0, 2, 4, 5, 6]
cursor.every(even)
.then(result => {
    result === false; // 5 is not even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 6; // next value after 5
    });
});

cursor.some

async cursor.some(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to true.

Returns true if fn returned a value that evaluates to true, or false otherwise.

Equivalent to Array.prototype.some (except async).

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [1, 3, 4, 5]
cursor.some(even)
.then(result => {
    result === true; // 4 is even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 5; // next value after 4
    });
});

cursor.map

cursor.map(fn): Array<any>

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted.

Returns an array of the return values of fn.

Equivalent to Array.prototype.map (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function square(value) {
    return value * value;
}
// query result list: [1, 2, 3, 4, 5]
cursor.map(square)
.then(result => {
    result.length === 5;
    result; // [1, 4, 9, 16, 25]
    cursor.hasNext() === false;
});

cursor.reduce

cursor.reduce(fn, [accu]): any

Exhausts the cursor by reducing the values in the cursor's remaining result list with the given function fn. If accu is not provided, the first value in the cursor's remaining result list will be used instead (the function will not be invoked for that value).

Equivalent to Array.prototype.reduce (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • accu: any

    The return value of the previous call to fn. If this is the first call, accu will be set to the accu value passed to reduce or the first value in the cursor's remaining result list.

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function add(a, b) {
    return a + b;
}
// query result list: [1, 2, 3, 4, 5]

var baseline = 1000;
cursor.reduce(add, baseline)
.then(result => {
    result === (baseline + 1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

// -- or --

cursor.reduce(add)
.then(result => {
    result === (1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

Route API

Route instances provide access for arbitrary HTTP requests. This allows easy access to Foxx services and other HTTP APIs not covered by the driver itself.

route.route

route.route([path], [headers]): Route

Returns a new Route instance for the given path (relative to the current route) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
var users = route.route('users');
// equivalent to db.route('my-foxx-service/users')

route.get

async route.get([path,] [qs]): Response

Performs a GET request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.get()
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service
});

// -- or --

route.get('users')
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users
});

// -- or --

route.get('users', {group: 'admin'})
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users?group=admin
});

route.post

async route.post([path,] [body, [qs]]): Response

Performs a POST request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.post()
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service
});

// -- or --

route.post('users')
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.put

async route.put([path,] [body, [qs]]): Response

Performs a PUT request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.put()
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service
});

// -- or --

route.put('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.patch

async route.patch([path,] [body, [qs]]): Response

Performs a PATCH request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.patch()
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service
});

// -- or --

route.patch('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
    // with JSON request body {"password": "hunter2"}
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"password": "hunter2"}
});

route.delete

async route.delete([path,] [qs]): Response

Performs a DELETE request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.delete()
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service
});

// -- or --

route.delete('users/admin')
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin
});

// -- or --

route.delete('users/admin', {permanent: true})
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin?permanent=true
});

route.head

async route.head([path,] [qs]): Response

Performs a HEAD request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.head()
.then(response => {
    // response is the response object for
    // HEAD _db/_system/my-foxx-service
});

route.request

async route.request([opts]): Response

Performs an arbitrary request to the given URL and returns the server response.

Arguments

  • opts: Object (optional)

An object with any of the following properties:

  • path: string (optional)

    The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • absolutePath: boolean (Default: false)

    Whether the path is relative to the connection's base URL instead of the route.

  • body: string (optional)

    The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

    The query string for the request. If qs is an object, it will be translated to a query string.

  • headers: Object (optional)

    An object containing additional HTTP headers to be sent with the request.

  • method: string (Default: "GET")

    HTTP method of this request.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.request({
    path: 'hello-world',
    method: 'POST',
    body: {hello: 'world'},
    qs: {admin: true}
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/hello-world?admin=true
    // with JSON request body '{"hello": "world"}'
});

Collection API

These functions implement the HTTP API for manipulating collections.

The Collection API is implemented by all Collection instances, regardless of their specific type. I.e. it represents a shared subset between instances of DocumentCollection, EdgeCollection, GraphVertexCollection and GraphEdgeCollection.

Getting information about the collection

See the HTTP API documentation for details.

collection.get

async collection.get(): Object

Retrieves general information about the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.get()
.then(data => {
    // data contains general information about the collection
});
collection.properties

async collection.properties(): Object

Retrieves the collection's properties.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.properties()
.then(data => {
    // data contains the collection's properties
});
collection.count

async collection.count(): Object

Retrieves information about the number of documents in a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.count()
.then(data => {
    // data contains the collection's count
});
collection.figures

async collection.figures(): Object

Retrieves statistics for a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.figures()
.then(data => {
    // data contains the collection's figures
});
collection.revision

async collection.revision(): Object

Retrieves the collection revision ID.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.revision()
.then(data => {
    // data contains the collection's revision
});
collection.checksum

async collection.checksum([opts]): Object

Retrieves the collection checksum.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for getting collection information.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.checksum()
.then(data => {
    // data contains the collection's checksum
});

Manipulating the collection

These functions implement the HTTP API for modifying collections.

collection.create

async collection.create([properties]): Object

Creates a collection with the given properties for this collection's name, then returns the server response.

Arguments

  • properties: Object (optional)

For more information on the properties object, see the HTTP API documentation for creating collections.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
collection.create()
.then(() => {
    // the document collection "potatos" now exists
});

// -- or --

var collection = db.edgeCollection('friends');
collection.create({
    waitForSync: true // always sync document changes to disk
})
.then(() => {
    // the edge collection "friends" now exists
});
collection.load

async collection.load([count]): Object

Tells the server to load the collection into memory.

Arguments

  • count: boolean (Default: true)

If set to false, the return value will not include the number of documents in the collection (which may speed up the process).

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.load(false)
.then(() => {
    // the collection has now been loaded into memory
});
collection.unload

async collection.unload(): Object

Tells the server to remove the collection from memory.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.unload()
.then(() => {
    // the collection has now been unloaded from memory
});
collection.setProperties

async collection.setProperties(properties): Object

Replaces the properties of the collection.

Arguments

  • properties: Object

For information on the properties argument see the HTTP API for modifying collections.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.setProperties({waitForSync: true})
.then(result => {
    result.waitForSync === true;
    // the collection will now wait for data being written to disk
    // whenever a document is changed
});
collection.rename

async collection.rename(name): Object

Renames the collection. The Collection instance will automatically update its name when the rename succeeds.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rename('new-collection-name')
.then(result => {
    result.name === 'new-collection-name';
    collection.name === result.name;
    // result contains additional information about the collection
});
collection.rotate

async collection.rotate(): Object

Rotates the journal of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rotate()
.then(data => {
    // data.result will be true if rotation succeeded
});
collection.truncate

async collection.truncate(): Object

Deletes all documents in the collection in the database.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.truncate()
.then(() => {
    // the collection "some-collection" is now empty
});
collection.drop

async collection.drop(): Object

Deletes the collection from the database.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.drop()
.then(() => {
    // the collection "some-collection" no longer exists
});

Manipulating indexes

These functions implement the HTTP API for manipulating indexes.

collection.createIndex

async collection.createIndex(details): Object

Creates an arbitrary index on the collection.

Arguments

  • details: Object

For information on the possible properties of the details object, see the HTTP API for manipulating indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createIndex({type: 'cap', size: 20})
.then(index => {
    index.id; // the index's handle
    // the index has been created
});
collection.createCapConstraint

async collection.createCapConstraint(size): Object

Creates a cap constraint index on the collection.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as cap constraints are no longer supported.

Arguments

  • size: Object

An object with any of the following properties:

  • size: number (optional)

    The maximum number of documents in the collection.

  • byteSize: number (optional)

    The maximum size of active document data in the collection (in bytes).

If size is a number, it will be interpreted as size.size.

For more information on the properties of the size object see the HTTP API for creating cap constraints.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createCapConstraint(20)
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});

// -- or --

collection.createCapConstraint({size: 20})
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});
collection.createHashIndex

async collection.createHashIndex(fields, [opts]): Object

Creates a hash index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on hash indexes, see the HTTP API for hash indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createHashIndex('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createHashIndex(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
collection.createSkipList

async collection.createSkipList(fields, [opts]): Object

Creates a skiplist index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on skiplist indexes, see the HTTP API for skiplist indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createSkipList('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createSkipList(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
collection.createGeoIndex

async collection.createGeoIndex(fields, [opts]): Object

Creates a geo-spatial index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. Currently, geo indexes must cover exactly one field. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

An object containing additional properties of the index.

For more information on the properties of the opts object see the HTTP API for manipulating geo indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createGeoIndex(['longitude', 'latitude'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['longitude', 'latitude']
    // the index has been created
});

// -- or --

collection.createGeoIndex('location', {geoJson: true})
.then(index => {
    index.id; // the index's handle
    index.fields; // ['location']
    // the index has been created
});
collection.createFulltextIndex

async collection.createFulltextIndex(fields, [minLength]): Object

Creates a fulltext index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. Currently, fulltext indexes must cover exactly one field. If the value is a string, it will be wrapped in an array automatically.

  • minLength (optional):

Minimum character length of words to index. Uses a server-specific default value if not specified.

For more information on fulltext indexes, see the HTTP API for fulltext indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createFulltextIndex('description')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});

// -- or --

collection.createFulltextIndex(['description'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});
collection.index

async collection.index(indexHandle): Object

Fetches information about the index with the given indexHandle and returns it.

Arguments

  • indexHandle: string

The handle of the index to look up. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.index(index.id)
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });

    // -- or --

    collection.index(index.id.split('/')[1])
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });
});
collection.indexes

async collection.indexes(): Array<Object>

Fetches a list of all indexes on this collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(() => collection.indexes())
.then(indexes => {
    indexes.length === 1;
    // indexes contains information about the index
});
collection.dropIndex

async collection.dropIndex(indexHandle): Object

Deletes the index with the given indexHandle from the collection.

Arguments

  • indexHandle: string

The handle of the index to delete. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.dropIndex(index.id)
    .then(() => {
        // the index has been removed from the collection
    });

    // -- or --

    collection.dropIndex(index.id.split('/')[1])
    .then(() => {
        // the index has been removed from the collection
    });
});

Simple queries

These functions implement the HTTP API for simple queries.

collection.all

async collection.all([opts]): Cursor

Performs a query to fetch all documents in the collection. Returns a new Cursor instance for the query results.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning all documents.
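
A minimal sketch of draining the returned cursor ("some-collection" is just a placeholder name):

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.all()
.then(cursor => cursor.all())
.then(docs => {
    // docs is an array containing every document in the collection
});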

collection.any

async collection.any(): Object

Fetches a document from the collection at random.

collection.first

async collection.first([opts]): Array<Object>

Performs a query to fetch the first documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the first documents of a collection.

If opts is a number it is treated as opts.count.

collection.last

async collection.last([opts]): Array<Object>

Performs a query to fetch the last documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the last documents of a collection.

If opts is a number it is treated as opts.count.

collection.byExample

async collection.byExample(example, [opts]): Cursor

Performs a query to fetch all documents in the collection matching the given example. Returns a new Cursor instance for the query results.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fetching documents by example.
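
A minimal sketch, assuming a collection named "users" whose documents have role and active fields:

var db = require('arangojs')();
var collection = db.collection('users');
collection.byExample({role: 'admin', active: true})
.then(cursor => cursor.all())
.then(docs => {
    // docs contains the documents matching the example
});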

collection.firstExample

async collection.firstExample(example): Object

Fetches the first document in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

collection.removeByExample

async collection.removeByExample(example, [opts]): Object

Removes all documents in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by example.

collection.replaceByExample

async collection.replaceByExample(example, newValue, [opts]): Object

Replaces all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to replace matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for replacing documents by example.

collection.updateByExample

async collection.updateByExample(example, newValue, [opts]): Object

Updates (patches) all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to update matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for updating documents by example.
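
A minimal sketch, assuming a collection named "users" whose documents have an active field; the opts value is only illustrative:

var db = require('arangojs')();
var collection = db.collection('users');
collection.updateByExample(
    {active: false},   // example the documents must match
    {archived: true},  // patch merged into every matching document
    {waitForSync: true}
)
.then(result => {
    // result is the server response, including how many documents were updated
});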

collection.lookupByKeys

async collection.lookupByKeys(keys): Array<Object>

Fetches the documents with the given keys from the collection. Returns an array of the matching documents.

Arguments

  • keys: Array

An array of document keys to look up.
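
A minimal sketch ("users", "jcd" and "jreyes" are placeholder names):

var db = require('arangojs')();
var collection = db.collection('users');
collection.lookupByKeys(['jcd', 'jreyes'])
.then(docs => {
    // docs is an array of the documents with those keys
});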

collection.removeByKeys

async collection.removeByKeys(keys, [opts]): Object

Deletes the documents with the given keys from the collection.

Arguments

  • keys: Array

An array of document keys to delete.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by keys.

collection.fulltext

async collection.fulltext(fieldName, query, [opts]): Cursor

Performs a fulltext query in the given fieldName on the collection.

Arguments

  • fieldName: String

Name of the field to search on documents in the collection.

  • query: String

Fulltext query string to search for.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fulltext queries.
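
A minimal sketch, assuming a fulltext index already exists on the description field (see collection.createFulltextIndex above):

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.fulltext('description', 'graph')
.then(cursor => cursor.all())
.then(docs => {
    // docs contains the documents whose description matches the query "graph"
});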

Bulk importing documents

This function implements the HTTP API for bulk imports.

collection.import

async collection.import(data, [opts]): Object

Bulk imports the given data into the collection.

Arguments

  • data: Array<Array<any>> | Array<Object>

The data to import. This can be an array of documents:

[
  {key1: value1, key2: value2}, // document 1
  {key1: value1, key2: value2}, // document 2
  ...
]

Or it can be an array of value arrays following an array of keys.

[
  ['key1', 'key2'], // key names
  [value1, value2], // document 1
  [value1, value2], // document 2
  ...
]

  • opts: Object (optional) If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the documents have been synced to disk.

  • details: boolean (Default: false)

    Whether the response should contain additional details about documents that could not be imported.

  • type: string (Default: "auto")

    Indicates which format the data uses. Can be "documents", "array" or "auto".

If data is a JavaScript array, it will be transmitted as a line-delimited JSON stream. If opts.type is set to "array", it will be transmitted as regular JSON instead. If data is a string, it will be transmitted as it is without any processing.

For more information on the opts object, see the HTTP API documentation for bulk imports.

Examples

var db = require('arangojs')();
var collection = db.collection('users');

collection.import(
    [// document stream
        {username: 'admin', password: 'hunter2'},
        {username: 'jcd', password: 'bionicman'},
        {username: 'jreyes', password: 'amigo'},
        {username: 'ghermann', password: 'zeitgeist'}
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    [// array stream with header
        ['username', 'password'], // keys
        ['admin', 'hunter2'], // row 1
        ['jcd', 'bionicman'], // row 2
        ['jreyes', 'amigo'],
        ['ghermann', 'zeitgeist']
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    // raw line-delimited JSON array stream with header
    '["username", "password"]\r\n' +
    '["admin", "hunter2"]\r\n' +
    '["jcd", "bionicman"]\r\n' +
    '["jreyes", "amigo"]\r\n' +
    '["ghermann", "zeitgeist"]\r\n'
)
.then(result => {
    result.created === 4;
});

Manipulating documents

These functions implement the HTTP API for manipulating documents.

collection.replace

async collection.replace(documentHandle, newValue, [opts]): Object

Replaces the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to replace. This can either be the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the document has been synced to disk. Default: false.

  • rev: string (optional)

    Only replace the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.replace(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc1)
        .then(doc3 => {
            doc3._id === doc1._id;
            doc3._rev === doc2._rev;
            doc3.number === 2;
            doc3.hello === undefined;
        })
    });
});
collection.update

async collection.update(documentHandle, newValue, [opts]): Object

Updates (merges) the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

Handle of the document to update. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • keepNull: boolean (Default: true)

    If set to false, properties with a value of null indicate that a property should be deleted.

  • mergeObjects: boolean (Default: true)

    If set to false, object properties that already exist in the old document will be overwritten rather than merged. This does not affect arrays.

  • rev: string (optional)

    Only update the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.update(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc2)
        .then(doc3 => {
          doc3._id === doc2._id;
          doc3._rev === doc2._rev;
          doc3.number === 2;
          doc3.hello === doc.hello;
        });
    });
});
collection.remove

async collection.remove(documentHandle, [opts]): Object

Deletes the document with the given documentHandle from the collection.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to delete. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • rev: string (optional)

    Only remove the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.remove('some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});

// -- or --

collection.remove('some-collection/some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});
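
Similarly, a sketch of conditional removal with the rev option (again assuming a document 'some-collection/some-doc' exists):

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.document('some-doc')
.then(doc => collection.remove(doc._id, {rev: doc._rev}))
.then(() => {
    // the document has been removed; if its revision had changed in
    // the meantime, the removal would have failed with an error
});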
collection.list

async collection.list([type]): Array<string>

Retrieves a list of references for all documents in the collection.

Arguments

  • type: string (Default: "id")

The format of the document references:

  • if type is set to "id", each reference will be the _id of the document.
  • if type is set to "key", each reference will be the _key of the document.
  • if type is set to "path", each reference will be the URI path of the document.

DocumentCollection API

The DocumentCollection API extends the Collection API (see above) with the following methods.

documentCollection.document

async documentCollection.document(documentHandle): Object

Retrieves the document with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the document to retrieve. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');

collection.document('some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

// -- or --

collection.document('my-docs/some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

documentCollection.save

async documentCollection.save(data): Object

Creates a new document with the given data and returns an object containing the document's metadata.

Arguments

  • data: Object

The data of the new document, which may include a _key.

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');
var doc = {some: 'data'};
collection.save(doc)
.then(doc1 => {
    doc1._key; // the document's key
    doc1._id === ('my-docs/' + doc1._key);
    collection.document(doc1)
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev === doc1._rev;
        doc2.some === 'data';
    });
});
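
Because the data may include a _key, here is a brief sketch of saving with an explicit key (the key 'my-key' is only an assumption):

collection.save({_key: 'my-key', some: 'data'})
.then(doc => {
    doc._key === 'my-key';
    doc._id === 'my-docs/my-key';
});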

EdgeCollection API

The EdgeCollection API extends the Collection API (see above) with the following methods.

edgeCollection.edge

async edgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

edgeCollection.save

async edgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the documents fromId and toId with the given data and returns an object containing the edge's metadata.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
var edge = {some: 'data'};

collection.save(
    edge,
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge1 => {
    edge1._key; // the edge's key
    edge1._id === ('edges/' + edge1._key);
    collection.edge(edge1)
    .then(edge2 => {
        edge2._key === edge1._key;
        edge2._rev === edge1._rev;
        edge2.some === edge.some;
        edge2._from === 'vertices/start-vertex';
        edge2._to === 'vertices/end-vertex';
    });
});

// -- or --

collection.save({
    some: 'data',
    _from: 'vertices/start-vertex',
    _to: 'vertices/end-vertex'
})
.then(edge => {
    // ...
})

edgeCollection.edges

async edgeCollection.edges(documentHandle): Array<Object>

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

edgeCollection.inEdges

async edgeCollection.inEdges(documentHandle): Array<Object>

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

edgeCollection.outEdges

async edgeCollection.outEdges(documentHandle): Array<Object>

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

edgeCollection.traversal

async edgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => collection.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

Graph API

These functions implement the HTTP API for manipulating graphs.

graph.get

async graph.get(): Object

Retrieves general information about the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.get()
.then(data => {
    // data contains general information about the graph
});

graph.create

async graph.create(properties): Object

Creates a graph with the given properties for this graph's name, then returns the server response.

Arguments

  • properties: Object

For more information on the properties object, see the HTTP API documentation for creating graphs.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.create({
    edgeDefinitions: [
        {
            collection: 'edges',
            from: [
                'start-vertices'
            ],
            to: [
                'end-vertices'
            ]
        }
    ]
})
.then(graph => {
    // graph is a Graph instance
    // for more information see the Graph API below
});

graph.drop

async graph.drop([dropCollections]): Object

Deletes the graph from the database.

Arguments

  • dropCollections: boolean (optional)

If set to true, the collections associated with the graph will also be deleted.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.drop()
.then(() => {
    // the graph "some-graph" no longer exists
});
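
A sketch of the dropCollections flag; whether dropping the collections is a good idea depends on whether they are used elsewhere:

graph.drop(true)
.then(() => {
    // the graph "some-graph" no longer exists
    // and its collections have been dropped as well
});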

Manipulating vertices

graph.vertexCollection

graph.vertexCollection(collectionName): GraphVertexCollection

Returns a new GraphVertexCollection instance with the given name for this graph.

Arguments

  • collectionName: string

Name of the vertex collection.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.name === 'vertices';
// collection is a GraphVertexCollection
graph.addVertexCollection

async graph.addVertexCollection(collectionName): Object

Adds the collection with the given collectionName to the graph's vertex collections.

Arguments

  • collectionName: string

Name of the vertex collection to add to the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.addVertexCollection('vertices')
.then(() => {
    // the collection "vertices" has been added to the graph
});
graph.removeVertexCollection

async graph.removeVertexCollection(collectionName, [dropCollection]): Object

Removes the vertex collection with the given collectionName from the graph.

Arguments

  • collectionName: string

Name of the vertex collection to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the collection will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeVertexCollection('vertices')
.then(() => {
    // collection "vertices" has been removed from the graph
});

// -- or --

graph.removeVertexCollection('vertices', true)
.then(() => {
    // collection "vertices" has been removed from the graph
    // the collection has also been dropped from the database
    // this may have been a bad idea
});

Manipulating edges

graph.edgeCollection

graph.edgeCollection(collectionName): GraphEdgeCollection

Returns a new GraphEdgeCollection instance with the given name bound to this graph.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.name === 'edges';
// collection is a GraphEdgeCollection
graph.addEdgeDefinition

async graph.addEdgeDefinition(definition): Object

Adds the given edge definition to the graph.

Arguments

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
graph.addEdgeDefinition({
    collection: 'edges',
    from: ['vertices'],
    to: ['vertices']
})
.then(() => {
    // the edge definition has been added to the graph
});
graph.replaceEdgeDefinition

async graph.replaceEdgeDefinition(collectionName, definition): Object

Replaces the edge definition for the edge collection named collectionName with the given definition.

Arguments

  • collectionName: string

Name of the edge collection to replace the definition of.

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges", "vertices" and "more-vertices" exist
var graph = db.graph('some-graph');
graph.replaceEdgeDefinition('edges', {
    collection: 'edges',
    from: ['vertices'],
    to: ['more-vertices']
})
.then(() => {
    // the edge definition has been modified
});
graph.removeEdgeDefinition

async graph.removeEdgeDefinition(definitionName, [dropCollection]): Object

Removes the edge definition with the given definitionName from the graph.

Arguments

  • definitionName: string

Name of the edge definition to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the edge collection associated with the definition will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeEdgeDefinition('edges')
.then(() => {
    // the edge definition has been removed
});

// -- or --

graph.removeEdgeDefinition('edges', true)
.then(() => {
    // the edge definition has been removed
    // and the edge collection "edges" has been dropped
    // this may have been a bad idea
});
graph.traversal

async graph.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in any of the edge collections of this graph.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the graph or a document (i.e. an object with an _id property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript functions, it's not possible to pass in JavaScript functions directly because the functions need to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => graph.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

GraphVertexCollection API

The GraphVertexCollection API extends the Collection API (see above) with the following methods.

graphVertexCollection.remove

async graphVertexCollection.remove(documentHandle): Object

Deletes the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to remove. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.remove('some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

// -- or --

collection.remove('vertices/some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

graphVertexCollection.vertex

async graphVertexCollection.vertex(documentHandle): Object

Retrieves the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to retrieve. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.vertex('some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

// -- or --

collection.vertex('vertices/some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

graphVertexCollection.save

async graphVertexCollection.save(data): Object

Creates a new vertex with the given data.

Arguments

  • data: Object

The data of the vertex.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.save({some: 'data'})
.then(doc => {
    doc._key; // the document's key
    doc._id === ('vertices/' + doc._key);
    doc.some === 'data';
});

GraphEdgeCollection API

The GraphEdgeCollection API extends the Collection API (see above) with the following methods.

graphEdgeCollection.remove

async graphEdgeCollection.remove(documentHandle): Object

Deletes the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to remove. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.remove('some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

// -- or --

collection.remove('edges/some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

graphEdgeCollection.edge

async graphEdgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

graphEdgeCollection.save

async graphEdgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the vertices fromId and toId with the given data.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.save(
    {some: 'data'},
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge => {
    edge._key; // the edge's key
    edge._id === ('edges/' + edge._key);
    edge.some === 'data';
    edge._from === 'vertices/start-vertex';
    edge._to === 'vertices/end-vertex';
});
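
As noted above, the _from and _to properties can be supplied in the data instead of passing fromId and toId; for example:

collection.save({
    some: 'data',
    _from: 'vertices/start-vertex',
    _to: 'vertices/end-vertex'
})
.then(edge => {
    // ...
});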

graphEdgeCollection.edges

async graphEdgeCollection.edges(documentHandle): Array<Object>

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

graphEdgeCollection.inEdges

async graphEdgeCollection.inEdges(documentHandle): Array<Object>

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

graphEdgeCollection.outEdges

async graphEdgeCollection.outEdges(documentHandle): Array<Object>

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

graphEdgeCollection.traversal

async graphEdgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();