Articles about: nosql

java nosql orientdb sql

Calling OrientDB's RESTful API

orientdb

I originally wanted to use the Python driver, but it has not been updated in a long time.

So I fell back to the HTTP API.

    • It seems to work well enough:
import requests

class SqlSdk():
    """Minimal wrapper around OrientDB's HTTP command API."""

    def __init__(self, url="http://192.168.1.91:2480", name="free", password="free", database="rpg"):
        self.url = url
        self.name = name
        self.password = password
        self.database = database
        self.auth = (self.name, self.password)

    def exec(self, sql, params=None):
        # POST the command to /command/<database>/sql with HTTP Basic auth.
        # `json=` serializes the body and sets the Content-Type header.
        return requests.post(
            f"{self.url}/command/{self.database}/sql",
            auth=self.auth,
            json={"command": sql, "parameters": params or []},
        )
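As a usage sketch, this is the JSON body that a call like `exec("SELECT FROM MyClass WHERE id = ?", [1])` sends to `{url}/command/{database}/sql` (the class name and parameter value here are hypothetical examples):

```python
import json

# Body posted for a parameterized command:
body = {
    "command": "SELECT FROM MyClass WHERE id = ?",
    "parameters": [1],
}
payload = json.dumps(body)
print(payload)
```

OrientDB replies with a JSON document whose `result` field holds the matching records, so `response.json()["result"]` is typically what you want.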

The following is from the official documentation.




Introduction

When it comes to query languages, SQL is the most widely recognized standard. The majority of developers have experience with and are comfortable with SQL. For this reason OrientDB uses SQL as its query language and adds some extensions to enable graph functionality. There are a few differences between the standard SQL syntax and that supported by OrientDB, but for the most part it should feel very natural. The differences are covered in the OrientDB SQL dialect section of this page.

If you are looking for the most efficient way to traverse a graph, we suggest using the SQL MATCH statement instead.

Many SQL commands share the WHERE condition. Keywords and class names in OrientDB SQL are case insensitive. Field names and values are case sensitive. In the following examples keywords are in uppercase but this is not strictly required.

If you are not yet familiar with SQL, we suggest taking the SQL course on Khan Academy.

For example, if you have a class MyClass with a field named id, then the following SQL statements are equivalent:

SELECT FROM MyClass WHERE id = 1
select from myclass where id = 1

The following is NOT equivalent. Notice that the field name 'ID' is not the same as 'id'.

SELECT FROM MyClass WHERE ID = 1

Automatic usage of indexes

OrientDB allows you to execute queries against any field, indexed or not. The SQL engine automatically recognizes whether any indexes can be used to speed up execution. You can also query an index directly by using INDEX:<index-name> as a target. Example:

SELECT FROM INDEX:myIndex WHERE key = 'Jay'

Extra resources

OrientDB SQL dialect

OrientDB supports SQL as a query language, with some differences from the standard. Orient Technologies decided to avoid creating yet another query language; instead, we started from familiar SQL and extended it to work with graphs. We prefer to focus on standards.

If you want to learn SQL, there are many online courses, such as:

  • the online course Introduction to Databases by Jennifer Widom from Stanford University
  • Introduction to SQL at W3Schools
  • Beginner guide to SQL
  • SQLCourse.com
  • the YouTube channel Basic SQL Training by Joey Blue

To learn more, see OrientDB SQL Syntax.


No JOINs

The most important difference between OrientDB and a Relational Database is that relationships are represented by LINKS instead of JOINs.

For this reason, the classic JOIN syntax is not supported. OrientDB uses the "dot (.) notation" to navigate LINKS. Example 1 : In SQL you might create a join such as:

SELECT *
FROM Employee A, City B
WHERE A.city = B.id
AND B.name = 'Rome'

In OrientDB, an equivalent operation would be:

SELECT * FROM Employee WHERE city.name = 'Rome'

This is much more straightforward and powerful! If you use multiple JOINs, the OrientDB SQL equivalent is an even larger benefit. Example 2: In SQL you might create a join such as:

SELECT *
FROM Employee A, City B, Country C
WHERE A.city = B.id
AND B.country = C.id
AND C.name = 'Italy'

In OrientDB, an equivalent operation would be:

SELECT * FROM Employee WHERE city.country.name = 'Italy'

Projections

In SQL, projections are mandatory and you can use the star character * to include all of the fields. With OrientDB this type of projection is optional. Example: In SQL to select all of the columns of Customer you would write:

SELECT * FROM Customer

In OrientDB, the * is optional:

SELECT FROM Customer

See SQL projections

DISTINCT

In OrientDB v 3.0 you can use the DISTINCT keyword exactly as in a relational database:

SELECT DISTINCT name FROM City

Until v 2.2, the DISTINCT keyword was not allowed; there was a DISTINCT() function instead, with limited capabilities:

//legacy

SELECT DISTINCT(name) FROM City

HAVING

OrientDB does not support the HAVING keyword, but with a nested query it's easy to obtain the same result. Example in SQL:

SELECT city, sum(salary) AS salary
FROM Employee
GROUP BY city
HAVING salary > 1000

This groups all of the salaries by city and keeps only the groups whose total salary is greater than 1,000. In OrientDB, the HAVING conditions go in the predicate of an outer SELECT statement:

SELECT FROM ( SELECT city, SUM(salary) AS salary FROM Employee GROUP BY city ) WHERE salary > 1000

Select from multiple targets

OrientDB allows only one class as the target (classes are equivalent to tables in this discussion), as opposed to SQL, which allows many tables. If you want to select from 2 classes, you have to execute 2 sub-queries and join them with the UNIONALL function. In SQL you might write:

SELECT FROM E, V

In OrientDB, you can accomplish this with a few variable definitions and by applying the expand function to the union:

SELECT EXPAND( $c ) LET $a = ( SELECT FROM E ), $b = ( SELECT FROM V ), $c = UNIONALL( $a, $b )
arangodb nosql

Tuning ArangoDB configuration

Optimizations to make when running ArangoDB in production. The prerequisite, of course, is Linux.

Startup parameters

Interleave memory allocations across NUMA nodes:
numactl --interleave=all 


When writing a systemd service, the absolute path is required:
ExecStart=/usr/bin/numactl --interleave=all
Transparent huge pages (memory reclaim):
sudo bash -c "echo madvise >/sys/kernel/mm/transparent_hugepage/enabled"
sudo bash -c "echo madvise >/sys/kernel/mm/transparent_hugepage/defrag"
Memory overcommit mode:
sudo bash -c "echo 2 > /proc/sys/vm/overcommit_memory"
zone_reclaim_mode (reportedly cache-related):
sudo bash -c "echo 0 >/proc/sys/vm/zone_reclaim_mode"
Maximum number of memory maps (for many threads):
value = CPU cores x 8 x 1000
sudo bash -c "sysctl -w 'vm.max_map_count=320000'"
Disable the glibc++ memory pool:
export GLIBCXX_FORCE_NEW=1
Overcommit ratio:
/proc/sys/vm/overcommit_ratio = 100 * (max(0, (RAM - Swap Space)) / RAM)

sudo bash -c "echo 97 > /proc/sys/vm/overcommit_ratio"
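The two formulas above can be sketched as plain arithmetic; the core count and memory sizes below are hypothetical examples, not recommendations:

```python
def max_map_count(cpu_cores):
    # value = CPU cores x 8 x 1000
    return cpu_cores * 8 * 1000

def overcommit_ratio(ram_gb, swap_gb):
    # 100 * (max(0, RAM - swap) / RAM)
    return int(100 * max(0, ram_gb - swap_gb) / ram_gb)

print(max_map_count(40))        # the 320000 used in the sysctl line above
print(overcommit_ratio(64, 2))  # e.g. 64 GB RAM with 2 GB swap
```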
arangodb linux mysql nosql distributed

Using arangodb-php

ArangoDB is an open-source, distributed, native multi-model database

ArangoDB is an open-source, distributed, native multi-model database (Apache 2 license). Its philosophy: use one engine, one query language, and one database technology with multiple data models, to maximize project flexibility, simplify the technology stack, simplify database operations, and reduce operating costs.

  1. Multiple data models: flexibly use document, graph, key-value, or any combination of them as your data model
  2. Convenient querying: supports AQL, an SQL-like query language, as well as queries via REST and other interfaces
  3. Ruby and JS extensions: no language restrictions, so you can use the same language from front end to back end
  4. High performance and low footprint: ArangoDB claims to be faster than other NoSQL databases while using less space
  5. Easy to use: up and running in seconds, with a graphical interface to manage your ArangoDB
  6. Open source and free: ArangoDB uses the Apache license

There is not much Chinese-language material on arangodb-php yet.

The arangodb-php sample code is not very clear either, so here is an attempt at simple CRUD operations.

/**
 * Created by PhpStorm.
 * User: free
 * Date: 17-7-28
 * Time: 22:05
 */
// Usage:
//$connection = new arango();
//
//$id = new ArangoDocumentHandler($connection->c);
//
//
//$data = $id->get('user', 'aaaa'); // returns JSON; convert it to an array first for easier handling


//composer require triagens/arangodb


//require 'vendor/autoload.php';

use triagens\ArangoDb\Collection as ArangoCollection;
use triagens\ArangoDb\CollectionHandler as ArangoCollectionHandler;
use triagens\ArangoDb\Connection as ArangoConnection;
use triagens\ArangoDb\ConnectionOptions as ArangoConnectionOptions;
use triagens\ArangoDb\DocumentHandler as ArangoDocumentHandler;
use triagens\ArangoDb\Document as ArangoDocument;
use triagens\ArangoDb\Exception as ArangoException;
use triagens\ArangoDb\Export as ArangoExport;
use triagens\ArangoDb\ConnectException as ArangoConnectException;
use triagens\ArangoDb\ClientException as ArangoClientException;
use triagens\ArangoDb\ServerException as ArangoServerException;
use triagens\ArangoDb\Statement as ArangoStatement;
use triagens\ArangoDb\UpdatePolicy as ArangoUpdatePolicy;

class arango
{
    public $c;

    public function __construct(){
        $connectionOptions = [
            // database name
            ArangoConnectionOptions::OPTION_DATABASE => 'free',
            // server endpoint to connect to
            ArangoConnectionOptions::OPTION_ENDPOINT => 'tcp://127.0.0.1:8529',
            // authorization type to use (currently supported: 'Basic')
            ArangoConnectionOptions::OPTION_AUTH_TYPE => 'Basic',
            // user for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_USER => 'root',
            // password for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_PASSWD => 'free',
            // connection persistence on server. can use either 'Close' (one-time connections) or 'Keep-Alive' (re-used connections)
            ArangoConnectionOptions::OPTION_CONNECTION => 'Keep-Alive',
            // connect timeout in seconds
            ArangoConnectionOptions::OPTION_TIMEOUT => 3,
            // whether or not to reconnect when a keep-alive connection has timed out on server
            ArangoConnectionOptions::OPTION_RECONNECT => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_CREATE => true,
            // update policy to use when updating documents
            ArangoConnectionOptions::OPTION_UPDATE_POLICY => ArangoUpdatePolicy::LAST,
        ];


// turn on exception logging (logs to whatever PHP is configured)
        ArangoException::enableLogging();


        $this->c = new ArangoConnection($connectionOptions);
//        $connect->auth()

    }
}
kv linux nosql redis

Ardb: a fun Redis-compatible project supporting multiple storage engines

Ardb is a new NoSQL DB server built on persistent key/value storage engines

Ardb is a new NoSQL DB server built on persistent key/value storage engines. It supports complex data structures such as list/set/sorted set/bitset/hash/table and exposes them through the Redis protocol.

Multiple storage engines are supported:

git clone https://github.com/yinqiwen/ardb

storage_engine=rocksdb make
storage_engine=leveldb make
storage_engine=lmdb make
storage_engine=wiredtiger make
storage_engine=perconaft make
storage_engine=forestdb make


Then run make dist and you are done.

rocksdb: Facebook's flash-oriented storage engine, based on LevelDB


leveldb: a very efficient key-value database implemented by Google


lmdb: an embedded storage engine (embedded into the host program as a library) developed by the OpenLDAP project


wiredtiger: MongoDB's storage engine


perconaft: Percona's engine; that company's tuned variants of various databases are all quite good


ForestDB: a fast key-value storage engine based on a hierarchical B+-tree trie, developed by the Couchbase cache and storage team.

Who knows what went wrong: one of them failed to compile for me!

linux nosql cluster

An avocadodb/arangodb cluster

An ArangoDB cluster is formed by multiple tasks running together.

An ArangoDB cluster is formed by multiple tasks running together. ArangoDB itself does not start or monitor these tasks, so it needs some kind of supervisor to start and monitor them.

Configuring a cluster by hand is quite simple.

One agency role, two data node roles, and one coordinator role.

The following explains the parameters each role requires.

The cluster is wired up in the direction coordinator -> agency -> data nodes.

There can be more than one agent and more than one data node.

Agency nodes (Agency)

To start an agent, first activate it with the agency.activate parameter.

Set the number of agency nodes with agency.size=3; of course, a single agent also works.

During initialization, the agents must find each other. For this, provide at least one common agency.endpoint, and specify the agent's own IP with agency.my-address.

Single agency node

Configure these parameters for the cluster:

//listen endpoint
server.endpoint=tcp://0.0.0.0:5001
//disable authentication
server.authentication=false 
agency.activate=true 
agency.size=1 
//agency endpoint
agency.endpoint=tcp://127.0.0.1:5001 
agency.supervision=true 
Multiple agency nodes

Lead agency node configuration

server.endpoint=tcp://0.0.0.0:5001
//  server listen endpoint
agency.my-address=tcp://127.0.0.1:5001
//  this agent's own endpoint
server.authentication=false
//  authentication disabled
agency.activate=true
agency.size=3
//   number of agency nodes
agency.endpoint=tcp://127.0.0.1:5001
//   endpoint of the lead agency node
agency.supervision=true

Follower agency node configuration

server.endpoint=tcp://0.0.0.0:5002
agency.my-address=tcp://127.0.0.1:5002
server.authentication=false
agency.activate=true
agency.size=3
agency.endpoint=tcp://127.0.0.1:5001
agency.supervision=true 

Every node's agency.endpoint points to the same IP/port.

Coordinator and data node configuration

Data node configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8529
cluster.my-address=tcp://127.0.0.1:8529
cluster.my-local-info=db1
cluster.my-role=PRIMARY
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Coordinator node configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8531
cluster.my-address=tcp://127.0.0.1:8531
cluster.my-local-info=coord1
cluster.my-role=COORDINATOR
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Start each node.

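As a sketch of the startup step, the config fragments above can be turned into arangod command lines using the usual --section.option value CLI spelling of these options. This is illustration only; in practice a supervisor such as systemd should start and watch the processes:

```python
def arangod_cmd(options):
    # Each `key=value` config line becomes a `--key value` CLI argument.
    cmd = ["arangod"]
    for key, value in options:
        cmd += [f"--{key}", value]
    return cmd

# Single agency node, as configured above.
agent = arangod_cmd([
    ("server.endpoint", "tcp://0.0.0.0:5001"),
    ("server.authentication", "false"),
    ("agency.activate", "true"),
    ("agency.size", "1"),
    ("agency.endpoint", "tcp://127.0.0.1:5001"),
    ("agency.supervision", "true"),
])
print(" ".join(agent))
```

The same helper applies unchanged to the data node and coordinator option lists.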

javascript js nosql restful

Installing CouchDB

CouchDB is an open-source document-oriented database management system

CouchDB is an open-source document-oriented database management system that can be accessed through a RESTful JavaScript Object Notation (JSON) API. The term "Couch" is an acronym for "Cluster Of Unreliable Commodity Hardware," reflecting CouchDB's goal of being highly scalable and offering high availability and reliability, even on failure-prone hardware. CouchDB was originally written in C++, but in April 2008 the project moved to the Erlang/OTP platform for its fault tolerance.

For some reason, building the release from the direct download always fails, probably because the rebar config files are missing.

So clone it straight from GitHub instead:

git clone https://github.com/apache/couchdb

Install the build dependencies

debian

sudo apt-get --no-install-recommends -y install \
    build-essential pkg-config erlang \
    libicu-dev libmozjs185-dev libcurl4-openssl-dev

redhat

sudo yum install autoconf autoconf-archive automake \
    curl-devel erlang-asn1 erlang-erts erlang-eunit gcc-c++ \
    erlang-os_mon erlang-xmerl erlang-erl_interface help2man \
    js-devel-1.8.5 libicu-devel libtool perl-Test-Harness

Generate the configuration

./configure --disable-docs # the docs also fail to build, who knows why; the official docs can be downloaded directly anyway, so they are disabled here

make 

make release
That completes the build. Run the couchdb binary under rel/couchdb/bin. If it reports an error, it is usually a port conflict; change the port in etc/default.ini.

Once it runs without errors, open http://localhost:5984/_utils/index.html#verifyinstall in a browser to perform the first-time setup.
java leveldb linux nosql rocksdb

Using LevelDB and RocksDB from Java

RocksDB was developed on top of LevelDB

A demo of LevelDB and RocksDB in Java.

(ArangoDB's storage engine uses RocksDB, and RocksDB in turn was developed on top of LevelDB.)

rocksdb

package net.oschina.itags.gateway.service;

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BaseRocksDb {
    public final static RocksDB rocksDB() throws RocksDBException {
        // Load the native library before using any other RocksDB classes.
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        RocksDB db = RocksDB.open(options, "./rock");
        return db;
    }
}

leveldb

package net.oschina.itags.gateway.service;

import org.iq80.leveldb.*;
import org.iq80.leveldb.impl.Iq80DBFactory;

import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;

public class BaseLevelDb {

    public static final DB db() throws IOException {
        boolean cleanup = true;
        Charset charset = Charset.forName("utf-8");
        String path = "./level";

        // init
        DBFactory factory = Iq80DBFactory.factory;
        File dir = new File(path);
        // If the data does not need to be reloaded, clear the old data under path on every restart.
        if (cleanup) {
            factory.destroy(dir, null); // removes every file in the directory
        }
        Options options = new Options().createIfMissing(true);
        // open a fresh db
        DB db = factory.open(dir, options);
        return db;
    }
}
arangodb nodejs nosql

Using arangodb-node

Using the arangojs driver from Node.js

Install

With NPM

npm install arangojs

With bower

bower install arangojs

From source

git clone https://github.com/arangodb/arangojs.git
cd arangojs
npm install
npm run dist

Basic usage example

// ES2015-style
import arangojs, {Database, aql} from 'arangojs';
let db1 = arangojs(); // convenience short-hand
let db2 = new Database();
let {query, bindVars} = aql`RETURN ${Date.now()}`;

// or plain old Node-style
var arangojs = require('arangojs');
var db1 = arangojs();
var db2 = new arangojs.Database();
var aql = arangojs.aql(['RETURN ', ''], Date.now());
var query = aql.query;
var bindVars = aql.bindVars;

API

All asynchronous functions take an optional Node-style callback (or "errback") as the last argument with the following arguments:

  • err: an Error object if an error occurred, or null if no error occurred.
  • result: the function's result (if applicable).

For expected API errors, err will be an instance of ArangoError. For any other error responses (4xx/5xx status code), err will be an instance of the appropriate http-errors error type. If the response indicates success but the response body could not be parsed, err will be a SyntaxError. In all of these cases the error object will additionally have a response property containing the server response object.

If Promise is defined globally, asynchronous functions return a promise if no callback is provided.

If you want to use promises in environments that don't provide the global Promise constructor, use a promise polyfill like es6-promise or inject a ES6-compatible promise implementation like bluebird into the global scope.

Examples

// Node-style callbacks
db.createDatabase('mydb', function (err, info) {
    if (err) console.error(err.stack);
    else {
        // database created
    }
});

// Using promises with ES2015 arrow functions
db.createDatabase('mydb')
.then(info => {
    // database created
}, err => console.error(err.stack));

// Using proposed ES.next "async/await" syntax
try {
    let info = await db.createDatabase('mydb');
    // database created
} catch (err) {
    console.error(err.stack);
}

Table of Contents

Database API

new Database

new Database([config]): Database

Creates a new Database instance.

If config is a string, it will be interpreted as config.url.

Arguments

  • config: Object (optional)

An object with the following properties:

  • url: string (Default: http://localhost:8529)

    Base URL of the ArangoDB server.

    If you want to use ArangoDB with HTTP Basic authentication, you can provide the credentials as part of the URL, e.g. http://user:pass@localhost:8529.

    The driver automatically uses HTTPS if you specify an HTTPS url.

    If you need to support self-signed HTTPS certificates, you may have to add your certificates to the agentOptions, e.g.:

    agentOptions: { ca: [ fs.readFileSync('.ssl/sub.class1.server.ca.pem'), fs.readFileSync('.ssl/ca.pem') ] }

  • databaseName: string (Default: _system)

    Name of the active database.

  • arangoVersion: number (Default: 20300)

    Value of the x-arango-version header.

  • headers: Object (optional)

    An object with additional headers to send with every request.

  • agent: Agent (optional)

    An http Agent instance to use for connections.

    By default a new http.Agent (or https.Agent) instance will be created using the agentOptions.

    This option has no effect when using the browser version of arangojs.

  • agentOptions: Object (Default: see below)

    An object with options for the agent. This will be ignored if agent is also provided.

    Default: {maxSockets: 3, keepAlive: true, keepAliveMsecs: 1000}.

    In the browser version of arangojs this option can be used to pass additional options to the underlying calls of the xhr module. The options keepAlive and keepAliveMsecs have no effect in the browser but maxSockets will still be used to limit the amount of parallel requests made by arangojs.

  • promise: Class (optional)

    The Promise implementation to use or false to disable promises entirely.

    By default the global Promise constructor will be used if available.

Manipulating databases

These functions implement the HTTP API for manipulating databases.

database.useDatabase

database.useDatabase(databaseName): this

Updates the Database instance and its connection string to use the given databaseName, then returns itself.

Arguments

  • databaseName: string

The name of the database to use.

Examples

var db = require('arangojs')();
db.useDatabase('test');
// The database instance now uses the database "test".
database.createDatabase

async database.createDatabase(databaseName, [users]): Object

Creates a new database with the given databaseName.

Arguments

  • databaseName: string

Name of the database to create.

  • users: Array<Object> (optional)

If specified, the array must contain objects with the following properties:

  • username: string

    The username of the user to create for the database.

  • passwd: string (Default: empty)

    The password of the user.

  • active: boolean (Default: true)

    Whether the user is active.

  • extra: Object (optional)

    An object containing additional user data.

Examples

var db = require('arangojs')();
db.createDatabase('mydb', [{username: 'root'}])
.then(info => {
    // the database has been created
});
database.get

async database.get(): Object

Fetches the database description for the active database from the server.

Examples

var db = require('arangojs')();
db.get()
.then(info => {
    // the database exists
});
database.listDatabases

async database.listDatabases(): Array<string>

Fetches all databases from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listDatabases()
.then(names => {
    // databases is an array of database names
});
database.listUserDatabases

async database.listUserDatabases(): Array<string>

Fetches all databases accessible to the active user from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listUserDatabases()
.then(names => {
    // databases is an array of database names
});
database.dropDatabase

async database.dropDatabase(databaseName): Object

Deletes the database with the given databaseName from the server.

var db = require('arangojs')();
db.dropDatabase('mydb')
.then(() => {
    // database "mydb" no longer exists
})
database.truncate

async database.truncate([excludeSystem]): Object

Deletes all documents in all collections in the active database.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded.

Examples

var db = require('arangojs')();

db.truncate()
.then(() => {
    // all non-system collections in this database are now empty
});

// -- or --

db.truncate(false)
.then(() => {
    // I've made a huge mistake...
});

Accessing collections

These functions implement the HTTP API for accessing collections.

database.collection

database.collection(collectionName): DocumentCollection

Returns a DocumentCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
database.edgeCollection

database.edgeCollection(collectionName): EdgeCollection

Returns an EdgeCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('potatos');
database.listCollections

async database.listCollections([excludeSystem]): Array<Object>

Fetches all collections from the database and returns an array of collection descriptions.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.listCollections()
.then(collections => {
    // collections is an array of collection descriptions
    // not including system collections
});

// -- or --

db.listCollections(false)
.then(collections => {
    // collections is an array of collection descriptions
    // including system collections
});
database.collections

async database.collections([excludeSystem]): Array<Collection>

Fetches all collections from the database and returns an array of DocumentCollection and EdgeCollection instances for the collections.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.collections()
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // not including system collections
});

// -- or --

db.collections(false)
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // including system collections
});

Accessing graphs

These functions implement the HTTP API for accessing general graphs.

database.graph

database.graph(graphName): Graph

Returns a Graph instance representing the graph with the given graph name.

database.listGraphs

async database.listGraphs(): Array<Object>

Fetches all graphs from the database and returns an array of graph descriptions.

Examples

var db = require('arangojs')();
db.listGraphs()
.then(graphs => {
    // graphs is an array of graph descriptions
});
database.graphs

async database.graphs(): Array<Graph>

Fetches all graphs from the database and returns an array of Graph instances for the graphs.

Examples

var db = require('arangojs')();
db.graphs()
.then(graphs => {
    // graphs is an array of Graph instances
});

Transactions

This function implements the HTTP API for transactions.

database.transaction

async database.transaction(collections, action, [params,] [lockTimeout]): Object

Performs a server-side transaction and returns its return value.

Arguments

  • collections: Object

An object with the following properties:

  • read: Array<string> (optional)

    An array of names (or a single name) of collections that will be read from during the transaction.

  • write: Array<string> (optional)

    An array of names (or a single name) of collections that will be written to or read from during the transaction.

  • action: string

A string evaluating to a JavaScript function to be executed on the server.

  • params: Array<any> (optional)

Parameters that will be passed to the action function.

  • lockTimeout: number (optional)

Determines how long the database will wait while attempting to gain locks on collections used by the transaction before timing out.

If collections is an array or string, it will be treated as collections.write.

Please note that while action should be a string evaluating to a well-formed JavaScript function, it's not possible to pass in a JavaScript function directly because the function needs to be evaluated on the server and will be transmitted in plain text.

For more information on transactions, see the HTTP API documentation for transactions.

Examples

var db = require('arangojs')();
var action = String(function () {
    // This code will be executed inside ArangoDB!
    var db = require('org/arangodb').db;
    return db._query('FOR u IN _users RETURN u.user').toArray();
});
db.transaction({read: '_users'}, action)
.then(result => {
    // result contains the return value of the action
});

Queries

This function implements the HTTP API for single roundtrip AQL queries.

For collection-specific queries see simple queries.

database.query

async database.query(query, [bindVars,] [opts]): Cursor

Performs a database query using the given query and bindVars, then returns a new Cursor instance for the result list.

Arguments

  • query: string

An AQL query string or a query builder instance.

  • bindVars: Object (optional)

An object defining the variables to bind the query to.

  • opts: Object (optional)

Additional options that will be passed to the query API.

If opts.count is set to true, the cursor will have a count property set to the query result count.

If query is an object with query and bindVars properties, those will be used as the values of the respective arguments instead.

Examples

var db = require('arangojs')();
var active = true;

// Using ES2015 string templates
var aql = require('arangojs').aql;
db.query(aql`
    FOR u IN _users
    FILTER u.authData.active == ${active}
    RETURN u.user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using the query builder
var qb = require('aqb');
db.query(
    qb.for('u').in('_users')
    .filter(qb.eq('u.authData.active', '@active'))
    .return('u.user'),
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using plain arguments
db.query(
    'FOR u IN _users'
    + ' FILTER u.authData.active == @active'
    + ' RETURN u.user',
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});
aql

aql(strings, ...args): Object

Template string handler for AQL queries. Converts an ES2015 template string to an object that can be passed to database.query by converting arguments to bind variables.

Any Collection instances will automatically be converted to collection bind variables.

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
var userCollection = db.collection('_users');
var role = 'admin';
db.query(aql`
    FOR user IN ${userCollection}
    FILTER user.role == ${role}
    RETURN user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});
// -- is equivalent to --
db.query(
  'FOR user IN @@value0 FILTER user.role == @value1 RETURN user',
  {'@value0': userCollection.name, value1: role}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

Managing AQL user functions

These functions implement the HTTP API for managing AQL user functions.

database.listFunctions

async database.listFunctions(): Array<Object>

Fetches a list of all AQL user functions registered with the database.

Examples

var db = require('arangojs')();
db.listFunctions()
.then(functions => {
    // functions is a list of function descriptions
})
database.createFunction

async database.createFunction(name, code): Object

Creates an AQL user function with the given name and code if it does not already exist or replaces it if a function with the same name already existed.

Arguments

  • name: string

A valid AQL function name, e.g.: "myfuncs::accounting::calculate_vat".

  • code: string

A string evaluating to a JavaScript function (not a JavaScript function object).

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
db.createFunction(
  'ACME::ACCOUNTING::CALCULATE_VAT',
  String(function (price) {
      return price * 0.19;
  })
)
// Use the new function in an AQL query with template handler:
.then(() => db.query(aql`
    FOR product IN products
    RETURN MERGE(
      {vat: ACME::ACCOUNTING::CALCULATE_VAT(product.price)},
      product
    )
`))
.then(cursor => {
    // cursor is a cursor for the query result
});
database.dropFunction

async database.dropFunction(name, [group]): Object

Deletes the AQL user function with the given name from the database.

Arguments

  • name: string

The name of the user function to drop.

  • group: boolean (Default: false)

If set to true, all functions with a name starting with name will be deleted; otherwise only the function with the exact name will be deleted.

Examples

var db = require('arangojs')();
db.dropFunction('ACME::ACCOUNTING::CALCULATE_VAT')
.then(() => {
    // the function no longer exists
});

Arbitrary HTTP routes

database.route

database.route([path,] [headers]): Route

Returns a new Route instance for the given path (relative to the database) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The database-relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

For more information on Route instances see the Route API below.

Examples

var db = require('arangojs')();
var myFoxxService = db.route('my-foxx-service');
myFoxxService.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the result of
    // POST /_db/_system/my-foxx-service/users
    // with JSON request body '{"username": "admin", "password": "hunter2"}'
});

Cursor API

Cursor instances provide an abstraction over the HTTP API's limitations. Unless a method explicitly exhausts the cursor, the driver will only fetch as many batches from the server as necessary. Like the server-side cursors, Cursor instances are incrementally depleted as they are read from.

var db = require('arangojs')();
db.query('FOR x IN 1..100 RETURN x')
// query result list: [1, 2, 3, ..., 99, 100]
.then(cursor => {
    cursor.next()
    .then(value => {
        value === 1;
        // remaining result list: [2, 3, 4, ..., 99, 100]
    });
});

cursor.count

cursor.count: number

The total number of documents in the query result. This is only available if the count option was used.

cursor.all

async cursor.all(): Array<Object>

Exhausts the cursor, then returns an array containing all values in the cursor's remaining result list.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.all()
.then(vals => {
    // vals is an array containing the entire query result
    Array.isArray(vals);
    vals.length === 5;
    vals; // [1, 2, 3, 4, 5]
    cursor.hasNext() === false;
});

cursor.next

async cursor.next(): Object

Advances the cursor and returns the next value in the cursor's remaining result list. If the cursor has already been exhausted, returns undefined instead.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.next()
.then(val => {
    val === 1;
    // remaining result list: [2, 3, 4, 5]
    return cursor.next();
})
.then(val2 => {
    val2 === 2;
    // remaining result list: [3, 4, 5]
});

cursor.hasNext

cursor.hasNext(): boolean

Returns true if the cursor has more values or false if the cursor has been exhausted.

Examples

cursor.all() // exhausts the cursor
.then(() => {
    cursor.hasNext() === false;
});

cursor.each

async cursor.each(fn): any

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn explicitly returns false.

Returns the last return value of fn.

Equivalent to Array.prototype.forEach (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it explicitly returns false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

var results = [];
function doStuff(value) {
    var VALUE = value.toUpperCase();
    results.push(VALUE);
    return VALUE;
}
// query result list: ['a', 'b', 'c']
cursor.each(doStuff)
.then(last => {
    String(results) === 'A,B,C';
    cursor.hasNext() === false;
    last === 'C';
});

cursor.every

async cursor.every(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to false.

Returns false if fn returned a value that evaluates to false, or true otherwise.

Equivalent to Array.prototype.every (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it returns a value that evaluates to false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [0, 2, 4, 5, 6]
cursor.every(even)
.then(result => {
    result === false; // 5 is not even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 6; // next value after 5
    });
});

cursor.some

async cursor.some(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to true.

Returns true if fn returned a value that evaluates to true, or false otherwise.

Equivalent to Array.prototype.some (except async).

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [1, 3, 4, 5]
cursor.some(even)
.then(result => {
    result === true; // 4 is even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 5; // next value after 4
    });
});

cursor.map

async cursor.map(fn): Array<any>

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted.

Returns an array of the return values of fn.

Equivalent to Array.prototype.map (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function square(value) {
    return value * value;
}
// query result list: [1, 2, 3, 4, 5]
cursor.map(square)
.then(result => {
    result.length === 5;
    result; // [1, 4, 9, 16, 25]
    cursor.hasNext() === false;
});

cursor.reduce

async cursor.reduce(fn, [accu]): any

Exhausts the cursor by reducing the values in the cursor's remaining result list with the given function fn. If accu is not provided, the first value in the cursor's remaining result list will be used instead (the function will not be invoked for that value).

Equivalent to Array.prototype.reduce (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • accu: any

    The return value of the previous call to fn. If this is the first call, accu will be set to the accu value passed to reduce or the first value in the cursor's remaining result list.

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function add(a, b) {
    return a + b;
}
// query result list: [1, 2, 3, 4, 5]

var baseline = 1000;
cursor.reduce(add, baseline)
.then(result => {
    result === (baseline + 1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

// -- or --

cursor.reduce(add)
.then(result => {
    result === (1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

Route API

Route instances provide a low-level interface for performing arbitrary HTTP requests. This allows easy access to Foxx services and other HTTP APIs not covered by the driver itself.

route.route

route.route([path], [headers]): Route

Returns a new Route instance for the given path (relative to the current route) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
var users = route.route('users');
// equivalent to db.route('my-foxx-service/users')

route.get

async route.get([path,] [qs]): Response

Performs a GET request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.get()
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service
});

// -- or --

route.get('users')
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users
});

// -- or --

route.get('users', {group: 'admin'})
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users?group=admin
});
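When qs is an object, it is translated into a standard URL-encoded query string. The effect is comparable to Node's built-in URLSearchParams (a sketch of the observable behavior, not the driver's internal implementation):

```javascript
// Approximate the object-to-query-string translation using
// Node's built-in URLSearchParams; values are coerced to strings.
var querystring = new URLSearchParams({group: 'admin', limit: 10}).toString();
querystring; // 'group=admin&limit=10'
```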

route.post

async route.post([path,] [body, [qs]]): Response

Performs a POST request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.post()
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service
});

// -- or --

route.post('users')
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.put

async route.put([path,] [body, [qs]]): Response

Performs a PUT request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.put()
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service
});

// -- or --

route.put('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.patch

async route.patch([path,] [body, [qs]]): Response

Performs a PATCH request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.patch()
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service
});

// -- or --

route.patch('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
    // with JSON request body {"password": "hunter2"}
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"password": "hunter2"}
});

route.delete

async route.delete([path,] [qs]): Response

Performs a DELETE request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.delete()
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service
});

// -- or --

route.delete('users/admin')
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin
});

// -- or --

route.delete('users/admin', {permanent: true})
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin?permanent=true
});

route.head

async route.head([path,] [qs]): Response

Performs a HEAD request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.head()
.then(response => {
    // response is the response object for
    // HEAD _db/_system/my-foxx-service
});

route.request

async route.request([opts]): Response

Performs an arbitrary request to the given URL and returns the server response.

Arguments

  • opts: Object (optional)

An object with any of the following properties:

  • path: string (optional)

    The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • absolutePath: boolean (Default: false)

    Whether the path is relative to the connection's base URL instead of the route.

  • body: string (optional)

    The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

    The query string for the request. If qs is an object, it will be translated to a query string.

  • headers: Object (optional)

    An object containing additional HTTP headers to be sent with the request.

  • method: string (Default: "GET")

    HTTP method of this request.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.request({
    path: 'hello-world',
    method: 'POST',
    body: {hello: 'world'},
    qs: {admin: true}
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/hello-world?admin=true
    // with JSON request body '{"hello": "world"}'
});

Collection API

These functions implement the HTTP API for manipulating collections.

The Collection API is implemented by all Collection instances, regardless of their specific type. I.e. it represents a shared subset between instances of DocumentCollection, EdgeCollection, GraphVertexCollection and GraphEdgeCollection.

Getting information about the collection

See the HTTP API documentation for details.

collection.get

async collection.get(): Object

Retrieves general information about the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.get()
.then(data => {
    // data contains general information about the collection
});
collection.properties

async collection.properties(): Object

Retrieves the collection's properties.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.properties()
.then(data => {
    // data contains the collection's properties
});
collection.count

async collection.count(): Object

Retrieves information about the number of documents in a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.count()
.then(data => {
    // data contains the collection's count
});
collection.figures

async collection.figures(): Object

Retrieves statistics for a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.figures()
.then(data => {
    // data contains the collection's figures
});
collection.revision

async collection.revision(): Object

Retrieves the collection revision ID.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.revision()
.then(data => {
    // data contains the collection's revision
});
collection.checksum

async collection.checksum([opts]): Object

Retrieves the collection checksum.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for getting collection information.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.checksum()
.then(data => {
    // data contains the collection's checksum
});

Manipulating the collection

These functions implement the HTTP API for modifying collections.

collection.create

async collection.create([properties]): Object

Creates a collection with the given properties for this collection's name, then returns the server response.

Arguments

  • properties: Object (optional)

For more information on the properties object, see the HTTP API documentation for creating collections.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
collection.create()
.then(() => {
    // the document collection "potatos" now exists
});

// -- or --

var collection = db.edgeCollection('friends');
collection.create({
    waitForSync: true // always sync document changes to disk
})
.then(() => {
    // the edge collection "friends" now exists
});
collection.load

async collection.load([count]): Object

Tells the server to load the collection into memory.

Arguments

  • count: boolean (Default: true)

If set to false, the return value will not include the number of documents in the collection (which may speed up the process).

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.load(false)
.then(() => {
    // the collection has now been loaded into memory
});
collection.unload

async collection.unload(): Object

Tells the server to remove the collection from memory.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.unload()
.then(() => {
    // the collection has now been unloaded from memory
});
collection.setProperties

async collection.setProperties(properties): Object

Replaces the properties of the collection.

Arguments

  • properties: Object

For information on the properties argument see the HTTP API for modifying collections.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.setProperties({waitForSync: true})
.then(result => {
    result.waitForSync === true;
    // the collection will now wait for data being written to disk
    // whenever a document is changed
});
collection.rename

async collection.rename(name): Object

Renames the collection. The Collection instance will automatically update its name when the rename succeeds.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rename('new-collection-name')
.then(result => {
    result.name === 'new-collection-name';
    collection.name === result.name;
    // result contains additional information about the collection
});
collection.rotate

async collection.rotate(): Object

Rotates the journal of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rotate()
.then(data => {
    // data.result will be true if rotation succeeded
});
collection.truncate

async collection.truncate(): Object

Deletes all documents in the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.truncate()
.then(() => {
    // the collection "some-collection" is now empty
});
collection.drop

async collection.drop(): Object

Deletes the collection from the database.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.drop()
.then(() => {
    // the collection "some-collection" no longer exists
});

Manipulating indexes

These functions implement the HTTP API for manipulating indexes.

collection.createIndex

async collection.createIndex(details): Object

Creates an arbitrary index on the collection.

Arguments

  • details: Object

For information on the possible properties of the details object, see the HTTP API for manipulating indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createIndex({type: 'cap', size: 20})
.then(index => {
    index.id; // the index's handle
    // the index has been created
});
collection.createCapConstraint

async collection.createCapConstraint(size): Object

Creates a cap constraint index on the collection.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as cap constraints are no longer supported.

Arguments

  • size: Object

An object with any of the following properties:

  • size: number (optional)

    The maximum number of documents in the collection.

  • byteSize: number (optional)

    The maximum size of active document data in the collection (in bytes).

If size is a number, it will be interpreted as size.size.

For more information on the properties of the size object see the HTTP API for creating cap constraints.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createCapConstraint(20)
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});

// -- or --

collection.createCapConstraint({size: 20})
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});
collection.createHashIndex

async collection.createHashIndex(fields, [opts]): Object

Creates a hash index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on hash indexes, see the HTTP API for hash indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createHashIndex('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createHashIndex(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
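The string and boolean shorthands described above amount to a simple normalization step. A hypothetical sketch of that normalization (names are illustrative, not driver internals):

```javascript
// Normalize the shorthand arguments accepted by createHashIndex
// and createSkipList: a string becomes a one-element fields array,
// a boolean becomes {unique: ...}.
function normalizeIndexArgs(fields, opts) {
    if (typeof fields === 'string') fields = [fields];
    if (typeof opts === 'boolean') opts = {unique: opts};
    return {fields: fields, opts: opts || {}};
}

normalizeIndexArgs('favorite-color', true);
// {fields: ['favorite-color'], opts: {unique: true}}
```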
collection.createSkipList

async collection.createSkipList(fields, [opts]): Object

Creates a skiplist index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on skiplist indexes, see the HTTP API for skiplist indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createSkipList('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createSkipList(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
collection.createGeoIndex

async collection.createGeoIndex(fields, [opts]): Object

Creates a geo-spatial index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. A geo index covers either exactly one field (holding both coordinates, optionally as a GeoJSON location) or exactly two fields (latitude and longitude). If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

An object containing additional properties of the index.

For more information on the properties of the opts object see the HTTP API for manipulating geo indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createGeoIndex(['longitude', 'latitude'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['longitude', 'latitude']
    // the index has been created
});

// -- or --

collection.createGeoIndex('location', {geoJson: true})
.then(index => {
    index.id; // the index's handle
    index.fields; // ['location']
    // the index has been created
});
collection.createFulltextIndex

async collection.createFulltextIndex(fields, [minLength]): Object

Creates a fulltext index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. Currently, fulltext indexes must cover exactly one field. If the value is a string, it will be wrapped in an array automatically.

  • minLength: number (optional)

Minimum character length of words to index. Uses a server-specific default value if not specified.

For more information on fulltext indexes, see the HTTP API for fulltext indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createFulltextIndex('description')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});

// -- or --

collection.createFulltextIndex(['description'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});
collection.index

async collection.index(indexHandle): Object

Fetches information about the index with the given indexHandle and returns it.

Arguments

  • indexHandle: string

The handle of the index to look up. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.index(index.id)
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });

    // -- or --

    collection.index(index.id.split('/')[1])
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });
});
collection.indexes

async collection.indexes(): Array<Object>

Fetches a list of all indexes on this collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(() => collection.indexes())
.then(indexes => {
    indexes.length === 1;
    // indexes contains information about the index
});
collection.dropIndex

async collection.dropIndex(indexHandle): Object

Deletes the index with the given indexHandle from the collection.

Arguments

  • indexHandle: string

The handle of the index to delete. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.dropIndex(index.id)
    .then(() => {
        // the index has been removed from the collection
    });

    // -- or --

    collection.dropIndex(index.id.split('/')[1])
    .then(() => {
        // the index has been removed from the collection
    });
});

Simple queries

These functions implement the HTTP API for simple queries.

collection.all

async collection.all([opts]): Cursor

Performs a query to fetch all documents in the collection. Returns a new Cursor instance for the query results.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning all documents.

collection.any

async collection.any(): Object

Fetches a document from the collection at random.

collection.first

async collection.first([opts]): Array<Object>

Performs a query to fetch the first documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the first documents of a collection.

If opts is a number it is treated as opts.count.

collection.last

async collection.last([opts]): Array<Object>

Performs a query to fetch the last documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the last documents of a collection.

If opts is a number it is treated as opts.count.

collection.byExample

async collection.byExample(example, [opts]): Cursor

Performs a query to fetch all documents in the collection matching the given example. Returns a new Cursor instance for the query results.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fetching documents by example.
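Conceptually, a document matches the example when every field in the example equals the corresponding field of the document (top-level fields shown here; the server also supports nested attribute paths). A rough client-side equivalent, for illustration only:

```javascript
// Hypothetical illustration of example-matching semantics:
// a document matches if every key in the example strictly
// equals the corresponding field of the document.
function matchesExample(doc, example) {
    return Object.keys(example).every(function (key) {
        return doc[key] === example[key];
    });
}

var docs = [
    {_key: 'a', color: 'red', size: 1},
    {_key: 'b', color: 'blue', size: 1}
];
var matches = docs.filter(function (doc) {
    return matchesExample(doc, {color: 'red'});
});
// matches contains only the document with _key 'a'
```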

collection.firstExample

async collection.firstExample(example): Object

Fetches the first document in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

collection.removeByExample

async collection.removeByExample(example, [opts]): Object

Removes all documents in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by example.

collection.replaceByExample

async collection.replaceByExample(example, newValue, [opts]): Object

Replaces all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to replace matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for replacing documents by example.

collection.updateByExample

async collection.updateByExample(example, newValue, [opts]): Object

Updates (patches) all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to update matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for updating documents by example.

collection.lookupByKeys

async collection.lookupByKeys(keys): Array<Object>

Fetches the documents with the given keys from the collection. Returns an array of the matching documents.

Arguments

  • keys: Array

An array of document keys to look up.

collection.removeByKeys

async collection.removeByKeys(keys, [opts]): Object

Deletes the documents with the given keys from the collection.

Arguments

  • keys: Array

An array of document keys to delete.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by keys.

collection.fulltext

async collection.fulltext(fieldName, query, [opts]): Cursor

Performs a fulltext query in the given fieldName on the collection.

Arguments

  • fieldName: String

Name of the field to search on documents in the collection.

  • query: String

Fulltext query string to search for.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fulltext queries.
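Conceptually, a fulltext query checks whether all query terms occur as words in the given field. The server answers such queries from a fulltext index; the following naive sketch (illustrative only, ignoring index structure and advanced query syntax) shows just the word-matching idea:

```javascript
// Illustrative only: naive word matching, roughly what a fulltext query on a
// field does for a simple query of whitespace-separated terms. The real
// implementation uses a fulltext index and supports a richer query syntax.
function fulltextMatch(fieldValue, query) {
  var words = fieldValue.toLowerCase().split(/\W+/);
  return query.toLowerCase().split(/\s+/).every(function (term) {
    return words.indexOf(term) !== -1;
  });
}

fulltextMatch('The quick brown fox', 'fox quick'); // true
fulltextMatch('The quick brown fox', 'dog');       // false
```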

Bulk importing documents

This function implements the HTTP API for bulk imports.

collection.import

async collection.import(data, [opts]): Object

Bulk imports the given data into the collection.

Arguments

  • data: Array<Array<any>> | Array<Object>

The data to import. This can be an array of documents:

```js
[
    {key1: value1, key2: value2}, // document 1
    {key1: value1, key2: value2}, // document 2
    ...
]
```

Or it can be an array of value arrays following an array of keys.

```js
[
    ['key1', 'key2'], // key names
    [value1, value2], // document 1
    [value1, value2], // document 2
    ...
]
```

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the documents have been synced to disk.

  • details: boolean (Default: false)

    Whether the response should contain additional details about documents that could not be imported.

  • type: string (Default: "auto")

    Indicates which format the data uses. Can be "documents", "array" or "auto".

If data is a JavaScript array, it will be transmitted as a line-delimited JSON stream. If opts.type is set to "array", it will be transmitted as regular JSON instead. If data is a string, it will be transmitted as it is without any processing.

For more information on the opts object, see the HTTP API documentation for bulk imports.
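The two accepted array formats can be converted into one another with plain JavaScript, and the line-delimited JSON framing described above is easy to produce by hand. The following sketch is illustrative only and not part of the driver:

```javascript
// Illustrative only: converts the [keys, row, row, ...] import format into an
// array of documents, and frames an array of items as a line-delimited JSON
// stream like the one the driver transmits.
function rowsToDocuments(data) {
  var keys = data[0];
  return data.slice(1).map(function (row) {
    var doc = {};
    keys.forEach(function (key, i) { doc[key] = row[i]; });
    return doc;
  });
}

function toLineDelimitedJson(items) {
  return items.map(function (item) {
    return JSON.stringify(item);
  }).join('\r\n') + '\r\n';
}

var docs = rowsToDocuments([
  ['username', 'password'],
  ['admin', 'hunter2'],
  ['jcd', 'bionicman']
]);
// docs[0] is {username: 'admin', password: 'hunter2'}
```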

Examples

var db = require('arangojs')();
var collection = db.collection('users');

collection.import(
    [// document stream
        {username: 'admin', password: 'hunter2'},
        {username: 'jcd', password: 'bionicman'},
        {username: 'jreyes', password: 'amigo'},
        {username: 'ghermann', password: 'zeitgeist'}
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    [// array stream with header
        ['username', 'password'], // keys
        ['admin', 'hunter2'], // row 1
        ['jcd', 'bionicman'], // row 2
        ['jreyes', 'amigo'],
        ['ghermann', 'zeitgeist']
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    // raw line-delimited JSON array stream with header
    '["username", "password"]\r\n' +
    '["admin", "hunter2"]\r\n' +
    '["jcd", "bionicman"]\r\n' +
    '["jreyes", "amigo"]\r\n' +
    '["ghermann", "zeitgeist"]\r\n'
)
.then(result => {
    result.created === 4;
});

Manipulating documents

These functions implement the HTTP API for manipulating documents.

collection.replace

async collection.replace(documentHandle, newValue, [opts]): Object

Replaces the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to replace. This can either be the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the document has been synced to disk.

  • rev: string (optional)

    Only replace the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.replace(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc1)
        .then(doc3 => {
            doc3._id === doc1._id;
            doc3._rev === doc2._rev;
            doc3.number === 2;
            doc3.hello === undefined;
        })
    });
});
collection.update

async collection.update(documentHandle, newValue, [opts]): Object

Updates (merges) the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

Handle of the document to update. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • keepNull: boolean (Default: true)

    If set to false, properties with a value of null indicate that a property should be deleted.

  • mergeObjects: boolean (Default: true)

    If set to false, object properties that already exist in the old document will be overwritten rather than merged. This does not affect arrays.

  • rev: string (optional)

    Only update the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.update(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc2)
        .then(doc3 => {
          doc3._id === doc2._id;
          doc3._rev === doc2._rev;
          doc3.number === 2;
          doc3.hello === doc.hello;
        });
    });
});
collection.remove

async collection.remove(documentHandle, [opts]): Object

Deletes the document with the given documentHandle from the collection.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to delete. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • rev: string (optional)

    Only remove the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.remove('some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});

// -- or --

collection.remove('some-collection/some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});
collection.list

async collection.list([type]): Array<string>

Retrieves a list of references for all documents in the collection.

Arguments

  • type: string (Default: "id")

The format of the document references:

  • if type is set to "id", each reference will be the _id of the document.
  • if type is set to "key", each reference will be the _key of the document.
  • if type is set to "path", each reference will be the URI path of the document.

DocumentCollection API

The DocumentCollection API extends the Collection API (see above) with the following methods.

documentCollection.document

async documentCollection.document(documentHandle): Object

Retrieves the document with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the document to retrieve. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).
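As described above, a documentHandle can be a string (_id or _key) or a document object carrying _id or _key. A sketch of how such a handle can be normalized to a full _id (illustrative only, not the driver's internal code):

```javascript
// Illustrative only, not the driver's internal code: resolves a documentHandle
// (a string _id or _key, or a document object with an _id or _key property)
// to a full _id, given the collection name.
function normalizeHandle(handle, collectionName) {
  if (typeof handle === 'object') {
    handle = handle._id || handle._key;
  }
  if (handle.indexOf('/') === -1) {
    handle = collectionName + '/' + handle;
  }
  return handle;
}

normalizeHandle('some-key', 'my-docs');                // 'my-docs/some-key'
normalizeHandle({_id: 'my-docs/some-key'}, 'my-docs'); // 'my-docs/some-key'
```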

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');

collection.document('some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

// -- or --

collection.document('my-docs/some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

documentCollection.save

async documentCollection.save(data): Object

Creates a new document with the given data and returns an object containing the document's metadata.

Arguments

  • data: Object

The data of the new document, may include a _key.

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');
var doc = {some: 'data'};
collection.save(doc)
.then(doc1 => {
    doc1._key; // the document's key
    doc1._id === ('my-docs/' + doc1._key);
    collection.document(doc1)
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev === doc1._rev;
        doc2.some === 'data';
    });
});

EdgeCollection API

The EdgeCollection API extends the Collection API (see above) with the following methods.

edgeCollection.edge

async edgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

edgeCollection.save

async edgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the documents fromId and toId with the given data and returns an object containing the edge's metadata.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
var edge = {some: 'data'};

collection.save(
    edge,
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge1 => {
    edge1._key; // the edge's key
    edge1._id === ('edges/' + edge1._key);
    collection.edge(edge1)
    .then(edge2 => {
        edge2._key === edge1._key;
        edge2._rev === edge1._rev;
        edge2.some === edge.some;
        edge2._from === 'vertices/start-vertex';
        edge2._to === 'vertices/end-vertex';
    });
});

// -- or --

collection.save({
    some: 'data',
    _from: 'vertices/start-vertex',
    _to: 'vertices/end-vertex'
})
.then(edge => {
    // ...
})

edgeCollection.edges

async edgeCollection.edges(documentHandle): Array<Object>

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

edgeCollection.inEdges

async edgeCollection.inEdges(documentHandle): Array<Object>

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

edgeCollection.outEdges

async edgeCollection.outEdges(documentHandle): Array<Object>

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

edgeCollection.traversal

async edgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.
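The constraint above can be simulated client-side to show why these options must be strings: the server evaluates them as code. A rough illustration using the Function constructor (the real evaluation happens on the server):

```javascript
// Illustrative only: a client-side simulation of how the server treats the
// traversal option strings as code to evaluate. This is why functions cannot
// be passed directly — only their source text can be transmitted.
var opts = {
  init: 'result.vertices = [];',
  visitor: 'result.vertices.push(vertex._key);'
};

var result = {};
new Function('result', opts.init)(result);
['a', 'b', 'c'].forEach(function (key) {
  new Function('result', 'vertex', opts.visitor)(result, {_key: key});
});
// result.vertices is now ['a', 'b', 'c']
```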

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => collection.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

Graph API

These functions implement the HTTP API for manipulating graphs.

graph.get

async graph.get(): Object

Retrieves general information about the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.get()
.then(data => {
    // data contains general information about the graph
});

graph.create

async graph.create(properties): Object

Creates a graph with the given properties for this graph's name, then returns the server response.

Arguments

  • properties: Object

For more information on the properties object, see the HTTP API documentation for creating graphs.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.create({
    edgeDefinitions: [
        {
            collection: 'edges',
            from: [
                'start-vertices'
            ],
            to: [
                'end-vertices'
            ]
        }
    ]
})
.then(graph => {
    // graph is a Graph instance
    // for more information see the Graph API below
});

graph.drop

async graph.drop([dropCollections]): Object

Deletes the graph from the database.

Arguments

  • dropCollections: boolean (optional)

If set to true, the collections associated with the graph will also be deleted.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.drop()
.then(() => {
    // the graph "some-graph" no longer exists
});

Manipulating vertices

graph.vertexCollection

graph.vertexCollection(collectionName): GraphVertexCollection

Returns a new GraphVertexCollection instance with the given name for this graph.

Arguments

  • collectionName: string

Name of the vertex collection.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.name === 'vertices';
// collection is a GraphVertexCollection
graph.addVertexCollection

async graph.addVertexCollection(collectionName): Object

Adds the collection with the given collectionName to the graph's vertex collections.

Arguments

  • collectionName: string

Name of the vertex collection to add to the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.addVertexCollection('vertices')
.then(() => {
    // the collection "vertices" has been added to the graph
});
graph.removeVertexCollection

async graph.removeVertexCollection(collectionName, [dropCollection]): Object

Removes the vertex collection with the given collectionName from the graph.

Arguments

  • collectionName: string

Name of the vertex collection to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the collection will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeVertexCollection('vertices')
.then(() => {
    // collection "vertices" has been removed from the graph
});

// -- or --

graph.removeVertexCollection('vertices', true)
.then(() => {
    // collection "vertices" has been removed from the graph
    // the collection has also been dropped from the database
    // this may have been a bad idea
});

Manipulating edges

graph.edgeCollection

graph.edgeCollection(collectionName): GraphEdgeCollection

Returns a new GraphEdgeCollection instance with the given name bound to this graph.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.name === 'edges';
// collection is a GraphEdgeCollection
graph.addEdgeDefinition

async graph.addEdgeDefinition(definition): Object

Adds the given edge definition definition to the graph.

Arguments

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
graph.addEdgeDefinition({
    collection: 'edges',
    from: ['vertices'],
    to: ['vertices']
})
.then(() => {
    // the edge definition has been added to the graph
});
graph.replaceEdgeDefinition

async graph.replaceEdgeDefinition(collectionName, definition): Object

Replaces the edge definition for the edge collection named collectionName with the given definition.

Arguments

  • collectionName: string

Name of the edge collection to replace the definition of.

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges", "vertices" and "more-vertices" exist
var graph = db.graph('some-graph');
graph.replaceEdgeDefinition('edges', {
    collection: 'edges',
    from: ['vertices'],
    to: ['more-vertices']
})
.then(() => {
    // the edge definition has been modified
});
graph.removeEdgeDefinition

async graph.removeEdgeDefinition(definitionName, [dropCollection]): Object

Removes the edge definition with the given definitionName from the graph.

Arguments

  • definitionName: string

Name of the edge definition to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the edge collection associated with the definition will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeEdgeDefinition('edges')
.then(() => {
    // the edge definition has been removed
});

// -- or --

graph.removeEdgeDefinition('edges', true)
.then(() => {
    // the edge definition has been removed
    // and the edge collection "edges" has been dropped
    // this may have been a bad idea
});
graph.traversal

async graph.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in any of the edge collections of this graph.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the graph or a document (i.e. an object with an _id property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript functions, it's not possible to pass in JavaScript functions directly because the functions need to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => graph.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

GraphVertexCollection API

The GraphVertexCollection API extends the Collection API (see above) with the following methods.

graphVertexCollection.remove

async graphVertexCollection.remove(documentHandle): Object

Deletes the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to remove. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.remove('some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

// -- or --

collection.remove('vertices/some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

graphVertexCollection.vertex

async graphVertexCollection.vertex(documentHandle): Object

Retrieves the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to retrieve. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.vertex('some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

// -- or --

collection.vertex('vertices/some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

graphVertexCollection.save

async graphVertexCollection.save(data): Object

Creates a new vertex with the given data.

Arguments

  • data: Object

The data of the vertex.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.save({some: 'data'})
.then(doc => {
    doc._key; // the document's key
    doc._id === ('vertices/' + doc._key);
    doc.some === 'data';
});

GraphEdgeCollection API

The GraphEdgeCollection API extends the Collection API (see above) with the following methods.

graphEdgeCollection.remove

async graphEdgeCollection.remove(documentHandle): Object

Deletes the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to remove. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.remove('some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

// -- or --

collection.remove('edges/some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

graphEdgeCollection.edge

async graphEdgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

graphEdgeCollection.save

async graphEdgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the vertices fromId and toId with the given data.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.save(
    {some: 'data'},
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge => {
    edge._key; // the edge's key
    edge._id === ('edges/' + edge._key);
    edge.some === 'data';
    edge._from === 'vertices/start-vertex';
    edge._to === 'vertices/end-vertex';
});

graphEdgeCollection.edges

async graphEdgeCollection.edges(documentHandle): Array<Object>

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

graphEdgeCollection.inEdges

async graphEdgeCollection.inEdges(documentHandle): Array<Object>

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

graphEdgeCollection.outEdges

async graphEdgeCollection.outEdges(documentHandle): Array<Object>

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

graphEdgeCollection.traversal

async graphEdgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => collection.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

License

The Apache License, Version 2.0. For more information, see the accompanying LICENSE file.

linux nosql redis persistence

Redis persistence

Redis RDB snapshots

save   # synchronous (blocking) save

bgsave # asynchronous background save

# automatic saves
save  5     1       # run bgsave if the database received at least 1 write within 5 seconds
save  300   10      # run bgsave if the database received at least 10 writes within 300 seconds
save  60    10000   # run bgsave if the database received at least 10000 writes within 60 seconds
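The automatic save points above reduce to one rule: trigger a background save as soon as any configured (seconds, changes) pair is satisfied. A minimal Python sketch of that check (the function and constant names are illustrative, not Redis source):

```python
# Illustrative model of Redis "save <seconds> <changes>" rules.
SAVE_POINTS = [(5, 1), (300, 10), (60, 10000)]

def should_bgsave(seconds_since_last_save, changes_since_last_save,
                  save_points=SAVE_POINTS):
    """Return True if any save point is satisfied."""
    return any(seconds_since_last_save >= secs and
               changes_since_last_save >= changes
               for secs, changes in save_points)
```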

Redis AOF persistence configuration

```bash
appendonly yes
appendfsync everysec   # one of: always | everysec | no
```

With appendfsync set to always, the server writes and syncs the entire contents of the aof_buf buffer to the AOF file on every event loop. This is the slowest of the three options, but also the safest: even after a crash, AOF persistence loses at most the commands produced within a single event loop.

With appendfsync set to everysec, the server writes the contents of aof_buf to the AOF file on every event loop, and additionally syncs the AOF file once per second from a dedicated thread. everysec is fast enough in practice, and even after a crash the database loses at most one second of command data.

With appendfsync set to no, the server writes aof_buf to the AOF file on every event loop but never syncs it explicitly; the operating system decides when to sync. Efficiency is comparable to everysec, and raw write speed is the highest of the three, but a single sync can take the longest, and after a crash the server loses all write commands issued since the last time the OS synced the file.
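The difference between the three modes comes down to who calls fsync and when. A small Python illustration of a buffered write versus an explicit sync (the file path and helper name are arbitrary, not Redis internals):

```python
import os
import tempfile

def append_command(path, line, sync=True):
    """Append one AOF-style line; fsync only when the policy demands it."""
    with open(path, "a") as f:
        f.write(line + "\n")       # lands in the OS page cache (the "no" policy stops here)
        f.flush()
        if sync:
            os.fsync(f.fileno())   # force the data to stable storage (the "always" policy)

path = os.path.join(tempfile.mkdtemp(), "appendonly.aof")
append_command(path, "SET key 123", sync=True)
append_command(path, "SET key 456", sync=False)
with open(path) as f:
    lines = f.read().splitlines()
```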

leveldb linux nosql redis ssdb

SSDB: Redis-compatible persistent storage

A high-performance NoSQL database supporting rich data structures, intended as a replacement for Redis.

Features

  • An alternative to the Redis database, with 100x the capacity of Redis
  • LevelDB with networking support, written in C/C++
  • Redis API compatible, works with Redis clients
  • Suited to storing collection data, e.g. list, hash, zset...
  • Client APIs for C++, PHP, Python, Java, Go
  • Persistent queue service
  • Master-slave replication, load balancing

Installing SSDB (Linux)

wget --no-check-certificate https://github.com/ideawu/ssdb/archive/master.zip
unzip master
cd ssdb-master
make
# optional, install ssdb in /usr/local/ssdb
sudo make install

Starting

# start master
./ssdb-server ssdb.conf

# or start as daemon
./ssdb-server -d ssdb.conf

Using it from PHP

require_once('SSDB.php');
$ssdb = new SimpleSSDB('127.0.0.1', 8888);
$resp = $ssdb->set('key', '123');
$resp = $ssdb->get('key');
echo $resp; // output: 123
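What makes thin clients like SSDB.php possible is SSDB's simple wire protocol: as I understand it, a request is a sequence of `<length>\n<data>\n` blocks terminated by an empty line. A Python sketch of a request encoder under that assumption (not taken from the SSDB source):

```python
def encode_request(*args):
    """Encode an SSDB request: '<len>\\n<data>\\n' per chunk, blank line to end."""
    chunks = []
    for arg in args:
        data = str(arg).encode()
        chunks.append(str(len(data)).encode() + b"\n" + data + b"\n")
    return b"".join(chunks) + b"\n"

# The same 'set' call as the PHP example above, as raw bytes on the wire:
packet = encode_request("set", "key", "123")
```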
linux mysql nosql rocksdb

Compiling MyRocks

First, set up the build environment (GitHub has wiki pages for all of this, but they're in English...)

Debian-based systems

sudo apt-get update
sudo apt-get -y install g++ cmake libbz2-dev libaio-dev bison \
zlib1g-dev libsnappy-dev libboost-all-dev
sudo apt-get -y install libgflags-dev libreadline6-dev libncurses5-dev \
libssl-dev liblz4-dev gdb git

RPM-based systems

sudo yum install cmake gcc-c++ bzip2-devel libaio-devel bison \
zlib-devel snappy-devel boost-devel
sudo yum install gflags-devel readline-devel ncurses-devel \
openssl-devel lz4-devel gdb git

The quick-and-dirty download and build

git clone https://github.com/facebook/mysql-5.6.git
cd mysql-5.6
git submodule init
git submodule update
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_SSL=system \
-DWITH_ZLIB=bundled -DMYSQL_MAINTAINER_MODE=0 -DENABLED_LOCAL_INFILE=1 \
-DENABLE_DTRACE=0 -DCMAKE_CXX_FLAGS="-march=native"
make -j8

make package

If nothing unexpected happens, the build completes effortlessly... except that a whole pile of unexpected things happened for me. I'll publish a prebuilt package later.

kv linux nosql

Introduction to MyRocks - freeidea

RocksDB is specially optimized for fast, low-latency storage devices such as flash and high-speed disks.

About MyRocks

RocksDB is specially optimized for fast, low-latency storage devices such as flash and high-speed disks, and is designed to extract the full read/write performance of flash and RAM.

RocksDB was built by Facebook on top of LevelDB and today backs a large number of Facebook's internal services. Facebook then did the substantial work of porting RocksDB into MySQL as a storage engine; the result is called MyRocks.

RocksDB vs. InnoDB

  • InnoDB wastes space: B-tree page splits leave considerable free space inside pages, so page utilization is low. InnoDB's existing compression is also inefficient; compressing per block wastes space as well.

  • Write amplification: InnoDB updates at page granularity, so in the worst case updating N rows touches N pages, while RocksDB is append-only. In addition, enabling InnoDB's doublewrite buffer further increases writes.

  • RocksDB has low alignment overhead: SST files (2MB by default) need alignment but are far larger than 4k, while RocksDB_block_size (4k by default) needs no alignment, so little space is lost to alignment.

  • RocksDB compresses index keys that share a common prefix.

  • The bottom level of RocksDB holds about 90% of the total data, and its rows do not need to store the system column seqid (InnoDB clustered index rows carry trx_id, roll_ptr, and so on).

Everything above is, frankly, copied. Time for the real goods... which, for now, are source code that still has to be compiled:

Compiling MyRocks

linux nodejs nosql blockchain

Reading the ebookcoin source: how a blockchain works and is implemented


With Bitcoin riding high, almost everyone is bullish on blockchain technology.

"Blockchain will disrupt industry X"; everywhere you look, someone is clamoring for p2p and blockchain.

As a beginner who hasn't even settled into a steady job,

I still have to keep up with the trend somehow.

After digging around GitHub, I found an open-source blockchain project: ebookcoin.

A p2p network plus a distributed database.

Sorting out the idea

If I've understood it correctly, it comes down to:

a robust p2p network + an encrypted transport protocol + a database that keeps child nodes consistent through a special algorithm.

Building the p2p network
The ideal p2p network

A child node fetches the node list from bootstrap nodes -> each node keeps discovering and connecting to other nodes.

In the beginning, the p2p network looked like this

NAT hole punching between nodes
mysql newsql nosql sql tidb

TiDB Privilege Management




Privilege Management

TiDB's privilege management system is implemented to follow MySQL's: most MySQL syntax and privilege types are supported. If you find behavior that differs from MySQL, please report an issue.

Note: in the current version, the privilege feature is not enabled by default; you must pass a startup flag: ./tidb-server -privilege=true. Without this flag, privilege checks are not enforced. This flag will be removed (expected in RC3) and privilege checking will then be enabled by default.

1. User account operations

Changing a password

set password for 'root'@'%' = 'xxx';

Creating a user

create user 'test'@'127.0.0.1' identified by 'xxx';

Usernames are case sensitive. The host part supports fuzzy matching, for example:

create user 'test'@'192.168.10.%';

allows the test user to log in from any host on the 192.168.10 subnet.

If no host is specified, any IP may log in. If no password is specified, it defaults to empty:

create user 'test';

is equivalent to

create user 'test'@'%' identified by '';

Dropping a user

drop user 'test'@'%';

This removes the user's record from the mysql.user table, as well as the related records in the grant tables.

Resetting a forgotten root password

Start TiDB with a special flag (requires root privileges on the machine):

sudo ./tidb-server -skip-grant-table=true

Started with this flag, TiDB skips the privilege system; log in as root and change the password:

mysql -h 127.0.0.1 -P 4000 -u root

2. Privilege operations

Granting privileges

Grant user xxx read access to database test:

grant Select on test.* to 'xxx'@'%';

Grant a user all privileges on all databases:

grant all privileges on *.* to 'xxx'@'%';

If the target user of a grant does not exist, TiDB creates the user automatically.

mysql> select * from mysql.user where user='xxxx';
Empty set (0.00 sec)

mysql> grant all privileges on test.* to 'xxxx'@'%' identified by 'yyyyy';
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host from mysql.user where user='xxxx';
+------+------+
| user | host |
+------+------+
| xxxx | %    |
+------+------+
1 row in set (0.00 sec)

In this example, xxxx@% is the automatically created user.

grant does not check whether the target database or table exists.

mysql> select * from test.xxxx;
ERROR 1146 (42S02): Table 'test.xxxx' doesn't exist

mysql> grant all privileges on test.xxxx to xxxx;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host from mysql.tables_priv where user='xxxx';
+------+------+
| user | host |
+------+------+
| xxxx | %    |
+------+------+
1 row in set (0.00 sec)

grant can fuzzy-match databases and tables:

mysql> grant all privileges on `te%`.* to genius;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,db from mysql.db where user='genius';
+--------+------+-----+
| user   | host | db  |
+--------+------+-----+
| genius | %    | te% |
+--------+------+-----+
1 row in set (0.00 sec)

In this example, thanks to the % pattern, all databases whose names start with te were granted.
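The % and _ wildcards in grant-table matching behave like SQL LIKE patterns, with \ as the escape character. A Python sketch of that matching logic (illustrative only, not TiDB's implementation):

```python
import re

def like_match(pattern, value):
    """Match a MySQL-style pattern: % = any run, _ = any single char, \\ escapes."""
    regex = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):
            regex.append(re.escape(pattern[i + 1]))  # escaped char matches literally
            i += 2
            continue
        if c == "%":
            regex.append(".*")
        elif c == "_":
            regex.append(".")
        else:
            regex.append(re.escape(c))
        i += 1
    return re.fullmatch("".join(regex), value) is not None
```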

Revoking privileges

The revoke statement mirrors grant:

revoke all privileges on `test`.* from 'genius'@'localhost';

Note that revoke only does exact matching; if no matching record is found, it reports an error, whereas grant allows fuzzy matching.

mysql> revoke all privileges on `te%`.* from 'genius'@'%';
ERROR 1141 (42000): There is no such grant defined for user 'genius' on host '%'

About fuzzy matching, escaping, strings, and identifiers

mysql> grant all privileges on `te\%`.* to 'genius'@'localhost';
Query OK, 0 rows affected (0.00 sec)

This example matches exactly the database named te%; note the \ escape character.

A value in single quotes is a string; a value in backquotes is an identifier. Note the difference below:

mysql> grant all privileges on 'test'.* to 'genius'@'localhost';
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''test'.* to 'genius'@'localhost'' at line 1

mysql> grant all privileges on test.* to 'genius'@'localhost';
Query OK, 0 rows affected (0.00 sec)

If you want to use a reserved keyword as a table name, wrap it in backquotes. For example:

mysql> create table `select` (id int);
Query OK, 0 rows affected (0.27 sec)

Viewing a user's privileges

The show grants statement shows which privileges have been granted to a user.

show grants for 'root'@'%';

For more precision, inspect the grant tables directly. For example, to find out whether the user test@% has the Insert privilege on db1.t:

First check whether the user has the global Insert privilege:

select Insert_priv from mysql.user where user='test' and host='%';

If not, check whether the user has the Insert privilege at the db1 database level:

select Insert_priv from mysql.db where user='test' and host='%' and db='db1';

If still not, check whether the user has the Insert privilege on the table db1.t:

select Table_priv from mysql.tables_priv where user='test' and host='%' and db='db1';

3. How the privilege system is implemented

Grant tables

A few special system tables store all privilege-related data:

  • mysql.user: user accounts and global privileges
  • mysql.db: database-level privileges
  • mysql.tables_priv: table-level privileges
  • mysql.columns_priv: column-level privileges

These tables record the scope the data applies to as well as the privilege information. For example, part of the mysql.user table:

mysql> select User,Host,Select_priv,Insert_priv from mysql.user limit 1;
+------+------+-------------+-------------+
| User | Host | Select_priv | Insert_priv |
+------+------+-------------+-------------+
| root | %    | Y           | Y           |
+------+------+-------------+-------------+
1 row in set (0.00 sec)

In this record, Host and User determine that connection requests from the root user on any host (%) are accepted, while Select_priv and Insert_priv indicate the user holds the global Select and Insert privileges. The scope of mysql.user is global.

Host and User in the mysql.db table determine which databases a user may access; the privilege columns there are scoped to a database.

In theory, every privilege-management operation could be performed by directly CRUD-ing the grant tables.

The implementation is indeed little more than a layer of syntactic sugar over them. For example, dropping a user executes:

delete from mysql.user where user='test';

However, manually editing the grant tables is not recommended.

Connection verification

When a client sends a connection request, TiDB verifies the login. It first checks the mysql.user table; when a record's User and Host match the request, it then verifies the Password. A user identity consists of two parts: the Host the client connects from and the User name. When User is non-empty, the username must match exactly.

A User+Host pair may match multiple rows in the user table. To handle this, the rows are kept sorted; on connection they are tried in order, and the first matching row is used for verification. The sort key is Host first, then User.
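A sketch of that first-match lookup, assuming the conventional MySQL ordering in which more specific hosts (no wildcards, then patterns with more literal characters) sort first; all names and the simplified pattern matching are illustrative, not TiDB code:

```python
import re

def host_key(h):
    """Sort key: literal hosts first, then patterns with more literal chars, '' last."""
    if h == "":
        return (2, 0)
    wild = ("%" in h) or ("_" in h)
    literal_chars = sum(c not in "%_" for c in h)
    return (1 if wild else 0, -literal_chars)

def host_match(pattern, value):
    # Simplified LIKE matching: '_' = one char, '%' = any run (literals not escaped).
    return re.fullmatch(pattern.replace("_", ".").replace("%", ".*"), value) is not None

def find_user_row(rows, user, host):
    """rows: (host, user) pairs from mysql.user; first match after sorting wins."""
    for h, u in sorted(rows, key=lambda r: (host_key(r[0]), r[1])):
        if host_match(h, host) and (u == "" or u == user):
            return (h, u)
    return None
```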

Request verification

After connecting, request verification checks whether the executed operation carries sufficient privileges.

For database-level requests (INSERT, UPDATE), the user's global privileges in the mysql.user table are checked first; if they suffice, access is granted directly. Otherwise, the mysql.db table is checked next.

Privileges in the user table are global and independent of the current default database: a DELETE privilege there applies to any row, in any table, in any database.

In the db table, an empty User matches anonymous users, and User may not contain wildcards. The Host and Db columns may contain % and _ for pattern matching.

The user and db tables are also kept sorted when loaded into memory.

tables_priv and columns_priv use % similarly, but their Db, Table_name, and Column_name columns may not contain %. They are sorted the same way when loaded.
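The user -> db -> tables_priv cascade described above can be sketched as follows (the tables are represented as plain dicts; the column names follow the grant tables, but the shapes are illustrative):

```python
def check_insert(user_priv, db_priv, table_priv, db, table):
    """Check Insert through the global -> database -> table cascade."""
    if user_priv.get("Insert_priv") == "Y":          # mysql.user, global scope
        return True
    if db_priv.get((db, "Insert_priv")) == "Y":      # mysql.db, per-database
        return True
    # mysql.tables_priv modeled as a privilege set per (db, table)
    return "Insert" in table_priv.get((db, table), set())
```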

When changes take effect

At startup, TiDB loads the privilege tables into memory and then validates privileges against the cached data. The grant tables are synced from the database to the cache periodically, so changes take effect with the sync period, which is currently set to 5 minutes.

If you modify the grant tables and need the change to take effect immediately, run manually:

flush privileges;

4. Limitations and constraints

Some rarely used privileges are not yet checked in the current implementation, such as FILE/USAGE/SHUTDOWN/EXECUTE/PROCESS/INDEX; support will be completed over time.

Privilege support does not yet extend to the column level.

mysql newsql nosql pg sql tidb

TiDB Command-Line Flags




PD Control User Guide

PD Control is the command-line tool for PD, used to inspect cluster state and adjust the cluster.

Building from source

  1. Go version 1.7 or later
  2. Run make in the PD project root; this produces bin/pd-ctl

Quick examples

Single-command mode:

./pd-ctl store -d -u http://127.0.0.1:2379

Interactive mode:

./pd-ctl -u http://127.0.0.1:2379

Using an environment variable:

export PD_ADDR=http://127.0.0.1:2379
./pd-ctl

Command-line flags

--pd, -u

  • Specifies the PD address
  • Default: http://127.0.0.1:2379
  • Environment variable: PD_ADDR

--detach, -d

  • Single-command mode (does not enter the readline prompt)
  • Default: false

Commands

store [delete] <store_id>

Shows store information or takes a given store offline.

Examples:

>> store            // show information about all stores
{
  "count": 3,
  "stores": [...]
}
>> store 1          // get the store with store id 1
  ......
>> store delete 1   // take the store with store id 1 offline
  ......

region <region_id>

Shows region information.

Examples:

>> region                               // show information about all regions
{
  "count": 1,
  "regions": [......]
}

>> region 2                             // show the region with region id 2
{
  "region": {
      "id": 2,
      ......
  }
  "leader": {
      ......
  }
}

region key [--format=raw|pb|proto|protobuf] <key>

Queries which region a given key lives on; raw and protobuf formats are supported.

Raw format (default) example:

>> region key abc
{
  "region": {
    "id": 2,
    ......
  }
}

Protobuf format example:

>> region key --format=pb t\200\000\000\000\000\000\000\377\035_r\200\000\000\000\000\377\017U\320\000\000\000\000\000\372
{
  "region": {
    "id": 2,
    ......
  }
}

member [leader | delete]

Shows PD member information or removes a given member.

Examples:

>> member                               // show information about all members
{
  "members": [......]
}
>> member leader                        // show information about the leader
{
  "name": "pd",
  "addr": "http://192.168.199.229:2379",
  "id": 9724873857558226554
}
>> member delete pd2                    // take "pd2" offline
Success!

config [show | set <option> <value>]

Shows or adjusts configuration.

Examples:

>> config show                             // show the config
{
  "max-snapshot-count": 3,
  "max-store-down-time": "1h",
  "leader-schedule-limit": 8,
  "region-schedule-limit": 4,
  "replica-schedule-limit": 8,
}

Adjusting leader-schedule-limit controls how many leader-scheduling tasks run at once. It mainly affects the speed of leader balancing: the larger the value, the faster the scheduling; 0 disables it. Leader scheduling is cheap, so the value can be raised when needed.

>> config set leader-schedule-limit 4       // at most 4 leader schedules at once

Adjusting region-schedule-limit controls how many region-scheduling tasks run at once. It mainly affects the speed of region balancing: the larger the value, the faster the scheduling; 0 disables it. Region scheduling is expensive, so do not set this too high.

>> config set region-schedule-limit 2       // at most 2 region schedules at once

Adjusting replica-schedule-limit controls how many replica-scheduling tasks run at once. It mainly affects scheduling speed when a node fails or is taken offline: the larger the value, the faster the scheduling; 0 disables it. Replica scheduling is expensive, so do not set this too high.

>> config set replica-schedule-limit 4      // at most 4 replica schedules at once

operator [show | add | remove]

Shows and controls scheduling operations.

Examples:

>> operator show                            // show all operators
>> operator show admin                      // show all admin operators
>> operator show leader                     // show all leader operators
>> operator show region                     // show all region operators
>> operator add transfer-leader 1 2         // move the leader of region 1 to store 2
>> operator add transfer-region 1 2 3 4     // move region 1 to stores 2, 3, 4
>> operator add transfer-peer 1 2 3         // move region 1's replica on store 2 to store 3
>> operator remove 1                        // remove the scheduling operation for region 1

scheduler [show | add | remove]

Shows and controls scheduling policies.

Examples:

>> scheduler show                             // show all schedulers
>> scheduler add grant-leader-scheduler 1     // move the leaders of all regions on store 1 to store 1
>> scheduler add evict-leader-scheduler 1     // move the leaders of all regions on store 1 away from store 1
>> scheduler add shuffle-leader-scheduler     // randomly swap leaders between stores
>> scheduler add shuffle-region-scheduler     // randomly move regions between stores
>> scheduler remove grant-leader-scheduler-1  // remove the corresponding scheduler



Parameter explanations

TiDB

--store

  • Specifies the storage engine TiDB uses underneath
  • Default: "goleveldb"
  • You can choose "memory", "goleveldb", "BoltDB" or "TiKV" (the first three are local storage engines; TiKV is a distributed storage engine)
  • For example, tidb-server --store=memory starts a purely in-memory TiDB

--path

  • For the local storage engines "goleveldb" and "BoltDB", path is the actual data directory
  • For the "memory" storage engine, path need not be set
  • For the "TiKV" storage engine, path is the list of PD addresses. Suppose PD is deployed on 192.168.100.113:2379, 192.168.100.114:2379 and 192.168.100.115:2379; then path is "192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379"
  • Default: "/tmp/tidb"

-L

  • Log level
  • Default: "info"
  • One of debug, info, warn, error or fatal

--log-file

  • Log file
  • Default: ""
  • If unset, logs go to "stderr"; if set, logs go to the given file, which is rotated at midnight each day, with the old file renamed as a backup

--host

  • Host the TiDB service listens on
  • Default: "0.0.0.0"
  • The TiDB service listens on this host
  • 0.0.0.0 listens on all network interfaces by default. With multiple interfaces, you can specify the one serving traffic, e.g. 192.168.100.113

-P

  • Port the TiDB service listens on
  • Default: "4000"
  • TiDB accepts requests from MySQL clients on this port

--status

  • Status port of the TiDB service
  • Default: "10080"
  • This port exposes TiDB internals, including Prometheus metrics and pprof
  • Prometheus metrics are available at "http://host:status_port/metrics"
  • Pprof data is available at "http://host:status_port/debug/pprof"

--lease

  • Schema lease time, in seconds
  • Default: "1"
  • The schema lease is mainly used for online schema changes and affects the actual execution time of DDL statements. Do not change it unless you understand the internal mechanism

--socket

  • Accept external connections over a unix socket file
  • Default: ""
  • For example, "/tmp/tidb.sock" opens a unix socket file

--perfschema

  • true/false to enable or disable the performance schema
  • Default: false
  • The performance schema helps inspect internal execution at runtime; see performance schema for more information. Note that enabling it affects TiDB's performance

--privilege

  • true/false to enable or disable the privilege feature (for development and debugging)
  • Default: true
  • Privilege control is still being completed in the current version; this option will be removed in the future

--skip-grant-table

  • Allow anyone to connect without a password, with all operations skipping privilege checks
  • Default: false
  • Enabling this requires root privileges on the local machine; it is typically used to reset a forgotten password

--report-status

  • Enable (true) or disable (false) the status port
  • Default: true
  • true opens the status port; false closes it

--metrics-addr

  • Prometheus Push Gateway address
  • Default: ""
  • If empty, TiDB does not push metrics to the Push Gateway

--metrics-interval

  • Interval for pushing metrics to the Prometheus Push Gateway
  • Default: 15s
  • 0 disables pushing

Placement Driver (PD)

-L

  • Log level
  • Default: "info"
  • One of debug, info, warn, error or fatal

--log-file

  • Log file
  • Default: ""
  • If unset, logs go to "stderr"; if set, logs go to the given file, which is rotated at midnight each day, with the old file renamed as a backup

--config

  • Configuration file
  • Default: ""
  • If a configuration file is given, PD reads it first; any setting that also appears on the command line overrides the file's value

--name

  • Name of this PD
  • Default: "pd"
  • When starting multiple PDs, give each one a distinct name

--data-dir

  • PD data directory
  • Default: "default.${name}"

--client-urls

  • List of URLs for listening to client requests
  • Default: "http://127.0.0.1:2379"
  • When deploying a cluster, --client-urls must use this host's IP address, e.g. "http://192.168.100.113:2379"; when running in docker, use "http://0.0.0.0:2379"

--advertise-client-urls

  • List of client URLs advertised to the outside
  • Default: ${client-urls}
  • In some cases, such as docker or NAT network environments, clients cannot reach PD through the client URLs PD listens on; setting advertise urls lets clients reach it
  • For example, with a docker-internal IP of 172.17.0.1, a host IP of 192.168.100.113, and port mapping -p 2379:2379, set --advertise-client-urls="http://192.168.100.113:2379"; clients then find the service via http://192.168.100.113:2379

--peer-urls

  • List of URLs for listening to requests from other PD nodes
  • Default: "http://127.0.0.1:2380"
  • When deploying a cluster, --peer-urls must use this host's IP address, e.g. "http://192.168.100.113:2380"; when running in docker, use "http://0.0.0.0:2380"

--advertise-peer-urls

  • List of peer URLs advertised to other PD nodes
  • Default: ${peer-urls}
  • In some cases, such as docker or NAT network environments, other nodes cannot reach PD through the peer URLs PD listens on; setting advertise urls lets other nodes reach it
  • For example, with a docker-internal IP of 172.17.0.1, a host IP of 192.168.100.113, and port mapping -p 2380:2380, set --advertise-peer-urls="http://192.168.100.113:2380"; other PD nodes then find the service via http://192.168.100.113:2380

--initial-cluster

  • Initial PD cluster configuration
  • Default: "{name}=http://{advertise-peer-url}"
  • For example, if name is "pd" and advertise-peer-urls is "http://192.168.100.113:2380", then initial-cluster is pd=http://192.168.100.113:2380
  • To start three PDs, initial-cluster might be pd1=http://192.168.100.113:2380, pd2=http://192.168.100.114:2380, pd3=192.168.100.115:2380

--join

  • Dynamically join a PD cluster
  • Default: ""
  • To add a PD to an existing cluster dynamically, use --join="${advertise-client-urls}", where advertise-client-url is that of any PD already in the cluster; multiple URLs may be given, separated by commas

TiKV

TiKV supports human-readable units in its command-line flags:

  • File sizes (in bytes): KB, MB, GB, TB, PB (lowercase also accepted)
  • Durations (milliseconds by default): ms, s, m, h
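A sketch of how such size suffixes might be parsed, matching the document's own "10GB = 10737418240" example below (illustrative, not TiKV's actual parser):

```python
# Binary unit factors; bare numbers are treated as bytes.
_UNITS = {"kb": 2**10, "mb": 2**20, "gb": 2**30, "tb": 2**40, "pb": 2**50}

def parse_size(text):
    """Parse '10GB' -> 10737418240; case-insensitive, bare numbers are bytes."""
    t = text.strip().lower()
    for suffix, factor in _UNITS.items():
        if t.endswith(suffix):
            return int(float(t[: -len(suffix)]) * factor)
    return int(t)
```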

-A, --addr

  • TiKV listening address
  • Default: "127.0.0.1:20160"
  • When deploying a cluster, --addr must use this host's IP address, e.g. "http://192.168.100.113:20160"; when running in docker, use "http://0.0.0.0:20160"

--advertise-addr

  • TiKV address advertised to the outside
  • Default: ${addr}
  • In some cases, such as docker or NAT network environments, clients cannot reach TiKV through the address it listens on; setting advertise addr lets clients reach it
  • For example, with a docker-internal IP of 172.17.0.1, a host IP of 192.168.100.113, and port mapping -p 20160:20160, set --advertise-addr="192.168.100.113:20160"; clients then find the service via 192.168.100.113:20160

-L, --log

  • Log level
  • Default: "info"
  • One of trace, debug, info, warn, error, or off

--log-file

  • Log file
  • Default: ""
  • If unset, logs go to "stderr"; if set, logs go to the given file, which is rotated at midnight each day, with the old file renamed as a backup

-C, --config

  • Configuration file
  • Default: ""
  • If a configuration file is given, TiKV reads it first; any setting that also appears on the command line overrides the file's value

--data-dir

  • TiKV data directory
  • Default: "/tmp/tikv/store"

--capacity

  • TiKV storage capacity
  • Default: 0 (unlimited)
  • PD uses this value to balance the whole cluster. (Tip: you can pass 10GB instead of 10737418240, simplifying the flag)

--pd

  • List of PD addresses
  • Default: ""
  • TiKV must use this value to connect to PD in order to work. Separate multiple PD addresses with commas, e.g.: 192.168.100.113:2379, 192.168.100.114:2379, 192.168.100.115:2379
kv mysql newsql nosql sql

TiDB Binary Deployment Guide




# TiDB Binary Deployment

## Overview

A complete TiDB cluster consists of PD, TiKV and TiDB, started in this order: PD, then TiKV, then TiDB.

Before reading this chapter, make sure you have read the TiDB architecture and the deployment recommendations.

To get to know and try out TiDB quickly, use the single-node quick deployment.

For functional testing of TiDB, use the functional-testing deployment.

For production use of TiDB, use the multi-node cluster deployment.

## Downloading the official binaries

### Linux (CentOS 7+, Ubuntu 14.04+)

```bash
# Download the tarball
wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256

# Verify integrity; "ok" means the file is intact
sha256sum -c tidb-latest-linux-amd64.sha256

# Unpack
tar -xzf tidb-latest-linux-amd64.tar.gz
cd tidb-latest-linux-amd64
```

### CentOS 6

Note: most of our development and testing happens on CentOS 7+ and Ubuntu 14.04+; CentOS 6 has not been rigorously tested, so deploying a TiDB cluster on CentOS 6 is not recommended.

```bash
# Download the CentOS 6 tarball
wget http://download.pingcap.org/tidb-latest-linux-amd64-centos6.tar.gz
wget http://download.pingcap.org/tidb-latest-linux-amd64-centos6.sha256

# Verify integrity; "ok" means the file is intact
sha256sum -c tidb-latest-linux-amd64-centos6.sha256

# Unpack
tar -xzf tidb-latest-linux-amd64-centos6.tar.gz
cd tidb-latest-linux-amd64-centos6
```

## Single-node quick deployment

You can run and test a TiDB cluster on a single machine. Start PD, TiKV and TiDB in order:

  1. Start PD

    ./bin/pd-server --data-dir=pd --log-file=pd.log

  2. Start TiKV

    ./bin/tikv-server --pd="127.0.0.1:2379" --data-dir=tikv --log-file=tikv.log

  3. Start TiDB

    ./bin/tidb-server --store=tikv --path="127.0.0.1:2379" --log-file=tidb.log

  4. Connect to TiDB with the official mysql client

    mysql -h 127.0.0.1 -P 4000 -u root -D test

## Multi-node cluster deployment

For production we recommend deploying TiDB as a multi-node cluster; first see the deployment recommendations.

Here we use six nodes to deploy three PDs, three TiKVs, and one TiDB. The nodes and the services they run:

| Name  | Host IP         | Services  |
|-------|-----------------|-----------|
| node1 | 192.168.199.113 | PD1, TiDB |
| node2 | 192.168.199.114 | PD2       |
| node3 | 192.168.199.115 | PD3       |
| node4 | 192.168.199.116 | TiKV1     |
| node5 | 192.168.199.117 | TiKV2     |
| node6 | 192.168.199.118 | TiKV3     |

Start the PD cluster, the TiKV cluster, and then TiDB, in this order:

  1. Start PD on node1, node2 and node3

    ./bin/pd-server --name=pd1 \
        --data-dir=pd1 \
        --client-urls="http://192.168.199.113:2379" \
        --peer-urls="http://192.168.199.113:2380" \
        --initial-cluster="pd1=http://192.168.199.113:2380" \
        --log-file=pd.log

    ./bin/pd-server --name=pd2 \
        --data-dir=pd2 \
        --client-urls="http://192.168.199.114:2379" \
        --peer-urls="http://192.168.199.114:2380" \
        --join="http://192.168.199.113:2379" \
        --log-file=pd.log

    ./bin/pd-server --name=pd3 \
        --data-dir=pd3 \
        --client-urls="http://192.168.199.115:2379" \
        --peer-urls="http://192.168.199.115:2380" \
        --join="http://192.168.199.113:2379" \
        --log-file=pd.log

  2. Start TiKV on node4, node5 and node6

    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
        --addr="192.168.199.116:20160" \
        --data-dir=tikv1 \
        --log-file=tikv.log

    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
        --addr="192.168.199.117:20160" \
        --data-dir=tikv2 \
        --log-file=tikv.log

    ./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
        --addr="192.168.199.118:20160" \
        --data-dir=tikv3 \
        --log-file=tikv.log

  3. Start TiDB on node1

    ./bin/tidb-server --store=tikv \
        --path="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
        --log-file=tidb.log

  4. Connect to TiDB with the official mysql client

    mysql -h 192.168.199.113 -P 4000 -u root -D test

Note: when starting TiKV in production, it is recommended to use the --config flag to point at the configuration file; without it, TiKV will not read a configuration file. The same applies when deploying PD in production.

Note: if you start the cluster with nohup in production, put the start commands in a script and run the script; otherwise, when the shell exits, the nohup-started processes may receive a signal and exit abnormally; see abnormal process exit.

## Functional-testing deployment

If you only need to test TiDB and have a limited number of machines, you can start just one PD to test the whole cluster.

Here we use four nodes to deploy one PD, three TiKVs, and one TiDB. The nodes and the services they run:

| Name  | Host IP         | Services  |
|-------|-----------------|-----------|
| node1 | 192.168.199.113 | PD1, TiDB |
| node2 | 192.168.199.114 | TiKV1     |
| node3 | 192.168.199.115 | TiKV2     |
| node4 | 192.168.199.116 | TiKV3     |

Start the PD cluster, the TiKV cluster, and then TiDB, in this order:

  1. Start PD on node1

    ./bin/pd-server --name=pd1 \
        --data-dir=pd1 \
        --client-urls="http://192.168.199.113:2379" \
        --peer-urls="http://192.168.199.113:2380" \
        --initial-cluster="pd1=http://192.168.199.113:2380" \
        --log-file=pd.log

  2. Start TiKV on node2, node3 and node4

    ./bin/tikv-server --pd="192.168.199.113:2379" \
        --addr="192.168.199.114:20160" \
        --data-dir=tikv1 \
        --log-file=tikv.log

    ./bin/tikv-server --pd="192.168.199.113:2379" \
        --addr="192.168.199.115:20160" \
        --data-dir=tikv2 \
        --log-file=tikv.log

    ./bin/tikv-server --pd="192.168.199.113:2379" \
        --addr="192.168.199.116:20160" \
        --data-dir=tikv3 \
        --log-file=tikv.log

  3. Start TiDB on node1

    ./bin/tidb-server --store=tikv \
        --path="192.168.199.113:2379" \
        --log-file=tidb.log

  4. Connect to TiDB with the official mysql client

    mysql -h 192.168.199.113 -P 4000 -u root -D test

mysql nosql sql

mysql.ini

MySQL configuration file

[client]
port=3306

[mysql]
default-character-set=utf8

[mysqld]
port=3306
server_id=1
character-set-server=utf8
default-storage-engine=MYISAM
sql-mode="NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
slow_query_log=0
long_query_time=2
local-infile=0
skip-external-locking
#skip-innodb
#log-bin=mysql-bin
#binlog_format=mixed

max_connections=1000
query_cache_size=0
key_buffer_size=64M
sort_buffer_size=256K
read_buffer_size=512K
join_buffer_size=2M
read_rnd_buffer_size=2M
max_allowed_packet=16M
table_open_cache=256
tmp_table_size=64M
max_heap_table_size=64M

myisam_max_sort_file_size=64G
myisam_sort_buffer_size=32M
myisam_repair_threads=1

innodb_buffer_pool_size=64M
innodb_log_file_size=16M
innodb_log_buffer_size=2M
innodb_file_per_table=1
innodb_flush_log_at_trx_commit=1
innodb_lock_wait_timeout=50

[mysqldump]
quick
max_allowed_packet=16M

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size=20M
sort_buffer_size=20M
read_buffer=2M
write_buffer=2M

[mysqlhotcopy]
interactive-timeout

[mysqld_safe]
open-files-limit=8192