
install linux postgresql best-practice deployment

[Repost] PostgreSQL on Linux: Best-Practice Deployment Guide

PostgreSQL is easy to install, but a default installation is merely usable, not well tuned.

Author

digoal

Date

2016-11-21

Tags

Linux , PostgreSQL , Install , best-practice deployment


Background

Installing a database has always been fairly involved, Oracle in particular; people still earn money on the side just installing Oracle.

PostgreSQL, on the other hand, is easy to install, but a default installation is merely usable, not well tuned. Many users install it with the defaults, run a quick benchmark, see poor numbers and walk away.

The reason is obvious: nothing has been tuned.

To run in as many environments as possible, PostgreSQL ships with very conservative defaults, which usually need tuning, for example the checkpoint settings and shared_buffers.

This article describes a best-practice deployment of PostgreSQL on Linux. Much of this material is scattered across my other articles; it has simply never been pulled together into one document.

OS and hardware certification check

The goal is to confirm that the server and the OS are covered by the vendor's certification.

Intel Xeon v3 and v4 CPUs require different minimum RHEL versions;

for details see: https://access.redhat.com/support/policy/intel

Intel Xeon v3 and v4 CPUs also require different minimum Oracle Linux versions;

for details see: http://linux.oracle.com/pls/apex/f?p=117:1

First: the Red Hat ecosystem -- Red Hat's certification catalog: https://access.redhat.com/ecosystem

Second: Oracle Linux hardware certification list for servers and storage: http://linux.oracle.com/pls/apex/f?p=117:1

Install commonly used packages

# yum -y install coreutils glib2 lrzsz mpstat dstat sysstat e4fsprogs xfsprogs ntp readline-devel zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make smartmontools flex bison perl-devel perl-ExtUtils* openldap-devel jadetex  openjade bzip2

Configure OS kernel parameters

1. sysctl

Note that some parameters should be sized according to the amount of RAM (noted below).

For the meaning of each parameter see

《DBA不可不知的操作系统内核参数》 (OS kernel parameters every DBA should know)

# vi /etc/sysctl.conf

# add by digoal.zhou
fs.aio-max-nr = 1048576
fs.file-max = 76724600
kernel.core_pattern= /data01/corefiles/core_%e_%u_%t_%s.%p         
# create /data01/corefiles beforehand with mode 777; if it is a symlink, set the target directory to 777 as well
kernel.sem = 4096 2147483647 2147483646 512000    
# semaphores, check with ipcs -l or ipcs -u; PostgreSQL uses one semaphore set of 17 semaphores per 16 backend processes
kernel.shmall = 107374182      
# limit on the combined size of all shared memory segments, in pages (suggested: 80% of RAM)
kernel.shmmax = 274877906944   
# maximum size of a single shared memory segment (suggested: half of RAM); versions > 9.2 use far less System V shared memory
kernel.shmmni = 819200         
# maximum number of shared memory segments; each PostgreSQL cluster needs at least 2
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144       
# The default setting of the socket receive buffer in bytes.
net.core.rmem_max = 4194304          
# The maximum receive socket buffer size in bytes
net.core.wmem_default = 262144       
# The default setting (in bytes) of the socket send buffer.
net.core.wmem_max = 4194304          
# The maximum send socket buffer size in bytes.
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1    
# enable SYN cookies: when the SYN backlog overflows, handle connections with cookies, which mitigates small SYN flood attacks
net.ipv4.tcp_timestamps = 1    
# helps reduce TIME_WAIT
net.ipv4.tcp_tw_recycle = 0    
# 1 enables fast recycling of TIME-WAIT sockets, but this breaks clients behind NAT; keep it disabled on servers
net.ipv4.tcp_tw_reuse = 1      
# allow TIME-WAIT sockets to be reused for new TCP connections
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 409600000       
#  once this many bytes are dirty, the background flusher (pdflush or its successor) starts writing pages dirtied more than (dirty_expire_centisecs/100) seconds ago to disk
vm.dirty_expire_centisecs = 3000             
#  pages dirty for longer than this are written out; 3000 means 30 seconds
vm.dirty_ratio = 95                          
#  if background writeback cannot keep up and dirty pages exceed 95% of RAM, processes that write to disk (fsync, fdatasync, ...) must flush dirty pages themselves.
#  this effectively keeps user processes from being forced to flush; very useful with several instances per host when per-instance IOPS is capped via cgroups.
vm.dirty_writeback_centisecs = 100            
#  wakeup interval of the background flusher (pdflush or its successor); 100 means 1 second
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0     
#  allow a modest amount of overcommit at allocation time; 1 means "always assume enough memory" and can be used on small-memory test machines
vm.overcommit_ratio = 90     
#  used when overcommit_memory = 2 to compute how much memory may be committed
vm.swappiness = 0            
#  avoid swapping as much as possible
vm.zone_reclaim_mode = 0     
# disable NUMA zone reclaim (or disable NUMA in the kernel / at boot)
net.ipv4.ip_local_port_range = 40000 65535    
# range of local ports automatically assigned for TCP and UDP
fs.nr_open=20480000
# upper limit on file handles a single process may open

# the following parameters need extra care
# vm.extra_free_kbytes = 4096000
# vm.min_free_kbytes = 2097152
# do not set the two values above on machines with little memory
# vm.nr_hugepages = 66536    
#  huge pages are recommended once shared_buffers exceeds 64GB; the page size is Hugepagesize in /proc/meminfo
# vm.lowmem_reserve_ratio = 1 1 1
# recommended on hosts with more than 64GB of RAM; otherwise keep the default 256 256 32
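A rough huge-page sizing sketch (my own back-of-the-envelope numbers, not from the original post): with 2MB huge pages, shared_buffers = 128GB needs 128GB / 2MB = 65536 pages, and the 66536 suggested above is roughly that plus ~1000 pages of headroom for other shared memory. To check the page size on your host:

grep Hugepagesize /proc/meminfo
# Hugepagesize:       2048 kB   ->   128 * 1024 * 1024 / 2048 = 65536 pages, plus headroom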

2. Apply the configuration

sysctl -p

Configure OS resource limits

# vi /etc/security/limits.conf

# if nofile is to exceed 1048576, you must first raise fs.nr_open via sysctl and apply it before this setting can take effect.

* soft    nofile  1024000
* hard    nofile  1024000
* soft    nproc   unlimited
* hard    nproc   unlimited
* soft    core    unlimited
* hard    core    unlimited
* soft    memlock unlimited
* hard    memlock unlimited

Also check the files under /etc/security/limits.d; they override limits.conf.

For an already running process, check its limits in /proc/<pid>/limits, for example:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             11286                11286                processes 
Max open files            1024                 4096                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       11286                11286                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

If you are going to start other processes, log out of the shell and back in first, confirm that the new ulimit settings are in effect, and only then start them.

Configure the OS firewall

(configure rules according to your business needs; here I simply flush them first)

iptables -F

Example rules

# private networks (note that the full RFC1918 block for 172.16 is 172.16.0.0/12)
-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -j ACCEPT

SELinux

Disable it unless you actually need it.

# vi /etc/sysconfig/selinux 

SELINUX=disabled
SELINUXTYPE=targeted
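Editing the config file only takes effect after a reboot; to switch SELinux off in the running system right away as well (standard commands, added here for completeness):

setenforce 0      # permissive immediately; the config file change keeps it off after reboot
getenforce        # should now report Permissive (or Disabled after a reboot)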

Turn off unneeded OS services

chkconfig --list|grep on  
turn off what you do not need, for example 
chkconfig iscsi off

Set up the file system

Pay attention to SSD partition alignment, to extend the drive's life and avoid write amplification.

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%

Format the partition (if you choose ext4):

mkfs.ext4 /dev/sda1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -T largefile -L u01

Recommended ext4 mount options

# vi /etc/fstab

LABEL=u01 /u01     ext4        defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback    0 0

# mkdir /u01
# mount -a

Why data=writeback?

(figure from the original post omitted)

pg_xlog should go on a separate block device with excellent IOPS.

Set the SSD I/O scheduler to deadline

For non-SSD disks keep CFQ; for SSDs deadline is recommended.

Temporary setting (for disk sda, say)

echo deadline > /sys/block/sda/queue/scheduler

Permanent setting

edit the grub config to change the default block-device scheduler

vi /boot/grub.conf

elevator=deadline

Note: if the machine has both spinning disks and SSDs, you can instead set the appropriate scheduler per disk from /etc/rc.local.
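A minimal /etc/rc.local sketch along those lines (the device names sda/sdb are placeholders for this example; adjust to your layout):

# SSD: deadline; spinning disk: cfq
echo deadline > /sys/block/sda/queue/scheduler
echo cfq > /sys/block/sdb/queue/scheduler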

Disable transparent huge pages and NUMA

Together with the default IO scheduler above, the kernel command line becomes:

vi /boot/grub.conf

elevator=deadline numa=off transparent_hugepage=never 

Compiler

A reasonably new compiler is recommended. For installing gcc 6.2.0 see

《PostgreSQL clang vs gcc 编译》 (PostgreSQL: clang vs gcc builds)

Once built, the toolchain can simply be copied to other machines.

cd ~
tar -jxvf gcc6.2.0.tar.bz2
tar -jxvf python2.7.12.tar.bz2


# vi /etc/ld.so.conf

/home/digoal/gcc6.2.0/lib
/home/digoal/gcc6.2.0/lib64
/home/digoal/python2.7.12/lib

# ldconfig

Environment variables

# vi ~/env_pg.sh

export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=$1
export PGDATA=/$2/digoal/pg_root$PGPORT
export LANG=en_US.utf8
export PGHOME=/home/digoal/pgsql9.6
export LD_LIBRARY_PATH=/home/digoal/gcc6.2.0/lib:/home/digoal/gcc6.2.0/lib64:/home/digoal/python2.7.12/lib:$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/home/digoal/gcc6.2.0/bin:/home/digoal/python2.7.12/bin:/home/digoal/cmake3.6.3/bin:$PGHOME/bin:$PATH:.
export DATE=`date +"%Y%m%d%H%M"`
export MANPATH=$PGHOME/share/man:$MANPATH
export PGHOST=$PGDATA
export PGUSER=postgres
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
unalias vi

icc, clang

If you want to build PostgreSQL with ICC or clang, see

《[转载]用intel编译器icc编译PostgreSQL》 (building PostgreSQL with Intel's icc compiler)

《PostgreSQL clang vs gcc 编译》 (PostgreSQL: clang vs gcc builds)

Build PostgreSQL

USE_NAMED_POSIX_SEMAPHORES is recommended.

src/backend/port/posix_sema.c

create sem : 
named :
mySem = sem_open(semname, O_CREAT | O_EXCL,
(mode_t) IPCProtection, (unsigned) 1);


unnamed :
/*
* PosixSemaphoreCreate
*
* Attempt to create a new unnamed semaphore.
*/
static void
PosixSemaphoreCreate(sem_t * sem)
{
if (sem_init(sem, 1, 1) < 0)
elog(FATAL, "sem_init failed: %m");
}


remove sem : 

#ifdef USE_NAMED_POSIX_SEMAPHORES
/* Got to use sem_close for named semaphores */
if (sem_close(sem) < 0)
elog(LOG, "sem_close failed: %m");
#else
/* Got to use sem_destroy for unnamed semaphores */
if (sem_destroy(sem) < 0)
elog(LOG, "sem_destroy failed: %m");
#endif

Configure and build

. ~/env_pg.sh 1921 u01

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" ./configure --prefix=/home/digoal/pgsql9.6
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make install-world

For a development environment where you need to debug, build like this instead:

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" ./configure --prefix=/home/digoal/pgsql9.6 --enable-cassert
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make install-world

Initialize the database cluster

Put pg_xlog on the partition with the best IOPS.

. ~/env_pg.sh 1921 u01
initdb -D $PGDATA -E UTF8 --locale=C -U postgres -X /u02/digoal/pg_xlog$PGPORT

Configure postgresql.conf

The example below assumes PostgreSQL 9.6 on a host with 512GB of RAM.

Just append the settings to the end of the file; if a parameter appears more than once, the last occurrence wins.

$ vi postgresql.conf

listen_addresses = '0.0.0.0'
port = 1921
max_connections = 5000
unix_socket_directories = '.'
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 128GB                      # 1/4 of host RAM
maintenance_work_mem = 2GB                  # min( 2GB, (1/4 of host RAM)/autovacuum_max_workers )
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 10.0
bgwriter_flush_after = 0                    # no need for smoothed write-back on machines with good IO
max_worker_processes = 128
max_parallel_workers_per_gather = 0         #  set > 1 to enable parallel query; no more than (host cores - 2) is recommended
old_snapshot_threshold = -1
backend_flush_after = 0  # no need for smoothed write-back on good IO; otherwise 128~256kB is recommended
wal_level = replica
synchronous_commit = off
full_page_writes = on   # can be turned off on block devices that write more than BLOCK_SIZE atomically (with aligned partitions), or on copy-on-write file systems
wal_buffers = 1GB       # min( 2047MB, shared_buffers/32 ) = 512MB
wal_writer_delay = 10ms
wal_writer_flush_after = 0  # no need for smoothed write-back on good IO; otherwise 128~256kB is recommended
checkpoint_timeout = 30min  # avoid frequent checkpoints, otherwise the XLOG fills up with full-page writes (when full_page_writes=on)
max_wal_size = 256GB       # about 2x shared_buffers is recommended
min_wal_size = 64GB        # max_wal_size/4
checkpoint_completion_target = 0.05          # with good disks the checkpoint can finish quickly and recovery reaches a consistent state quickly; otherwise use 0.5~0.9
checkpoint_flush_after = 0                   # no need for smoothed write-back on good IO; otherwise 128~256kB is recommended
archive_mode = on
archive_command = '/bin/date'      #  change later, e.g.  'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'
max_wal_senders = 8
random_page_cost = 1.3  # on good IO there is little cost difference between random and sequential reads
parallel_tuple_cost = 0
parallel_setup_cost = 0
min_parallel_relation_size = 0
effective_cache_size = 300GB                          # judgement call: host RAM minus session RSS, shared_buffers and autovacuum workers; the rest is available as OS cache
force_parallel_mode = off
log_destination = 'csvlog'
logging_collector = on
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_timezone = 'PRC'
vacuum_defer_cleanup_age = 0
hot_standby_feedback = off                             # keep off, so long-running transactions on a standby cannot prevent the primary from removing dead tuples (bloat)
max_standby_archive_delay = 300s
max_standby_streaming_delay = 300s
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 16                            # can be higher with many cores and good IO, but mind the memory cost of 16 * autovacuum work memory
autovacuum_naptime = 45s                               # not too frequent, otherwise vacuum generates a lot of extra XLOG
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 1600000000
autovacuum_multixact_freeze_max_age = 1600000000
vacuum_freeze_table_age = 1500000000
vacuum_multixact_freeze_table_age = 1500000000
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
shared_preload_libraries='pg_stat_statements'

## if the database has a very large number of small files (hundreds of thousands of tables plus indexes, all of them actually accessed), allow more FDs so backends do not keep opening and closing files,
## but keep it below the ulimit -n (open files) configured earlier
max_files_per_process=655360

Configure pg_hba.conf

Block access you do not need, open what you do, and always require password authentication.

$ vi pg_hba.conf

host replication xx 0.0.0.0/0 md5  # streaming replication

host all postgres 0.0.0.0/0 reject # refuse superuser logins over the network
host all all 0.0.0.0/0 md5  # other users log in with md5 passwords

Start the database

pg_ctl start
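A quick smoke test (relying on the PGHOST/PGPORT/PGUSER values exported by env_pg.sh above; just an illustrative check, not part of the original post):

psql -c 'select version();'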

That's it: your PostgreSQL instance is deployed and ready to use.


arangodb linux mysql nosql distributed

Using arangodb-php

ArangoDB is an open-source, distributed, native multi-model database

ArangoDB is an open-source, distributed, native multi-model database (Apache 2 license). The idea: one engine, one query language, one database technology and multiple data models, to give projects maximum flexibility while simplifying the technology stack and database operations and lowering operating costs.

  1. Multiple data models: use documents, graphs, key-value pairs, or any combination of them as your data model
  2. Convenient queries: the SQL-like query language AQL, plus queries over REST and other interfaces
  3. Ruby and JS extensions: no language boundary, you can use a single language from the front end to the back end
  4. High performance and small footprint: ArangoDB is faster than other NoSQL stores while using less space
  5. Easy to use: up and running in seconds, with a web UI to manage your ArangoDB
  6. Open source and free: ArangoDB is licensed under the Apache license

There is still very little Chinese material on arangodb-php,

and the arangodb-php sample code is not very clear either, so here is a quick try at simple CRUD operations (a short sketch of the document operations follows the class below).

/**
 * Created by PhpStorm.
 * User: free
 * Date: 17-7-28
 * Time: 10:05 PM
 */
//Usage
//$connection=new arango();
//
//$id=new ArangoDocumentHandler($connection->c);
//
//
//$data=$id->get('user','aaaa');//returns the document as JSON; convert it to an array first to work with it


//composer require triagens/arangodb


//require 'vendor/autoload.php';

use triagens\ArangoDb\Collection as ArangoCollection;
use triagens\ArangoDb\CollectionHandler as ArangoCollectionHandler;
use triagens\ArangoDb\Connection as ArangoConnection;
use triagens\ArangoDb\ConnectionOptions as ArangoConnectionOptions;
use triagens\ArangoDb\DocumentHandler as ArangoDocumentHandler;
use triagens\ArangoDb\Document as ArangoDocument;
use triagens\ArangoDb\Exception as ArangoException;
use triagens\ArangoDb\Export as ArangoExport;
use triagens\ArangoDb\ConnectException as ArangoConnectException;
use triagens\ArangoDb\ClientException as ArangoClientException;
use triagens\ArangoDb\ServerException as ArangoServerException;
use triagens\ArangoDb\Statement as ArangoStatement;
use triagens\ArangoDb\UpdatePolicy as ArangoUpdatePolicy;

class arango
{
    public function __construct(){
        $connectionOptions = [
            // database name
            ArangoConnectionOptions::OPTION_DATABASE => 'free',
            // server endpoint to connect to
            ArangoConnectionOptions::OPTION_ENDPOINT => 'tcp://127.0.0.1:8529',
            // authorization type to use (currently supported: 'Basic')
            ArangoConnectionOptions::OPTION_AUTH_TYPE => 'Basic',
            // user for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_USER => 'root',
            // password for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_PASSWD => 'free',
            // connection persistence on server. can use either 'Close' (one-time connections) or 'Keep-Alive' (re-used connections)
            ArangoConnectionOptions::OPTION_CONNECTION => 'Keep-Alive',
            // connect timeout in seconds
            ArangoConnectionOptions::OPTION_TIMEOUT => 3,
            // whether or not to reconnect when a keep-alive connection has timed out on server
            ArangoConnectionOptions::OPTION_RECONNECT => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_CREATE => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_UPDATE_POLICY => ArangoUpdatePolicy::LAST,
        ];


// turn on exception logging (logs to whatever PHP is configured)
        ArangoException::enableLogging();


        $this->c = new ArangoConnection($connectionOptions);
//        $connect->auth()

    }
}
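A minimal CRUD sketch on top of the class above, assuming the DocumentHandler API of arangodb-php 3.x (save/get/update/removeById); the collection name 'user' and attribute values are placeholders for this example:

$conn = new arango();
$docs = new ArangoDocumentHandler($conn->c);

// create
$doc = ArangoDocument::createFromArray(['name' => 'free', 'age' => 1]);
$id  = $docs->save('user', $doc);        // returns the new document's id

// read
$found = $docs->get('user', $id);

// update
$found->set('age', 2);
$docs->update($found);

// delete
$docs->removeById('user', $id);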
mirrors pip python domestic mirror

python pip: switching to a mirror inside China

python pip: switching to a mirror inside China

http://mirrors.aliyun.com/pypi/simple/


[global]
index-url = http://pypi.douban.com/simple
[install]
trusted-host=pypi.douban.com

Put the configuration above in ~/.pip/pip.conf.
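To use a mirror for a single install without any config file (standard pip flags; the package name is just an example):

pip install -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com requests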
kv linux nosql redis

ardb: a fun Redis-compatible project on top of multiple storage engines

Ardb is a NoSQL database server built on top of persistent key/value storage engines

Ardb is a NoSQL database server built on top of persistent key/value storage engines. It supports complex data structures such as list/set/sorted set/bitset/hash/table and speaks the Redis protocol.

It supports several storage engines; pick one at build time:

git clone https://github.com/yinqiwen/ardb

storage_engine=rocksdb make
storage_engine=leveldb make
storage_engine=lmdb make
storage_engine=wiredtiger make
storage_engine=perconaft make
storage_engine=forestdb make


Then just run make dist.

rocksdb: Facebook's flash-oriented storage engine, derived from LevelDB

leveldb: a very efficient key-value store implemented by Google

lmdb: the embedded storage engine (a library linked into the host program) developed by the OpenLDAP project

wiredtiger: MongoDB's storage engine

perconaft: Percona's engine; their tuned database builds are generally quite good

ForestDB: a fast key-value storage engine based on HB+-trie (a hierarchical B+-tree trie), developed by the Couchbase cache and storage team.

No idea why, but one of these failed to build for me!

aql mysql high-level operations

ArangoDB AQL operations in detail

ArangoDB AQL operations in detail


The following high-level operations are covered below:

  • FOR: iterate over all elements of an array.

  • RETURN: produce the result of a query.

  • FILTER: restrict the results to elements that match arbitrary logical conditions.

  • SORT: force a sort of the array of already produced intermediate results.

  • LIMIT: reduce the number of elements in the result to at most the specified number, optionally skipping elements (pagination).

  • LET: assign an arbitrary value to a variable.

  • COLLECT: group an array by one or more group criteria; can also count and aggregate.

  • REMOVE: remove documents from a collection.

  • UPDATE: partially update documents in a collection.

  • REPLACE: completely replace documents in a collection.

  • INSERT: insert new documents into a collection.

  • UPSERT: update/replace an existing document, or create it if it does not exist.

  • WITH: specify the collections used in a query (at the start of the query only).


FOR

The FOR keyword iterates over all elements of an array. The general syntax is:

FOR variableName IN expression

There is also a special variant for graph traversals:

FOR vertexVariableName, edgeVariableName, pathVariableName IN traversalExpression

Each array element returned by expression is visited exactly once. In all cases the expression must return an array; an empty array is allowed as well. The current array element is made available for further processing in the variable specified by variableName.

FOR u IN users
RETURN u

Return value:

[
{
  "_key": "2427801",
  "_id": "ks/2427801",
  "_rev": "_VeWiZ2i---",
  "id": 1,
  "a": "test",
  "b": [
    "aaaaaaaaaaaaaaaaa"
  ]
}
]

This iterates over all elements of the array users (note: in this example the array consists of all documents in the collection named "users") and makes the current element available in the variable u. It is not modified here, just pushed into the result with the RETURN keyword.

Note: when iterating over a collection like this, the order of documents is undefined unless an explicit sort order is defined with a SORT statement.

A variable introduced by FOR is available until the scope the FOR is placed in is closed.

Another example, iterating over a statically declared array of values:

FOR year IN [ 2011, 2012, 2013 ]
RETURN { "year" : year, "isLeapYear" : year % 4 == 0 && (year % 100 != 0 || year % 400 == 0) }

Nesting of multiple FOR statements is also allowed. When FOR statements are nested, the cross product of the array elements returned by the individual statements is created.

FOR u IN users
FOR l IN locations
  RETURN { "user" : u, "location" : l }

In this example there are two array iterations: an outer iteration over the array users plus an inner iteration over the array locations. The inner array is traversed as many times as there are elements in the outer array. In each iteration the current values of users and locations are made available in the variables u and l for further processing.


RETURN

The RETURN statement produces the result of a query. A RETURN statement must be specified at the end of each block of a data-selection query, otherwise the query result is undefined. Using RETURN at the top level of a data-modification query is optional.

RETURN expression

The expression of the RETURN statement is produced for each iteration of the block the RETURN is placed in, which means the result of a RETURN statement is always an array. This includes an empty array if no documents matched the query, and a single return value returned as an array with one element.

To return all elements of the currently iterated array without modification, the following simple form can be used:

FOR variableName IN expression
RETURN variableName

Since RETURN accepts arbitrary expressions, arbitrary computations can be performed to compute the result elements. Any variable valid in the scope the RETURN is placed in can be used for the computation.

To iterate over all documents of a collection named users and return the full documents, you can write:

FOR u IN users
RETURN u

In each iteration of the for loop, a document of the users collection is assigned to the variable u and, in this example, returned unmodified. To return only one attribute of each document, a different return expression can be used:

FOR u IN users
RETURN u.name

Or, to return multiple attributes, an object can be constructed like this:

FOR u IN users
RETURN { name: u.name, age: u.age }

Note: RETURN closes the current scope and eliminates all local variables in it. This is important to remember when working with subqueries.

FOR u IN users
RETURN { [ u._id ]: u.age }

In this example, each user document's _id is used as an expression to compute the attribute key:

[
{
  "users/9883": 32
},
{
  "users/9915": 27
},
{
  "users/10074": 69
}
]

The result contains one object with a single key/value pair per user, which is usually not what you want. To get a single object that maps user ids to ages, the individual results have to be merged and returned with another RETURN:

RETURN MERGE(
  FOR u IN users
    RETURN { [ u._id ]: u.age }
)

Result:

[
{
  "users/10074": 69,
  "users/9883": 32,
  "users/9915": 27
}
]

Keep in mind that if the key expression evaluates to the same value multiple times, only one of the key/value pairs with the duplicate name will survive MERGE(). To avoid this, you can go without dynamic attribute names, use static names instead, and return all document attributes as attribute values:

FOR u IN users
RETURN { name: u.name, age: u.age }

Result:

[
{
  "name": "John Smith",
  "age": 32
},
{
  "name": "James Hendrix",
  "age": 69
},
{
  "name": "Katie Foster",
  "age": 27
}
]

FILTER

The FILTER statement can be used to restrict the results to elements that match arbitrary logical conditions.

General syntax:

FILTER condition

condition must be a condition that evaluates to either false or true. If the condition evaluates to false, the current element is skipped, so it will not be processed further and will not be part of the result. If the condition is true, the current element is not skipped and can be processed further. See Operators for a list of comparison operators, logical operators etc. that can be used in conditions.

FOR u IN users
FILTER u.active == true && u.age < 39
RETURN u

Multiple FILTER statements are allowed in a query, even in the same block. If multiple FILTER statements are used, their results are combined with a logical AND, meaning all filter conditions must be true for an element to be included.

FOR u IN users
FILTER u.active == true
FILTER u.age < 39
RETURN u

In the example above, all array elements of users that have an attribute active with value true and an attribute age with a value less than 39 (including null) are included in the result. All other elements of users are skipped and not included in the RETURN result. See the chapter on accessing data from collections for a description of the impact of non-existent or null attributes.

Order of operations

Note that the position of a FILTER statement can influence the result of a query. There are 16 active users in the test data, for instance:

FOR u IN users
FILTER u.active == true
RETURN u

We can limit the result set to at most 5 users:

FOR u IN users
FILTER u.active == true
LIMIT 5
RETURN u

This may return the user documents of Jim, Diego, Anthony, Michael and Chloe, for example. Which ones are returned is undefined, since there is no SORT statement to ensure a particular order. If we add a second FILTER statement to only return women...

FOR u IN users
FILTER u.active == true
LIMIT 5
FILTER u.gender == "f"
RETURN u

...it might just return the Chloe document, because the LIMIT is applied before the second FILTER. No more than 5 documents reach the second FILTER block, and not all of them fulfill the gender criterion, even though there are more than 5 active female users in the collection. A more deterministic result can be achieved by adding a SORT block:

FOR u IN users
FILTER u.active == true
SORT u.age ASC
LIMIT 5
FILTER u.gender == "f"
RETURN u

This will return the users Mariah and Mary. If sorted by age in descending order, it returns the Sophia, Emma and Madison documents instead. A FILTER after a LIMIT is not very common though; you probably want a query like this instead:

FOR u IN users
FILTER u.active == true AND u.gender == "f"
SORT u.age ASC
LIMIT 5
RETURN u

The significance of where you place the FILTER block is that this single keyword can play the role of two SQL keywords, WHERE as well as HAVING. AQL's FILTER therefore works the same for document attributes, COLLECT aggregates and any other intermediate result.


SORT

The SORT statement forces a sort of the array of already produced intermediate results in the current block. SORT allows specifying one or more sort criteria and directions. The general syntax is:

SORT expression direction

An example query that sorts by lastName (ascending), then firstName (ascending), then by id (descending):

FOR u IN users
SORT u.lastName, u.firstName, u.id DESC
RETURN u

Specifying the direction is optional. The default (implicit) direction of a sort expression is ascending. To explicitly specify the sort direction, use the keywords ASC (ascending) and DESC (descending). Multiple sort criteria can be separated with commas; in this case the direction is specified separately for each expression. For example:

SORT doc.lastName, doc.firstName

will first sort documents by lastName in ascending order and then by firstName in ascending order.

SORT doc.lastName DESC, doc.firstName

will first sort documents by lastName in descending order and then by firstName in ascending order.

SORT doc.lastName, doc.firstName DESC

will first sort documents by lastName in ascending order and then by firstName in descending order.

Note: when iterating over collection-based arrays, the order of documents is always undefined unless an explicit sort order is defined using SORT.

Note that constant SORT expressions can be used to indicate that no particular sort order is required. The AQL optimizer will optimize away constant SORT expressions during optimization, but specifying them explicitly may enable further optimizations if the optimizer does not need to take any particular sort order into account. This is especially the case after a COLLECT statement, which is supposed to produce a sorted result; specifying an extra SORT null after it allows the AQL optimizer to remove the post-sorting of the collect result entirely.


LIMIT

The LIMIT statement allows slicing the result array using an offset and a count. It reduces the number of elements in the result to at most the specified number. Two general forms of LIMIT are used:

LIMIT count
LIMIT offset, count

The first form allows specifying only the count value, whereas the second form allows specifying both offset and count. The first form is identical to using the second form with an offset of 0.

FOR u IN users
LIMIT 5
RETURN u

The query above returns the first five documents of the users collection. It could also be written as LIMIT 0, 5 with the same result. Which documents it actually returns is rather arbitrary, because no explicit sort order is specified; therefore a LIMIT should usually be accompanied by a SORT operation.

The offset value specifies how many elements of the result should be skipped. It must be 0 or greater. The count value specifies how many elements should at most be included in the result.

FOR u IN users
SORT u.firstName, u.lastName, u.id DESC
LIMIT 2, 5
RETURN u

In the example above, the documents of users are sorted, the first two results are skipped, and the next five user documents are returned.

Note that variables and expressions cannot be used for offset and count. Their values must be known at query compile time, which means you can only use number literals and bind parameters.

Using LIMIT is meaningful in relation to the other operations in a query. In particular, LIMIT operations before FILTERs can change the result significantly, because the operations are executed in the order in which they are written in the query. See FILTER for a detailed example.


LET

The LET statement can be used to assign an arbitrary value to a variable. The variable is then introduced in the scope the LET statement is placed in.

LET variableName = expression

Variables are immutable in AQL, which means they cannot be re-assigned:

LET a = [1, 2, 3]  // initial assignment

a = PUSH(a, 4)     // syntax error, unexpected identifier
LET a = PUSH(a, 4) // parsing error, variable 'a' is assigned multiple times
LET b = PUSH(a, 4) // allowed, result: [1, 2, 3, 4]

LET statements are mostly used to declare complex computations and to avoid repeated computation of the same value in multiple parts of a query.

FOR u IN users
LET numRecommendations = LENGTH(u.recommendations)
RETURN { 
  "user" : u, 
  "numRecommendations" : numRecommendations, 
  "isPowerUser" : numRecommendations >= 10 
}

In the example above, the number of recommendations is calculated with a LET statement, which avoids computing the value twice in the RETURN statement.

Another use case for LET is to declare a complex computation in a subquery, making the whole query more readable.

FOR u IN users
LET friends = (
FOR f IN friends 
  FILTER u.id == f.userId
  RETURN f
)
LET memberships = (
FOR m IN memberships
  FILTER u.id == m.userId
    RETURN m
)
RETURN { 
  "user" : u, 
  "friends" : friends, 
  "numFriends" : LENGTH(friends), 
  "memberShips" : memberships 
}

COLLECT

The COLLECT keyword can be used to group an array by one or more group criteria.

A COLLECT statement eliminates all local variables in the current scope. After a COLLECT, only the variables introduced by the COLLECT itself are available.

The general syntaxes for COLLECT are:

COLLECT variableName = expression options
COLLECT variableName = expression INTO groupsVariable options
COLLECT variableName = expression INTO groupsVariable = projectionExpression options
COLLECT variableName = expression INTO groupsVariable KEEP keepVariable options
COLLECT variableName = expression WITH COUNT INTO countVariable options
COLLECT variableName = expression AGGREGATE variableName = aggregateExpression options
COLLECT AGGREGATE variableName = aggregateExpression options
COLLECT WITH COUNT INTO countVariable options

options is optional in all variants.

Grouping syntaxes

The first syntax form of COLLECT only groups the result by the group criteria specified in expression. To further process the results produced by COLLECT, a new variable (specified by variableName) is introduced. This variable contains the group value.

Here is an example query that finds the distinct values in u.city and makes them available in the variable city:

FOR u IN users
  COLLECT city = u.city
  RETURN { 
    "city" : city 
  }

The second form does the same as the first form, but additionally introduces a variable (specified by groupsVariable) that contains all elements that fell into the group. It works as follows: the groupsVariable variable is an array containing as many elements as there are in the group. Each member of that array is a JSON object in which the value of every variable defined in the AQL query is bound to the corresponding attribute. Note that this considers all variables defined before the COLLECT statement, but not those at the top level (outside of any FOR), unless the COLLECT statement is itself at the top level, in which case all variables are taken. Furthermore, note that the optimizer may move LET statements out of FOR statements to improve performance.

FOR u IN users
COLLECT city = u.city INTO groups
RETURN { 
  "city" : city, 
  "usersInCity" : groups 
}

In the example above, the array users is grouped by the attribute city. The result is a new array of documents, with one element per distinct u.city value. The elements from the original array (here: users) per city are made available in the variable groups; this is because of the INTO clause.

COLLECT also allows specifying multiple group criteria. Individual group criteria can be separated by commas:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups
RETURN { 
  "country" : country, 
  "city" : city, 
  "usersInCity" : groups 
}

In the example above, the array users is grouped first by country and then by city, and for each distinct combination of country and city the users are returned.

Discarding obsolete variables

The third form of COLLECT allows rewriting the contents of groupsVariable using an arbitrary projectionExpression:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups = u.name
RETURN { 
  "country" : country, 
  "city" : city, 
  "userNames" : groups 
}

In the above example, only the projectionExpression is u.name. Therefore only this attribute is copied into groupsVariable for each document. This can be much more efficient than copying all variables in scope into groupsVariable, as would happen without a projectionExpression.

The expression following INTO can also be used for arbitrary computations:

FOR u IN users
COLLECT country = u.country, city = u.city INTO groups = { 
  "name" : u.name, 
  "isActive" : u.status == "active"
}
RETURN { 
  "country" : country, 
  "city" : city, 
  "usersInCity" : groups 
}

COLLECT also provides an optional KEEP clause that can be used to control which variables are copied into the variable created by INTO. If no KEEP clause is specified, all variables from the scope are copied as sub-attributes into groupsVariable. This is safe, but can have a negative impact on performance if there are many variables in scope or the variables contain large amounts of data.

The following example limits the variables copied into groupsVariable to just name. The variables u and someCalculation, which are also in scope, are not copied into groupsVariable because they are not listed in the KEEP clause:

FOR u IN users
LET name = u.name
LET someCalculation = u.value1 + u.value2
COLLECT city = u.city INTO groups KEEP name 
RETURN { 
  "city" : city, 
  "userNames" : groups[*].name 
}

KEEP is only valid in combination with INTO. Only valid variable names can be used in the KEEP clause. KEEP supports specifying multiple variable names.

Group length calculation

COLLECT also provides a special WITH COUNT clause that can be used to determine the number of group members efficiently.

The simplest form just returns the number of items that made it into the COLLECT:

FOR u IN users
COLLECT WITH COUNT INTO length
RETURN length

The above is equivalent to, but more efficient than:

RETURN LENGTH(
  FOR u IN users
    RETURN 1
)

The WITH COUNT clause can also be used to efficiently count the number of items in each group:

FOR u IN users
COLLECT age = u.age WITH COUNT INTO length
RETURN { 
  "age" : age, 
  "count" : length 
}

Aggregation

A COLLECT statement can be used to perform aggregation of data per group. To only determine group lengths, the WITH COUNT INTO variant of COLLECT can be used as described above.

For other aggregations, aggregate functions can be run on the COLLECT results:

FOR u IN users
COLLECT ageGroup = FLOOR(u.age / 5) * 5 INTO g
RETURN { 
  "ageGroup" : ageGroup,
  "minAge" : MIN(g[*].u.age),
  "maxAge" : MAX(g[*].u.age)
}
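The syntax list above also shows a dedicated AGGREGATE variant. As a sketch (the attribute names follow the example above; the wording is mine, not the original post's), the same min/max aggregation can be pushed into the COLLECT itself, which avoids materializing the groups:

FOR u IN users
  COLLECT ageGroup = FLOOR(u.age / 5) * 5
  AGGREGATE minAge = MIN(u.age), maxAge = MAX(u.age)
  RETURN {
    "ageGroup" : ageGroup,
    "minAge" : minAge,
    "maxAge" : maxAge
  }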

REMOVE

The REMOVE keyword can be used to remove documents from a collection. On a single server, the removal of documents is executed transactionally in an all-or-nothing fashion. For sharded collections, the entire remove operation is not transactional.

Each REMOVE operation is restricted to a single collection, and the collection name must not be dynamic. Only a single REMOVE statement per collection is allowed per AQL query, and it cannot be followed by read operations that access the same collection, by traversal operations, or by AQL functions that can read documents.

The syntax for a remove operation is:

REMOVE keyExpression IN collection options

collection must contain the name of the collection to remove the documents from. keyExpression must be an expression that contains the document identification: either a string (which must then contain the document key) or a document that contains a _key attribute.

The following queries are therefore all equivalent:

FOR u IN users
  REMOVE { _key: u._key } IN users

FOR u IN users
  REMOVE u._key IN users

FOR u IN users
  REMOVE u IN users

Note: a remove operation can remove arbitrary documents, and the documents do not need to be identical to the ones produced by a preceding FOR statement:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users

FOR u IN users
  FILTER u.active == false
  REMOVE { _key: u._key } IN backup

Setting query options

options can be used to suppress query errors that may occur when trying to remove non-existing documents. For example, the following query will fail if one of the to-be-deleted documents does not exist:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users

By specifying the ignoreErrors query option, these errors can be suppressed so the query completes:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users OPTIONS { ignoreErrors: true }

To make sure data has been written to disk when a query returns, there is the waitForSync query option:

FOR i IN 1..1000
  REMOVE { _key: CONCAT('test', i) } IN users OPTIONS { waitForSync: true }

Returning the removed documents

The removed documents can also be returned by the query. In this case, the REMOVE statement must be followed by a RETURN statement (intermediate LET statements are allowed, too). REMOVE introduces the pseudo-value OLD to refer to the removed documents:

REMOVE keyExpression IN collection options RETURN OLD

Following is an example that uses a variable named removed to capture the removed documents. For each removed document, the document key is returned.

FOR u IN users
  REMOVE u IN users 
  LET removed = OLD 
  RETURN removed._key

UPDATE

The UPDATE keyword can be used to partially update documents in a collection. On a single server, updates are executed transactionally in an all-or-nothing fashion. For sharded collections, the entire update operation is not transactional.

Each UPDATE operation is restricted to a single collection, and the collection name must not be dynamic. Only a single UPDATE statement per collection is allowed per AQL query, and it cannot be followed by read operations that access the same collection, by traversal operations, or by AQL functions that can read documents. The system attributes _id, _key and _rev cannot be updated, but _from and _to can.

The two syntaxes for an update operation are:

UPDATE document IN collection options
UPDATE keyExpression WITH document IN collection options

collection must contain the name of the collection in which the documents should be updated. document must be a document containing the attributes and values to be updated. When using the first syntax, document must also contain the _key attribute to identify the document to be updated.
FOR u IN users
  UPDATE { _key: u._key, name: CONCAT(u.firstName, " ", u.lastName) } IN users

The following query is invalid because it does not contain a _key attribute, so it is impossible to determine the documents to be updated:

FOR u IN users
  UPDATE { name: CONCAT(u.firstName, " ", u.lastName) } IN users

When using the second syntax, keyExpression provides the document identification. This can either be a string (which must then contain the document key) or a document that contains a _key attribute.

The following queries are equivalent:

FOR u IN users
  UPDATE u._key WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

FOR u IN users
  UPDATE { _key: u._key } WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

FOR u IN users
  UPDATE u WITH { name: CONCAT(u.firstName, " ", u.lastName) } IN users

An update operation may update arbitrary documents, which do not need to be identical to the ones produced by a preceding FOR statement:

FOR i IN 1..1000
  UPDATE CONCAT('test', i) WITH { foobar: true } IN users

FOR u IN users
  FILTER u.active == false
  UPDATE u WITH { status: 'inactive' } IN backup

Using the current value of a document attribute

The pseudo-value OLD is not supported inside WITH clauses (it is only valid after the UPDATE). To access the current attribute value, you can usually refer to the document via the variable of the FOR loop that is used to iterate over the collection:

FOR doc IN users
  UPDATE doc WITH {
    fullName: CONCAT(doc.firstName, " ", doc.lastName)
  } IN users

If there is no loop, because only a single document is being updated, then there may be no variable like above ("doc") that lets you refer to the document being updated:

UPDATE "users/john" WITH { ... } IN users

To access the current value in this case, the document has to be retrieved and stored in a variable first:

LET doc = DOCUMENT("users/john")
UPDATE doc WITH {
  fullName: CONCAT(doc.firstName, " ", doc.lastName)
} IN users

An existing attribute can be modified based on its current value this way, for example to increment a counter:

UPDATE doc WITH {
  karma: doc.karma + 1
} IN users

如果属性 "karma" 还不存在, "karma" 被评估为 * 为 null 。 该表达式 "null + 1" 导致新属性 "karma" 被设置为 * 1 。 如果属性确实存在, 则它会增加 * 1 *。

Arrays can be mutated too, of course:

UPDATE doc WITH {
  hobbies: PUSH(doc.hobbies, "swimming")
} IN users

如果属性 "hobbies" 还不存在, 它就会被方便地初始化 作为 "[swimming]", 否则延长。

Setting query options

options can be used to suppress query errors that may occur when trying to update non-existing documents or when violating unique key constraints:
FOR i IN 1..1000
  UPDATE {
    _key: CONCAT('test', i)
  } WITH {
    foobar: true
  } IN users OPTIONS { ignoreErrors: true }

An update operation will only update the attributes specified in document and leave other attributes untouched. Internal attributes (such as _id, _key, _rev, _from and _to) cannot be updated and are ignored when specified in document. Updating a document will modify the document's revision number with a server-generated value.

When updating an attribute with a null value, ArangoDB does not remove the attribute from the document but stores the null value. To get rid of attributes in an update operation, set them to null and provide the keepNull option:

FOR u IN users
  UPDATE u WITH {
    foobar: true,
    notNeeded: null
  } IN users OPTIONS { keepNull: false }

The above query will remove the notNeeded attribute from the documents and update the foobar attribute normally.

There is also the option mergeObjects, which controls whether object contents are merged when an object attribute is present in both the UPDATE query and the to-be-updated document.

The following query sets the updated document's name attribute to exactly the value specified in the query. This is due to the mergeObjects option being set to false:

FOR u IN users
  UPDATE u WITH {
    name: { first: "foo", middle: "b.", last: "baz" }
  } IN users OPTIONS { mergeObjects: false }

In contrast, the following query merges the contents of the name attribute in the original document with the value specified in the query:

FOR u IN users
  UPDATE u WITH {
    name: { first: "foo", middle: "b.", last: "baz" }
  } IN users OPTIONS { mergeObjects: true }
Attributes in name that are present in the to-be-updated document but not in the query are now preserved. Attributes present in both are overwritten with the values specified in the query.

Note: the default value for mergeObjects is true, so there is no need to specify it explicitly.

To make sure data are durable when an update query returns, there is the waitForSync query option:

FOR u IN users
  UPDATE u WITH {
    foobar: true
  } IN users OPTIONS { waitForSync: true }

Returning the modified documents

The modified documents can also be returned by the query. In this case, the UPDATE statement must be followed by a RETURN statement (intermediate LET statements are allowed, too). These statements can refer to the pseudo-values OLD and NEW. The OLD pseudo-value refers to the document revision before the update, and NEW refers to the document revision after the update.

Both OLD and NEW contain all document attributes, even those not specified in the update expression.

UPDATE document IN collection options RETURN OLD

UPDATE document IN collection options RETURN OLD
UPDATE document IN collection options RETURN NEW
UPDATE keyExpression WITH document IN collection options RETURN OLD
UPDATE keyExpression WITH document IN collection options RETURN NEW

Following is an example that uses a variable named previous to capture the original documents before modification. For each modified document, the document key is returned.

FOR u IN users
  UPDATE u WITH { value: "test" } 
  LET previous = OLD 
  RETURN previous._key

The following query uses the NEW pseudo-value to return the updated documents, without some of their system attributes:

FOR u IN users
  UPDATE u WITH { value: "test" } 
  LET updated = NEW 
  RETURN UNSET(updated, "_key", "_id", "_rev")

You can also return both OLD and NEW:

FOR u IN users
  UPDATE u WITH { value: "test" } 
  RETURN { before: OLD, after: NEW }

REPLACE

The REPLACE keyword can be used to completely replace documents in a collection. On a single server, the replace operation is executed transactionally in an all-or-nothing fashion. For sharded collections, the entire replace operation is not transactional.

Each REPLACE operation is restricted to a single collection, and the collection name must not be dynamic. Only a single REPLACE statement per collection is allowed per AQL query, and it cannot be followed by read operations that access the same collection, by traversal operations, or by AQL functions that can read documents. The system attributes _id, _key and _rev cannot be replaced, but _from and _to can.

The two syntaxes for a replace operation are:

REPLACE document IN collection options
REPLACE keyExpression WITH document IN collection options

collection must contain the name of the collection in which the documents should be replaced. document is the replacement document. When using the first syntax, document must also contain the _key attribute to identify the document to be replaced.
FOR u IN users
  REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName), status: u.status } IN users

The following query is invalid because it does not contain a _key attribute, so it is impossible to determine the documents to be replaced:

FOR u IN users
  REPLACE { name: CONCAT(u.firstName, u.lastName, status: u.status) } IN users

When using the second syntax, keyExpression provides the document identification. This can either be a string (which must then contain the document key) or a document that contains a _key attribute.

The following queries are equivalent:

FOR u IN users
  REPLACE { _key: u._key, name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE u._key WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE { _key: u._key } WITH { name: CONCAT(u.firstName, u.lastName) } IN users

FOR u IN users
  REPLACE u WITH { name: CONCAT(u.firstName, u.lastName) } IN users

A replace fully replaces an existing document, but it does not modify the values of internal attributes (such as _id, _key, _from and _to). Replacing a document will modify the document's revision number with a server-generated value.

A replace operation may update arbitrary documents, which do not need to be identical to the ones produced by a preceding FOR statement:

FOR i IN 1..1000
  REPLACE CONCAT('test', i) WITH { foobar: true } IN users

FOR u IN users
  FILTER u.active == false
  REPLACE u WITH { status: 'inactive', name: u.name } IN backup

Setting query options

options can be used to suppress query errors that may occur when trying to replace non-existing documents or when violating unique key constraints:
FOR i IN 1..1000
  REPLACE { _key: CONCAT('test', i) } WITH { foobar: true } IN users OPTIONS { ignoreErrors: true }

To make sure data are durable when a replace query returns, there is the waitForSync query option:

FOR i IN 1..1000
  REPLACE { _key: CONCAT('test', i) } WITH { foobar: true } IN users OPTIONS { waitForSync: true }

Returning the modified documents

The modified documents can also be returned by the query. In this case, the REPLACE statement must be followed by a RETURN statement (intermediate LET statements are allowed, too). The OLD pseudo-value can be used to refer to document revisions before the replace, and NEW refers to document revisions after the replace.

Both OLD and NEW contain all document attributes, even those not specified in the replace expression.

REPLACE document IN collection options RETURN OLD
REPLACE document IN collection options RETURN NEW
REPLACE keyExpression WITH document IN collection options RETURN OLD
REPLACE keyExpression WITH document IN collection options RETURN NEW

Following is an example that uses a variable named previous to return the original documents before modification. For each replaced document, the document key is returned:

FOR u IN users
  REPLACE u WITH { value: "test" } 
  LET previous = OLD 
  RETURN previous._key

The following query uses the NEW pseudo-value to return the replaced documents (without some of their system attributes):

FOR u IN users
  REPLACE u WITH { value: "test" } 
  LET replaced = NEW 
  RETURN UNSET(replaced, '_key', '_id', '_rev')

INSERT

The INSERT keyword can be used to insert new documents into a collection. On a single server, an insert operation is executed transactionally in an all-or-nothing fashion. For sharded collections, the entire insert operation is not transactional.

Each INSERT operation is restricted to a single collection, and the collection name must not be dynamic. Only a single INSERT statement per collection is allowed per AQL query, and it cannot be followed by read operations that access the same collection, by traversal operations, or by AQL functions that can read documents.

The syntax for an insert operation is:

INSERT document IN collection options

Note: the INTO keyword is also allowed in place of IN.

collection must contain the name of the collection into which the document should be inserted. document is the document to be inserted, and it may or may not contain a _key attribute. If no _key attribute is provided, ArangoDB auto-generates a value for _key. Inserting a document also auto-generates a document revision number for the document.
FOR i IN 1..100
  INSERT { value: i } IN numbers

When inserting into an edge collection, it is mandatory to specify the attributes _from and _to in the document:

FOR u IN users
  FOR p IN products
    FILTER u._key == p.recommendedBy
    INSERT { _from: u._id, _to: p._id } IN recommendations

Setting query options

options can be used to suppress query errors that may occur when violating unique key constraints:
FOR i IN 1..1000
  INSERT {
    _key: CONCAT('test', i),
    name: "test",
    foobar: true
  } INTO users OPTIONS { ignoreErrors: true }

To make sure data are durable when an insert query returns, there is the waitForSync query option:

FOR i IN 1..1000
  INSERT {
    _key: CONCAT('test', i),
    name: "test",
    foobar: true
  } INTO users OPTIONS { waitForSync: true }

Returning the inserted documents

The inserted documents can also be returned by the query. In this case, the INSERT statement can be followed by a RETURN statement (intermediate LET statements are allowed, too). To refer to the inserted documents, the INSERT statement introduces a pseudo-value named NEW.

The documents contained in NEW contain all attributes, even those auto-generated by the database (such as _id, _key and _rev).

INSERT document IN collection options RETURN NEW

Following is an example that uses a variable named inserted to return the inserted documents. For each inserted document, the document key is returned:

FOR i IN 1..100
  INSERT { value: i } INTO users
  LET inserted = NEW 
  RETURN inserted._key

WITH

An AQL query can optionally start with a WITH statement listing the collections used by the query. All collections specified in WITH are read-locked at query start, in addition to the other collections the query uses that are detected by the AQL query parser.

WITH managers, usersHaveManagers
FOR v, e, p IN OUTBOUND 'users/1' GRAPH 'userGraph'
  RETURN { v, e, p }

document graph key-value

Installing ArangoDB

One engine, one query language, one database technology and multiple data models, for maximum flexibility

ArangoDB is an open-source, distributed, native multi-model database (Apache 2 license). The idea: one engine, one query language, one database technology and multiple data models, to give projects maximum flexibility while simplifying the technology stack and database operations and lowering operating costs.

  1. Multiple data models: use documents, graphs, key-value pairs, or any combination of them as your data model
  2. Convenient queries: the SQL-like query language AQL, plus queries over REST and other interfaces
  3. Ruby and JS extensions: no language boundary, you can use a single language from the front end to the back end
  4. High performance and small footprint: ArangoDB is faster than other NoSQL stores while using less space
  5. Easy to use: up and running in seconds, with a web UI to manage your ArangoDB
  6. Open source and free: ArangoDB is licensed under the Apache license

I downloaded the Debian package.

sudo dpkg -i arangodb-xxx.deb

Running arangosh then greets you with:

Connected to ArangoDB 'http+tcp://127.0.0.1:8529' version: 3.2.0 [server], database: '_system', username: 'root'

Open http://127.0.0.1:8529/ to reach the web admin UI, and change the root password first.

linux nosql cluster

avocadodb/arangodb cluster

An arangodb cluster is formed by several tasks running together.

An arangodb cluster is formed by several tasks running together. arangodb itself does not start or monitor these tasks, so it needs some kind of supervisor that starts and monitors them.

Configuring a cluster manually is quite simple:

one agency role, two data-node roles, one coordinator role.

Below are the parameters each role needs.

The cluster is brought up in the direction coordinator -> agency -> data nodes.

Both agents and data nodes can be run in multiples.

Agency nodes

To start an agent, it must first be activated with the agency.activate option.

The number of agents is set with agency.size=3; a single agent (size 1) also works.

During initialization the agents have to find each other. For that, provide at least one common agency.endpoint, and set agency.my-address to the node's own address.

Single agency node

Configure the following parameters for the node:

// endpoint to listen on
server.endpoint=tcp://0.0.0.0:5001
// disable authentication
server.authentication=false 
agency.activate=true 
agency.size=1 
// agency endpoint
agency.endpoint=tcp://127.0.0.1:5001 
agency.supervision=true 

Multi-agent configuration

Leading agent configuration

server.endpoint=tcp://0.0.0.0:5001
//  endpoint the server listens on
agency.my-address=tcp://127.0.0.1:5001
//  this agent's own endpoint
server.authentication=false
//  authentication disabled
agency.activate=true
agency.size=3
//   number of agents
agency.endpoint=tcp://127.0.0.1:5001
//   endpoint of the leading agent
agency.supervision=true

Secondary agent configuration

server.endpoint=tcp://0.0.0.0:5002
agency.my-address=tcp://127.0.0.1:5002
server.authentication=false
agency.activate=true
agency.size=3
agency.endpoint=tcp://127.0.0.1:5001
agency.supervision=true 

All agents point agency.endpoint at the same ip/port.

Coordinator and data-node configuration

Data-node (DB-server) configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8529
cluster.my-address=tcp://127.0.0.1:8529
cluster.my-local-info=db1
cluster.my-role=PRIMARY
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Coordinator configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8531
cluster.my-address=tcp://127.0.0.1:8531
cluster.my-local-info=coord1
cluster.my-role=COORDINATOR
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Start each node with the parameters above.
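The original post's start commands were lost; as a sketch (assuming the options above are passed on the arangod command line rather than via a config file, and that local directories agent1/dbserver1/coordinator1 are used as data directories), starting the single-agent variant could look like:

arangod --server.endpoint tcp://0.0.0.0:5001 --server.authentication false \
        --agency.activate true --agency.size 1 \
        --agency.endpoint tcp://127.0.0.1:5001 --agency.supervision true \
        --database.directory agent1 &

arangod --server.endpoint tcp://0.0.0.0:8529 --server.authentication false \
        --cluster.my-address tcp://127.0.0.1:8529 --cluster.my-role PRIMARY \
        --cluster.agency-endpoint tcp://127.0.0.1:5001 \
        --database.directory dbserver1 &

arangod --server.endpoint tcp://0.0.0.0:8531 --server.authentication false \
        --cluster.my-address tcp://127.0.0.1:8531 --cluster.my-role COORDINATOR \
        --cluster.agency-endpoint tcp://127.0.0.1:5001 \
        --database.directory coordinator1 &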

javascript js nosql restful

Installing CouchDB

CouchDB is an open-source document-oriented database management system

CouchDB is an open-source document-oriented database management system accessed through a RESTful JavaScript Object Notation (JSON) API. The term "Couch" is an acronym for "Cluster Of Unreliable Commodity Hardware", reflecting CouchDB's goal of being highly scalable and offering high availability and reliability even on failure-prone hardware. CouchDB was originally written in C++, but in April 2008 the project moved to the Erlang/OTP platform for its fault tolerance.

Building a release from the plain download kept failing for me, probably because the rebar config files are missing,

so here I clone straight from GitHub instead.

git clone https://github.com/apache/couchdb

Install the build environment

debian

sudo apt-get --no-install-recommends -y install \
    build-essential pkg-config erlang \
    libicu-dev libmozjs185-dev libcurl4-openssl-dev

redhat

sudo yum install autoconf autoconf-archive automake \
    curl-devel erlang-asn1 erlang-erts erlang-eunit gcc-c++ \
    erlang-os_mon erlang-xmerl erlang-erl_interface help2man \
    js-devel-1.8.5 libicu-devel libtool perl-Test-Harness

Generate the build configuration

./configure  --disable-docs # the docs also fail to build for me, no idea why; the official docs can be downloaded anyway, so they are disabled here

make 

make release
This produces a release under rel/couchdb; run bin/couchdb from there. If it fails, it is usually a port conflict; change the port in etc/default.ini.

Once it runs, open http://localhost:5984/_utils/index.html#verifyinstall in a browser

and perform the initial setup.
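A quick way to confirm the server is up before opening the browser (standard CouchDB behaviour; output abbreviated):

curl http://127.0.0.1:5984/
# {"couchdb":"Welcome","version":"..."}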
java leveldb linux nosql rocksdb

Using leveldb and rocksdb from Java

RocksDB was developed on top of LevelDB

A leveldb/rocksdb demo in Java

(ArangoDB's storage engine uses RocksDB, and RocksDB in turn was developed on top of LevelDB.)

rocksdb

package net.oschina.itags.gateway.service;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BaseRocksDb {
    public final static RocksDB rocksDB() throws RocksDBException {

        Options options = new Options().setCreateIfMissing(true);
        RocksDB.loadLibrary();
        RocksDB db=RocksDB.open(options,"./rock");
        return db;
    }
}

leveldb

package net.oschina.itags.gateway.service;
import org.iq80.leveldb.*;
import org.iq80.leveldb.impl.Iq80DBFactory;

import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;

public class BaseLevelDb {

public static final DB db() throws IOException {
    boolean cleanup = true;
    Charset charset = Charset.forName("utf-8");
    String path = "./level";

//init
    DBFactory factory = Iq80DBFactory.factory;
    File dir = new File(path);
//if the data does not need to be reloaded, try to clean up the old data under path on every restart
    if(cleanup) {
        factory.destroy(dir,null);//removes all files in the directory
    }
    Options options = new Options().createIfMissing(true);
//重新open新的db
    DB db = factory.open(dir,options);
  return db;
}
}
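A small usage sketch for the two helpers above (my own illustration, not from the original post; the key/value strings are placeholders, and the calls would sit in a main method that declares throws Exception). RocksDB works on raw byte arrays, and Iq80DBFactory.bytes/asString are the usual helpers on the LevelDB side:

// RocksDB: put / get raw bytes
RocksDB rdb = BaseRocksDb.rocksDB();
rdb.put("hello".getBytes(), "world".getBytes());
System.out.println(new String(rdb.get("hello".getBytes())));
rdb.close();

// LevelDB (iq80 port): same idea with the bytes()/asString() helpers
DB ldb = BaseLevelDb.db();
ldb.put(Iq80DBFactory.bytes("hello"), Iq80DBFactory.bytes("world"));
System.out.println(Iq80DBFactory.asString(ldb.get(Iq80DBFactory.bytes("hello"))));
ldb.close();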
arangodb nodejs nosql

Using arangojs from Node.js

Using the arangojs driver with Node.js

Install

With NPM

npm install arangojs

With bower

bower install arangojs

From source

git clone https://github.com/arangodb/arangojs.git
cd arangojs
npm install
npm run dist

Basic usage example

// ES2015-style
import arangojs, {Database, aql} from 'arangojs';
let db1 = arangojs(); // convenience short-hand
let db2 = new Database();
let {query, bindVars} = aql`RETURN ${Date.now()}`;

// or plain old Node-style
var arangojs = require('arangojs');
var db1 = arangojs();
var db2 = new arangojs.Database();
var aql = arangojs.aql(['RETURN ', ''], Date.now());
var query = aql.query;
var bindVars = aql.bindVars;

API

All asynchronous functions take an optional Node-style callback (or "errback") as the last argument with the following arguments:

  • err: an Error object if an error occurred, or null if no error occurred.
  • result: the function's result (if applicable).

For expected API errors, err will be an instance of ArangoError. For any other error responses (4xx/5xx status code), err will be an instance of the appropriate http-errors error type. If the response indicates success but the response body could not be parsed, err will be a SyntaxError. In all of these cases the error object will additionally have a response property containing the server response object.

If Promise is defined globally, asynchronous functions return a promise if no callback is provided.

If you want to use promises in environments that don't provide the global Promise constructor, use a promise polyfill like es6-promise or inject a ES6-compatible promise implementation like bluebird into the global scope.

Examples

// Node-style callbacks
db.createDatabase('mydb', function (err, info) {
    if (err) console.error(err.stack);
    else {
        // database created
    }
});

// Using promises with ES2015 arrow functions
db.createDatabase('mydb')
.then(info => {
    // database created
}, err => console.error(err.stack));

// Using proposed ES.next "async/await" syntax
try {
    let info = await db.createDatabase('mydb');
    // database created
} catch (err) {
    console.error(err.stack);
}


Database API

new Database

new Database([config]): Database

Creates a new Database instance.

If config is a string, it will be interpreted as config.url.

Arguments

  • config: Object (optional)

An object with the following properties:

  • url: string (Default: http://localhost:8529)

    Base URL of the ArangoDB server.

    If you want to use ArangoDB with HTTP Basic authentication, you can provide the credentials as part of the URL, e.g. http://user:pass@localhost:8529.

    The driver automatically uses HTTPS if you specify an HTTPS url.

    If you need to support self-signed HTTPS certificates, you may have to add your certificates to the agentOptions, e.g.:

    agentOptions: { ca: [ fs.readFileSync('.ssl/sub.class1.server.ca.pem'), fs.readFileSync('.ssl/ca.pem') ] }

  • databaseName: string (Default: _system)

    Name of the active database.

  • arangoVersion: number (Default: 20300)

    Value of the x-arango-version header.

  • headers: Object (optional)

    An object with additional headers to send with every request.

  • agent: Agent (optional)

    An http Agent instance to use for connections.

    By default a new http.Agent (or https.Agent) instance will be created using the agentOptions.

    This option has no effect when using the browser version of arangojs.

  • agentOptions: Object (Default: see below)

    An object with options for the agent. This will be ignored if agent is also provided.

    Default: {maxSockets: 3, keepAlive: true, keepAliveMsecs: 1000}.

    In the browser version of arangojs this option can be used to pass additional options to the underlying calls of the xhr module. The options keepAlive and keepAliveMsecs have no effect in the browser but maxSockets will still be used to limit the amount of parallel requests made by arangojs.

  • promise: Class (optional)

    The Promise implementation to use or false to disable promises entirely.

    By default the global Promise constructor will be used if available.

Manipulating databases

These functions implement the HTTP API for manipulating databases.

database.useDatabase

database.useDatabase(databaseName): this

Updates the Database instance and its connection string to use the given databaseName, then returns itself.

Arguments

  • databaseName: string

The name of the database to use.

Examples

var db = require('arangojs')();
db.useDatabase('test');
// The database instance now uses the database "test".
database.createDatabase

async database.createDatabase(databaseName, [users]): Object

Creates a new database with the given databaseName.

Arguments

  • databaseName: string

Name of the database to create.

  • users: Array<Object> (optional)

If specified, the array must contain objects with the following properties:

  • username: string

    The username of the user to create for the database.

  • passwd: string (Default: empty)

    The password of the user.

  • active: boolean (Default: true)

    Whether the user is active.

  • extra: Object (optional)

    An object containing additional user data.

Examples

var db = require('arangojs')();
db.createDatabase('mydb', [{username: 'root'}])
.then(info => {
    // the database has been created
});
database.get

async database.get(): Object

Fetches the database description for the active database from the server.

Examples

var db = require('arangojs')();
db.get()
.then(info => {
    // the database exists
});
database.listDatabases

async database.listDatabases(): Array<string>

Fetches all databases from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listDatabases()
.then(names => {
    // databases is an array of database names
});
database.listUserDatabases

async database.listUserDatabases(): Array<string>

Fetches all databases accessible to the active user from the server and returns an array of their names.

Examples

var db = require('arangojs')();
db.listUserDatabases()
.then(names => {
    // databases is an array of database names
});
database.dropDatabase

async database.dropDatabase(databaseName): Object

Deletes the database with the given databaseName from the server.

var db = require('arangojs')();
db.dropDatabase('mydb')
.then(() => {
    // database "mydb" no longer exists
})
database.truncate

async database.truncate([excludeSystem]): Object

Deletes all documents in all collections in the active database.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded.

Examples

var db = require('arangojs')();

db.truncate()
.then(() => {
    // all non-system collections in this database are now empty
});

// -- or --

db.truncate(false)
.then(() => {
    // I've made a huge mistake...
});

Accessing collections

These functions implement the HTTP API for accessing collections.

database.collection

database.collection(collectionName): DocumentCollection

Returns a DocumentCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
database.edgeCollection

database.edgeCollection(collectionName): EdgeCollection

Returns an EdgeCollection instance for the given collection name.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('potatos');
database.listCollections

async database.listCollections([excludeSystem]): Array<Object>

Fetches all collections from the database and returns an array of collection descriptions.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.listCollections()
.then(collections => {
    // collections is an array of collection descriptions
    // not including system collections
});

// -- or --

db.listCollections(false)
.then(collections => {
    // collections is an array of collection descriptions
    // including system collections
});
database.collections

async database.collections([excludeSystem]): Array<Collection>

Fetches all collections from the database and returns an array of DocumentCollection and EdgeCollection instances for the collections.

Arguments

  • excludeSystem: boolean (Default: true)

Whether system collections should be excluded from the results.

Examples

var db = require('arangojs')();

db.collections()
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // not including system collections
});

// -- or --

db.collections(false)
.then(collections => {
    // collections is an array of DocumentCollection
    // and EdgeCollection instances
    // including system collections
});

Accessing graphs

These functions implement the HTTP API for accessing general graphs.

database.graph

database.graph(graphName): Graph

Returns a Graph instance representing the graph with the given graph name.

database.listGraphs

async database.listGraphs(): Array<Object>

Fetches all graphs from the database and returns an array of graph descriptions.

Examples

var db = require('arangojs')();
db.listGraphs()
.then(graphs => {
    // graphs is an array of graph descriptions
});
database.graphs

async database.graphs(): Array<Graph>

Fetches all graphs from the database and returns an array of Graph instances for the graphs.

Examples

var db = require('arangojs')();
db.graphs()
.then(graphs => {
    // graphs is an array of Graph instances
});

Transactions

This function implements the HTTP API for transactions.

database.transaction

async database.transaction(collections, action, [params,] [lockTimeout]): Object

Performs a server-side transaction and returns its return value.

Arguments

  • collections: Object

An object with the following properties:

  • read: Array<string> (optional)

    An array of names (or a single name) of collections that will be read from during the transaction.

  • write: Array<string> (optional)

    An array of names (or a single name) of collections that will be written to or read from during the transaction.

  • action: string

A string evaluating to a JavaScript function to be executed on the server.

  • params: Array<any> (optional)

Parameters that will be passed to the action function.

  • lockTimeout: number (optional)

Determines how long the database will wait while attempting to gain locks on collections used by the transaction before timing out.

If collections is an array or string, it will be treated as collections.write.

Please note that while action should be a string evaluating to a well-formed JavaScript function, it's not possible to pass in a JavaScript function directly because the function needs to be evaluated on the server and will be transmitted in plain text.

For more information on transactions, see the HTTP API documentation for transactions.

Examples

var db = require('arangojs')();
var action = String(function () {
    // This code will be executed inside ArangoDB!
    var db = require('org/arangodb').db;
    return db._query('FOR user IN _users RETURN u.user').toArray();
});
db.transaction({read: '_users'}, action)
.then(result => {
    // result contains the return value of the action
});

Queries

This function implements the HTTP API for single roundtrip AQL queries.

For collection-specific queries see simple queries.

database.query

async database.query(query, [bindVars,] [opts]): Cursor

Performs a database query using the given query and bindVars, then returns a new Cursor instance for the result list.

Arguments

  • query: string

An AQL query string or a query builder instance.

  • bindVars: Object (optional)

An object defining the variables to bind the query to.

  • opts: Object (optional)

Additional options that will be passed to the query API.

If opts.count is set to true, the cursor will have a count property set to the query result count.

If query is an object with query and bindVars properties, those will be used as the values of the respective arguments instead.

Examples

var db = require('arangojs')();
var active = true;

// Using ES2015 string templates
var aql = require('arangojs').aql;
db.query(aql`
    FOR u IN _users
    FILTER u.authData.active == ${active}
    RETURN u.user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using the query builder
var qb = require('aqb');
db.query(
    qb.for('u').in('_users')
    .filter(qb.eq('u.authData.active', '@active'))
    .return('u.user'),
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

// -- or --

// Using plain arguments
db.query(
    'FOR u IN _users'
    + ' FILTER u.authData.active == @active'
    + ' RETURN u.user',
    {active: true}
)
.then(cursor => {
    // cursor is a cursor for the query result
});
aql

aql(strings, ...args): Object

Template string handler for AQL queries. Converts an ES2015 template string to an object that can be passed to database.query by converting arguments to bind variables.

Any Collection instances will automatically be converted to collection bind variables.

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
var userCollection = db.collection('_users');
var role = 'admin';
db.query(aql`
    FOR user IN ${userCollection}
    FILTER user.role == ${role}
    RETURN user
`)
.then(cursor => {
    // cursor is a cursor for the query result
});
// -- is equivalent to --
db.query(
  'FOR user IN @@value0 FILTER user.role == @value1 RETURN user',
  {'@value0': userCollection.name, value1: role}
)
.then(cursor => {
    // cursor is a cursor for the query result
});

Managing AQL user functions

These functions implement the HTTP API for managing AQL user functions.

database.listFunctions

async database.listFunctions(): Array<Object>

Fetches a list of all AQL user functions registered with the database.

Examples

var db = require('arangojs')();
db.listFunctions()
.then(functions => {
    // functions is a list of function descriptions
})
database.createFunction

async database.createFunction(name, code): Object

Creates an AQL user function with the given name and code if it does not already exist or replaces it if a function with the same name already existed.

Arguments

  • name: string

A valid AQL function name, e.g.: "myfuncs::accounting::calculate_vat".

  • code: string

A string evaluating to a JavaScript function (not a JavaScript function object).

Examples

var db = require('arangojs')();
var aql = require('arangojs').aql;
db.createFunction(
  'ACME::ACCOUNTING::CALCULATE_VAT',
  String(function (price) {
      return price * 0.19;
  })
)
// Use the new function in an AQL query with template handler:
.then(() => db.query(aql`
    FOR product IN products
    RETURN MERGE(
      {vat: ACME::ACCOUNTING::CALCULATE_VAT(product.price)},
      product
    )
`))
.then(cursor => {
    // cursor is a cursor for the query result
});
database.dropFunction

async database.dropFunction(name, [group]): Object

Deletes the AQL user function with the given name from the database.

Arguments

  • name: string

The name of the user function to drop.

  • group: boolean (Default: false)

If set to true, all functions with a name starting with name will be deleted; otherwise only the function with the exact name will be deleted.

Examples

var db = require('arangojs')();
db.dropFunction('ACME::ACCOUNTING::CALCULATE_VAT')
.then(() => {
    // the function no longer exists
});

Arbitrary HTTP routes

database.route

database.route([path,] [headers]): Route

Returns a new Route instance for the given path (relative to the database) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The database-relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

For more information on Route instances see the Route API below.

Examples

var db = require('arangojs')();
var myFoxxService = db.route('my-foxx-service');
myFoxxService.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the result of
    // POST /_db/_system/my-foxx-service/users
    // with JSON request body '{"username": "admin", "password": "hunter2"}'
});

Cursor API

Cursor instances provide an abstraction over the HTTP API's limitations. Unless a method explicitly exhausts the cursor, the driver will only fetch as many batches from the server as necessary. Like the server-side cursors, Cursor instances are incrementally depleted as they are read from.

var db = require('arangojs')();
db.query('FOR x IN 1..100 RETURN x')
// query result list: [1, 2, 3, ..., 99, 100]
.then(cursor => {
    cursor.next()
    .then(value => {
        value === 1;
        // remaining result list: [2, 3, 4, ..., 99, 100]
    });
});

cursor.count

cursor.count: number

The total number of documents in the query result. This is only available if the count option was used.
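Examples

A short sketch of obtaining the count; the query and the empty bindVars object passed here are placeholders.

var db = require('arangojs')();
db.query('FOR x IN 1..100 RETURN x', {}, {count: true})
.then(cursor => {
    cursor.count === 100;
    // the cursor still needs to be read to obtain the actual values
});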

cursor.all

async cursor.all(): Array<Object>

Exhausts the cursor, then returns an array containing all values in the cursor's remaining result list.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.all()
.then(vals => {
    // vals is an array containing the entire query result
    Array.isArray(vals);
    vals.length === 5;
    vals; // [1, 2, 3, 4, 5]
    cursor.hasNext() === false;
});

cursor.next

async cursor.next(): Object

Advances the cursor and returns the next value in the cursor's remaining result list. If the cursor has already been exhausted, returns undefined instead.

Examples

// query result list: [1, 2, 3, 4, 5]
cursor.next()
.then(val => {
    val === 1;
    // remaining result list: [2, 3, 4, 5]
    return cursor.next();
})
.then(val2 => {
    val2 === 2;
    // remaining result list: [3, 4, 5]
});

cursor.hasNext

cursor.hasNext(): boolean

Returns true if the cursor has more values or false if the cursor has been exhausted.

Examples

cursor.all() // exhausts the cursor
.then(() => {
    cursor.hasNext() === false;
});

cursor.each

async cursor.each(fn): any

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn explicitly returns false.

Returns the last return value of fn.

Equivalent to Array.prototype.forEach (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it explicitly returns false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

var results = [];
function doStuff(value) {
    var VALUE = value.toUpperCase();
    results.push(VALUE);
    return VALUE;
}
// query result list: ['a', 'b', 'c']
cursor.each(doStuff)
.then(last => {
    String(results) === 'A,B,C';
    cursor.hasNext() === false;
    last === 'C';
});

cursor.every

async cursor.every(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to false.

Returns false if fn returned a value that evaluates to false, or true otherwise.

Equivalent to Array.prototype.every (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until it returns a value that evaluates to false or the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [0, 2, 4, 5, 6]
cursor.every(even)
.then(result => {
    result === false; // 5 is not even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 6; // next value after 5
    });
});

cursor.some

async cursor.some(fn): boolean

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted or fn returns a value that evaluates to true.

Returns true if fn returned a value that evaluates to true, or false otherwise.

Equivalent to Array.prototype.some (except async).

Examples

function even(value) {
    return value % 2 === 0;
}
// query result list: [1, 3, 4, 5]
cursor.some(even)
.then(result => {
    result === true; // 4 is even
    cursor.hasNext() === true;
    cursor.next()
    .then(value => {
        value === 5; // next value after 4
    });
});

cursor.map

cursor.map(fn): Array<any>

Advances the cursor by applying the function fn to each value in the cursor's remaining result list until the cursor is exhausted.

Returns an array of the return values of fn.

Equivalent to Array.prototype.map (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function square(value) {
    return value * value;
}
// query result list: [1, 2, 3, 4, 5]
cursor.map(square)
.then(result => {
    result.length === 5;
    result; // [1, 4, 9, 16, 25]
    cursor.hasNext() === false;
});

cursor.reduce

cursor.reduce(fn, [accu]): any

Exhausts the cursor by reducing the values in the cursor's remaining result list with the given function fn. If accu is not provided, the first value in the cursor's remaining result list will be used instead (the function will not be invoked for that value).

Equivalent to Array.prototype.reduce (except async).

Arguments

  • fn: Function

A function that will be invoked for each value in the cursor's remaining result list until the cursor is exhausted.

The function receives the following arguments:

  • accu: any

    The return value of the previous call to fn. If this is the first call, accu will be set to the accu value passed to reduce or the first value in the cursor's remaining result list.

  • value: any

    The value in the cursor's remaining result list.

  • index: number

    The index of the value in the cursor's remaining result list.

  • cursor: Cursor

    The cursor itself.

Examples

function add(a, b) {
    return a + b;
}
// query result list: [1, 2, 3, 4, 5]

var baseline = 1000;
cursor.reduce(add, baseline)
.then(result => {
    result === (baseline + 1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

// -- or --

cursor.reduce(add)
.then(result => {
    result === (1 + 2 + 3 + 4 + 5);
    cursor.hasNext() === false;
});

Route API

Route instances provide access for arbitrary HTTP requests. This allows easy access to Foxx services and other HTTP APIs not covered by the driver itself.

route.route

route.route([path], [headers]): Route

Returns a new Route instance for the given path (relative to the current route) that can be used to perform arbitrary HTTP requests.

Arguments

  • path: string (optional)

The relative URL of the route.

  • headers: Object (optional)

Default headers that should be sent with each request to the route.

If path is missing, the route will refer to the base URL of the database.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
var users = route.route('users');
// equivalent to db.route('my-foxx-service/users')

route.get

async route.get([path,] [qs]): Response

Performs a GET request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.get()
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service
});

// -- or --

route.get('users')
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users
});

// -- or --

route.get('users', {group: 'admin'})
.then(response => {
    // response.body is the response body of calling
    // GET _db/_system/my-foxx-service/users?group=admin
});

route.post

async route.post([path,] [body, [qs]]): Response

Performs a POST request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.post()
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service
});

// -- or --

route.post('users')
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.post('users', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/users?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.put

async route.put([path,] [body, [qs]]): Response

Performs a PUT request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.put()
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service
});

// -- or --

route.put('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

// -- or --

route.put('users/admin', {
    username: 'admin',
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PUT _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"username": "admin", "password": "hunter2"}
});

route.patch

async route.patch([path,] [body, [qs]]): Response

Performs a PATCH request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • body: string (optional)

The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.patch()
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service
});

// -- or --

route.patch('users/admin')
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin
    // with JSON request body {"password": "hunter2"}
});

// -- or --

route.patch('users/admin', {
    password: 'hunter2'
}, {admin: true})
.then(response => {
    // response.body is the response body of calling
    // PATCH _db/_system/my-foxx-service/users/admin?admin=true
    // with JSON request body {"password": "hunter2"}
});

route.delete

async route.delete([path,] [qs]): Response

Performs a DELETE request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.delete()
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service
});

// -- or --

route.delete('users/admin')
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin
});

// -- or --

route.delete('users/admin', {permanent: true})
.then(response => {
    // response.body is the response body of calling
    // DELETE _db/_system/my-foxx-service/users/admin?permanent=true
});

route.head

async route.head([path,] [qs]): Response

Performs a HEAD request to the given URL and returns the server response.

Arguments

  • path: string (optional)

The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • qs: string (optional)

The query string for the request. If qs is an object, it will be translated to a query string.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.head()
.then(response => {
    // response is the response object for
    // HEAD _db/_system/my-foxx-service
});

route.request

async route.request([opts]): Response

Performs an arbitrary request to the given URL and returns the server response.

Arguments

  • opts: Object (optional)

An object with any of the following properties:

  • path: string (optional)

    The route-relative URL for the request. If omitted, the request will be made to the base URL of the route.

  • absolutePath: boolean (Default: false)

    Whether the path is relative to the connection's base URL instead of the route.

  • body: string (optional)

    The request body. If body is an object, it will be encoded as JSON.

  • qs: string (optional)

    The query string for the request. If qs is an object, it will be translated to a query string.

  • headers: Object (optional)

    An object containing additional HTTP headers to be sent with the request.

  • method: string (Default: "GET")

    HTTP method of this request.

Examples

var db = require('arangojs')();
var route = db.route('my-foxx-service');
route.request({
    path: 'hello-world',
    method: 'POST',
    body: {hello: 'world'},
    qs: {admin: true}
})
.then(response => {
    // response.body is the response body of calling
    // POST _db/_system/my-foxx-service/hello-world?admin=true
    // with JSON request body '{"hello": "world"}'
});

Collection API

These functions implement the HTTP API for manipulating collections.

The Collection API is implemented by all Collection instances, regardless of their specific type. I.e. it represents a shared subset between instances of DocumentCollection, EdgeCollection, GraphVertexCollection and GraphEdgeCollection.

Getting information about the collection

See the HTTP API documentation for details.

collection.get

async collection.get(): Object

Retrieves general information about the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.get()
.then(data => {
    // data contains general information about the collection
});
collection.properties

async collection.properties(): Object

Retrieves the collection's properties.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.properties()
.then(data => {
    // data contains the collection's properties
});
collection.count

async collection.count(): Object

Retrieves information about the number of documents in a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.count()
.then(data => {
    // data contains the collection's count
});
collection.figures

async collection.figures(): Object

Retrieves statistics for a collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.figures()
.then(data => {
    // data contains the collection's figures
});
collection.revision

async collection.revision(): Object

Retrieves the collection revision ID.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.revision()
.then(data => {
    // data contains the collection's revision
});
collection.checksum

async collection.checksum([opts]): Object

Retrieves the collection checksum.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for getting collection information.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.checksum()
.then(data => {
    // data contains the collection's checksum
});

Manipulating the collection

These functions implement the HTTP API for modifying collections.

collection.create

async collection.create([properties]): Object

Creates a collection with the given properties for this collection's name, then returns the server response.

Arguments

  • properties: Object (optional)

For more information on the properties object, see the HTTP API documentation for creating collections.

Examples

var db = require('arangojs')();
var collection = db.collection('potatos');
collection.create()
.then(() => {
    // the document collection "potatos" now exists
});

// -- or --

var collection = db.edgeCollection('friends');
collection.create({
    waitForSync: true // always sync document changes to disk
})
.then(() => {
    // the edge collection "friends" now exists
});
collection.load

async collection.load([count]): Object

Tells the server to load the collection into memory.

Arguments

  • count: boolean (Default: true)

If set to false, the return value will not include the number of documents in the collection (which may speed up the process).

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.load(false)
.then(() => {
    // the collection has now been loaded into memory
});
collection.unload

async collection.unload(): Object

Tells the server to remove the collection from memory.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.unload()
.then(() => {
    // the collection has now been unloaded from memory
});
collection.setProperties

async collection.setProperties(properties): Object

Replaces the properties of the collection.

Arguments

  • properties: Object

For information on the properties argument see the HTTP API for modifying collections.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.setProperties({waitForSync: true})
.then(result => {
    result.waitForSync === true;
    // the collection will now wait for data being written to disk
    // whenever a document is changed
});
collection.rename

async collection.rename(name): Object

Renames the collection. The Collection instance will automatically update its name when the rename succeeds.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rename('new-collection-name')
.then(result => {
    result.name === 'new-collection-name';
    collection.name === result.name;
    // result contains additional information about the collection
});
collection.rotate

async collection.rotate(): Object

Rotates the journal of the collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.rotate()
.then(data => {
    // data.result will be true if rotation succeeded
});
collection.truncate

async collection.truncate(): Object

Deletes all documents in the collection in the database.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.truncate()
.then(() => {
    // the collection "some-collection" is now empty
});
collection.drop

async collection.drop(): Object

Deletes the collection from the database.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.drop()
.then(() => {
    // the collection "some-collection" no longer exists
});

Manipulating indexes

These functions implement the HTTP API for manipulating indexes.

collection.createIndex

async collection.createIndex(details): Object

Creates an arbitrary index on the collection.

Arguments

  • details: Object

For information on the possible properties of the details object, see the HTTP API for manipulating indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createIndex({type: 'cap', size: 20})
.then(index => {
    index.id; // the index's handle
    // the index has been created
});
collection.createCapConstraint

async collection.createCapConstraint(size): Object

Creates a cap constraint index on the collection.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as cap constraints are no longer supported.

Arguments

  • size: Object

An object with any of the following properties:

  • size: number (optional)

    The maximum number of documents in the collection.

  • byteSize: number (optional)

    The maximum size of active document data in the collection (in bytes).

If size is a number, it will be interpreted as size.size.

For more information on the properties of the size object see the HTTP API for creating cap constraints.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createCapConstraint(20)
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});

// -- or --

collection.createCapConstraint({size: 20})
.then(index => {
    index.id; // the index's handle
    index.size === 20;
    // the index has been created
});
collection.createHashIndex

async collection.createHashIndex(fields, [opts]): Object

Creates a hash index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on hash indexes, see the HTTP API for hash indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createHashIndex('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createHashIndex(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
collection.createSkipList

async collection.createSkipList(fields, [opts]): Object

Creates a skiplist index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

Additional options for this index. If the value is a boolean, it will be interpreted as opts.unique.

For more information on skiplist indexes, see the HTTP API for skiplist indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createSkipList('favorite-color')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});

// -- or --

collection.createSkipList(['favorite-color'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['favorite-color']
    // the index has been created
});
collection.createGeoIndex

async collection.createGeoIndex(fields, [opts]): Object

Creates a geo-spatial index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. Currently, geo indexes must cover exactly one field. If the value is a string, it will be wrapped in an array automatically.

  • opts: Object (optional)

An object containing additional properties of the index.

For more information on the properties of the opts object see the HTTP API for manipulating geo indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createGeoIndex(['longitude', 'latitude'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['longitude', 'latitude']
    // the index has been created
});

// -- or --

collection.createGeoIndex('location', {geoJson: true})
.then(index => {
    index.id; // the index's handle
    index.fields; // ['location']
    // the index has been created
});
collection.createFulltextIndex

async collection.createFulltextIndex(fields, [minLength]): Object

Creates a fulltext index on the collection.

Arguments

  • fields: Array<string>

An array of names of document fields on which to create the index. Currently, fulltext indexes must cover exactly one field. If the value is a string, it will be wrapped in an array automatically.

  • minLength: number (optional)

Minimum character length of words to index. Uses a server-specific default value if not specified.

For more information on fulltext indexes, see the HTTP API for fulltext indexes.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.createFulltextIndex('description')
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});

// -- or --

collection.createFulltextIndex(['description'])
.then(index => {
    index.id; // the index's handle
    index.fields; // ['description']
    // the index has been created
});
collection.index

async collection.index(indexHandle): Object

Fetches information about the index with the given indexHandle and returns it.

Arguments

  • indexHandle: string

The handle of the index to look up. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.index(index.id)
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });

    // -- or --

    collection.index(index.id.split('/')[1])
    .then(result => {
        result.id === index.id;
        // result contains the properties of the index
    });
});
collection.indexes

async collection.indexes(): Array<Object>

Fetches a list of all indexes on this collection.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(() => collection.indexes())
.then(indexes => {
    indexes.length === 1;
    // indexes contains information about the index
});
collection.dropIndex

async collection.dropIndex(indexHandle): Object

Deletes the index with the given indexHandle from the collection.

Arguments

  • indexHandle: string

The handle of the index to delete. This can either be a fully-qualified identifier or the collection-specific key of the index. If the value is an object, its id property will be used instead.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.createFulltextIndex('description')
.then(index => {
    collection.dropIndex(index.id)
    .then(() => {
        // the index has been removed from the collection
    });

    // -- or --

    collection.dropIndex(index.id.split('/')[1])
    .then(() => {
        // the index has been removed from the collection
    });
});

Simple queries

These functions implement the HTTP API for simple queries.

collection.all

async collection.all([opts]): Cursor

Performs a query to fetch all documents in the collection. Returns a new Cursor instance for the query results.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning all documents.
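Examples

A minimal usage sketch; 'some-collection' is a placeholder name.

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.all()
.then(cursor => {
    // cursor is a cursor over all documents in the collection
    return cursor.all();
})
.then(docs => {
    // docs is an array of all documents in the collection
});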

collection.any

async collection.any(): Object

Fetches a document from the collection at random.
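Examples

A minimal usage sketch; 'some-collection' is a placeholder name.

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.any()
.then(doc => {
    // doc is a random document from the collection
});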

collection.first

async collection.first([opts]): Array<Object>

Performs a query to fetch the first documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the first documents of a collection.

If opts is a number it is treated as opts.count.

collection.last

async collection.last([opts]): Array<Object>

Performs a query to fetch the last documents in the collection. Returns an array of the matching documents.

Note: This method is not available when using the driver with ArangoDB 3.0 and higher as the corresponding API method has been removed.

Arguments

  • opts: Object (optional)

For information on the possible options see the HTTP API for returning the last documents of a collection.

If opts is a number it is treated as opts.count.

collection.byExample

async collection.byExample(example, [opts]): Cursor

Performs a query to fetch all documents in the collection matching the given example. Returns a new Cursor instance for the query results.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fetching documents by example.
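Examples

A minimal usage sketch; the collection name 'users' and the example fields are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.byExample({role: 'admin', active: true})
.then(cursor => {
    // cursor is a cursor over all documents matching the example
});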

collection.firstExample

async collection.firstExample(example): Object

Fetches the first document in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.
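Examples

A minimal usage sketch; the collection name 'users' and the example fields are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.firstExample({username: 'admin'})
.then(doc => {
    // doc is the first document matching the example
});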

collection.removeByExample

async collection.removeByExample(example, [opts]): Object

Removes all documents in the collection matching the given example.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by example.
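Examples

A minimal usage sketch; the collection name 'users' and the example fields are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.removeByExample({active: false})
.then(result => {
    // result contains information about the operation,
    // e.g. how many documents were removed
});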

collection.replaceByExample

async collection.replaceByExample(example, newValue, [opts]): Object

Replaces all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to replace matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for replacing documents by example.
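Examples

A minimal usage sketch; the collection name 'users', the example and the replacement value are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.replaceByExample(
    {username: 'admin'},
    {username: 'admin', role: 'superuser'}
)
.then(result => {
    // result contains information about the operation,
    // e.g. how many documents were replaced
});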

collection.updateByExample

async collection.updateByExample(example, newValue, [opts]): Object

Updates (patches) all documents in the collection matching the given example with the given newValue.

Arguments

  • example: Object

An object representing an example for documents to be matched against.

  • newValue: Object

The new value to update matching documents with.

  • opts: Object (optional)

For information on the possible options see the HTTP API for updating documents by example.
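Examples

A minimal usage sketch; the collection name 'users', the example and the patch value are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.updateByExample(
    {role: 'admin'},
    {active: true}
)
.then(result => {
    // result contains information about the operation,
    // e.g. how many documents were updated
});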

collection.lookupByKeys

async collection.lookupByKeys(keys): Array<Object>

Fetches the documents with the given keys from the collection. Returns an array of the matching documents.

Arguments

  • keys: Array

An array of document keys to look up.
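Examples

A minimal usage sketch; the collection name 'users' and the keys are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.lookupByKeys(['admin', 'jcd'])
.then(docs => {
    // docs is an array of the documents with the given keys
});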

collection.removeByKeys

async collection.removeByKeys(keys, [opts]): Object

Deletes the documents with the given keys from the collection.

Arguments

  • keys: Array

An array of document keys to delete.

  • opts: Object (optional)

For information on the possible options see the HTTP API for removing documents by keys.
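Examples

A minimal usage sketch; the collection name 'users' and the keys are placeholders.

var db = require('arangojs')();
var collection = db.collection('users');
collection.removeByKeys(['admin', 'jcd'])
.then(result => {
    // result contains information about the operation,
    // e.g. how many documents were removed
});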

collection.fulltext

async collection.fulltext(fieldName, query, [opts]): Cursor

Performs a fulltext query in the given fieldName on the collection.

Arguments

  • fieldName: String

Name of the field to search on documents in the collection.

  • query: String

Fulltext query string to search for.

  • opts: Object (optional)

For information on the possible options see the HTTP API for fulltext queries.
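Examples

A minimal usage sketch; it assumes a fulltext index already exists on the 'description' field (see collection.createFulltextIndex above), and the query word is a placeholder.

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.fulltext('description', 'database')
.then(cursor => {
    // cursor is a cursor over all documents whose 'description'
    // field matches the fulltext query
});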

Bulk importing documents

This function implements the HTTP API for bulk imports.

collection.import

async collection.import(data, [opts]): Object

Bulk imports the given data into the collection.

Arguments

  • data: Array<Array<any>> | Array<Object>

The data to import. This can be an array of documents:

[
    {key1: value1, key2: value2}, // document 1
    {key1: value1, key2: value2}, // document 2
    ...
]

Or it can be an array of value arrays following an array of keys.

[
    ['key1', 'key2'], // key names
    [value1, value2], // document 1
    [value1, value2], // document 2
    ...
]

  • opts: Object (optional) If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the documents have been synced to disk.

  • details: boolean (Default: false)

    Whether the response should contain additional details about documents that could not be imported.

  • type: string (Default: "auto")

    Indicates which format the data uses. Can be "documents", "array" or "auto".

If data is a JavaScript array, it will be transmitted as a line-delimited JSON stream. If opts.type is set to "array", it will be transmitted as regular JSON instead. If data is a string, it will be transmitted as it is without any processing.

For more information on the opts object, see the HTTP API documentation for bulk imports.

Examples

var db = require('arangojs')();
var collection = db.collection('users');

collection.import(
    [// document stream
        {username: 'admin', password: 'hunter2'},
        {username: 'jcd', password: 'bionicman'},
        {username: 'jreyes', password: 'amigo'},
        {username: 'ghermann', password: 'zeitgeist'}
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    [// array stream with header
        ['username', 'password'], // keys
        ['admin', 'hunter2'], // row 1
        ['jcd', 'bionicman'], // row 2
        ['jreyes', 'amigo'],
        ['ghermann', 'zeitgeist']
    ]
)
.then(result => {
    result.created === 4;
});

// -- or --

collection.import(
    // raw line-delimited JSON array stream with header
    '["username", "password"]\r\n' +
    '["admin", "hunter2"]\r\n' +
    '["jcd", "bionicman"]\r\n' +
    '["jreyes", "amigo"]\r\n' +
    '["ghermann", "zeitgeist"]\r\n'
)
.then(result => {
    result.created === 4;
});

Manipulating documents

These functions implement the HTTP API for manipulating documents.

collection.replace

async collection.replace(documentHandle, newValue, [opts]): Object

Replaces the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to replace. This can either be the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until the document has been synced to disk. Default: false.

  • rev: string (optional)

    Only replace the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be replaced regardless of the revision.
    • if policy is set to "error" or not set, the replacement will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.replace(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc1)
        .then(doc3 => {
            doc3._id === doc1._id;
            doc3._rev === doc2._rev;
            doc3.number === 2;
            doc3.hello === undefined;
        })
    });
});
collection.update

async collection.update(documentHandle, newValue, [opts]): Object

Updates (merges) the content of the document with the given documentHandle with the given newValue and returns an object containing the document's metadata.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

Handle of the document to update. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • newValue: Object

The new data of the document.

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • keepNull: boolean (Default: true)

    If set to false, properties with a value of null indicate that a property should be deleted.

  • mergeObjects: boolean (Default: true)

    If set to false, object properties that already exist in the old document will be overwritten rather than merged. This does not affect arrays.

  • rev: string (optional)

    Only update the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be updated regardless of the revision.
    • if policy is set to "error" or not set, the update will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');
var doc = {number: 1, hello: 'world'};
collection.save(doc)
.then(doc1 => {
    collection.update(doc1, {number: 2})
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev !== doc1._rev;
        collection.document(doc2)
        .then(doc3 => {
          doc3._id === doc2._id;
          doc3._rev === doc2._rev;
          doc3.number === 2;
          doc3.hello === doc.hello;
        });
    });
});
collection.remove

async collection.remove(documentHandle, [opts]): Object

Deletes the document with the given documentHandle from the collection.

Note: The policy option is not available when using the driver with ArangoDB 3.0 as it is redundant when specifying the rev option.

Arguments

  • documentHandle: string

The handle of the document to delete. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object (optional)

If opts is set, it must be an object with any of the following properties:

  • waitForSync: boolean (Default: false)

    Wait until document has been synced to disk.

  • rev: string (optional)

    Only remove the document if it matches this revision.

  • policy: string (optional)

    Determines the behaviour when the revision is not matched:

    • if policy is set to "last", the document will be removed regardless of the revision.
    • if policy is set to "error" or not set, the removal will fail with an error.

If a string is passed instead of an options object, it will be interpreted as the rev option.

For more information on the opts object, see the HTTP API documentation for working with documents.

Examples

var db = require('arangojs')();
var collection = db.collection('some-collection');

collection.remove('some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});

// -- or --

collection.remove('some-collection/some-doc')
.then(() => {
    // document 'some-collection/some-doc' no longer exists
});
collection.list

async collection.list([type]): Array<string>

Retrieves a list of references for all documents in the collection.

Arguments

  • type: string (Default: "id")

The format of the document references:

  • if type is set to "id", each reference will be the _id of the document.
  • if type is set to "key", each reference will be the _key of the document.
  • if type is set to "path", each reference will be the URI path of the document.
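Examples

A minimal usage sketch; 'some-collection' is a placeholder name.

var db = require('arangojs')();
var collection = db.collection('some-collection');
collection.list('key')
.then(keys => {
    // keys is an array of the _key values
    // of all documents in the collection
});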

DocumentCollection API

The DocumentCollection API extends the Collection API (see above) with the following methods.

documentCollection.document

async documentCollection.document(documentHandle): Object

Retrieves the document with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the document to retrieve. This can be either the _id or the _key of a document in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');

collection.document('some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

// -- or --

collection.document('my-docs/some-key')
.then(doc => {
    // the document exists
    doc._key === 'some-key';
    doc._id === 'my-docs/some-key';
});

documentCollection.save

async documentCollection.save(data): Object

Creates a new document with the given data and returns an object containing the document's metadata.

Arguments

  • data: Object

The data of the new document, may include a _key.

Examples

var db = require('arangojs')();
var collection = db.collection('my-docs');
var doc = {some: 'data'};
collection.save(doc)
.then(doc1 => {
    doc1._key; // the document's key
    doc1._id === ('my-docs/' + doc1._key);
    collection.document(doc)
    .then(doc2 => {
        doc2._id === doc1._id;
        doc2._rev === doc1._rev;
        doc2.some === 'data';
    });
});

EdgeCollection API

The EdgeCollection API extends the Collection API (see above) with the following methods.

edgeCollection.edge

async edgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

edgeCollection.save

async edgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the documents fromId and toId with the given data and returns an object containing the edge's metadata.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
var edge = {some: 'data'};

collection.save(
    edge,
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge1 => {
    edge1._key; // the edge's key
    edge1._id === ('edges/' + edge1._key);
    collection.edge(edge)
    .then(edge2 => {
        edge2._key === edge1._key;
        edge2._rev === edge1._rev;
        edge2.some === edge.some;
        edge2._from === 'vertices/start-vertex';
        edge2._to === 'vertices/end-vertex';
    });
});

// -- or --

collection.save({
    some: 'data',
    _from: 'vertices/start-vertex',
    _to: 'vertices/end-vertex'
})
.then(edge => {
    // ...
})

edgeCollection.edges

async edgeCollection.edges(documentHandle): Array<Object>

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

edgeCollection.inEdges

async edgeCollection.inEdges(documentHandle): Array<Object>

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

edgeCollection.outEdges

async edgeCollection.outEdges(documentHandle): Array<Object>

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

edgeCollection.traversal

async edgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var collection = db.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => collection.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

Graph API

These functions implement the HTTP API for manipulating graphs.

graph.get

async graph.get(): Object

Retrieves general information about the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.get()
.then(data => {
    // data contains general information about the graph
});

graph.create

async graph.create(properties): Object

Creates a graph with the given properties for this graph's name, then returns the server response.

Arguments

  • properties: Object

For more information on the properties object, see the HTTP API documentation for creating graphs.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.create({
    edgeDefinitions: [
        {
            collection: 'edges',
            from: [
                'start-vertices'
            ],
            to: [
                'end-vertices'
            ]
        }
    ]
})
.then(graph => {
    // graph is a Graph instance
    // for more information see the Graph API below
});

graph.drop

async graph.drop([dropCollections]): Object

Deletes the graph from the database.

Arguments

  • dropCollections: boolean (optional)

If set to true, the collections associated with the graph will also be deleted.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.drop()
.then(() => {
    // the graph "some-graph" no longer exists
});

Manipulating vertices

graph.vertexCollection

graph.vertexCollection(collectionName): GraphVertexCollection

Returns a new GraphVertexCollection instance with the given name for this graph.

Arguments

  • collectionName: string

Name of the vertex collection.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.name === 'vertices';
// collection is a GraphVertexCollection
graph.addVertexCollection

async graph.addVertexCollection(collectionName): Object

Adds the collection with the given collectionName to the graph's vertex collections.

Arguments

  • collectionName: string

Name of the vertex collection to add to the graph.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.addVertexCollection('vertices')
.then(() => {
    // the collection "vertices" has been added to the graph
});
graph.removeVertexCollection

async graph.removeVertexCollection(collectionName, [dropCollection]): Object

Removes the vertex collection with the given collectionName from the graph.

Arguments

  • collectionName: string

Name of the vertex collection to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the collection will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeVertexCollection('vertices')
.then(() => {
    // collection "vertices" has been removed from the graph
});

// -- or --

graph.removeVertexCollection('vertices', true)
.then(() => {
    // collection "vertices" has been removed from the graph
    // the collection has also been dropped from the database
    // this may have been a bad idea
});

Manipulating edges

graph.edgeCollection

graph.edgeCollection(collectionName): GraphEdgeCollection

Returns a new GraphEdgeCollection instance with the given name bound to this graph.

Arguments

  • collectionName: string

Name of the edge collection.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.name === 'edges';
// collection is a GraphEdgeCollection
graph.addEdgeDefinition

async graph.addEdgeDefinition(definition): Object

Adds the given edge definition definition to the graph.

Arguments

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges" and "vertices" exist
var graph = db.graph('some-graph');
graph.addEdgeDefinition({
    collection: 'edges',
    from: ['vertices'],
    to: ['vertices']
})
.then(() => {
    // the edge definition has been added to the graph
});
graph.replaceEdgeDefinition

async graph.replaceEdgeDefinition(collectionName, definition): Object

Replaces the edge definition for the edge collection named collectionName with the given definition.

Arguments

  • collectionName: string

Name of the edge collection to replace the definition of.

  • definition: Object

For more information on edge definitions see the HTTP API for managing graphs.

Examples

var db = require('arangojs')();
// assuming the collections "edges", "vertices" and "more-vertices" exist
var graph = db.graph('some-graph');
graph.replaceEdgeDefinition('edges', {
    collection: 'edges',
    from: ['vertices'],
    to: ['more-vertices']
})
.then(() => {
    // the edge definition has been modified
});
graph.removeEdgeDefinition

async graph.removeEdgeDefinition(definitionName, [dropCollection]): Object

Removes the edge definition with the given definitionName from the graph.

Arguments

  • definitionName: string

Name of the edge definition to remove from the graph.

  • dropCollection: boolean (optional)

If set to true, the edge collection associated with the definition will also be deleted from the database.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');

graph.removeEdgeDefinition('edges')
.then(() => {
    // the edge definition has been removed
});

// -- or --

graph.removeEdgeDefinition('edges', true)
.then(() => {
    // the edge definition has been removed
    // and the edge collection "edges" has been dropped
    // this may have been a bad idea
});

graph.traversal

async graph.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in any of the edge collections of this graph.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the graph or a document (i.e. an object with an _id property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript functions, it's not possible to pass in JavaScript functions directly because the functions need to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => graph.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});
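
The other traversal options are passed the same way, as strings of server-side code. As a variation on the example above, a minimal sketch that also supplies opts.filter (an assumption based on the HTTP traversal API, where the filter body may return 'prune' and/or 'exclude' to control the traversal):

var db = require('arangojs')();
var graph = db.graph('some-graph');
graph.traversal('vertices/a', {
    direction: 'outbound',
    init: 'result.vertices = [];',
    visitor: 'result.vertices.push(vertex._key);',
    filter: 'if (vertex._key === "c") return "prune";'
})
.then(result => {
    // the traversal does not descend past vertex "c"
    result.vertices; // e.g. ['a', 'b', 'c']
});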

GraphVertexCollection API

The GraphVertexCollection API extends the Collection API (see above) with the following methods.

graphVertexCollection.remove

async graphVertexCollection.remove(documentHandle): Object

Deletes the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to delete. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.remove('some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

// -- or --

collection.remove('vertices/some-key')
.then(() => {
    // document 'vertices/some-key' no longer exists
});

graphVertexCollection.vertex

async graphVertexCollection.vertex(documentHandle): Object

Retrieves the vertex with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the vertex to retrieve. This can be either the _id or the _key of a vertex in the collection, or a vertex (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');

collection.vertex('some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

// -- or --

collection.vertex('vertices/some-key')
.then(doc => {
    // the vertex exists
    doc._key === 'some-key';
    doc._id === 'vertices/some-key';
});

graphVertexCollection.save

async graphVertexCollection.save(data): Object

Creates a new vertex with the given data.

Arguments

  • data: Object

The data of the vertex.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.vertexCollection('vertices');
collection.save({some: 'data'})
.then(doc => {
    doc._key; // the document's key
    doc._id === ('vertices/' + doc._key);
    doc.some === 'data';
});

GraphEdgeCollection API

The GraphEdgeCollection API extends the Collection API (see above) with the following methods.

graphEdgeCollection.remove

async graphEdgeCollection.remove(documentHandle): Object

Deletes the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to delete. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.remove('some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

// -- or --

collection.remove('edges/some-key')
.then(() => {
    // document 'edges/some-key' no longer exists
});

graphEdgeCollection.edge

async graphEdgeCollection.edge(documentHandle): Object

Retrieves the edge with the given documentHandle from the collection.

Arguments

  • documentHandle: string

The handle of the edge to retrieve. This can be either the _id or the _key of an edge in the collection, or an edge (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');

collection.edge('some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

// -- or --

collection.edge('edges/some-key')
.then(edge => {
    // the edge exists
    edge._key === 'some-key';
    edge._id === 'edges/some-key';
});

graphEdgeCollection.save

async graphEdgeCollection.save(data, [fromId, toId]): Object

Creates a new edge between the vertices fromId and toId with the given data.

Arguments

  • data: Object

The data of the new edge. If fromId and toId are not specified, the data needs to contain the properties _from and _to.

  • fromId: string (optional)

The handle of the start vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • toId: string (optional)

The handle of the end vertex of this edge. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.save(
    {some: 'data'},
    'vertices/start-vertex',
    'vertices/end-vertex'
)
.then(edge => {
    edge._key; // the edge's key
    edge._id === ('edges/' + edge._key);
    edge.some === 'data';
    edge._from === 'vertices/start-vertex';
    edge._to === 'vertices/end-vertex';
});
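
If fromId and toId are omitted, the _from and _to handles can be supplied in the data object instead. A minimal sketch of the same edge written that way:

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.save({
    some: 'data',
    _from: 'vertices/start-vertex',
    _to: 'vertices/end-vertex'
})
.then(edge => {
    edge._from === 'vertices/start-vertex';
    edge._to === 'vertices/end-vertex';
});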

graphEdgeCollection.edges

async graphEdgeCollection.edges(documentHandle): Array Object

Retrieves a list of all edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.edges('vertices/a'))
.then(edges => {
    edges.length === 3;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y', 'z']
});

graphEdgeCollection.inEdges

async graphEdgeCollection.inEdges(documentHandle): Array Object

Retrieves a list of all incoming edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.inEdges('vertices/a'))
.then(edges => {
    edges.length === 1;
    edges[0]._key === 'z';
});

graphEdgeCollection.outEdges

async graphEdgeCollection.outEdges(documentHandle): Array Object

Retrieves a list of all outgoing edges of the document with the given documentHandle.

Arguments

  • documentHandle: string

The handle of the document to retrieve the edges of. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/a', 'vertices/c'],
    ['z', 'vertices/d', 'vertices/a']
])
.then(() => collection.outEdges('vertices/a'))
.then(edges => {
    edges.length === 2;
    edges.map(function (edge) {return edge._key;}); // ['x', 'y']
});

graphEdgeCollection.traversal

async graphEdgeCollection.traversal(startVertex, opts): Object

Performs a traversal starting from the given startVertex and following edges contained in this edge collection.

Arguments

  • startVertex: string

The handle of the start vertex. This can be either the _id of a document in the database, the _key of an edge in the collection, or a document (i.e. an object with an _id or _key property).

  • opts: Object

See the HTTP API documentation for details on the additional arguments.

Please note that while opts.filter, opts.visitor, opts.init, opts.expander and opts.sort should be strings evaluating to well-formed JavaScript code, it's not possible to pass in JavaScript functions directly because the code needs to be evaluated on the server and will be transmitted in plain text.

Examples

var db = require('arangojs')();
var graph = db.graph('some-graph');
var collection = graph.edgeCollection('edges');
collection.import([
    ['_key', '_from', '_to'],
    ['x', 'vertices/a', 'vertices/b'],
    ['y', 'vertices/b', 'vertices/c'],
    ['z', 'vertices/c', 'vertices/d']
])
.then(() => collection.traversal('vertices/a', {
    direction: 'outbound',
    visitor: 'result.vertices.push(vertex._key);',
    init: 'result.vertices = [];'
}))
.then(result => {
    result.vertices; // ['a', 'b', 'c', 'd']
});

License

The Apache License, Version 2.0. For more information, see the accompanying LICENSE file.