Articles tagged: linux

install linux mysql mysql8

MySQL 8.x installs a bit differently, so here are my notes

Installing MySQL 8.x on Linux

MySQL 8.x installation differs slightly from 5.x; this is a rough record.

my.cnf (on Linux, MySQL reads my.cnf rather than the Windows-style my.ini)

[mysqld]

port=3306

max_connections=200
max_connect_errors=10
character-set-server=utf8
default-storage-engine=INNODB
default_authentication_plugin=mysql_native_password
[mysql]
default-character-set=utf8
[client]
port=3306
default-character-set=utf8

Initialization

./bin/mysqld --initialize

Copy the temporary root password out of the initialization output.
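If mysqld writes to an error log instead of the terminal, the temporary password can usually be fished out of the log; the path below is an assumption, adjust it to your log-error setting:

grep 'temporary password' /var/log/mysqld.log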

// change the root password

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'new_password';

// create a new user

CREATE USER 'free'@'%' IDENTIFIED WITH mysql_native_password BY 'free';

// grant all privileges

GRANT ALL PRIVILEGES ON *.* TO 'free'@'%';

// grant a subset of privileges

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP ON *.* TO 'free'@'%';
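To double-check what ended up granted:

SHOW GRANTS FOR 'free'@'%';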
java kafka linux python, publish, subscribe

Kafka publish/subscribe and partitions

Kafka publish/subscribe and partitions

Kafka, the one I can't afford to provoke

Start ZooKeeper

./zookeeper-server-start.sh ../config/zookeeper.properties

Start Kafka

./kafka-server-start.sh ../config/server.properties

Remember to set the partition count in the broker config, as sketched below.
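A minimal sketch of the relevant line in config/server.properties — two partitions, so that partition=1 used by the producer below actually exists:

num.partitions=2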

Code

Publish

from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')

if __name__ == '__main__':
    for a in range(1,2):
        producer.send('chat',partition=1,value=b'some_message_bytes')
        producer.flush()

Subscribe to a specific partition

from kafka import KafkaConsumer, TopicPartition

if __name__ == '__main__':
    # passing a topic to the constructor and then calling subscribe()
    # again is an error; to read one specific partition, assign it
    consumer = KafkaConsumer(bootstrap_servers=['localhost:9092'])
    consumer.assign([TopicPartition('chat', 1)])
    for msg in consumer:
        print(msg)

Scary: RabbitMQ needed only about 3 GB of memory for 2500 queues.

Kafka: one topic (the conceptual queue) with 2000 partitions (the "tags") somehow needed 40 GB of storage...

Bleh....

I'm off to try nsq!!!

linux pub rabbitmq sub broadcast

RabbitMQ routing bindings and broadcast

RabbitMQ routing bindings and broadcast

Implementing multicast and broadcast with RabbitMQ

RabbitMQ separates exchanges (routing) from queues.

Exchanges route in three modes:

topic: pattern matching, where * matches exactly one word and # matches zero or more words (dot-separated words, not characters)

fanout: broadcast

direct: exact match

Producer code

#!/usr/bin/env python
import pika
import time

credentials = pika.PlainCredentials('guest', 'guest')

if __name__ == '__main__':
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        '127.0.0.1', 5672, '/', credentials))
    for a in range(1, 1000000):
        channel = connection.channel()
        # declare the queue and bind it to the built-in topic exchange
        # before publishing, so the message has somewhere to go
        channel.queue_declare(queue="chat." + str(a), durable=False)
        channel.queue_bind(exchange='amq.topic',
                           queue="chat." + str(a),
                           routing_key="chat.*")
        # publish with a concrete routing key; the wildcard belongs in
        # the binding pattern, not in the message
        channel.basic_publish(exchange='amq.topic',
                              routing_key="chat." + str(a),
                              body='Hello World!')
        channel.close()
        time.sleep(1)
        print(" [x] Sent 'Hello World!'")

    connection.close()

Consumer code

# _*_coding:utf-8_*_
import pika
import time

credentials = pika.PlainCredentials('guest', 'guest')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    '127.0.0.1', 5672, '/', credentials))

if __name__ == '__main__':
    channel = connection.channel()
    while True:
        # poll one message; basic_get returns (None, None, None)
        # when the queue is empty
        method_frame, header_frame, body = channel.basic_get("chat")
        if method_frame:
            print(method_frame, header_frame, body)
            channel.basic_ack(method_frame.delivery_tag)
        else:
            time.sleep(1)
            print('No message returned')
arch linux rime simplified Chinese

Setting rime to default to Simplified Chinese

Setting rime to default to Simplified Chinese

Reposted from https://github.com/ModerRAS/ModerRAS.github.io/blob/master/_posts/2018-11-07-rime%E8%AE%BE%E7%BD%AE%E4%B8%BA%E9%BB%98%E8%AE%A4%E7%AE%80%E4%BD%93.md

Preface

The rime-ibus on my Arch Linux defaults to Traditional Chinese, but I write Simplified Chinese every day, and pressing F4 after every input-method switch got annoying, so I looked up how to make Simplified the default.

Edit the default schema's config file

On my machine the config lives under ~/.config/ibus/rime/build/, so let's look at the files in that directory:

bopomofo.prism.bin   
bopomofo_tw.prism.bin    
cangjie5.prism.bin    
default.yaml                   
luna_pinyin_fluency.schema.yaml  
luna_pinyin.schema.yaml     
luna_pinyin_simp.schema.yaml  
luna_quanpin.schema.yaml  
stroke.schema.yaml      
terra_pinyin.schema.yaml
bopomofo.schema.yaml  
bopomofo_tw.schema.yaml  
cangjie5.schema.yaml  
luna_pinyin_fluency.prism.bin  
luna_pinyin.prism.bin            
luna_pinyin_simp.prism.bin  
luna_quanpin.prism.bin        
stroke.prism.bin          
terra_pinyin.prism.bin

I use Luna Pinyin, so my first instinct was luna_pinyin.schema.yaml — and the guess turned out right. Scroll to the bottom and find this:

switches:
  - name: ascii_mode
    reset: 0
    states: ["中文", "西文"]
  - name: full_shape
    states: ["半角", "全角"]
  - name: simplification
    states: ["漢字", "汉字"]
  - name: ascii_punct
    states: ["。,", ".,"]

Then change it to this:

switches:
  - name: ascii_mode
    reset: 0
    states: ["中文", "西文"]
  - name: full_shape
    states: ["半角", "全角"]
  - name: simplification
    reset: 1
    states: ["漢字", "汉字"]
  - name: ascii_punct
    states: ["。,", ".,"]

All it does is add a reset value that selects the Simplified state, and that's the whole fix.

Closing notes

So that's it. Easy enough, but hard to find, because the advice online is usually to toggle with F4 or Ctrl+`, which has to be redone every time rime starts; this way is set-and-forget.

asyncio linux python socket

An asyncio socket server on Python 3.7

An asyncio socket server on Python 3.7

Skimming the docs, I was caught off guard by this simple, practical bit of sugar:

import asyncio

async def client_connected(reader:asyncio.StreamReader, writer: asyncio.StreamWriter):
    e=await reader.read(10*1024*1024)
    print(e)
    writer.write(b"200 hello world")
    await writer.drain()
    writer.close()

async def main(host, port):
    srv = await asyncio.start_server(
        client_connected, host, port)
    await srv.serve_forever()

if __name__ == "__main__":
    asyncio.run(main('127.0.0.1', 8080))
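A minimal client sketch to poke at the server above (assumes it is listening on 127.0.0.1:8080):

import asyncio

async def main():
    reader, writer = await asyncio.open_connection('127.0.0.1', 8080)
    writer.write(b"hello")
    writer.write_eof()           # signal EOF so the server's read() returns
    await writer.drain()
    print(await reader.read())   # the server's reply
    writer.close()
    await writer.wait_closed()

asyncio.run(main())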
img linux openwrt VM image creation

Converting an img into a VM disk image

Converting an img into a VM disk image

pacman -S qemu
qemu-img convert -f raw LEDE-17.01.2-R7.3.3-x64-combined-squashfs.img  -O  vmdk lede.vmdk
asoc linux mariadb mysql
bash find linux rm unix
fcitx linux sogou xfce4

Fixing fcitx not waking up under xfce4

Fixing fcitx not waking up under xfce4

In ~/.xprofile:

export XIM="fcitx"
export XIM_PROGRAM="fcitx"
export XMODIFIERS="@im=fcitx"
export GTK_IM_MODULE="fcitx"
export QT_IM_MODULE="fcitx"
cache linux mount tmp tmpfs

Mounting /tmp on tmpfs — one command:

mount tmpfs /tmp -t tmpfs -o size=128m

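To make it survive reboots, the matching /etc/fstab entry would be (size is up to you):

tmpfs /tmp tmpfs defaults,size=128m 0 0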
django linux python tornado wsgi

Running Django under Tornado via a WSGI fallback

Running Django under Tornado via a WSGI fallback

import os
import sys
from tornado.options import options, define, parse_command_line
import django.core.handlers.wsgi
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi
from django.core.wsgi import get_wsgi_application
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
os.environ['DJANGO_SETTINGS_MODULE'] = "tornago.settings"

define('port', type=int, default=8000)


def main():
    parse_command_line()
    wsgi_app = tornado.wsgi.WSGIContainer(get_wsgi_application())
    tornado_app = tornado.web.Application(

        [('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),]
    )
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == '__main__':
    main()
80 linux nginx root
image img linux losetup mount

Mounting a multi-partition image the raw Linux way

Mounting a multi-partition image

Mounting a multi-partition image

  • Find a free loop device
losetup -f
  • Find the partition's start sector
With cfdisk:

cfdisk  ./xxx.img

Without cfdisk:

fdisk  -l  ./xxx.img
  • Attach with offset = start sector × 512
losetup -o <start_sector*512> /dev/loop0  xxx.img
  • Actually mount it
mount /dev/loop0  xxx
  • To unmount
umount <mountpoint>


losetup  -d /dev/loop0
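A worked run, assuming fdisk reported a start sector of 2048 (values are illustrative):

losetup -o $((2048*512)) /dev/loop0 ./xxx.img
mount /dev/loop0 /mnt
umount /mnt
losetup -d /dev/loop0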
cron linux python schedule

schedule, a neat little library for timed tasks in Python

Timed tasks in Python

I needed to play with scheduled jobs recently.

。=。

The code:

import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
aosc arch deepin dmenu i3wm linux rofi ubuntu

rofi, a fine dmenu replacement for i3wm: window switching and search

rofi, a fine dmenu replacement for i3wm: window switching and search

rofi

https://github.com/DaveDavenport/rofi/

The result is... pretty good.

It's fast — much snappier than the likes of gnome-do.

The binding:

bindsym $mod+d exec --no-startup-id "rofi -combi-modi window,drun,run,ssh -show combi -modi combi"

Backing up my configs:

~/.config/i3/config

# i3 config file (v4)
# Please see http://i3wm.org/docs/userguide.html for a complete reference!

# Set mod key (Mod1=<Alt>, Mod4=<Super>)
set $mod Mod4

# set default desktop layout (default is tiling)
# workspace_layout tabbed <stacking|tabbed>

# Configure border style <normal|1pixel|pixel xx|none|pixel>
new_window pixel 1
new_float normal

# Hide borders
hide_edge_borders none

# change borders
bindsym $mod+u border none
bindsym $mod+y border pixel 1
bindsym $mod+n border normal

# Font for window titles. Will also be used by the bar unless a different font
# is used in the bar {} block below.
font xft:Noto Sans 10

# Use Mouse+$mod to drag floating windows
floating_modifier $mod

# start a terminal
# bindsym $mod+Return exec terminal
bindsym $mod+Return exec terminator

# kill focused window
bindsym $mod+Shift+q kill

# start program launcher
bindsym $mod+Shift+x exec --no-startup-id dmenu-frecency
bindsym $mod+d exec --no-startup-id "rofi -combi-modi window,drun,run,ssh -show combi -modi combi"
# launch categorized menu
bindsym $mod+z exec --no-startup-id morc_menu

################################################################################################
## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! ##
################################################################################################

exec --no-startup-id volumeicon
#bindsym $mod+Ctrl+m exec terminal -e 'alsamixer'
#exec --no-startup-id pulseaudio
#exec --no-startup-id pa-applet
bindsym $mod+Ctrl+m exec pavucontrol

################################################################################################

# Screen brightness controls
bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'"
bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'"

# Start Applications
bindsym $mod+Ctrl+b exec --no-startup-id terminal -e 'bmenu'
bindsym $mod+F2 exec --no-startup-id firefox
bindsym $mod+F3 exec --no-startup-id pcmanfm
# bindsym $mod+F3 exec ranger
# bindsym $mod+Shift+F3 exec gksu nautilus
bindsym $mod+F5 exec terminator -e 'htop'
bindsym $mod+t exec --no-startup-id pkill compton
bindsym $mod+Ctrl+t exec --no-startup-id compton -b
bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'"
bindsym Print exec --no-startup-id ~/.config/scrot/i3-scrot
bindsym $mod+Print exec --no-startup-id ~/.config/scrot/i3-scrot -w
bindsym Shift+Print exec --no-startup-id ~/.config/scrot/i3-scrot -s
bindsym $mod+Shift+h exec xdg-open /usr/share/doc/aosc/i3_help.pdf
bindsym $mod+Ctrl+x --release exec --no-startup-id xkill

# focus_follows_mouse no

# change focus
bindsym $mod+j focus left
bindsym $mod+k focus down
bindsym $mod+l focus up
bindsym $mod+odiaeresis focus right

# alternatively, you can use the cursor keys:
bindsym $mod+Left focus left
bindsym $mod+Down focus down
bindsym $mod+Up focus up
bindsym $mod+Right focus right

# move focused window
bindsym $mod+Shift+j move left
bindsym $mod+Shift+k move down
bindsym $mod+Shift+l move up
bindsym $mod+Shift+odiaeresis move right

# alternatively, you can use the cursor keys:
bindsym $mod+Shift+Left move left
bindsym $mod+Shift+Down move down
bindsym $mod+Shift+Up move up
bindsym $mod+Shift+Right move right

# workspace back and forth (with/without active container)
workspace_auto_back_and_forth yes
bindsym $mod+b workspace back_and_forth
bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth

# split orientation
bindsym $mod+h split h;exec notify-send 'tile horizontally'
bindsym $mod+v split v;exec notify-send 'tile vertically'
bindsym $mod+q split toggle

# toggle fullscreen mode for the focused container
bindsym $mod+f fullscreen toggle

# change container layout (stacked, tabbed, toggle split)
bindsym $mod+s layout stacking
bindsym $mod+w layout tabbed
bindsym $mod+e layout toggle split

# toggle tiling / floating
bindsym $mod+Shift+space floating toggle

# change focus between tiling / floating windows
bindsym $mod+space focus mode_toggle

# toggle sticky
bindsym $mod+Shift+s sticky toggle

# focus the parent container
bindsym $mod+a focus parent

# move the currently focused window to the scratchpad
bindsym $mod+Shift+minus move scratchpad

# Show the next scratchpad window or hide the focused scratchpad window.
# If there are multiple scratchpad windows, this command cycles through them.
bindsym $mod+minus scratchpad show

#navigate workspaces next / previous
bindsym $mod+Ctrl+Right workspace next
bindsym $mod+Ctrl+Left workspace prev

# Workspace names
# to display names or symbols instead of plain workspace numbers you can use
# something like: set $ws1 1:mail
#                 set $ws2 2:
set $ws1 1
set $ws2 2
set $ws3 3
set $ws4 4
set $ws5 5
set $ws6 6
set $ws7 7
set $ws8 8

# switch to workspace
bindsym $mod+1 workspace $ws1
bindsym $mod+2 workspace $ws2
bindsym $mod+3 workspace $ws3
bindsym $mod+4 workspace $ws4
bindsym $mod+5 workspace $ws5
bindsym $mod+6 workspace $ws6
bindsym $mod+7 workspace $ws7
bindsym $mod+8 workspace $ws8

# Move focused container to workspace
bindsym $mod+Ctrl+1 move container to workspace $ws1
bindsym $mod+Ctrl+2 move container to workspace $ws2
bindsym $mod+Ctrl+3 move container to workspace $ws3
bindsym $mod+Ctrl+4 move container to workspace $ws4
bindsym $mod+Ctrl+5 move container to workspace $ws5
bindsym $mod+Ctrl+6 move container to workspace $ws6
bindsym $mod+Ctrl+7 move container to workspace $ws7
bindsym $mod+Ctrl+8 move container to workspace $ws8

# Move to workspace with focused container
bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1
bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2
bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3
bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4
bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5
bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6
bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7
bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8

# Open applications on specific workspaces
# assign [class="Thunderbird"] $ws1
# assign [class="Pale moon"] $ws2
# assign [class="Pcmanfm"] $ws3
# assign [class="Skype"] $ws5

# Open specific applications in floating mode
for_window [title="alsamixer"] floating enable border pixel 1
for_window [class="Calamares"] floating enable border normal
for_window [class="Clipgrab"] floating enable
for_window [title="File Transfer*"] floating enable
for_window [class="Galculator"] floating enable border pixel 1
for_window [class="GParted"] floating enable border normal
for_window [title="i3_help"] floating enable sticky enable border normal
for_window [class="Lightdm-gtk-greeter-settings"] floating enable
for_window [class="Lxappearance"] floating enable sticky enable border normal
for_window [title="MuseScore: Play Panel"] floating enable
for_window [class="Nitrogen"] floating enable sticky enable border normal
for_window [class="Oblogout"] fullscreen enable
for_window [class="Pavucontrol"] floating enable
for_window [class="qt5ct"] floating enable sticky enable border normal
for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal
for_window [class="Simple-scan"] floating enable border normal
for_window [class="(?i)System-config-printer.py"] floating enable border normal
for_window [class="Skype"] floating enable border normal
for_window [class="Timeset-gui"] floating enable border normal
for_window [class="(?i)virtualbox"] floating enable border normal
for_window [class="Xfburn"] floating enable

# switch to workspace with urgent window automatically
for_window [urgent=latest] focus

# reload the configuration file
bindsym $mod+Shift+c reload

# restart i3 inplace (preserves your layout/session, can be used to upgrade i3)
bindsym $mod+Shift+r restart

# exit i3 (logs you out of your X session)
bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'"

# Set shut down, restart and locking features
bindsym $mod+0 mode "$mode_system"
set $mode_system (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown
mode "$mode_system" {
    bindsym l exec --no-startup-id i3exit lock, mode "default"
    bindsym s exec --no-startup-id i3exit suspend, mode "default"
    bindsym u exec --no-startup-id i3exit switch_user, mode "default"
    bindsym e exec --no-startup-id i3exit logout, mode "default"
    bindsym h exec --no-startup-id i3exit hibernate, mode "default"
    bindsym r exec --no-startup-id i3exit reboot, mode "default"
    bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default"

    # exit system mode: "Enter" or "Escape"
    bindsym Return mode "default"
    bindsym Escape mode "default"
}

# Resize window (you can also use the mouse for that)
bindsym $mod+r mode "resize"
mode "resize" {
        # These bindings trigger as soon as you enter the resize mode
        # Pressing left will shrink the window’s width.
        # Pressing right will grow the window’s width.
        # Pressing up will shrink the window’s height.
        # Pressing down will grow the window’s height.
        bindsym j resize shrink width 5 px or 5 ppt
        bindsym k resize grow height 5 px or 5 ppt
        bindsym l resize shrink height 5 px or 5 ppt
        bindsym odiaeresis resize grow width 5 px or 5 ppt

        # same bindings, but for the arrow keys
        bindsym Left resize shrink width 5 px or 5 ppt
        bindsym Down resize grow height 5 px or 5 ppt
        bindsym Up resize shrink height 5 px or 5 ppt
        bindsym Right resize grow width 5 px or 5 ppt

        # exit resize mode: Enter or Escape
        bindsym Return mode "default"
        bindsym Escape mode "default"
}

# Lock screen
bindsym $mod+9 exec --no-startup-id i3lock -c 000000

# Autostart applications
# exec --no-startup-id /usr/lib/gnome-settings-daemon/gnome-settings-daemon-localeexec
exec --no-startup-id /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1
exec --no-startup-id nitrogen --restore; sleep 1; compton -b
exec --no-startup-id nm-applet
exec --no-startup-id xdg-user-dirs-update

#exec --no-startup-id conky -c ~/.config/conky/conky_cherubim
#exec --no-startup-id conky -c ~/.config/conky/conky_i3shortcuts
#exec --no-startup-id conky -c ~/.config/conky/conky_weather
#exec --no-startup-id conky -c ~/.config/conky/conky_rss
#exec --no-startup-id conky -c ~/.config/conky/conky_status
#exec --no-startup-id conky -c ~/.config/conky/conky_webmonitor
exec --no-startup-id ~/.config/conky/autoconky.py
exec --no-startup-id fcitx -d
exec --no-startup-id guake
# exec --no-startup-id blueman
# exec --no-startup-id xautolock -time 10 -locker blurlock


bar {
        status_command i3blocks
        position top

        colors {
            background #071E31

            focused_workspace #3685e2 #3685e2 #fafafa
            active_workspace #5294e2 #5294e2 #fafafa
            inactive_workspace #404552 #404552 #fafafa
            urgent_workspace #ff5757 #ff5757 #fafafa
        }
}




# Start i3bar to display a workspace bar (plus the system information i3status if available)
#bar {
#   status_command i3status
#   position top

## please set your primary output first. Example: 'xrandr --output eDP1 --primary'
#   tray_output primary
#   tray_output eDP1
#
#   bindsym button4 nop
#   bindsym button5 nop
#   font xft:Noto Sans 10.5
#   strip_workspace_numbers yes

#   colors {
#   background $transparent
#       background #2B2C2B
#                statusline #F9FAF9
#       separator  #454947
#
#                                  border  backgr. text
#       focused_workspace  #F9FAF9 #16A085 #2B2C2B
#       active_workspace   #595B5B #353836 #FDF6E3
#       inactive_workspace #595B5B #353836 #EEE8D5
#       urgent_workspace   #16A085 #FDF6E3 #E5201D
#   }
#}

# hide/unhide i3status bar
bindsym $mod+m bar mode toggle

# Theme colors
# class                 border  backgr. text    indic.  child_border
client.focused          #808280 #808280 #80FFF9 #FDF6E3
client.focused_inactive #434745 #434745 #16A085 #454948
client.unfocused        #434745 #434745 #16A085 #454948
client.urgent           #CB4B16 #FDF6E3 #16A085 #268BD2
client.placeholder      #000000 #0c0c0c #ffffff #000000 #0c0c0c

client.background       #2B2C2B

#############################
### settings for i3-gaps: ###
#############################

# Set inner/outer gaps
#gaps inner 10
#gaps outer -4

# Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size.
# gaps inner|outer current|all set|plus|minus <px>
# gaps inner all set 10
# gaps outer all plus 5

# Smart gaps (gaps used if only more than one container on the workspace)
#smart_gaps on

# Smart borders (draw borders around container only if it is not the only container on this workspace) 
# on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0)
#smart_borders on

# Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outer/inner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces.
#set $mode_gaps Gaps: (o) outer, (i) inner
#set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global)
#set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global)
#bindsym $mod+Shift+g mode "$mode_gaps"

#mode "$mode_gaps" {
#        bindsym o      mode "$mode_gaps_outer"
#        bindsym i      mode "$mode_gaps_inner"
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}
#mode "$mode_gaps_inner" {
#        bindsym plus  gaps inner current plus 5
#        bindsym minus gaps inner current minus 5
#        bindsym 0     gaps inner current set 0
#
#
#        bindsym Shift+plus  gaps inner all plus 5
#        bindsym Shift+minus gaps inner all minus 5
#        bindsym Shift+0     gaps inner all set 0
#
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}
#mode "$mode_gaps_outer" {
#        bindsym plus  gaps outer current plus 5
#        bindsym minus gaps outer current minus 5
#        bindsym 0     gaps outer current set 0
#
#        bindsym Shift+plus  gaps outer all plus 5
#        bindsym Shift+minus gaps outer all minus 5
#        bindsym Shift+0     gaps outer all set 0
#
#        bindsym Return mode "default"
#        bindsym Escape mode "default"
#}

~/.dmenurc

#
# ~/.dmenurc
#

## define the font for dmenu to be used
DMENU_FN="Noto-10.5"

## background colour for unselected menu-items
DMENU_NB="#2B2C2B"

## textcolour for unselected menu-items
DMENU_NF="#F9FAF9"

## background colour for selected menu-items
DMENU_SB="#16A085"

## textcolour for selected menu-items
DMENU_SF="#F9FAF9"

## command for the terminal application to be used:
TERMINAL_CMD="terminal -e"

## export our variables
DMENU_OPTIONS="-fn $DMENU_FN -nb $DMENU_NB -nf $DMENU_NF -sf $DMENU_SF -sb $DMENU_SB"

~/.dmrc

[Desktop]
Language=zh_CN.utf8
Session=i3

~/.i3blocks.conf

# i3blocks config file
#
# Please see man i3blocks for a complete reference!
# The man page is also hosted at http://vivien.github.io/i3blocks
#
# List of valid properties:
#
# align
# color
# command
# full_text
# instance
# interval
# label
# min_width
# name
# separator
# separator_block_width
# short_text
# signal
# urgent

# Global properties
#
# The top properties below are applied to every block, but can be overridden.
# Each block command defaults to the script name to avoid boilerplate.
command=~/.config/blocks/$BLOCK_NAME
separator_block_width=15
markup=none


# Generic media player support
#
# This displays "ARTIST - SONG" if a music is playing.
# Supported players are: spotify, vlc, audacious, xmms2, mplayer, and others.

[bandwidth]
instance=wlp3s0;in
color=#FFD700
label=
interval=3
separator=false

[bandwidth]
instance=wlp3s0;out
color=#FFD700
label=
interval=3
separator=false

[network]
label=
instance=enp4s0f2
interval=10
separator=false

[ssid]
label=
color=#00BFFF
interval=60
separator=false

[network]
label=
color=#00ff00
instance=wlp3s0
interval=10
separator=false

[ip-address]
label=
color=#DB7093
interval=60

[mediaplayer]
#instance=spotify
label=🎵
color=#C62F2F
interval=5
signal=10

[audio]
label=
color=#87CEEB
interval=5
separator=false

[microphone]
label=
color=#87CEEB
interval=5

[packages]
label=
interval=300

[space]
label=
color=#bd93f9
interval=30

[bluetooth]
label=
color=#3365A4
interval=10

[temperature]
instance=Core
label=
color=#FFA500
interval=5

[load]
label=
color=#32CD32
interval=10
separator=false

[cpu]
label=
color=#008DF6
interval=2

[memory]
label=
color=#F0B28A
instance=mem;free
interval=30

[memory]
label=
instance=swap;total
interval=30
#[load_average]
#interval=10

# Battery indicator
#
# The battery instance defaults to 0.
#[battery]
#label=BAT
#label=⚡
#instance=1
#interval=30

[battery]
command=~/.config/blocks/battery/battery
markup=pango
interval=30


# Date Time
#
[time]
label=
command=date '+%Y-%m-%d  %H:%M:%S'
color=#1DE9B6
interval=3

[user]
label=
color=#90CAF9
interval=once

# Key indicators
#
# Add the following bindings to i3 config file:
#
# bindsym --release Caps_Lock exec pkill -SIGRTMIN+11 i3blocks
# bindsym --release Num_Lock  exec pkill -SIGRTMIN+11 i3blocks
#[keyindicator]
#instance=CAPS
#interval=once
#signal=11

#[keyindicator]
#instance=NUM
#interval=once
#signal=11
aosc arch fedora i3 i3-wm linux wm brightness

Fixing brightness control after installing i3-wm

A fix for screen-brightness control on Linux

Edit the GRUB defaults

sudo vi /etc/default/grub

Change this line:

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"

Regenerate the GRUB config

sudo update-grub   # Debian family; on Arch use: grub-mkconfig -o /boot/grub/grub.cfg

Set the brightness

echo 500 > /sys/class/backlight/intel_backlight/brightness
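The valid range is hardware-specific; check the ceiling first (the intel_backlight directory depends on your driver):

cat /sys/class/backlight/intel_backlight/max_brightness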
linux tidb

TiDB binary deployment explained (backup copy)

TiDB binary deployment explained


title: TiDB binary deployment explained category: deployment


TiDB binary deployment guide

Overview

A complete TiDB cluster consists of PD, TiKV, and TiDB, started in that order: PD, then TiKV, then TiDB. To shut the database down, stop the services one by one in the reverse order.

Before reading this chapter, make sure you have read the TiDB architecture and deployment recommendations.

This document describes binary deployment for three scenarios:

TiDB components and default ports

1. TiDB database components (required)

Component  Default port  Protocol  Description
ssh        22            TCP       sshd service
TiDB       4000          TCP       access port for applications and DBA tools
TiDB       10080         TCP       TiDB status reporting port
TiKV       20160         TCP       TiKV communication port
PD         2379          TCP       TiDB-to-PD communication port
PD         2380          TCP       inter-node port within the PD cluster

2. TiDB database components (optional)

Component      Default port  Protocol  Description
Prometheus     9090          TCP       Prometheus service port
Pushgateway    9091          TCP       aggregation and reporting port for TiDB/TiKV/PD monitoring
Node_exporter  9100          TCP       per-node system metrics reporting port
Grafana        3000          TCP       web monitoring service / browser access port
alertmanager   9093          TCP       alerting service port

Pre-install system configuration and checks

Operating system checks

Setting          Description
Platform         see the deployment recommendations for supported platforms
Filesystem       ext4 is recommended for TiDB deployments
Swap             disabling swap is recommended
Disk block size  set the disk block size to 4096

Network and firewall

Setting           Description
Firewall / ports  verify that the ports TiDB needs are reachable between all nodes

OS parameters

Setting                  Description
Nice Limits              nice value of the tidb system user left at the default 0
min_free_kbytes          vm.min_free_kbytes in sysctl.conf must be set high enough
User Open Files Limit    open-files limit for the tidb admin user set to 1000000
System Open File Limits  system-wide open-files limit set to 1000000
User Process Limits      nproc for the tidb user set to 4096 in limits.conf
Address Space Limits     address space for the tidb user set to unlimited in limits.conf
File Size Limits         fsize for the tidb user set to unlimited in limits.conf
Disk Readahead           readahead on data disks set to at least 4096
NTP service              configure NTP time synchronization on every node
SELinux                  disable SELinux on every node
CPU Frequency Scaling    TiDB recommends the performance CPU frequency governor
Transparent Hugepages    must be set to always on Red Hat 7+ and CentOS 7+
I/O Scheduler            set data disks' I/O scheduler to deadline
vm.swappiness            set vm.swappiness = 0

Note: ask your system administrator to make these OS parameter changes.

Database runtime user settings

Setting  Description
LANG     set LANG = en_US.UTF8
TZ       make sure every node has the same TZ value

Create the system account that runs the database

On Linux, create a tidb user on every node to run the database, and set up passwordless ssh trust between the cluster nodes. Below is an example; for the actual user creation and ssh trust, ask your system administrator.

# useradd tidb
# usermod -a -G tidb tidb
# su - tidb
Last login: Tue Aug 22 12:06:23 CST 2017 on pts/2
-bash-4.2$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/tidb/.ssh/id_rsa):
Created directory '/home/tidb/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/tidb/.ssh/id_rsa.
Your public key has been saved in /home/tidb/.ssh/id_rsa.pub.
The key fingerprint is:
5a:00:e6:df:9e:40:25:2c:2d:e2:6e:ee:74:c6:c3:c1 tidb@t001
The key's randomart image is:
+--[ RSA 2048]----+
|    oo. .        |
|  .oo.oo         |
| . ..oo          |
|  .. o o         |
| .  E o S        |
|  oo . = .       |
| o. * . o        |
| ..o .           |
| ..              |
+-----------------+

-bash-4.2$ cd .ssh
-bash-4.2$ cat id_rsa.pub >> authorized_keys
-bash-4.2$ chmod 644 authorized_keys
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.100

Download the official binaries

TiDB provides official binary packages for Linux. Red Hat 7+ / CentOS 7+ are recommended; deploying a TiDB cluster on Red Hat 6 / CentOS 6 is not.

OS: Linux (Red Hat 7+, CentOS 7+)

Steps:

# download the tarballs

wget http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz
wget http://download.pingcap.org/tidb-latest-linux-amd64.sha256

# verify integrity; "ok" means the file is intact
sha256sum -c tidb-latest-linux-amd64.sha256

# unpack
tar -xzf tidb-latest-linux-amd64.tar.gz
cd tidb-latest-linux-amd64

Quick single-node deployment

With the TiDB binaries in hand, you can run and test a TiDB cluster on one machine. Start PD, TiKV, and TiDB in that order, as follows.

Note: start each component instance in the background, so the program doesn't exit when the foreground session goes away.

Step 1. Start PD:

./bin/pd-server --data-dir=pd \
                --log-file=pd.log

Step 2. Start TiKV:

./bin/tikv-server --pd="127.0.0.1:2379" \
                  --data-dir=tikv \
                  --log-file=tikv.log

Step 3. Start TiDB:

./bin/tidb-server --store=tikv \
                  --path="127.0.0.1:2379" \
                  --log-file=tidb.log

Step 4. Connect to TiDB with the MySQL client:

mysql -h 127.0.0.1 -P 4000 -u root -D test

Functional-testing deployment

If you only want to test TiDB on a handful of machines, a single PD is enough for the whole cluster.

Here we use four nodes to deploy one PD, three TiKV, and one TiDB. Nodes and services:

Name Host IP Services
node1 192.168.199.113 PD1, TiDB
node2 192.168.199.114 TiKV1
node3 192.168.199.115 TiKV2
node4 192.168.199.116 TiKV3

Start the PD cluster, the TiKV cluster, and then TiDB, in that order:

Note: start each component instance in the background, so the program doesn't exit when the foreground session goes away.

Step 1. Start PD on node1:

./bin/pd-server --name=pd1 \
                --data-dir=pd1 \
                --client-urls="http://192.168.199.113:2379" \
                --peer-urls="http://192.168.199.113:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380" \
                --log-file=pd.log

Step 2. Start TiKV on node2, node3, and node4:

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.114:20160" \
                  --data-dir=tikv1 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.115:20160" \
                  --data-dir=tikv2 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379" \
                  --addr="192.168.199.116:20160" \
                  --data-dir=tikv3 \
                  --log-file=tikv.log

Step 3. Start TiDB on node1:

./bin/tidb-server --store=tikv \
                  --path="192.168.199.113:2379" \
                  --log-file=tidb.log

Step 4. Connect to TiDB with the MySQL client:

mysql -h 192.168.199.113 -P 4000 -u root -D test

Multi-node cluster deployment

For production we recommend deploying TiDB across multiple nodes; first see the deployment recommendations.

Here we use six nodes to deploy three PD, three TiKV, and one TiDB. Nodes and services:

Name Host IP Services
node1 192.168.199.113 PD1, TiDB
node2 192.168.199.114 PD2
node3 192.168.199.115 PD3
node4 192.168.199.116 TiKV1
node5 192.168.199.117 TiKV2
node6 192.168.199.118 TiKV3

Start the PD cluster, the TiKV cluster, and then TiDB, in that order:

Step 1. Start PD on node1, node2, and node3 in turn:

./bin/pd-server --name=pd1 \
                --data-dir=pd1 \
                --client-urls="http://192.168.199.113:2379" \
                --peer-urls="http://192.168.199.113:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                -L "info" \
                --log-file=pd.log

./bin/pd-server --name=pd2 \
                --data-dir=pd2 \
                --client-urls="http://192.168.199.114:2379" \
                --peer-urls="http://192.168.199.114:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                --join="http://192.168.199.113:2379" \
                -L "info" \
                --log-file=pd.log

./bin/pd-server --name=pd3 \
                --data-dir=pd3 \
                --client-urls="http://192.168.199.115:2379" \
                --peer-urls="http://192.168.199.115:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                --join="http://192.168.199.113:2379" \
                -L "info" \
                --log-file=pd.log

Step 2. Start TiKV on node4, node5, and node6:

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.116:20160" \
                  --data-dir=tikv1 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.117:20160" \
                  --data-dir=tikv2 \
                  --log-file=tikv.log

./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.118:20160" \
                  --data-dir=tikv3 \
                  --log-file=tikv.log

Step 3. Start TiDB on node1:

./bin/tidb-server --store=tikv \
                  --path="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --log-file=tidb.log

Step 4. Connect to TiDB with the MySQL client:

mysql -h 192.168.199.113 -P 4000 -u root -D test

Note: when starting TiKV in production, use the --config flag to point at a configuration file; without it, TiKV does not read a config file at all. The same applies to PD in production, as sketched below.
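A sketch of what that looks like for TiKV (the config path is illustrative):

./bin/tikv-server --config=conf/tikv.toml \
                  --pd="192.168.199.113:2379" \
                  --data-dir=tikv1 \
                  --log-file=tikv.log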

For TiKV tuning, see: TiKV performance parameter tuning.

Note: if you start the cluster with nohup in production, put the startup command inside a script; otherwise, when the shell exits, the nohup-started process can receive a signal and exit with it (see "process exits unexpectedly").

Installing the TiDB monitoring and alerting stack

Node layout for the monitoring and alerting deployment:

Name Host IP Services
node1 192.168.199.113 node_export, pushgateway, Prometheus, Grafana
node2 192.168.199.114 node_export
node3 192.168.199.115 node_export
node4 192.168.199.116 node_export

Fetch the binaries

# download the tarballs
wget https://github.com/prometheus/prometheus/releases/download/v1.5.2/prometheus-1.5.2.linux-amd64.tar.gz
wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.2/node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
wget https://grafanarel.s3.amazonaws.com/builds/grafana-4.1.2-1486989747.linux-x64.tar.gz
wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-amd64.tar.gz

# unpack
tar -xzf prometheus-1.5.2.linux-amd64.tar.gz
tar -xzf node_exporter-0.14.0-rc.2.linux-amd64.tar.gz
tar -xzf grafana-4.1.2-1486989747.linux-x64.tar.gz
tar -xzf pushgateway-0.3.1.linux-amd64.tar.gz

Start the monitoring services

Start node_exporter on node1, node2, node3, and node4:

$cd node_exporter-0.14.0-rc.2.linux-amd64

# start the node_exporter service
./node_exporter --web.listen-address=":9100" \
    --log.level="info"

Start pushgateway on node1:

$cd pushgateway-0.3.1.linux-amd64

# start the pushgateway service
./pushgateway \
    --log.level="info" \
    --web.listen-address=":9091"

Start Prometheus on node1:

$cd prometheus-1.5.2.linux-amd64

# edit the config file

vi prometheus.yml

...
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.
  # scrape_timeout is set to the global default (10s).
  labels:
    cluster: 'test-cluster'
    monitor: "prometheus"

scrape_configs:
  - job_name: 'overwritten-cluster'
    scrape_interval: 3s
    honor_labels: true # don't overwrite job & instance labels
    static_configs:
      - targets: ['192.168.199.113:9091']

  - job_name: "overwritten-nodes"
    honor_labels: true # don't overwrite job & instance labels
    static_configs:
    - targets:
      - '192.168.199.113:9100'
      - '192.168.199.114:9100'
      - '192.168.199.115:9100'
      - '192.168.199.116:9100'
...

# start Prometheus:
./prometheus \
    --config.file="/data1/tidb/deploy/conf/prometheus.yml" \
    --web.listen-address=":9090" \
    --web.external-url="http://192.168.199.113:9090/" \
    --log.level="info" \
    --storage.local.path="/data1/tidb/deploy/data.metrics" \
    --storage.local.retention="360h0m0s"

Start Grafana on node1:

cd grafana-4.1.2-1486989747.linux-x64

# edit the config file

vi grafana.ini

...

# The http port  to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain = 192.168.199.113

...

# start the Grafana service
./grafana-server \
    --homepath="/data1/tidb/deploy/opt/grafana" \
    --config="/data1/tidb/deploy/opt/grafana/conf/grafana.ini"
deepin linux stalonetray wine tray unresponsive

Fixing unresponsive Wine QQ / TIM / WeChat tray icons on deepin = =

stalonetray fixes Wine apps' broken tray icons on deepin

The commands~~~

sudo apt install stalonetray

Then

nano ~/.stalonetrayrc

And then~~

# background 
background "#777777"

# decorations
# possible values: all, title, border, none
decorations none

# display # as usual
# dockapp_mode # set dockapp mode, which can be either simple (for
# e.g. OpenBox, wmaker for WindowMaker, or none
# (default). NEW in 0.8.
dockapp_mode none
# fuzzy_edges [] # enable fuzzy edges and set fuzziness level. level
# can be from 0 (disabled) to 3; this setting works
# with tinting and/or transparent and/or pixmap
# backgrounds
fuzzy_edges 0

# geometry 
geometry 1x1+0+0

# grow_gravity # one of: N, S, E, W, NW, NE, SW, SE; the direction the tray grows in
grow_gravity NW

# icon_gravity # corner that icons gravitate to: NW, NE, SW, SE
icon_gravity NW

# icon_size # specifies the icon size
icon_size 24

# log_level # controls the amount of logging output, level can
# be err (default), info, or trace (enabled only
# when stalonetray configured with --enable-debug)
# NEW in 0.8.
log_level err

# kludges kludge[,kludge] # enable specific kludges to work around
# non-conforming WMs and/or stalonetray bugs.
# NEW in 0.8. Argument is a
# comma-separated list of
# * fix_window_pos - fix tray window position on
# erroneous moves by WM
# * force_icon_size - ignore resize events on all
# icons; force their size to be equal to
# icon_size
# * use_icon_hints - use icon window hints to
# determine icon size

# max_geometry # maximal tray dimensions; 0 in width/height means
# no limit
max_geometry 0x0

# no_shrink [] # disables shrink-back mode
no_shrink false

# parent_bg [] # whether to use pseudo-transparency
# (looks better when reparented into smth like FvwmButtons)
parent_bg false

# pixmap_bg <path_to_xpm> # use pixmap from specified xpm file for (tiled) background
# pixmap_bg /home/user/.stalonetraybg.xpm

# scrollbars # enable/disable scrollbars; mode is either
# vertical, horizontal, all or none (default)
# NEW in 0.8.
scrollbars none

# scrollbars-size # scrollbars step in pixels; default is slot_size / 4
# scrollbars-step 8

# scrollbars-step # scrollbars step in pixels; default is slot_size / 2
# scrollbars-step 32

# slot_size # specifies size of icon slot, defaults to
# icon_size NEW in 0.8.

# skip_taskbar [] # hide tray`s window from the taskbar
skip_taskbar true

# sticky [] # make a tray`s window sticky across the
# desktops/pages
sticky true

# tint_color # set tinting color
tint_color white

# tint_level # set tinting level; level ranges from 0 (disabled)
# to 255
tint_level 0

# transparent [] # whether to use root-transparency (background
# image must be set with Esetroot or compatible utility)
transparent false

# vertical [] # whether to use vertical layout (horizontal layout
# is used by default)
vertical false

# window_layer # set the EWMH-compatible window layer; one of:
# bottom, normal, top
window_layer normal

# window_strut # enable/disable window struts for tray window (to
# avoid converting of tray window by maximized
# windows); mode defines to which screen border tray
# will be attached; it can be either top, bottom,
# left, right, none or auto (default)
window_strut auto

# window_type # set the EWMH-compatible window type; one of:
# desktop, dock, normal, toolbar, utility
window_type dock

# xsync [] # whether to operate on X server synchronously (SLOOOOW)
xsync false

= = Still kind of a pain, though.

linux master password postgresql replication install config

From getting started to almost giving up: a minimal PostgreSQL install plus primary/standby setup

PostgreSQL install and replication config

Install

After unpacking:

/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test

Account setup

Create users

CREATE USER oschina WITH PASSWORD 'oschina123';
CREATE ROLE adam WITH LOGIN CREATEDB PASSWORD '654321';  -- remember the LOGIN attribute

Change a password

ALTER ROLE davide WITH PASSWORD 'hu8jmn3';

Let a role create other roles and new databases:

ALTER ROLE miriam CREATEROLE CREATEDB;

List all databases

psql -l 

Drop a database

dropdb mydb

Connect to a database

psql mydb

Create a database

createdb mydb

Configure replication

1. On the primary, create a replication account

CREATE USER replica replication LOGIN CONNECTION LIMIT 3 ENCRYPTED PASSWORD 'replica';

2. postgresql.conf on the primary

wal_level = hot_standby  # makes this host the WAL-shipping primary

max_wal_senders = 32 # max number of streaming-replication connections; roughly the number of standbys
wal_keep_segments = 256 # max number of WAL (xlog) segments retained for streaming replication
wal_sender_timeout = 60s # timeout for the primary when sending data
max_connections = 100 # note: the standby's max_connections must be greater than the primary's

3. pg_hba.conf

host    all     all     0.0.0.0/0       md5

4. Take a base backup onto the standby:

pg_basebackup -F p --progress -D /data/replica -h 192.168.1.12 -p 5432 -U replica --password

5. Copy recovery.conf into place

6. Its contents:

standby_mode = on  # marks this machine as a standby
primary_conninfo = 'host=10.12.12.10 port=5432 user=replica password=replica'  # how this machine reaches the primary

recovery_target_timeline = 'latest' # replicate up to the latest data on the newest timeline

postgresql.conf (on the standby)

max_connections = 1000 # read-mostly standbys usually want a larger connection limit than the primary

hot_standby = on  # this machine is not only archiving data, it also serves queries
max_standby_streaming_delay = 30s # maximum delay for streaming replication
wal_receiver_status_interval = 1s  # the longest interval between standby status reports to the primary (it also reports on every copy)
hot_standby_feedback = on # whether to feed replication errors back to the primary

Verifying the result

A wal sender process on the primary, a wal receiver process on the standby.

On the primary:

select * from pg_stat_replication;
pid              | 8467       # the wal sender process
usesysid         | 44673      # id of the replication user
usename          | replica    # name of the replication user
application_name | walreceiver  
client_addr      | 10.12.12.12 # address of the replication client
client_hostname  |
client_port      | 55804  # port of the replication client
backend_start    | 2015-05-12 07:31:16.972157+08  # when this replication connection was established
backend_xmin     |
state            | streaming  # sync state: startup = connecting, catchup = catching up, streaming = streaming
sent_location    | 3/CF123560 # WAL position the master has sent
write_location   | 3/CF123560 # WAL position the slave has received
flush_location   | 3/CF123560 # WAL position the slave has flushed to disk
replay_location  | 3/CF123560 # WAL position the slave has replayed into the database
sync_priority    | 0  # priority for synchronous replication
                      0 = async; 1 and up = sync (smaller number, higher priority)
sync_state       | async  # one of async, sync, or potential (currently async but may be promoted to sync)
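On PostgreSQL 9.6 and later, the standby side can be sanity-checked too (the view does not exist on older releases):

select * from pg_stat_wal_receiver;   -- run this on the standby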

Finally, a few gotchas

  • When started via systemd, the config files may live under /etc

  • Open up the client IPs in pg_hba.conf

  • Mind the privileges you grant when creating users

  • Remember to chown the data directory to the postgres user and group

ab aiohttp linux python python3.6 python3.7 torando uvloop

Load testing aiohttp / tornado / uvloop on Python 3.7 and 3.6

aiohttp, tornado, and uvloop load-tested under Python 3.7

The same ab command throughout:

ab -n 10000 -c 1000 "http://0.0.0.0:8080/"

Results (requests/second):

aiohttp
    asyncio
        3.7 : 4000
        3.6 : 3300
    uvloop
        3.7 : 4300
        3.6 : 4700
tornado
    ioloop
        3.7 : 3100
        3.6 : 1300
    uvloop
        3.7 : 1700
        3.6 : 1700

aiohttp

from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])

web.run_app(app)

aiohttp + uvloop

from aiohttp import web
import uvloop
import asyncio
async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/{name}', handle)])
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()
app._set_loop(loop)

web.run_app(app)

tornado

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()

tornado + uvloop

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

import uvloop
import asyncio

from tornado.platform.asyncio import BaseAsyncIOLoop


class TornadoUvloop(BaseAsyncIOLoop):

    def initialize(self, **kwargs):
        loop = uvloop.new_event_loop()
        try:
            super(TornadoUvloop, self).initialize(
                loop, close_loop=True, **kwargs)
        except Exception:
            loop.close()
            raise

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8080)
    tornado.ioloop.IOLoop.configure(TornadoUvloop)
    tornado.ioloop.IOLoop.current().start()

Dependencies

aiohttp==3.3.2
async-timeout==3.0.0
attrs==18.1.0
chardet==3.0.4
idna==2.7
multidict==4.3.1
tornado==5.0.2
uvloop==0.10.2
yarl==1.2.6

It looks like asyncio got a big speed boost in Python 3.7.

3.7.0 anaconda conda linux miniconda python

Comfortably enjoying Python 3.7.0 on Linux

On switching to Python 3.7 at minimal cost

Python 3.7.0 is out and shallow posts about it are everywhere.

So what's the fastest way to actually try it?

Download Miniconda

https://mirrors.ustc.edu.cn/anaconda/miniconda/

Install

bash miniconda-xxxxxxxx-.sh

Activate the base environment

source ~/miniconda/bin/activate

Create a Python 3.7.0 environment

conda create --name python37 python=3.7

Switch to 3.7.0

conda activate python37
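To confirm the switch took:

python -V   # should report Python 3.7.x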
deepin linux nopassword opensuse sudo sudoer

Configuring sudo's default password timeout

Configuring sudo's default password timeout

I had been using NOPASSWD = =, and then decided that probably wasn't great.

Alright then.

sudoers can set how long a sudo password entry stays valid after the first prompt.

visudo    # edits /etc/sudoers safely

Add this line:

Defaults        env_reset,timestamp_timeout=30


# the 30 means 30 minutes

Perfect!

Tested and working.

aiohttp asyncio callback linux tornado

aiohttp with a tornado-like code style!!! Perfect!!!

Tornado-style development with aiohttp

I finally sat down and read the aiohttp docs:

http://aiohttp.readthedocs.io/en/stable/index.html

= = And would you believe it!!!

It's there!!!!!

Tornado-flavored class-based views!!! So happy.

from aiohttp import web

class Basic(web.View):
    def out(self, text):
        return web.Response(text=text)

class Index(Basic):
    async def get(self):
        name = self.request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return self.out(text=text)

app = web.Application()
app.add_routes([web.view('/', Index)])

if __name__ == "__main__":
    web.run_app(app)
linux mysql tidb

TiDB user account management

User management in TiDB, which is almost fully MySQL-compatible


title: TiDB user account management category: user guide


TiDB user account management

Usernames and passwords

TiDB stores user accounts in the mysql.user system table. Each account is identified by a username plus a host, and each account can have a password.

Connect to the TiDB server with the MySQL client and log in with a given account and password:

shell> mysql --port 4000 --user xxx --password

With the abbreviated flags:

shell> mysql -P 4000 -u xxx -p

Adding users

There are two ways to add users:

  • Using the standard user-management SQL statements to create users and grant privileges, such as CREATE USER and GRANT.
  • Operating on the privilege tables directly with INSERT, UPDATE, and DELETE.

The first is recommended. Editing the tables directly can easily leave them half-updated, so it is discouraged. Another option is a graphical third-party tool.

The following example creates four accounts with CREATE USER and GRANT:

mysql> CREATE USER 'finley'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'localhost' WITH GRANT OPTION;
mysql> CREATE USER 'finley'@'%' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'finley'@'%' WITH GRANT OPTION;
mysql> CREATE USER 'admin'@'localhost' IDENTIFIED BY 'admin_pass';
mysql> GRANT RELOAD,PROCESS ON *.* TO 'admin'@'localhost';
mysql> CREATE USER 'dummy'@'localhost';

Use SHOW GRANTS to see the privileges granted to a user:

mysql> SHOW GRANTS FOR 'admin'@'localhost';
+-----------------------------------------------------+
| Grants for admin@localhost                          |
+-----------------------------------------------------+
| GRANT RELOAD, PROCESS ON *.* TO 'admin'@'localhost' |
+-----------------------------------------------------+

Removing users

Use the DROP USER statement, for example:

mysql> DROP USER 'jeffrey'@'localhost';

Reserved accounts

TiDB creates a default 'root'@'%' account when the database is initialized.

Setting resource limits

Not supported yet.

Setting passwords

TiDB stores passwords in the mysql.user system database. Only users with the CREATE USER privilege, or with privileges on the mysql database (INSERT to create, UPDATE to modify), can set or change passwords.

CREATE USER can set a password at creation time via IDENTIFIED BY:

CREATE USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';

To change the password of an existing account, use SET PASSWORD FOR or ALTER USER:

SET PASSWORD FOR 'root'@'%' = 'xxx';

or

ALTER USER 'jeffrey'@'localhost' IDENTIFIED BY 'mypass';

More in the official docs: https://github.com/pingcap/docs-cn

linux python queue zbus

Notes on zbus raising "Exception in thread Thread-2" when publishing from Python

Notes on fixing a zbus "Exception in thread Thread-2" error when publishing from Python — in the end, Python package management is what bit me

It worked fine yesterday; today every zbus publish kept raising "Exception in thread Thread-2".

Rolled the code back to yesterday's with git.

Still raised.

Strange...

Changed the directory where zbus stores its index.

No effect.

Created a brand-new venv.

= = That worked.

Went back through pip freeze,

picked out only the packages I actually use,

nuked the venv and regenerated it,

reinstalled.

== Solved. Maddening...

On reflection, the culprit was leftover dependencies from an unclean uninstall.

= = Onward to the next pitfall.

java linux mq python rpc zbus queue

Trying the zbus queue from Python

zbus: a tiny, fast MQ and RPC implementation with HTTP/TCP proxying — open, extensible, multi-language, a system-bus architecture for microservices

A tiny, fast MQ and RPC implementation with HTTP/TCP proxying — open, extensible, multi-language, a system-bus architecture for microservices

I've been planning a public-facing API service lately, and after agonizing over async database drivers,

I suddenly remembered zbus = =

It looks like the lower-cost option.

First, the official demo.

Producer

# import names assumed from the zbus Python client
from zbus import Broker, Producer, Message

broker = Broker('localhost:15555') 

p = Producer(broker) 
p.declare('MyTopic') 

msg = Message()
msg.topic = 'MyTopic'
msg.body = 'hello world'

res = p.publish(msg)

Consumer

# import names assumed from the zbus Python client
from zbus import Broker, Consumer

broker = Broker('localhost:15555')  

def message_handler(msg, client):
    print(msg)

c = Consumer(broker, 'MyTopic')
c.message_handler = message_handler 
c.start()

The consumer blocks; as soon as a message is available it fetches it and invokes the callback.

In testing, the body can be pretty much anything serializable.

So the ingestion pipeline can be structured like this:

tornado -> HTTP API -> push onto the queue

queue <---> consumer --> database

deepin linux deepin-os bluetooth keyboard
arch deepin fedora linux opensuse development

Bandwagon post: developing on Linux

Five years of using Linux day and night

Things to do after installing deepin

No screenshots; this post is meant to carry at least a little substance.

  • First: install bash completion plus git, pv, htop, and the other packages that make the command line comfortable
sudo apt install  bash-completion pv git htop nginx redis mongodb axel vim nano aria2 wget

After installing, a little configuration is needed: enable bash completion.

Bash prompt setup: show the git branch in the prompt.
Disable UTC, enable passwordless sudo, and enable zram if RAM is tight.

This makes daily use a good deal more comfortable.

  • Second: install the development environments

1. Node.js install (link)

2. Python install (link)

3. Python configuration (link)

4. Golang install (link)

5. Ruby install (link)

6. Java install

sudo apt install oracle-java

For IDEs it's the IntelliJ family, of course,

plus the vscode / sublime / atom trio.

  • Browsers

chrome firefox chromium opera yandex vivaldi

  • Video

ffmpeg vlc mpv

  • Games

Minecraft, Steam

On distributions

Nearly every Linux newcomer agonizes over this.

But the difference between distros is only in how much you suffer before you learn.

Don't bring domestic-versus-foreign prejudice into it,

and don't pick one to show off either.

Yeah — once you've learned, they're all much the same.

So, running through them:

deepin — the distro I recommend most.

It needs only a little configuration, assuming the steps above don't strike you as a lot.

Arch takes more configuration — fine if your network is good and your Baidu/Google skills are strong.

ubuntu/opensuse/debian/fedora/centos/

in no particular order:

they all need various third-party repositories added.

In case you don't already know what a repository is,

think of it as installing a few extra app stores.

On office software: QQ and office suites

QQ, TIM, and WPS ship with deepin out of the box.

Other distros have their own workarounds,

but I suspect you don't enjoy that kind of time sink.

What else can Linux do?

Linux is just an ordinary, unremarkable system;

it hands every choice over to you.

Keep a level head, and you can enjoy the fun that this power brings.

basic linux linuxvb vb vf open-source-vb

A fun toy: Gambas, an open-source take on VB

Gambas is an object-oriented dialect of BASIC with an accompanying IDE

Gambas is an object-oriented dialect of BASIC with an accompanying IDE

The pity is that it only runs on unix-like systems,

but it has to be said: it's fun.

Building and installing on Linux

http://gambas.sourceforge.net/zh/main.html#

Download

wget https://gitlab.com/gambas/gambas/-/archive/3.11.3/gambas-3.11.3.tar.bz2

Unpack

tar xvf  gambas-3.11.3.tar.bz2

Build

cd  gambas-3.11.3

./configure --prefix=/home/user/basic

make -j8

make install

Gambas ships with its own app store

where you can grab third-party components and all sorts of demos.

I tried a bunch; nearly all of them run.

jpg linux compression

Compressing JPEGs on Linux

Compressing JPEGs on Linux is dead simple:

jpegoptim  -m50 xxx.jpg

-m50 caps the quality at 50, i.e. lossy compression.
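To sweep a whole directory of JPEGs the same way (plain find, nothing exotic):

find . -name '*.jpg' -exec jpegoptim -m50 {} +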

linux swap performance

Tuning swap performance

Tuning swap performance

The swappiness sysctl parameter expresses the kernel's preference (or aversion) for swap space. It ranges from 0 to 100; a lower value means less swapping, which improves responsiveness on some systems.

/etc/sysctl.d/90-swappiness.conf
vm.swappiness=1
vm.vfs_cache_pressure=50
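Apply it without rebooting and confirm (standard sysctl usage):

sudo sysctl --system
sysctl vm.swappiness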

Priorities

If you have more than one swap file or partition, you can assign each a priority (0 to 32767). The system uses higher-priority swap areas before lower-priority ones. For example, with a faster disk (/dev/sda) and a slower one (/dev/sdb), give the faster device the higher priority. The priority can be set in fstab via the pri parameter:

/dev/sda1 none swap defaults,pri=100 0 0
/dev/sdb2 none swap defaults,pri=10  0 0

Or via swapon's -p (or --priority) flag:

swapon -p 100 /dev/sda1
install linux postgresql best-practice deployment

[Repost] PostgreSQL on Linux: a best-practice deployment manual

PostgreSQL is actually easy to install, but that only makes it usable, not good.

Author

digoal

Date

2016-11-21

Tags

Linux, PostgreSQL, Install, best-practice deployment


Background

Installing a database has always been complicated — Oracle especially; people around me still earn pocket money installing Oracle.

PostgreSQL is actually easy to install, but that only makes it usable, not good. Plenty of users install it with the defaults, run a quick benchmark, see poor numbers, and give up.

The reason is obvious: nothing was tuned.

So that it can run in as many environments as possible, PostgreSQL ships with very conservative defaults, and they usually need tuning — checkpoints, shared buffers, and so on.

This article walks through the best way to deploy PostgreSQL on Linux. Much of it appears in my other posts, but it had never been pulled together into one document.

OS and hardware certification checks

The goal is to confirm the server and OS are certified.

Intel Xeon v3 and v4 CPUs require different minimum RHEL versions;

details: https://access.redhat.com/support/policy/intel

Intel Xeon v3 and v4 CPUs also require different minimum Oracle Linux versions;

details: http://linux.oracle.com/pls/apex/f?p=117:1

First: the Red Hat ecosystem — Red Hat's certification list at https://access.redhat.com/ecosystem

Second: Oracle Linux's hardware certification list for servers and storage at http://linux.oracle.com/pls/apex/f?p=117:1

Install the usual packages

# yum -y install coreutils glib2 lrzsz mpstat dstat sysstat e4fsprogs xfsprogs ntp readline-devel zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make smartmontools flex bison perl-devel perl-ExtUtils* openldap-devel jadetex  openjade bzip2

Configure OS kernel parameters

1. sysctl

Note: some parameters must be sized to the machine's RAM (noted inline).

For the meaning of each, see

"OS kernel parameters no DBA can afford to ignore"

# vi /etc/sysctl.conf

# add by digoal.zhou
fs.aio-max-nr = 1048576
fs.file-max = 76724600
kernel.core_pattern= /data01/corefiles/core_%e_%u_%t_%s.%p         
# create /data01/corefiles beforehand with mode 777; if it is a symlink, chmod the target directory to 777
kernel.sem = 4096 2147483647 2147483646 512000    
# semaphores; inspect with ipcs -l or -u; every 16 processes form a group, and each group needs 17 semaphores
kernel.shmall = 107374182      
# limit on the combined size of all shared memory segments (suggest 80% of RAM)
kernel.shmmax = 274877906944   
# max size of a single shared memory segment (suggest half of RAM); versions >9.2 use far less shared memory
kernel.shmmni = 819200         
# how many shared memory segments may exist; each PG cluster needs at least 2
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 262144       
# The default setting of the socket receive buffer in bytes.
net.core.rmem_max = 4194304          
# The maximum receive socket buffer size in bytes
net.core.wmem_default = 262144       
# The default setting (in bytes) of the socket send buffer.
net.core.wmem_max = 4194304          
# The maximum send socket buffer size in bytes.
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_keepalive_intvl = 20
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_mem = 8388608 12582912 16777216
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1    
# enable SYN cookies: when the SYN backlog overflows, handle connections via cookies to fend off small SYN floods
net.ipv4.tcp_timestamps = 1    
# reduces time_wait
net.ipv4.tcp_tw_recycle = 0    
# 1 enables fast recycling of TIME-WAIT sockets, but NAT clients may then fail to connect; keep it off on servers
net.ipv4.tcp_tw_reuse = 1      
# allow TIME-WAIT sockets to be reused for new TCP connections
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.nf_conntrack_max = 1200000
net.netfilter.nf_conntrack_max = 1200000
vm.dirty_background_bytes = 409600000       
# once dirty pages reach this, the background flusher (pdflush or similar) writes out pages older than (dirty_expire_centisecs/100) seconds
vm.dirty_expire_centisecs = 3000             
# dirty pages older than this are flushed to disk; 3000 means 30 seconds
vm.dirty_ratio = 95                          
# if background flushing is too slow and dirty pages exceed 95% of RAM, user processes that write (fsync, fdatasync, ...) must flush dirty pages themselves
# this effectively stops user processes from flushing dirty pages; very useful with several instances per host and CGROUP-limited per-instance IOPS
vm.dirty_writeback_centisecs = 100            
# wakeup interval of the background flusher; 100 means 1 second
vm.mmap_min_addr = 65536
vm.overcommit_memory = 0     
# allow a little overcommit at malloc time; 1 pretends memory is always sufficient and suits memory-poor test machines
vm.overcommit_ratio = 90     
# used to compute the allowed memory allocation when overcommit_memory = 2
vm.swappiness = 0            
# disable swapping
vm.zone_reclaim_mode = 0     
# disable NUMA reclaim, or disable NUMA in vmlinux
net.ipv4.ip_local_port_range = 40000 65535    
# range of locally auto-assigned TCP and UDP ports
fs.nr_open=20480000
# per-process upper limit on open file handles

# mind the following parameters
# vm.extra_free_kbytes = 4096000
# vm.min_free_kbytes = 2097152
# on machines with little RAM, do not set the two values above
# vm.nr_hugepages = 66536    
# use huge pages when shared_buffers exceeds 64GB; page size is Hugepagesize in /proc/meminfo
# vm.lowmem_reserve_ratio = 1 1 1
# recommended when RAM exceeds 64GB; otherwise keep the default 256 256 32

2. Apply the settings

sysctl -p

Configure OS resource limits

# vi /etc/security/limits.conf

# if nofile will exceed 1048576, first raise sysctl fs.nr_open to a larger value and apply it; only then can nofile be raised.

* soft    nofile  1024000
* hard    nofile  1024000
* soft    nproc   unlimited
* hard    nproc   unlimited
* soft    core    unlimited
* hard    core    unlimited
* soft    memlock unlimited
* hard    memlock unlimited

Also check the files under /etc/security/limits.d — they override limits.conf.

For an already-running process, check /proc/<pid>/limits, for example:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             11286                11286                processes 
Max open files            1024                 4096                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       11286                11286                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

If you are going to start other processes, exit the shell and log in again first, confirm the ulimit settings have taken effect, and then start them.
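A quick check from the new shell (the numbers should match limits.conf):

ulimit -n   # open files
ulimit -u   # max user processes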

Configure the OS firewall

(Tailor the rules to your workload; here I simply flush them first.)

iptables -F

Configuration example

# Private network ranges
-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/12 -j ACCEPT
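If the database must also be reachable from elsewhere, open its port explicitly; for example, for the PostgreSQL port 1921 used later in this guide:

-A INPUT -p tcp --dport 1921 -j ACCEPT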

selinux

If you have no requirement for it, disabling it is recommended

# vi /etc/sysconfig/selinux 

SELINUX=disabled
SELINUXTYPE=targeted
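The file above only takes effect after a reboot; to stop enforcement immediately on the running system:

setenforce 0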

Disable unnecessary OS services

chkconfig --list|grep on  
Then turn off what you do not need, for example 
chkconfig iscsi off

Deploy the filesystem

Mind SSD partition alignment; it extends drive life and avoids write amplification.

parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 100%
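parted can also verify the alignment of the new partition (a quick sanity check; it prints "1 aligned" when all is well):

parted /dev/sda align-check optimal 1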

Format the partition (if you chose ext4)

mkfs.ext4 /dev/sda1 -m 0 -O extent,uninit_bg -E lazy_itable_init=1 -T largefile -L u01

Recommended ext4 mount options

# vi /etc/fstab

LABEL=u01 /u01     ext4        defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback    0 0

# mkdir /u01
# mount -a

Why data=writeback?


pg_xlog is best placed on a dedicated block device with excellent IOPS.

Set the IO scheduler of SSD drives to deadline

For spinning disks, stay with CFQ; for SSDs, DEADLINE is recommended.

Temporary setting (for the sda disk, say)

echo deadline > /sys/block/sda/queue/scheduler

Permanent setting

Edit the grub file to change the block-device scheduling policy

vi /boot/grub.conf

elevator=deadline

Note: if the host has both spinning disks and SSDs, you can set the matching scheduler per disk from /etc/rc.local, as in the sketch below.
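A minimal /etc/rc.local sketch (the device names sda/sdb are illustrative; adjust them to your layout):

# /etc/rc.local
echo deadline > /sys/block/sda/queue/scheduler   # SSD
echo cfq > /sys/block/sdb/queue/scheduler        # spinning disk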

Disable transparent huge pages and NUMA

Together with the default IO scheduler from above, it looks like this

vi /boot/grub.conf

elevator=deadline numa=off transparent_hugepage=never 
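After rebooting with these parameters, transparent huge pages should report never as the selected mode:

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]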

Compiler

A recent compiler is recommended; for installing gcc 6.2.0, see

《PostgreSQL clang vs gcc 编译》

Once built, the toolchain can simply be copied to other machines.

cd ~
tar -jxvf gcc6.2.0.tar.bz2
tar -jxvf python2.7.12.tar.bz2


# vi /etc/ld.so.conf

/home/digoal/gcc6.2.0/lib
/home/digoal/gcc6.2.0/lib64
/home/digoal/python2.7.12/lib

# ldconfig

Environment variables

# vi ~/env_pg.sh

export PS1="$USER@`/bin/hostname -s`-> "
export PGPORT=$1
export PGDATA=/$2/digoal/pg_root$PGPORT
export LANG=en_US.utf8
export PGHOME=/home/digoal/pgsql9.6
export LD_LIBRARY_PATH=/home/digoal/gcc6.2.0/lib:/home/digoal/gcc6.2.0/lib64:/home/digoal/python2.7.12/lib:$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/home/digoal/gcc6.2.0/bin:/home/digoal/python2.7.12/bin:/home/digoal/cmake3.6.3/bin:$PGHOME/bin:$PATH:.
export DATE=`date +"%Y%m%d%H%M"`
export MANPATH=$PGHOME/share/man:$MANPATH
export PGHOST=$PGDATA
export PGUSER=postgres
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'
unalias vi

icc, clang

To build PostgreSQL with ICC or clang instead, see

《[转载]用intel编译器icc编译PostgreSQL》

《PostgreSQL clang vs gcc 编译》

Build PostgreSQL

Building with USE_NAMED_POSIX_SEMAPHORES is recommended

src/backend/port/posix_sema.c

Creating a semaphore:
named:
mySem = sem_open(semname, O_CREAT | O_EXCL,
(mode_t) IPCProtection, (unsigned) 1);


unnamed:
/*
* PosixSemaphoreCreate
*
* Attempt to create a new unnamed semaphore.
*/
static void
PosixSemaphoreCreate(sem_t * sem)
{
if (sem_init(sem, 1, 1) < 0)
elog(FATAL, "sem_init failed: %m");
}


Removing a semaphore:

#ifdef USE_NAMED_POSIX_SEMAPHORES
/* Got to use sem_close for named semaphores */
if (sem_close(sem) < 0)
elog(LOG, "sem_close failed: %m");
#else
/* Got to use sem_destroy for unnamed semaphores */
if (sem_destroy(sem) < 0)
elog(LOG, "sem_destroy failed: %m");
#endif

Build options

. ~/env_pg.sh 1921 u01

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" ./configure --prefix=/home/digoal/pgsql9.6
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O3 -flto" make install-world

For a development environment where debugging is needed, build like this instead.

cd postgresql-9.6.1
export USE_NAMED_POSIX_SEMAPHORES=1
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" ./configure --prefix=/home/digoal/pgsql9.6 --enable-cassert
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make world -j 64
LIBS=-lpthread CC="/home/digoal/gcc6.2.0/bin/gcc" CFLAGS="-O0 -flto -g -ggdb -fno-omit-frame-pointer" make install-world

Initialize the database cluster

Put pg_xlog on the partition with the best IOPS.

. ~/env_pg.sh 1921 u01
initdb -D $PGDATA -E UTF8 --locale=C -U postgres -X /u02/digoal/pg_xlog$PGPORT

Configure postgresql.conf

The example assumes PostgreSQL 9.6 on a host with 512GB of memory.

Simply append the settings at the end of the file; for duplicated parameters, the last occurrence wins.  

$ vi postgresql.conf

listen_addresses = '0.0.0.0'
port = 1921
max_connections = 5000
unix_socket_directories = '.'
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 10
shared_buffers = 128GB                      # 1/4 of host memory
maintenance_work_mem = 2GB                  # min( 2G, (1/4 of host memory)/autovacuum_max_workers )
dynamic_shared_memory_type = posix
vacuum_cost_delay = 0
bgwriter_delay = 10ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 10.0
bgwriter_flush_after = 0                    # with good IO there is no need for smoothed write-back scheduling
max_worker_processes = 128
max_parallel_workers_per_gather = 0         #  set above 1 to use parallel query; staying below host cores - 2 is advisable
old_snapshot_threshold = -1
backend_flush_after = 0  # 0 with good IO; otherwise 128~256kB is advisable
wal_level = replica
synchronous_commit = off
full_page_writes = on   # may be turned off on block devices whose atomic write unit exceeds BLOCK_SIZE (when aligned), or on copy-on-write filesystems.
wal_buffers = 1GB       # min( 2047MB, shared_buffers/32 ) = 512MB
wal_writer_delay = 10ms
wal_writer_flush_after = 0  # 0 with good IO; otherwise 128~256kB is advisable
checkpoint_timeout = 30min  # avoid frequent checkpoints, or XLOG will contain many full-page writes (when full_page_writes=on).
max_wal_size = 256GB       # 2x shared_buffers is advisable
min_wal_size = 64GB        # max_wal_size/4
checkpoint_completion_target = 0.05          # with good disks, let checkpoints finish quickly; recovery then reaches a consistent state quickly too. Otherwise 0.5~0.9 is advisable.
checkpoint_flush_after = 0                   # 0 with good IO; otherwise 128~256kB is advisable
archive_mode = on
archive_command = '/bin/date'      #  change later, e.g. 'test ! -f /disk1/digoal/arch/%f && cp %p /disk1/digoal/arch/%f'
max_wal_senders = 8
random_page_cost = 1.3  # with good IO, the cost difference between random and sequential scans is small
parallel_tuple_cost = 0
parallel_setup_cost = 0
min_parallel_relation_size = 0
effective_cache_size = 300GB                          # judgment call: host memory minus session RSS, shared_buffers and autovacuum workers; the remainder is OS cache.
force_parallel_mode = off
log_destination = 'csvlog'
logging_collector = on
log_truncate_on_rotation = on
log_checkpoints = on
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_timezone = 'PRC'
vacuum_defer_cleanup_age = 0
hot_standby_feedback = off                             # keep it off, so long transactions on a standby cannot prevent the primary from vacuuming and bloating it.
max_standby_archive_delay = 300s
max_standby_streaming_delay = 300s
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 16                            # can be higher with many cores and good IO, but note the 16 * autovacuum memory cost, so plenty of RAM is needed too.  
autovacuum_naptime = 45s                               # do not run it too frequently, or vacuum will generate a lot of XLOG.
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 1600000000
autovacuum_multixact_freeze_max_age = 1600000000
vacuum_freeze_table_age = 1500000000
vacuum_multixact_freeze_table_age = 1500000000
datestyle = 'iso, mdy'
timezone = 'PRC'
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
default_text_search_config = 'pg_catalog.english'
shared_preload_libraries='pg_stat_statements'

## If the database has a very large number of small files (hundreds of thousands of tables plus indexes, all actually accessed), allow more FDs so processes need not keep opening and closing files.
## But never exceed the ulimit -n (open files) configured in the earlier section.
max_files_per_process=655360
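Note that shared_preload_libraries only loads the module; to actually query the statistics, the extension still has to be created once per database after the server starts:

psql -c 'CREATE EXTENSION pg_stat_statements;'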

Configure pg_hba.conf

Block unwanted access, open only what is allowed, and be sure to require password authentication.

$ vi pg_hba.conf

host replication xx 0.0.0.0/0 md5  # streaming replication

host all postgres 0.0.0.0/0 reject # refuse superuser logins over the network
host all all 0.0.0.0/0 md5  # everyone else logs in with a password
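Changes to pg_hba.conf take effect on a reload; no restart is needed:

pg_ctl reload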

Start the database

pg_ctl start

Done: your PostgreSQL database is essentially deployed, and you can play with it happily.


arangodb linux mysql nosql distributed

Using arangodb-php

ArangoDB is an open-source, distributed, native multi-model database

ArangoDB is an open-source, distributed, native multi-model database (Apache 2 license). The idea: one engine, one query language, one database technology with multiple data models, to give projects maximum flexibility while simplifying the stack, operations, and cost.

  1. Multiple data models: use document, graph, key-value, or any combination of them
  2. Convenient querying: the SQL-like AQL, plus REST and other interfaces
  3. Ruby and JS extensions: no language barrier, so the same language can run from front end to back end
  4. High performance and a small footprint: ArangoDB claims to be faster than other NoSQL stores while using less space
  5. Easy to use: up and running in seconds, with a web UI for administration
  6. Open source and free: Apache licensed

There is not much documentation for arangodb-php yet,

and the official arangodb-php examples are not very clear either, so here is a quick attempt at simple CRUD operations.

/**
 * Created by PhpStorm.
 * User: free
 * Date: 17-7-28
 * Time: 22:05
 */
// Usage:
//$connection = new arango();
//
//$id = new ArangoDocumentHandler($connection->c);
//
//
//$data = $id->get('user', 'aaaa'); // returns JSON; convert to an array before working with it


//composer require triagens/arangodb


//require 'vendor/autoload.php';

use triagens\ArangoDb\Collection as ArangoCollection;
use triagens\ArangoDb\CollectionHandler as ArangoCollectionHandler;
use triagens\ArangoDb\Connection as ArangoConnection;
use triagens\ArangoDb\ConnectionOptions as ArangoConnectionOptions;
use triagens\ArangoDb\DocumentHandler as ArangoDocumentHandler;
use triagens\ArangoDb\Document as ArangoDocument;
use triagens\ArangoDb\Exception as ArangoException;
use triagens\ArangoDb\Export as ArangoExport;
use triagens\ArangoDb\ConnectException as ArangoConnectException;
use triagens\ArangoDb\ClientException as ArangoClientException;
use triagens\ArangoDb\ServerException as ArangoServerException;
use triagens\ArangoDb\Statement as ArangoStatement;
use triagens\ArangoDb\UpdatePolicy as ArangoUpdatePolicy;

class arango
{
    public $c;   // ArangoConnection handle, set in the constructor

    public function __construct(){
        $connectionOptions = [
            // database name
            ArangoConnectionOptions::OPTION_DATABASE => 'free',
            // server endpoint to connect to
            ArangoConnectionOptions::OPTION_ENDPOINT => 'tcp://127.0.0.1:8529',
            // authorization type to use (currently supported: 'Basic')
            ArangoConnectionOptions::OPTION_AUTH_TYPE => 'Basic',
            // user for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_USER => 'root',
            // password for basic authorization
            ArangoConnectionOptions::OPTION_AUTH_PASSWD => 'free',
            // connection persistence on server. can use either 'Close' (one-time connections) or 'Keep-Alive' (re-used connections)
            ArangoConnectionOptions::OPTION_CONNECTION => 'Keep-Alive',
            // connect timeout in seconds
            ArangoConnectionOptions::OPTION_TIMEOUT => 3,
            // whether or not to reconnect when a keep-alive connection has timed out on server
            ArangoConnectionOptions::OPTION_RECONNECT => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_CREATE => true,
            // optionally create new collections when inserting documents
            ArangoConnectionOptions::OPTION_UPDATE_POLICY => ArangoUpdatePolicy::LAST,
        ];


// turn on exception logging (logs to whatever PHP is configured)
        ArangoException::enableLogging();


        $this->c = new ArangoConnection($connectionOptions);
//        $connect->auth()

    }
}
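A minimal CRUD sketch built on the class above (the collection name user and the field values are illustrative; the calls follow the triagens\ArangoDb DocumentHandler API, exception handling omitted):

$connection = new arango();
$handler = new ArangoDocumentHandler($connection->c);

// create
$doc = ArangoDocument::createFromArray(['name' => 'free', 'age' => 1]);
$id = $handler->save('user', $doc);          // returns the new document's id

// read
$found = $handler->get('user', $id);

// update
$found->set('age', 2);
$handler->update($found);

// delete
$handler->remove($found);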
kv linux nosql redis

ardb: a fun Redis-compatible wheel supporting multiple storage engines

Ardb is a new NoSQL DB service built on persistent key/value storage engines

Ardb is a new NoSQL DB service built on persistent key/value storage engines; it supports complex data structures such as list/set/sorted set/bitset/hash/table and speaks the Redis protocol.

Multiple storage engines are supported

git clone https://github.com/yinqiwen/ardb

storage_engine=rocksdb make
storage_engine=leveldb make
storage_engine=lmdb make
storage_engine=wiredtiger make
storage_engine=perconaft make
storage_engine=forestdb make


then just run make dist

rocksdb: Facebook's flash-oriented storage engine built on leveldb

leveldb: Google's very efficient KV database

lmdb: the embedded storage engine (a library linked into the host program) from the OpenLDAP project

wiredtiger: MongoDB's storage engine

perconaft: Percona's wheel; their tuned database variants are all quite decent

ForestDB: a fast key-value storage engine based on HB+-Trie (a hierarchical B+-tree trie), developed by the Couchbase cache and storage team.

Who knows what went wrong!! One of them failed to build for me!!!!!!

linux nosql cluster

avocadodb/arangodb cluster

An arangodb cluster is formed by several tasks running together.

An arangodb cluster is formed by several tasks running together. arangodb itself neither starts nor monitors these tasks, so it needs some kind of supervisor to launch and watch them.

Configuring a cluster by hand is quite simple:

one agency role, two DB-server roles, one coordinator role.

The parameters each role needs are explained below.

The cluster is wired up in the direction coordinator -> agency -> DB server.

There may be multiple agents and multiple DB servers.

Agency nodes (Agency)

To start an agent, first activate it with the agency.activate parameter.

The number of agency nodes is set with agency.size=3; a single agent also works.

During initialization the agents must find each other, so give them at least one common agency.endpoint, and set agency.my-address to each node's own IP.

Single-agent setup

Configure the cluster parameters as follows

//listening ip
server.endpoint=tcp://0.0.0.0:5001
//turn password authentication off
server.authentication=false 
agency.activate=true 
agency.size=1 
//agency endpoint
agency.endpoint=tcp://127.0.0.1:5001 
agency.supervision=true 

Multi-agent setup

Lead agency node configuration

server.endpoint=tcp://0.0.0.0:5001
//  address the server listens on
agency.my-address=tcp://127.0.0.1:5001
//  address the agent announces
server.authentication=false
//  password authentication off
agency.activate=true
agency.size=3
//   number of agency nodes
agency.endpoint=tcp://127.0.0.1:5001
//   endpoint of the lead agency node
agency.supervision=true

Secondary agency node configuration

server.endpoint=tcp://0.0.0.0:5002
agency.my-address=tcp://127.0.0.1:5002
server.authentication=false
agency.activate=true
agency.size=3
agency.endpoint=tcp://127.0.0.1:5001
agency.supervision=true 

agency.endpoint on every node points at the same ip/port

Coordinator and DB server configuration

DB server configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8529
cluster.my-address=tcp://127.0.0.1:8529
cluster.my-local-info=db1
cluster.my-role=PRIMARY
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Coordinator node configuration

server.authentication=false
server.endpoint=tcp://0.0.0.0:8531
cluster.my-address=tcp://127.0.0.1:8531
cluster.my-local-info=coord1
cluster.my-role=COORDINATOR
cluster.agency-endpoint=tcp://127.0.0.1:5001
cluster.agency-endpoint=tcp://127.0.0.1:5002

Start each of the nodes above with its configuration, as sketched below.
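A sketch of the start commands (one process per role; the configuration file names are illustrative):

arangod --configuration agency.conf
arangod --configuration dbserver.conf
arangod --configuration coordinator.conf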

java leveldb linux nosql rocksdb

Using leveldb and rocksdb from Java

rocksdb was developed on top of leveldb

A leveldb/rocksdb demo in Java

(arangodb's storage engine uses rocksdb, and rocksdb itself was developed on top of leveldb)

rocksdb

package net.oschina.itags.gateway.service;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BaseRocksDb {
    public final static RocksDB rocksDB() throws RocksDBException {

        Options options = new Options().setCreateIfMissing(true);
        RocksDB.loadLibrary();
        RocksDB db=RocksDB.open(options,"./rock");
        return db;
    }
}
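A quick put/get sketch using the handle above (key and value are illustrative; exception handling omitted):

RocksDB db = BaseRocksDb.rocksDB();
db.put("user:1".getBytes(), "panda".getBytes());
byte[] value = db.get("user:1".getBytes());   // null if the key is absent
System.out.println(new String(value));
db.close();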

leveldb

package net.oschina.itags.gateway.service;
import org.iq80.leveldb.*;
import org.iq80.leveldb.impl.Iq80DBFactory;

import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;

public class BaseLevelDb {

public static final DB db() throws IOException {
    boolean cleanup = true;
    Charset charset = Charset.forName("utf-8");
    String path = "./level";

    // init
    DBFactory factory = Iq80DBFactory.factory;
    File dir = new File(path);
    // if the data does not need to survive restarts, wipe the old data under path on every start
    if(cleanup) {
        factory.destroy(dir,null);  // removes every file in the directory
    }
    Options options = new Options().createIfMissing(true);
    // open a fresh db
    DB db = factory.open(dir,options);
  return db;
}
}
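The same round trip against LevelDB, using the library's byte helpers (exception handling omitted):

DB db = BaseLevelDb.db();
db.put(Iq80DBFactory.bytes("user:1"), Iq80DBFactory.bytes("panda"));
System.out.println(Iq80DBFactory.asString(db.get(Iq80DBFactory.bytes("user:1"))));
db.close();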
linux nosql redis persistence

redis persistence

redis persistence

redis RDB snapshots

save   # blocking, synchronous save

bgsave # asynchronous save in a background child

# automatic saves
save  5     1       # run bgsave if the dataset received at least 1 change within 5 seconds
save  300   10      # run bgsave if the dataset received at least 10 changes within 300 seconds
save  60    10000   # run bgsave if the dataset received at least 10000 changes within 60 seconds

redis AOF persistence

appendonly yes
appendfsync always   # or: everysec / no

With appendfsync set to always, the server writes and syncs the entire aof_buf buffer to the AOF file in every event loop. It is the slowest of the three options, but the safest: even after a crash, AOF persistence loses at most the commands of a single event loop.

With appendfsync set to everysec, the server writes aof_buf to the AOF file in every event loop and additionally syncs the file once per second, with the sync performed by a dedicated thread. It is fast enough, and after a crash the database loses at most one second of commands.

With appendfsync set to no, the server writes aof_buf to the AOF file in every event loop but never syncs it, leaving the timing to the operating system. Writing is fastest this way, but a single sync takes the longest of the three modes, and after a crash the server loses every write command since the last sync of the AOF file.

leveldb linux nosql redis ssdb

ssdb: Redis-compatible persistent storage

A high-performance NoSQL database with rich data structures, meant to replace Redis.

A high-performance NoSQL database with rich data structures, meant to replace Redis.

Features

  • A Redis alternative with 100x the capacity of Redis
  • Built on LevelDB with network support, written in C/C++
  • Redis API compatible, works with Redis clients
  • Suited to collection data such as list, hash, zset...
  • Client APIs for C++, PHP, Python, Java, Go
  • Persistent queue service
  • Master-slave replication, load balancing

Install ssdb (linux)

wget --no-check-certificate https://github.com/ideawu/ssdb/archive/master.zip
unzip master
cd ssdb-master
make
# optional, install ssdb in /usr/local/ssdb
sudo make install

Start it

# start master
./ssdb-server ssdb.conf

# or start as daemon
./ssdb-server -d ssdb.conf

Using it from PHP

require_once('SSDB.php');
$ssdb = new SimpleSSDB('127.0.0.1', 8888);
$resp = $ssdb->set('key', '123');
$resp = $ssdb->get('key');
echo $resp; // output: 123
date linux ntp

NTP time synchronization on linux

NTP time synchronization on linux

Install ntp (on some distributions the package/service is named ntpd)
systemctl start ntp    # or ntpd
systemctl enable ntp   # or ntpd

timedatectl set-timezone GMT
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
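Verify the timezone and that NTP synchronization is active:

timedatectl status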
hostname hosts linux

Changing the hostname online on linux

Changing the hostname online on linux

1. To change only the running hostname:


hostname newHostname
Note that this only lasts for the current boot; after a reboot the old hostname comes back.

2. To change the hostname permanently:


vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=xxx

Edit the HOSTNAME entry; this method, however, only takes effect after a reboot.

3. So, to change the hostname permanently without rebooting, combine the two methods above: the change then applies immediately and also survives reboots.
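On systemd-based distributions the same result comes in one step, both immediate and persistent:

hostnamectl set-hostname newHostname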
git linux passwordless

Passwordless git logins

Passwordless git logins

--global              use the global config file
--system              use the system-level config file
--local               use the repository-level config file
git config --local credential.helper store
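After that, the next remote operation asks for the password once and then caches it, in plain text, in ~/.git-credentials:

git pull   # prompts once, then remembers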
centos linux systemd server firewall

Using firewalld on CentOS

systemctl is the main service-management tool on CentOS 7; it merges the roles of the old service and chkconfig tools

systemctl is the main service-management tool on CentOS 7; it merges the roles of the old service and chkconfig tools
Start a service: systemctl start firewalld.service
Stop a service: systemctl stop firewalld.service
Restart a service: systemctl restart firewalld.service
Show a service's status: systemctl status firewalld.service
Enable a service at boot: systemctl enable firewalld.service
Disable a service at boot: systemctl disable firewalld.service
Check whether a service starts at boot: systemctl is-enabled firewalld.service
List enabled services: systemctl list-unit-files|grep enabled
List services that failed to start: systemctl --failed

Configuring firewall-cmd

Show version: firewall-cmd --version
Show help: firewall-cmd --help
Show state: firewall-cmd --state
List all open ports: firewall-cmd --zone=public --list-ports
Reload the firewall rules: firewall-cmd --reload
Show active zones: firewall-cmd --get-active-zones
Show the zone of an interface: firewall-cmd --get-zone-of-interface=eth0
Drop all packets (panic mode): firewall-cmd --panic-on
Leave panic mode: firewall-cmd --panic-off
Check panic mode: firewall-cmd --query-panic

So how do you open a port?
Add it:
firewall-cmd --zone=public --add-port=7676/tcp --permanent    # --permanent persists the rule; without it, the rule is lost after a restart
Reload:
firewall-cmd --reload
Query:
firewall-cmd --zone=public --query-port=80/tcp
Remove:
firewall-cmd --zone=public --remove-port=80/tcp --permanent
linux postgresql kernel parameters

[Repost] OS kernel parameters no DBA can afford to ignore

OS kernel parameters no DBA can afford to ignore

Author

digoal

Date

2016-08-03

Tags

PostgreSQL , kernel parameters , Linux


Background

To accommodate the widest range of hardware, operating systems ship with very forgiving defaults.

Left untuned, those values may not suit HPC workloads, or even slightly better hardware.

They can keep good hardware from performing, and may even hamper certain applications, databases in particular.

OS kernel parameters a database cares about

A host with 512GB of memory is used as the example.

1.

Parameter

fs.aio-max-nr  

Supported systems

CentOS 6, 7       

Description

aio-nr & aio-max-nr:    
.  
aio-nr is the running total of the number of events specified on the    
io_setup system call for all currently active aio contexts.    
.  
If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN.    
.  
Note that raising aio-max-nr does not result in the pre-allocation or re-sizing    
of any kernel data structures.    
.  
aio-nr & aio-max-nr:    
.  
aio-nr shows the current system-wide number of asynchronous io requests.    
.  
aio-max-nr allows you to change the maximum value aio-nr can grow to.    

Recommended setting

fs.aio-max-nr = 1xxxxxx  
.  
PostgreSQL and Greenplum never call io_setup to create aio contexts, so this need not be set.    
Oracle, if it is to use aio, requires it.    
There is no harm in setting it anyway; if asynchronous IO is adopted later, the setting will not need changing.   

2.

Parameter

fs.file-max  

Supported systems

CentOS 6, 7       

Description

file-max & file-nr:    
.  
The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate.   
.  
When you get lots of error messages about running out of file handles,   
you might want to increase this limit.    
.  
Historically, the kernel was able to allocate file handles dynamically,   
but not to free them again.     
.  
The three values in file-nr denote :      
the number of allocated file handles ,     
the number of allocated but unused file handles ,     
the maximum number of file handles.     
.  
Linux 2.6 always reports 0 as the number of free    
file handles -- this is not an error, it just means that the    
number of allocated file handles exactly matches the number of    
used file handles.    
.  
Attempts to allocate more file descriptors than file-max are reported with printk,   
look for "VFS: file-max limit <number> reached".    

Recommended setting

fs.file-max = 7xxxxxxx  
.  
PostgreSQL manages a VFS of its own: truly open FDs are mapped onto kernel file opens and closes, so far fewer file handles are actually needed.     
See the max_files_per_process parameter.     
Assume 1GB of memory carries 100 connections and each connection opens 1000 files: one PG instance then opens 100,000 files, and a 512GB host could run 500 PG instances, i.e. 50 million file handles.     
The setting above is more than enough.     

3.

Parameter

kernel.core_pattern  

Supported systems

CentOS 6, 7       

Description

core_pattern:    
.  
core_pattern is used to specify a core dumpfile pattern name.    
. max length 128 characters; default value is "core"    
. core_pattern is used as a pattern template for the output filename;    
certain string patterns (beginning with '%') are substituted with    
their actual values.    
. backward compatibility with core_uses_pid:    
If core_pattern does not include "%p" (default does not)    
and core_uses_pid is set, then .PID will be appended to    
the filename.    
. corename format specifiers:    
%<NUL>  '%' is dropped    
%%      output one '%'    
%p      pid    
%P      global pid (init PID namespace)    
%i      tid    
%I      global tid (init PID namespace)    
%u      uid    
%g      gid    
%d      dump mode, matches PR_SET_DUMPABLE and    
/proc/sys/fs/suid_dumpable    
%s      signal number    
%t      UNIX time of dump    
%h      hostname    
%e      executable filename (may be shortened)    
%E      executable path    
%<OTHER> both are dropped    
. If the first character of the pattern is a '|', the kernel will treat    
the rest of the pattern as a command to run.  The core dump will be    
written to the standard input of that program instead of to a file.    

Recommended setting

kernel.core_pattern = /xxx/core_%e_%u_%t_%s.%p    
.  
The directory needs mode 777; if it is a symlink, the real directory needs 777  
mkdir /xxx  
chmod 777 /xxx  
Leave it plenty of space  

4.

Parameter

kernel.sem   

Supported systems

CentOS 6, 7       

Description

kernel.sem = 4096 2147483647 2147483646 512000    
.  
4096: semaphores per set (>=17; PostgreSQL groups every 16 processes into a set needing 17 semaphores),     
2147483647: total semaphores system-wide (2^31-1, and greater than 4096*512000),     
2147483646: max operations per semop() call (2^31-1),     
512000: number of semaphore sets (assuming 100 connections per GB, 512GB carries 51200 connections; counting other processes, > 51200*2/16 is ample)     
.  
# sysctl -w kernel.sem="4096 2147483647 2147483646 512000"    
.  
# ipcs -s -l    
------ Semaphore Limits --------    
max number of arrays = 512000    
max semaphores per array = 4096    
max semaphores system wide = 2147483647    
max ops per semop call = 2147483646    
semaphore max value = 32767    

Recommended setting

kernel.sem = 4096 2147483647 2147483646 512000    
.  
4096 fits most scenarios, and larger does no harm; the key point is that 512000 arrays is also sufficient.    

5.

Parameter

kernel.shmall = 107374182    
kernel.shmmax = 274877906944    
kernel.shmmni = 819200    

Supported systems

CentOS 6, 7        

Description

Assume the host has 512GB of memory.    
.  
shmmax: max size of a single shared memory segment, 256GB (half of host memory, in bytes)      
shmall: max total of all shared memory segments (80% of host memory, in PAGEs)      
shmmni: up to 819200 shared memory segments may be created (each database instance needs 2 at startup; dynamically created segments may raise the demand)     
.  
# getconf PAGE_SIZE    
4096    
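The recommended values below follow directly from that page size (worked out for the 512GB host):

shmmax = 512GB / 2 = 274877906944 bytes
shmall = 512GB * 80% / 4096 bytes per page = 107374182 pages (rounded down)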

Recommended setting

kernel.shmall = 107374182    
kernel.shmmax = 274877906944    
kernel.shmmni = 819200    
.  
In 9.2 and earlier, the database demands a lot of shared memory at startup; account for the following  
Connections:    (1800 + 270 * max_locks_per_transaction) * max_connections  
Autovacuum workers: (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers  
Prepared transactions:  (770 + 270 * max_locks_per_transaction) * max_prepared_transactions  
Shared disk buffers:    (block_size + 208) * shared_buffers  
WAL buffers:    (wal_block_size + 8) * wal_buffers  
Fixed space requirements:   770 kB  
.  
The recommendations above were derived for pre-9.2 versions, but they apply to later releases as well.  

6.

Parameter

net.core.netdev_max_backlog  

Supported systems

CentOS 6, 7     

Description

netdev_max_backlog    
------------------    
Maximum number  of  packets,  queued  on  the  INPUT  side,    
when the interface receives packets faster than kernel can process them.    

Recommended setting

net.core.netdev_max_backlog=1xxxx    
.  
The longer the INPUT queue, the more processing it costs; if iptables is in use, raise this value.    

7.

Parameter

net.core.rmem_default  
net.core.rmem_max  
net.core.wmem_default  
net.core.wmem_max  

Supported systems

CentOS 6, 7     

Description

rmem_default    
------------    
The default setting of the socket receive buffer in bytes.    
.  
rmem_max    
--------    
The maximum receive socket buffer size in bytes.    
.  
wmem_default    
------------    
The default setting (in bytes) of the socket send buffer.    
.  
wmem_max    
--------    
The maximum send socket buffer size in bytes.    

Recommended setting

net.core.rmem_default = 262144    
net.core.rmem_max = 4194304    
net.core.wmem_default = 262144    
net.core.wmem_max = 4194304    

8.

Parameter

net.core.somaxconn   

Supported systems

CentOS 6, 7        

Description

somaxconn - INTEGER    
Limit of socket listen() backlog, known in userspace as SOMAXCONN.    
Defaults to 128.    
See also tcp_max_syn_backlog for additional tuning for TCP sockets.    

Recommended setting

net.core.somaxconn=4xxx    

9.

Parameter

net.ipv4.tcp_max_syn_backlog  

Supported systems

CentOS 6, 7         

Description

tcp_max_syn_backlog - INTEGER    
Maximal number of remembered connection requests, which have not    
received an acknowledgment from connecting client.    
The minimal value is 128 for low memory machines, and it will    
increase in proportion to the memory of machine.    
If server suffers from overload, try increasing this number.    

Recommended setting

net.ipv4.tcp_max_syn_backlog=4xxx    
pgpool-II uses this value to queue connections beyond num_init_child,     
so it determines how many connections can wait in the queue.    

10.

Parameter

net.ipv4.tcp_keepalive_intvl=20    
net.ipv4.tcp_keepalive_probes=3    
net.ipv4.tcp_keepalive_time=60     

Supported systems

CentOS 6, 7        

Description

tcp_keepalive_time - INTEGER    
How often TCP sends out keepalive messages when keepalive is enabled.    
Default: 2hours.    
.  
tcp_keepalive_probes - INTEGER    
How many keepalive probes TCP sends out, until it decides that the    
connection is broken. Default value: 9.    
.  
tcp_keepalive_intvl - INTEGER    
How frequently the probes are send out. Multiplied by    
tcp_keepalive_probes it is time to kill not responding connection,    
after probes started. Default value: 75sec i.e. connection    
will be aborted after ~11 minutes of retries.    

Recommended setting

net.ipv4.tcp_keepalive_intvl=20    
net.ipv4.tcp_keepalive_probes=3    
net.ipv4.tcp_keepalive_time=60    
.  
After a connection has been idle for 60 seconds, send a keepalive probe every 20 seconds; after 3 unanswered probes, close it. From first idling to closing takes 120 seconds in total.    

11.

Parameter

net.ipv4.tcp_mem=8388608 12582912 16777216    

Supported systems

CentOS 6, 7    

Description

tcp_mem - vector of 3 INTEGERs: min, pressure, max    
unit: pages    
min: below this number of pages TCP is not bothered about its    
memory appetite.    
.  
pressure: when amount of memory allocated by TCP exceeds this number    
of pages, TCP moderates its memory consumption and enters memory    
pressure mode, which is exited when memory consumption falls    
under "min".    
.  
max: number of pages allowed for queueing by all TCP sockets.    
.  
Defaults are calculated at boot time from amount of available    
memory.    
On a 64GB host, the auto-computed values look like this    
net.ipv4.tcp_mem = 1539615      2052821 3079230    
.  
On a 512GB host, the auto-computed values look like this    
net.ipv4.tcp_mem = 49621632     66162176        99243264    
.  
Letting the OS compute this at boot is also perfectly fine  

Recommended setting

net.ipv4.tcp_mem=8388608 12582912 16777216    
.  
Letting the OS compute this at boot is also perfectly fine  

12.

Parameter

net.ipv4.tcp_fin_timeout  

Supported systems

CentOS 6, 7        

Description

tcp_fin_timeout - INTEGER    
The length of time an orphaned (no longer referenced by any    
application) connection will remain in the FIN_WAIT_2 state    
before it is aborted at the local end.  While a perfectly    
valid "receive only" state for an un-orphaned connection, an    
orphaned connection in FIN_WAIT_2 state could otherwise wait    
forever for the remote to close its end of the connection.    
Cf. tcp_max_orphans    
Default: 60 seconds    

Recommended setting

net.ipv4.tcp_fin_timeout=5    
.  
Speeds up the reclamation of orphaned connections   

13.

Parameter

net.ipv4.tcp_synack_retries  

Supported systems

CentOS 6, 7         

Description

tcp_synack_retries - INTEGER    
Number of times SYNACKs for a passive TCP connection attempt will    
be retransmitted. Should not be higher than 255. Default value    
is 5, which corresponds to 31seconds till the last retransmission    
with the current initial RTO of 1second. With this the final timeout    
for a passive TCP connection will happen after 63seconds.    

Recommended setting

net.ipv4.tcp_synack_retries=2    
.  
Shortens the SYN+ACK retransmission timeout  

14.

Parameter

net.ipv4.tcp_syncookies  

Supported systems

CentOS 6, 7         

Description

tcp_syncookies - BOOLEAN    
Only valid when the kernel was compiled with CONFIG_SYN_COOKIES    
Send out syncookies when the syn backlog queue of a socket    
overflows. This is to prevent against the common 'SYN flood attack'    
Default: 1    
.  
Note, that syncookies is fallback facility.    
It MUST NOT be used to help highly loaded servers to stand    
against legal connection rate. If you see SYN flood warnings    
in your logs, but investigation shows that they occur    
because of overload with legal connections, you should tune    
another parameters until this warning disappear.    
See: tcp_max_syn_backlog, tcp_synack_retries, tcp_abort_on_overflow.    
.  
syncookies seriously violate TCP protocol, do not allow    
to use TCP extensions, can result in serious degradation    
of some services (f.e. SMTP relaying), visible not by you,    
but your clients and relays, contacting you. While you see    
SYN flood warnings in logs not being really flooded, your server    
is seriously misconfigured.    
.  
If you want to test which effects syncookies have to your    
network connections you can set this knob to 2 to enable    
unconditionally generation of syncookies.    

Recommended setting

net.ipv4.tcp_syncookies=1    
.  
Defends against SYN flood attacks   

15.

Parameter

net.ipv4.tcp_timestamps  

Supported systems

CentOS 6, 7         

Description

tcp_timestamps - BOOLEAN    
Enable timestamps as defined in RFC1323.    

Recommended setting

net.ipv4.tcp_timestamps=1    
.  
tcp_timestamps is a TCP extension that timestamps packets to guard against PAWS (Protect Against Wrapped Sequence numbers); it can also improve TCP performance.  

16.

Parameter

net.ipv4.tcp_tw_recycle  
net.ipv4.tcp_tw_reuse  
net.ipv4.tcp_max_tw_buckets  

Supported systems

CentOS 6, 7         

Description

tcp_tw_recycle - BOOLEAN    
Enable fast recycling TIME-WAIT sockets. Default value is 0.    
It should not be changed without advice/request of technical    
experts.    
.  
tcp_tw_reuse - BOOLEAN    
Allow to reuse TIME-WAIT sockets for new connections when it is    
safe from protocol viewpoint. Default value is 0.    
It should not be changed without advice/request of technical    
experts.    
.  
tcp_max_tw_buckets - INTEGER  
Maximal number of timewait sockets held by system simultaneously.  
If this number is exceeded time-wait socket is immediately destroyed  
and warning is printed.   
This limit exists only to prevent simple DoS attacks,   
you _must_ not lower the limit artificially,   
but rather increase it (probably, after increasing installed memory),    
if network conditions require more than default value.   

Recommended setting

net.ipv4.tcp_tw_recycle=0    
net.ipv4.tcp_tw_reuse=1    
net.ipv4.tcp_max_tw_buckets = 2xxxxx    
.  
Do not enable net.ipv4.tcp_tw_recycle and net.ipv4.tcp_timestamps at the same time    

17.

Parameter

net.ipv4.tcp_rmem  
net.ipv4.tcp_wmem  

Supported systems

CentOS 6, 7         

Description

tcp_wmem - vector of 3 INTEGERs: min, default, max    
min: Amount of memory reserved for send buffers for TCP sockets.    
Each TCP socket has rights to use it due to fact of its birth.    
Default: 1 page    
.  
default: initial size of send buffer used by TCP sockets.  This    
value overrides net.core.wmem_default used by other protocols.    
It is usually lower than net.core.wmem_default.    
Default: 16K    
.  
max: Maximal amount of memory allowed for automatically tuned    
send buffers for TCP sockets. This value does not override    
net.core.wmem_max.  Calling setsockopt() with SO_SNDBUF disables    
automatic tuning of that socket's send buffer size, in which case    
this value is ignored.    
Default: between 64K and 4MB, depending on RAM size.    
.  
tcp_rmem - vector of 3 INTEGERs: min, default, max    
min: Minimal size of receive buffer used by TCP sockets.    
It is guaranteed to each TCP socket, even under moderate memory    
pressure.    
Default: 1 page    
.  
default: initial size of receive buffer used by TCP sockets.    
This value overrides net.core.rmem_default used by other protocols.    
Default: 87380 bytes. This value results in window of 65535 with    
default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit    
less for default tcp_app_win. See below about these variables.    
.  
max: maximal size of receive buffer allowed for automatically    
selected receiver buffers for TCP socket. This value does not override    
net.core.rmem_max.  Calling setsockopt() with SO_RCVBUF disables    
automatic tuning of that socket's receive buffer size, in which    
case this value is ignored.    
Default: between 87380B and 6MB, depending on RAM size.    

Recommended setting

net.ipv4.tcp_rmem=8192 87380 16777216    
net.ipv4.tcp_wmem=8192 65536 16777216    
.  
Recommended by many databases; improves network throughput  

18.

Parameter

net.nf_conntrack_max  
net.netfilter.nf_conntrack_max  

Supported systems

CentOS 6    

Description

nf_conntrack_max - INTEGER    
Size of connection tracking table.    
Default value is nf_conntrack_buckets value * 4.    

Recommended setting

net.nf_conntrack_max=1xxxxxx    
net.netfilter.nf_conntrack_max=1xxxxxx    

19.

Parameter

vm.dirty_background_bytes   
vm.dirty_expire_centisecs   
vm.dirty_ratio   
vm.dirty_writeback_centisecs   

Supported systems

CentOS 6, 7        

Description

==============================================================    
.  
dirty_background_bytes    
.  
Contains the amount of dirty memory at which the background kernel    
flusher threads will start writeback.    
.  
Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only    
one of them may be specified at a time. When one sysctl is written it is    
immediately taken into account to evaluate the dirty memory limits and the    
other appears as 0 when read.    
.  
==============================================================    
.  
dirty_background_ratio    
.  
Contains, as a percentage of total system memory, the number of pages at which    
the background kernel flusher threads will start writing out dirty data.    
.  
==============================================================    
.  
dirty_bytes    
.  
Contains the amount of dirty memory at which a process generating disk writes    
will itself start writeback.    
.  
Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be    
specified at a time. When one sysctl is written it is immediately taken into    
account to evaluate the dirty memory limits and the other appears as 0 when    
read.    
.  
Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any    
value lower than this limit will be ignored and the old configuration will be    
retained.    
.  
==============================================================    
.  
dirty_expire_centisecs    
.  
This tunable is used to define when dirty data is old enough to be eligible    
for writeout by the kernel flusher threads.  It is expressed in 100'ths    
of a second.  Data which has been dirty in-memory for longer than this    
interval will be written out next time a flusher thread wakes up.    
.  
==============================================================    
.  
dirty_ratio    
.  
Contains, as a percentage of total system memory, the number of pages at which    
a process which is generating disk writes will itself start writing out dirty    
data.    
.  
==============================================================    
.  
dirty_writeback_centisecs    
.  
The kernel flusher threads will periodically wake up and write `old' data    
out to disk.  This tunable expresses the interval between those wakeups, in    
100'ths of a second.    
.  
Setting this to zero disables periodic writeback altogether.    
.  
==============================================================    

Recommended setting

vm.dirty_background_bytes = 4096000000    
vm.dirty_expire_centisecs = 6000    
vm.dirty_ratio = 80    
vm.dirty_writeback_centisecs = 50    
.  
Reduces how often database processes must flush dirty pages themselves; size dirty_background_bytes according to the actual IOPS capacity and the amount of memory    

20.

Parameter

vm.extra_free_kbytes  

Supported systems

CentOS 6    

Description

extra_free_kbytes    
.  
This parameter tells the VM to keep extra free memory   
between the threshold where background reclaim (kswapd) kicks in,   
and the threshold where direct reclaim (by allocating processes) kicks in.    
.  
This is useful for workloads that require low latency memory allocations    
and have a bounded burstiness in memory allocations,   
for example a realtime application that receives and transmits network traffic    
(causing in-kernel memory allocations) with a maximum total message burst    
size of 200MB may need 200MB of extra free memory to avoid direct reclaim    
related latencies.    
.  
The aim is to have background reclaim start this many extra kbytes earlier than direct reclaim would, so user processes can allocate memory quickly.    

Recommended setting

vm.extra_free_kbytes=4xxxxxx    

21.

Parameter

vm.min_free_kbytes  

Supported systems

CentOS 6, 7         

Description

min_free_kbytes:    
.  
This is used to force the Linux VM to keep a minimum number    
of kilobytes free.  The VM uses this number to compute a    
watermark[WMARK_MIN] value for each lowmem zone in the system.    
Each lowmem zone gets a number of reserved free pages based    
proportionally on its size.    
.  
Some minimal amount of memory is needed to satisfy PF_MEMALLOC    
allocations; if you set this to lower than 1024KB, your system will    
become subtly broken, and prone to deadlock under high loads.    
.  
Setting this too high will OOM your machine instantly.    

Recommended setting

vm.min_free_kbytes = 2xxxxxx    
.  
Keeps the system responsive under high load and lowers the chance of memory-allocation deadlocks.    

22.

Parameter

vm.mmap_min_addr  

Supported systems

CentOS 6, 7       

Description

mmap_min_addr    
.  
This file indicates the amount of address space  which a user process will    
be restricted from mmapping.  Since kernel null dereference bugs could    
accidentally operate based on the information in the first couple of pages    
of memory userspace processes should not be allowed to write to them.  By    
default this value is set to 0 and no protections will be enforced by the    
security module.  Setting this value to something like 64k will allow the    
vast majority of applications to work correctly and provide defense in depth    
against future potential kernel bugs.    

Recommended setting

vm.mmap_min_addr=6xxxx    
.  
Defends against problems caused by latent kernel null-dereference bugs  

23.

Parameter

vm.overcommit_memory   
vm.overcommit_ratio   

Supported systems

CentOS 6, 7         

Description

==============================================================    
.  
overcommit_kbytes:    
.  
When overcommit_memory is set to 2, the committed address space is not    
permitted to exceed swap plus this amount of physical RAM. See below.    
.  
Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one    
of them may be specified at a time. Setting one disables the other (which    
then appears as 0 when read).    
.  
==============================================================    
.  
overcommit_memory:    
.  
This value contains a flag that enables memory overcommitment.    
.  
When this flag is 0,   
the kernel attempts to estimate the amount    
of free memory left when userspace requests more memory.    
.  
When this flag is 1,   
the kernel pretends there is always enough memory until it actually runs out.    
.  
When this flag is 2,   
the kernel uses a "never overcommit"    
policy that attempts to prevent any overcommit of memory.    
Note that user_reserve_kbytes affects this policy.    
.  
This feature can be very useful because there are a lot of    
programs that malloc() huge amounts of memory "just-in-case"    
and don't use much of it.    
.  
The default value is 0.    
.  
See Documentation/vm/overcommit-accounting and    
security/commoncap.c::cap_vm_enough_memory() for more information.    
.  
==============================================================    
.  
overcommit_ratio:    
.  
When overcommit_memory is set to 2,   
the committed address space is not permitted to exceed   
swap + this percentage of physical RAM.    
See above.    
.  
==============================================================    

Recommended setting

vm.overcommit_memory = 0    
vm.overcommit_ratio = 90    
.  
With vm.overcommit_memory = 0, vm.overcommit_ratio may be left unset   

24.

Parameter

vm.swappiness   

Supported systems

CentOS 6, 7         

Description

swappiness    
.  
This control is used to define how aggressive the kernel will swap    
memory pages.    
Higher values will increase agressiveness, lower values    
decrease the amount of swap.    
.  
The default value is 60.    

Recommended setting

vm.swappiness = 0    

25.

Parameter

vm.zone_reclaim_mode   

Supported systems

CentOS 6, 7         

Description

zone_reclaim_mode:    
.  
Zone_reclaim_mode allows someone to set more or less aggressive approaches to    
reclaim memory when a zone runs out of memory. If it is set to zero then no    
zone reclaim occurs. Allocations will be satisfied from other zones / nodes    
in the system.    
.  
This is value ORed together of    
.  
1       = Zone reclaim on    
2       = Zone reclaim writes dirty pages out    
4       = Zone reclaim swaps pages    
.  
zone_reclaim_mode is disabled by default.  For file servers or workloads    
that benefit from having their data cached, zone_reclaim_mode should be    
left disabled as the caching effect is likely to be more important than    
data locality.    
.  
zone_reclaim may be enabled if it's known that the workload is partitioned    
such that each partition fits within a NUMA node and that accessing remote    
memory would cause a measurable performance reduction.  The page allocator    
will then reclaim easily reusable pages (those page cache pages that are    
currently not used) before allocating off node pages.    
.  
Allowing zone reclaim to write out pages stops processes that are    
writing large amounts of data from dirtying pages on other nodes. Zone    
reclaim will write out dirty pages if a zone fills up and so effectively    
throttle the process. This may decrease the performance of a single process    
since it cannot use all of system memory to buffer the outgoing writes    
anymore but it preserve the memory on other nodes so that the performance    
of other processes running on other nodes will not be affected.    
.  
Allowing regular swap effectively restricts allocations to the local    
node unless explicitly overridden by memory policies or cpuset    
configurations.    

Recommended setting

vm.zone_reclaim_mode=0    
.  
Do not use NUMA zone reclaim  

26.

Parameter

net.ipv4.ip_local_port_range  

Supported systems

CentOS 6, 7         

Description

ip_local_port_range - 2 INTEGERS  
Defines the local port range that is used by TCP and UDP to  
choose the local port. The first number is the first, the  
second the last local port number. The default values are  
32768 and 61000 respectively.  
.  
ip_local_reserved_ports - list of comma separated ranges  
Specify the ports which are reserved for known third-party  
applications. These ports will not be used by automatic port  
assignments (e.g. when calling connect() or bind() with port  
number 0). Explicit port allocation behavior is unchanged.  
.  
The format used for both input and output is a comma separated  
list of ranges (e.g. "1,2-4,10-10" for ports 1, 2, 3, 4 and  
10). Writing to the file will clear all previously reserved  
ports and update the current list with the one given in the  
input.  
.  
Note that ip_local_port_range and ip_local_reserved_ports  
settings are independent and both are considered by the kernel  
when determining which ports are available for automatic port  
assignments.  
.  
You can reserve ports which are not in the current  
ip_local_port_range, e.g.:  
.  
$ cat /proc/sys/net/ipv4/ip_local_port_range  
32000   61000  
$ cat /proc/sys/net/ipv4/ip_local_reserved_ports  
8080,9148  
.  
although this is redundant. However such a setting is useful  
if later the port range is changed to a value that will  
include the reserved ports.  
.  
Default: Empty  

Recommended setting

net.ipv4.ip_local_port_range=40000 65535    
.  
Restricts the range of automatically assigned local ports, so they cannot collide with listening ports.  

27.

Parameter

vm.nr_hugepages  

Supported systems

CentOS 6, 7  

Description

==============================================================  
nr_hugepages  
Change the minimum size of the hugepage pool.  
See Documentation/vm/hugetlbpage.txt  
==============================================================  
nr_overcommit_hugepages  
Change the maximum size of the hugepage pool. The maximum is  
nr_hugepages + nr_overcommit_hugepages.  
See Documentation/vm/hugetlbpage.txt  
.  
The output of "cat /proc/meminfo" will include lines like:  
......  
HugePages_Total: vvv  
HugePages_Free:  www  
HugePages_Rsvd:  xxx  
HugePages_Surp:  yyy  
Hugepagesize:    zzz kB  
.  
where:  
HugePages_Total is the size of the pool of huge pages.  
HugePages_Free  is the number of huge pages in the pool that are not yet  
allocated.  
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for  
which a commitment to allocate from the pool has been made,  
but no allocation has yet been made.  Reserved huge pages  
guarantee that an application will be able to allocate a  
huge page from the pool of huge pages at fault time.  
HugePages_Surp  is short for "surplus," and is the number of huge pages in  
the pool above the value in /proc/sys/vm/nr_hugepages. The  
maximum number of surplus huge pages is controlled by  
/proc/sys/vm/nr_overcommit_hugepages.  
.  
/proc/filesystems should also show a filesystem of type "hugetlbfs" configured  
in the kernel.  
.  
/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge  
pages in the kernel's huge page pool.  "Persistent" huge pages will be  
returned to the huge page pool when freed by a task.  A user with root  
privileges can dynamically allocate more or free some persistent huge pages  
by increasing or decreasing the value of 'nr_hugepages'.  

Recommended setting

Set it if you intend to use PostgreSQL's huge pages;    
making it larger than the shared memory the database needs is enough.    

28.

Parameter

fs.nr_open

Supported systems

CentOS 6, 7

Description

nr_open:

This denotes the maximum number of file-handles a process can
allocate. Default value is 1024*1024 (1048576) which should be
enough for most machines. Actual limit depends on RLIMIT_NOFILE
resource limit.

It also bounds the file-handle limits in security/limits.conf: a single process's open-handle limit cannot exceed fs.nr_open, so to raise the file-handle limit you must raise nr_open first

Recommended setting

For PostgreSQL databases with very many objects (tables, views, indexes, sequences, materialized views, etc.), 20 million is advisable,
e.g. fs.nr_open=20480000

Resource limits a database cares about

1. Set them via /etc/security/limits.conf, or with ulimit.

2. Inspect a running process's current limits via /proc/$pid/limits.

#        - core - limits the core file size (KB)  
#        - memlock - max locked-in-memory address space (KB)  
#        - nofile - max number of open files  10,000,000 is advisable, but sysctl fs.nr_open must be set above it first, or the system will refuse logins.
#        - nproc - max number of processes  
The four entries above are the ones to care most about  
....  
#        - data - max data size (KB)  
#        - fsize - maximum filesize (KB)  
#        - rss - max resident set size (KB)  
#        - stack - max stack size (KB)  
#        - cpu - max CPU time (MIN)  
#        - as - address space limit (KB)  
#        - maxlogins - max number of logins for this user  
#        - maxsyslogins - max number of logins on the system  
#        - priority - the priority to run user process with  
#        - locks - max number of file locks the user can hold  
#        - sigpending - max number of pending signals  
#        - msgqueue - max memory used by POSIX message queues (bytes)  
#        - nice - max nice priority allowed to raise to values: [-20, 19]  
#        - rtprio - max realtime priority  

IO scheduling rules a database cares about

1. Current operating systems offer the cfq, deadline and noop IO scheduling policies, among others.

/kernel-doc-xxx/Documentation/block  
-r--r--r-- 1 root root   674 Apr  8 16:33 00-INDEX  
-r--r--r-- 1 root root 55006 Apr  8 16:33 biodoc.txt  
-r--r--r-- 1 root root   618 Apr  8 16:33 capability.txt  
-r--r--r-- 1 root root 12791 Apr  8 16:33 cfq-iosched.txt  
-r--r--r-- 1 root root 13815 Apr  8 16:33 data-integrity.txt  
-r--r--r-- 1 root root  2841 Apr  8 16:33 deadline-iosched.txt  
-r--r--r-- 1 root root  4713 Apr  8 16:33 ioprio.txt  
-r--r--r-- 1 root root  2535 Apr  8 16:33 null_blk.txt  
-r--r--r-- 1 root root  4896 Apr  8 16:33 queue-sysfs.txt  
-r--r--r-- 1 root root  2075 Apr  8 16:33 request.txt  
-r--r--r-- 1 root root  3272 Apr  8 16:33 stat.txt  
-r--r--r-- 1 root root  1414 Apr  8 16:33 switching-sched.txt  
-r--r--r-- 1 root root  3916 Apr  8 16:33 writeback_cache_control.txt  

For the detailed rules of these scheduling policies, consult the wiki or the kernel documentation.

The active policy can be read from here

cat /sys/block/vdb/queue/scheduler   
noop [deadline] cfq   

Change it

echo deadline > /sys/block/hda/queue/scheduler  

Or change the boot parameters

grub.conf  
elevator=deadline  

Judging from many test results, databases run with steadier performance under the deadline scheduler.

Other

1. Disable transparent huge pages

2. Disable NUMA

3. Align SSD partitions


deepin linux customization

Building a personalized deepin system from scratch

Building a personalized deepin system from scratch: simple, smooth, and lag-free

apt install debootstrap

Find the bootstrap script
/usr/share/debootstrap/scripts/sid

Make a copy of it

cp sid panda

nano panda

and change the keyring line

keyring /usr/share/keyrings/debian-archive-keyring.gpg

to the deepin one

keyring /usr/share/keyrings/deepin-keyring.gpg

debootstrap panda  rootfs  url

chroot rootfs

apt install locales

apt install fcitx

apt install fcitx-rime

apt install mate

apt install grub2

dpkg --add-architecture i386

Edit the apt configuration file

/etc/apt/sources.list
golang linux

Installing golang on linux

golang

No fuss: nano .bashrc

export GOROOT=$HOME/go1.X
export PATH=$PATH:$GOROOT/bin

source .bashrc
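Verify the toolchain is on PATH:

go version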

git linux proxy settings

Setting a git proxy on linux

Setting a git proxy on linux

gitproxy

#!/bin/bash
case $1 in
on)
git config --global http.proxy 'socks5://127.0.0.1:1080'
git config --global https.proxy 'socks5://127.0.0.1:1080'
;;
off)
git config --global --unset http.proxy
git config --global --unset https.proxy
;;
status)
git config --get http.proxy
git config --get https.proxy
;;
esac
exit 0
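Make it executable and toggle as needed:

chmod +x gitproxy
./gitproxy on       # route git http/https through the local SOCKS5 proxy
./gitproxy status
./gitproxy off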
install linux

AOSC installation document backup

AOSC installation document backup


Installation of AOSC OS on x86-64 systems/environments is generally the same across all systems of this architecture. But for some specific device configurations and virtualized environments, here below are some extra notes:

Forenotes

  • Any commands listed below starting with a # means that the commands are run as the root user.

Choosing a Tarball

All AMD64/x86-64 tarballs are generic (universal for all supported devices), the only thing you would have to do here is choosing your favourite one - appropriate for your taste and your use case.

Note: Another consideration is whether your device is capable for a specific variant, please consult the AMD64/x86-64 system requirements page for more information.

Bootable

  • Base
  • KDE/Plasma
  • GNOME
  • MATE
  • XFCE
  • LXDE
  • i3 Window Manager

Non-bootable

  • Container
  • BuildKit

We are not going to discuss the deployment of Container and BuildKit in this guide, please check for the guide in AOSC Cadet Training.

Preparing an Installation Environment

It is impossible to install AOSC OS without a working Live environment or an installed copy of Linux distribution on your local storage. Live disc images are not yet available for AOSC OS.

For installing AOSC OS, we recommend that you use GParted Live, dumped to your USB flash drive - and our guide will assume that you are using GParted Live.

Warning: Be sure that you downloaded the amd64 version, or else you won't be able to enter AOSC OS chroot environment!

Note: You may not be able to connect to network when using VMware.

# dd if=nameofimage.iso of=/dev/sdX bs=4M

Where:

  • nameofimage.iso is the filename of your downloaded GParted Live ISO file.
  • /dev/sdX is the device file for your USB flash disk.

After you are done, boot to GParted Live.

Preparing partitions

On AMD64/x86-64, AOSC OS supports GUID (EFI) or MBR (traditional BIOS) partition tables - if you plan on multi-booting AOSC OS with other Linux distributions, Microsoft Windows, or Apple macOS, they generally use GUID on newer machines, and MBR on older ones.

It is relatively easy to use GParted, provided with GParted Live to configure your partitions. For more details on how to configure your partition with GParted, please refer to the GParted Manual.

Extra Notes

  • If you plan on installing AOSC OS across multiple partitions, please make sure you created a /etc/fstab file before you reboot to AOSC OS - details discussed later.
  • If you plan on using the ESP (EFI System Partition) as your /boot partition, extra actions may be needed when updating the Linux Kernel - details discussed later.

Un-tar!

With partitions configured, you are now ready to unpack the AOSC OS system tarball you have downloaded. Before you start un-tar-ing your tarball, mount your system partition(s) first. Say, if you wanted to install AOSC OS on partition /dev/sda2:

# mount -v /dev/sda2 /mnt

Additionally, say, if you have /dev/sda3 for /home:

# mkdir -v /mnt/home
# mount -v /dev/sda3 /mnt/home

And now, un-tar the tarball:

# cd /mnt
# tar --numeric-owner -pxf /path/to/tarball/tarball.tar.xz

For a more exciting experience, add verbosity:

# cd /mnt
# tar --numeric-owner -pxvf /path/to/tarball/tarball.tar.xz

Initial Configuration

Here below are some extra steps before you configure your bootloader - strongly recommended to avoid potential issues later.

Bind mount system/pseudo directories

# mkdir /mnt/run/udev
# for i in dev proc sys run/udev; do mount --rbind /$i /mnt/$i; done

/etc/fstab Generation

If you have chosen to use multi-partition layout for your AOSC OS installation, you will need to configure your /etc/fstab file, one fast way to achieve this is by installing the genfstab package:

# chroot /mnt apt update
# chroot /mnt apt install genfstab

And generate a /etc/fstab file:

# /mnt/usr/bin/genfstab -U -p /mnt >> /mnt/etc/fstab

Chroot

Enter AOSC OS chroot environment:

# chroot /mnt /bin/bash

If you failed to enter chroot, you have probably not downloaded the amd64 version (gosh, we got it in bold as well...).

Note: Commands in all sections below are executed from chroot.

Update, Your, System!

New tarball releases come out roughly each season (or longer, depending on developers' availability), and it is generally a wise choice to update your system first - just to get rid of some old bugs since the tarball's release:

# apt update
# apt full-upgrade

Initialization RAM Disk

Use the following command to create initialization RAM disk for AOSC OS.

# sh /var/ab/triggered/dracut

Bootloader Configuration

Now you should be able to configure your bootloader, we will use GRUB for the purpose of this installation guide. Installation of GRUB differs for EFI and BIOS systems, and thus they will be separated to two sections.

Note: You would need GRUB 2.02 (grub version 2:2.0.2) to support NVMe-based storage devices as boot drives.

Note: All commands below are run from within chroot.

EFI Systems

To install GRUB for EFI systems, mount your ESP partition, generally /dev/sda1 to /efi (change device name if appropriate):

# mount /dev/sda1 /efi

Then, install GRUB to the partition, and generate a GRUB configuration:

# grub-install --target=x86_64-efi --bootloader-id=AOSC-GRUB --efi-directory=/efi
# grub-mkconfig -o /boot/grub/grub.cfg

For some Bay Trail devices, you might need to install for i386-efi target instead - do not use the following command unless you are sure about what you are doing:

# grub-install --target=i386-efi --bootloader-id=AOSC-GRUB --efi-directory=/efi
# grub-mkconfig -o /boot/grub/grub.cfg

BIOS Systems

Installation and configuration of GRUB is straightforward on BIOS systems; the only thing to look out for is where the MBR of your hard drive(s) is. In our example we assume the MBR is located on /dev/sda; this may vary, but in most cases the MBR lives on the hard drive itself, not on a partition.

# grub-install --target=i386-pc /dev/sda
# grub-mkconfig -o /boot/grub/grub.cfg

User, and Post-installation Configuration

None of the tarballs comes with a default user, and the root user is disabled; you will have to create your own account before you reboot into AOSC OS - while leaving the password empty for the root user - you can always use sudo for your superuser needs.

Add a user

To add a new user aosc, use the useradd command:

# useradd -m -G wheel -s /bin/bash aosc

Setting password

Although it is not required to protect the newly created user aosc with a password, it is highly recommended to do so:

# passwd aosc

Enabling Root

Although strongly discouraged, you can enable the root user by setting a password for root:

# passwd root

Notes: A decent Linux user does not need the root user.

Setting System Timezone

Timezone info is stored in /usr/share/zoneinfo/<region>/<city>.

# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Setting System Language

AOSC OS enables all languages with UTF-8 encoding by default. In rare cases where you (really) want to disable some languages or enable non UTF-8 encodings, edit /etc/locale.gen as needed and execute locale-gen as root (which might take a long time).

To set the default language for all users, edit /etc/locale.conf. For example, to set the system language to Chinese Simplified (China):

LANG=zh_CN.UTF-8

Note: After you have rebooted into the new system, you may use the localectl command to do this:

# localectl set-locale "LANG=zh_CN.UTF-8"

Setting System Hostname

To set a hostname for the system, edit /etc/hostname. For example, to set the hostname to be MyNewComputer:

MyNewComputer

Note: After you have rebooted into the new system, you may use the hostnamectl command to do this:

# hostnamectl set-hostname yourhostname
bash git linux

Show the current git branch in the bash prompt on linux

Show the current git branch in the bash prompt on linux

Add to your .bashrc:

find_git_branch () {
  local dir=. head
  until [ "$dir" -ef / ]; do
    if [ -f "$dir/.git/HEAD" ]; then
      head=$(< "$dir/.git/HEAD")
      if [[ $head = ref:\ refs/heads/* ]]; then
        git_branch=" → ${head#*/*/}"
      elif [[ $head != '' ]]; then
        git_branch=" → (detached)"
      else
        git_branch=" → (unknow)"
      fi
      return
    fi
    dir="../$dir"
  done
  git_branch=''
}

PROMPT_COMMAND="find_git_branch; $PROMPT_COMMAND"
# Prompt colors

black=$'\[\e[1;30m\]'
red=$'\[\e[1;31m\]'
green=$'\[\e[1;32m\]'
yellow=$'\[\e[1;33m\]'
blue=$'\[\e[1;34m\]'
magenta=$'\[\e[1;35m\]'
cyan=$'\[\e[1;36m\]'
white=$'\[\e[1;37m\]'
normal=$'\[\e[m\]'



PS1="$white[$magenta\u$white@$green\h$white:$cyan\w$yellow\$git_branch$white]\$ $normal"

Then

source .bashrc
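
With this in place, the prompt renders something like the following (user, host, and branch here are just examples):

[free@localhost:~/project → master]$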
bash comp linux 自动补全

Enable bash tab completion

Enable bash tab completion

Enable bash tab completion on linux

Prerequisite: bash-completion is already installed.

Add to your .bashrc:

if [ -r /etc/bash_completion ]; then
  # Source completion code.
  . /etc/bash_completion
fi

Or, more bluntly:

source /usr/share/bash-completion/bash_completion

Then

source .bashrc
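
To check that completion actually loaded, you can test for one of the functions bash-completion defines (assuming bash-completion 2.x):

type _init_completion >/dev/null 2>&1 && echo "bash-completion loaded"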
chrome linux

Chrome extension: communication between background and popup

Chrome extension: communication between background and popup

Listening:

chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {console.log(request)});

Sending a message:

chrome.extension.sendRequest(data);

Note: chrome.extension.onRequest and chrome.extension.sendRequest are long-deprecated APIs; in current Chrome the equivalents are chrome.runtime.onMessage and chrome.runtime.sendMessage.
android cordova linux

cordova: from getting started to giving up?

cordova: from getting started to giving up?

Install the Android SDK

Set the JAVA_HOME environment variable to your JDK install path and ANDROID_HOME to your Android SDK install path. It is also recommended to add the SDK's tools and platform-tools directories to your PATH.

~/.bash_profile

export ANDROID_HOME=/Development/android-sdk/
export PATH=${PATH}:/Development/android-sdk/platform-tools:/Development/android-sdk/tools
# then run: source ~/.bash_profile

Install cordova

sudo npm install -g cordova

Create an app project

cordova create hello com.example.hello HelloWorld

Add a platform

 cordova platform add android --save

Build the app

cordova build

Deploy the app to an emulator or phone (connect adb to the emulator or phone first; see the check below)

 cordova emulate android
 cordova run android
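
To confirm adb actually sees the emulator or phone before deploying:

 adb devices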

Add a plugin

cordova plugin search camera

Change the icon in config.xml

    <icon src="res/ios/icon.png" platform="ios" width="57" height="57" density="mdpi" />

Storing data (the JS way)

### LocalStorage

```javascript
var storage = window.localStorage;
var value = storage.getItem(key);    // pass the key name to get the matching value
storage.setItem(key, value);         // pass the key name and value to add or update the pair
storage.removeItem(key);             // pass the key name to remove the pair from LocalStorage
```

### WebSQL

```javascript
var db = window.openDatabase(name, version, displayName, estimatedSize);
```

  • name (string): unique name of the database, stored on disk.
  • version (string): version of the database.
  • displayName (string): human-readable name, used by the system when describing the database to the user.
  • estimatedSize (number): expected maximum size in bytes. The user may be prompted for authorization as the database grows; a reasonable estimate means fewer prompts later.

### IndexedDB

    var db;
    var databaseName = 'myDB';
    var databaseVersion = 1;
    var openRequest = window.indexedDB.open(databaseName, databaseVersion);
    openRequest.onerror = function (event) {
        console.log(openRequest.errorCode);
    };
    openRequest.onsuccess = function (event) {
        // The database is open and initialized - we are good to go.
        db = openRequest.result;
        displayData();
    };
    openRequest.onupgradeneeded = function (event) {
        // Either a new database, or a new version number was passed to open().
        var db = event.target.result;
        db.onerror = function () {
            console.log(db.errorCode);
        };

        // Create an object store. The key identifies records in the store.
        // keyPath defines where the key is stored: if keyPath is given, the store
        // can only hold JavaScript objects, and every object must have a property
        // named like the keyPath (unless autoIncrement is true).
        var store = db.createObjectStore('customers', { keyPath: 'customerId' });

        // Define the indexes we want to use. Objects added to the store do not
        // need to contain these properties; they only appear in the given index.
        //
        // Usage: store.createIndex(indexName, keyPath[, parameters]);
        //
        // All these values could have duplicates, so set unique to false.
        store.createIndex('firstName', 'firstName', { unique: false });
        store.createIndex('lastName', 'lastName', { unique: false });
        store.createIndex('street', 'street', { unique: false });
        store.createIndex('city', 'city', { unique: false });
        store.createIndex('zipCode', 'zipCode', { unique: false });
        store.createIndex('country', 'country', { unique: false });

        // Once the store is created we can insert data. Note: `customers` is
        // assumed to be an array of customer objects defined elsewhere.
        store.transaction.oncomplete = function (event) {
            // transaction() takes the store names and indexes in scope (or a single
            // string for a single store). Transactions are read-only unless the
            // 'readwrite' option is given. The returned object's objectStore method
            // gives access to the stores within the transaction's scope.
            var customerStore = db.transaction('customers', 'readwrite').objectStore('customers');
            customers.forEach(function (customer) {
                customerStore.add(customer);
            });
        };
    };

    function displayData() {
    }

Cordova-sqlite-storage

// install
cordova plugin add cordova-sqlite-storage --save


// instantiate
var db = null;

document.addEventListener('deviceready', function() {
  db = window.sqlitePlugin.openDatabase({name: 'demo.db', location: 'default'});
});

// populate the database with the standard transaction API:

  db.transaction(function(tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS DemoTable (name, score)');
    tx.executeSql('INSERT INTO DemoTable VALUES (?,?)', ['Alice', 101]);
    tx.executeSql('INSERT INTO DemoTable VALUES (?,?)', ['Betty', 202]);
  }, function(error) {
    console.log('Transaction ERROR: ' + error.message);
  }, function() {
    console.log('Populated database OK');
  });

  // check the data with the standard transaction API:
db.transaction(function(tx) {
    tx.executeSql('SELECT count(*) AS mycount FROM DemoTable', [], function(tx, rs) {
      console.log('Record count (expected to be 2): ' + rs.rows.item(0).mycount);
    }, function(tx, error) {
      console.log('SELECT error: ' + error.message);
    });
  });

  // populate the database with the SQL batch API:

   db.sqlBatch([
    'CREATE TABLE IF NOT EXISTS DemoTable (name, score)',
    [ 'INSERT INTO DemoTable VALUES (?,?)', ['Alice', 101] ],
    [ 'INSERT INTO DemoTable VALUES (?,?)', ['Betty', 202] ],
  ], function() {
    console.log('Populated database OK');
  }, function(error) {
    console.log('SQL batch ERROR: ' + error.message);
  });

Other API documentation

bsd freebsd linux

DragonFlyBSD: filling in the potholes

DragonFlyBSD: filling in the potholes

Install in a virtual machine

Change the package mirror

/usr/local/etc/pkg/repos/df-latest.conf

Install git, xorg, input drivers, vim, lumina, bash, and bash-completion

pkg install git xorg vim lumina bash bash-completion xf86-video-intel29 xf86-input-mouse wqy-fonts

Initial configuration

1. Change the default shell to bash and set up bash completion

chsh
# set the shell to /usr/local/bin/bash

Edit .bashrc and add:

source /usr/local/share/bash-completion/bash_completion.sh

2. Initialize the Xorg configuration file

Xorg -configure
# this generates the configuration file xorg.conf.new

mv xorg.conf.new /etc/X11/xorg.conf

After a reboot:

start-lumina-desktop

Fix for blurry fonts (tip found elsewhere):

rm /usr/local/etc/fonts/conf.d/85-wqy.conf
linux

Converting Traditional Chinese to Simplified

Converting Traditional Chinese to Simplified

iconv -f GB18030 -t utf-8 1金尤中.txt | opencc -c t2s > b.txt

Traditional to Simplified:

$ echo '歐幾里得 西元前三世紀的希臘數學家' | opencc -c t2s
欧几里得 西元前三世纪的希腊数学家

Simplified to Traditional:

$ echo '欧几里得 西元前三世纪的希腊数学家' | opencc -c s2t
歐幾里得 西元前三世紀的希臘數學家

Files can be converted directly like this:

$ opencc -i zhwiki_raw.txt -o zhwiki_t2s.txt -c t2s.json
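
A small sketch for converting a whole directory in one go, assuming files named *.txt and the t2s.json profile used above:

for f in *.txt; do
    # write each converted file alongside the original with a .s.txt suffix
    opencc -i "$f" -o "${f%.txt}.s.txt" -c t2s.json
done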

ftp git linux

git-ftp on linux

git-ftp on linux

Usage

# Setup
git config git-ftp.url "ftp://ftp.example.net:21/public_html"
git config git-ftp.user "ftp-user"
git config git-ftp.password "secr3t"

# Upload all files
git ftp init

# Or if the files are already there
git ftp catchup

# Work and deploy
echo "new content" >> index.txt
git commit index.txt -m "Add new content"
git ftp push
# 1 file to sync:
# [1 of 1] Buffered for upload 'index.txt'.
# Uploading ...
# Last deployment changed to ded01b27e5c785fb251150805308d3d0f8117387.

https://github.com/git-ftp/git-ftp

fcitx ibus linux rime

Uninstalling fcitx and installing ibus-rime on linux

Sogou gets little love here. =. So I tried rime instead - and hit some sizeable potholes.

Sogou gets little love here. =. So I tried rime instead - and hit some sizeable potholes.

1. Cannot be activated under wine; 2. Cannot be activated in Java programs

Installation steps

apt install ibus ibus-gtk ibus-gtk3 ibus-qt4 ibus-rime librime-data-luna-pinyin

After installation,

all sorts of problems showed up.

nano /etc/profile

Add:

export XIM="ibus"
export XIM_PROGRAM="ibus"
export XMODIFIERS="@im=ibus"
export GTK_IM_MODULE="ibus"
export QT_IM_MODULE="ibus"

ibus-daemon -dx
linux

js: Async/Await and Promise

linux

js: reading file data

<input type='file'>

var f = document.querySelector('.importbutton').files[0]
var reader = new FileReader()
reader.onload = function () {
    // reader.result is only valid once the load event has fired
    console.log(reader.result)
}
reader.readAsText(f)
fonts linux monofonts 等宽字体

One-step install of common monospace fonts on linux

One-step install of common monospace fonts on linux

A quick install script for monospace fonts.

Run ./install.sh to install all fonts.

Quick install

    # clone
    git clone https://github.com/zhenruyan/codefont --depth=1
    # install
    cd codefont
    ./install.sh
    # clean-up a bit
    cd ..
    rm -rf codefont


Included fonts

Order does not imply ranking

  • 3270
  • anka-coder
  • AnonymousPro
  • Arimo
  • aurulent
  • average
  • bitstream-vera
  • bpmono
  • camingocode
  • code-new-roman
  • consolamono
  • Cousine
  • cutive
  • D2Coding
  • dejavu
  • DroidSansMono
  • effects-eighty
  • fantasque-sans
  • fifteen
  • FiraMono
  • fixedsys
  • gnu-freefont
  • gnutypewriter
  • go-mono
  • gohu
  • hack
  • hermit
  • Inconsolata
  • InputMono
  • iosevka
  • latin-modern
  • lekton
  • liberation
  • LiberationMono
  • luculent
  • luxi
  • meslo
  • Monofur
  • monoid
  • mononoki
  • mplus
  • notcouriersans
  • NotoMono
  • nova
  • office-code-pro
  • overpass
  • oxygen
  • ProFont
  • proggy-clean
  • quinze
  • roboto
  • space
  • sudo
  • SourceCodePro
  • SymbolNeu
  • tex-gyre-cursor
  • Tinos
  • UbuntuMono
  • unifont
  • verily
  • vt323

linux mysql nosql rocksdb

Building myrocks

Building myrocks

First, prepare the build environment (the GitHub wiki covers all of this, but only in English...).

deb-based systems

sudo apt-get update
sudo apt-get -y install g++ cmake libbz2-dev libaio-dev bison \
zlib1g-dev libsnappy-dev libboost-all-dev
sudo apt-get -y install libgflags-dev libreadline6-dev libncurses5-dev \
libssl-dev liblz4-dev gdb git

rpm-based systems

sudo yum install cmake gcc-c++ bzip2-devel libaio-devel bison \
zlib-devel snappy-devel boost-devel
sudo yum install gflags-devel readline-devel ncurses-devel \
openssl-devel lz4-devel gdb git

The quick-and-dirty download and build process

git clone https://github.com/facebook/mysql-5.6.git
cd mysql-5.6
git submodule init
git submodule update
cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_SSL=system \
-DWITH_ZLIB=bundled -DMYSQL_MAINTAINER_MODE=0 -DENABLED_LOCAL_INFILE=1 \
-DENABLE_DTRACE=0 -DCMAKE_CXX_FLAGS="-march=native"
make -j8

make package

If nothing unexpected happens, the build finishes effortlessly... but a whole pile of unexpected things happened. I will publish a prebuilt package later.

linux

Installing myrocks

linux

Installing myrocks

Prebuilt package download: http://pan.baidu.com/s/1eR4vrO2 (extraction code: kbnd)

Download it,

then extract it,

and read the basic documentation.

The configuration file looks like this:

[mysqld]
rocksdb
default-storage-engine=rocksdb
skip-innodb
default-tmp-storage-engine=MyISAM
collation-server=latin1_bin

init_connect='SET NAMES utf8'

log-bin
binlog-format=ROW
basedir =/home/free/rock
datadir =/home/free/rock/data
port =3309
server_id = 100032
socket =/home/free/rock/rock.sock

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

Commands

Replace path with your own paths.

Install

mysql_install_db --defaults-file=/path/to/my.cnf

Run

mysqld_safe --defaults-file=/path/to/my.cnf

Post-install setup

mysql --socket=xxxx

set password=password('root');
grant all privileges on *.* to root@'%' identified by 'root';

flush privileges;

grant grant option on *.* to 'root'@'%';
kv linux nosql

An introduction to myrocks - freeidea

RocksDB is specially optimized for fast, low-latency storage devices such as flash and high-speed disks.

MyRocks overview

RocksDB is specially optimized for fast, low-latency storage devices such as flash and high-speed disks, and squeezes the most out of the high read/write throughput of flash and RAM.

RocksDB is Facebook's implementation built on LevelDB, and it currently backs a large number of Facebook's internal services. With substantial engineering effort, Facebook ported RocksDB into MySQL as a storage engine, known as MyRocks.

RocksDB compared with InnoDB

  • InnoDB wastes space: B-tree page splits leave considerable free space inside pages, so page utilization is low. InnoDB's existing compression is not very efficient either - it compresses per block, which also wastes space.

  • Write amplification: InnoDB updates whole pages, so in the worst case updating N rows touches N pages, whereas RocksDB is append-only. Enabling doublewrite in InnoDB adds even more writes.

  • RocksDB has little alignment overhead: SST files (2MB by default) need alignment but are far larger than 4k, while RocksDB_block_size (4k by default) needs no alignment, so little space is lost to alignment.

  • RocksDB stores index keys with shared prefixes compressed.

  • In the bottom level, which holds about 90% of the data, RocksDB rows need not store the system seqid column (InnoDB clustered index rows carry trxid, roll_ptr, and so on).

To be honest, everything above is copied. Time for the real stuff... which is source code, not yet compiled.

Building myrocks

fonts linux 字体渲染

Fixing netbeans font rendering - freeidea

Fixing netbeans font rendering - freeidea

Fixing netbeans font rendering

On linux, netbeans font rendering is an absolute mess... it will ruin your eyes.

After some tinkering, this should, in theory, be a first on the whole internet.

JetBrains' IDEA is also written in Java, yet its fonts look great.

Someone in a chat group said JetBrains heavily optimized the fonts in the OpenJDK it bundles.

So... I simply deleted netbeans' JRE and copied IDEA's JRE in its place.

Absolutely perfect!!!

Mind blown!!!

netbeans may be lukewarm as IDEs go, but it offers a complete workflow and is quite polished in the details.

Install emmet on top,

and it's perfect.

linux nginx

nginx reverse proxy

nginx reverse proxy

nginx.conf

user www www;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;
worker_rlimit_nofile 65535;
events {
    use epoll;
    worker_connections 65535;
}
http {
    include mime.types;
    default_type application/octet-stream;
    include /usr/local/nginx/conf/reverse-proxy.conf;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    client_max_body_size 50m; # maximum size of a buffered client request body - i.e. saved locally before being passed upstream
    client_body_buffer_size 256k;
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    proxy_connect_timeout 300s; # timeout for nginx connecting to the upstream server (proxy connect timeout)
    proxy_read_timeout 300s; # upstream response timeout after a successful connection (proxy read timeout)
    proxy_send_timeout 300s;
    proxy_buffer_size 64k; # buffer size for the response headers nginx keeps per request
    proxy_buffers 4 32k; # proxy_buffers: a good setting when pages average under 32k
    proxy_busy_buffers_size 64k; # buffer size under heavy load (proxy_buffers * 2)
    proxy_temp_file_write_size 64k; # above this size, data is passed through from upstream rather than buffered to disk
    proxy_ignore_client_abort on; # do not let the proxy close the upstream connection when the client aborts
    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Mapping different domains to different backend ports

server
{
    listen 80;
    server_name xxx123.tk;
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.10.38:3000;
    }
    access_log logs/xxx123.tk_access.log;
}

server
{
    listen 80;
    server_name xxx456.tk;
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.10.40:80;
    }
    access_log logs/xxx456.tk_access.log;
}

Load balancing

upstream monitor_server {
    server 192.168.0.131:80;
        server 192.168.0.132:80;
}

server
{
    listen 80;
    server_name nagios.xxx123.tk;
    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://monitor_server;
    }
    access_log logs/nagios.xxx123.tk_access.log;
}

Logging the real client IP

log_format access '$http_x_real_ip - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';

access_log logs/access.log access;
linux nginx tcp

nginx TCP proxying

nginx TCP proxying

Compile nginx with --with-stream, then:
stream {
    upstream cloudsocket {
        hash $remote_addr consistent;
        server 10.x.xx.14:1831 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8081;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass cloudsocket;
    }
}




The official load-balancing example:
worker_processes auto;

error_log /var/log/nginx/error.log info;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;

        server backend1.example.com:12345 weight=5;
        server 127.0.0.1:12345            max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }

    upstream dns {
       server 192.168.0.1:53535;
       server dns.example.com:53;
    }

    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }

    server {
        listen 127.0.0.1:53 udp;
        proxy_responses 1;
        proxy_timeout 20s;
        proxy_pass dns;
    }

    server {
        listen [::1]:12345;
        proxy_pass unix:/tmp/stream.socket;
    }
}
linux node path python user

Installing nodejs

Installing nodejs

Installing nodejs on linux

I had been using nvm to manage nodejs, and it works well.

Someone said just extracting the tarball and setting environment variables is enough.

I tried it.

It works, and there is one real benefit: installing with -g no longer needs sudo.

export NODE=/home/free/bin/node
export PATH=$PATH:$NODE/bin:$NODE/lib/node_modules/npm/bin/node-gyp-bins

Set the taobao npm registry mirror:

npm config set registry https://registry.npm.taobao.org 
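
To confirm the mirror is in effect:

npm config get registry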
boom kill linux oom

The linux oom-killer mechanism

The linux oom-killer mechanism: when linux runs low on memory, it kills processes.

When linux runs low on memory, it kills processes.

To protect an important process from being oom-killed, we can run:

echo -17 > /proc/<pid>/oom_adj  # -17 disables OOM killing for this pid
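
Note: on newer kernels oom_adj is deprecated in favour of oom_score_adj, which ranges from -1000 to 1000; -1000 likewise disables OOM killing for the process:

echo -1000 > /proc/<pid>/oom_score_adj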

We can also change OOM behaviour system-wide (vm.panic_on_oom=1 makes the kernel panic on OOM instead of killing a process):

    sysctl -w vm.panic_on_oom=1

    sysctl -p
linux phalcon php

Basic usage of phalcon, the C-extension PHP framework

Basic usage of phalcon, the C-extension PHP framework

Phalcon is an open-source, full-stack PHP framework written as a C extension and optimized for high performance.

Generate a controller

 phalcon create-controller --name index2

Basic configuration

[database]
adapter  = Mysql
host     = "127.0.0.1:port"
username = "root"
password = "root"
dbname   = "tests"

[phalcon]
controllersDir = "../app/controllers/"
modelsDir      = "../app/models/"
viewsDir       = "../app/views/"
baseUri        = "/store/"

Generate a model

phalcon model products
  • --name=s Table name
  • --schema=s Name of the schema [optional]
  • --namespace=s Model's namespace [optional]
  • --get-set Attributes will be protected and have setters/getters [optional]
  • --extends=s Model extends the class name supplied [optional]
  • --excludefields=l Excludes fields defined in a comma separated list [optional]
  • --doc Helps to improve code completion on IDEs [optional]
  • --directory=s Base path on which project will be created [optional]
  • --force Rewrite the model [optional]
  • --trace Shows the trace of the framework in case of exception [optional]
  • --mapcolumn Get some code for map columns [optional]
  • --abstract Abstract Model [optional]

Generate simple CRUD scaffolding

phalcon scaffold --table-name products

Enable webtools

phalcon webtools enable/disable

IDE autocompletion

Documentation

linux mysql pg postgresql 数据库

Installing postgresql

Installing postgresql

/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
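
Alternatively, a sketch of the same startup using pg_ctl, the usual wrapper for managing the server process (same paths as above):

/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start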

Account setup

-- create a user
CREATE USER davide WITH PASSWORD 'jw8s0F4';
-- change a password
ALTER ROLE davide WITH PASSWORD 'hu8jmn3';
-- let a role create other roles and new databases:
ALTER ROLE miriam CREATEROLE CREATEDB;
linux node nodejs py python

Installing python from source on linux

Installing python from source on linux

./configure --prefix=/home/free/usr/bin/python

make -j8

make install

Then:

nano .bashrc

export PYTHON=/home/free/usr/bin/python
export PATH=$PYTHON/bin:$PATH

If the system already ships a python, keep $PATH at the end so this build is found first.

linux nodejs nosql 区块链

Reading the ebookcoin source - blockchain principles and implementation

Reading the ebookcoin source - blockchain principles and implementation

Reading the ebookcoin source - blockchain principles and implementation

With bitcoin riding so high, almost everyone is bullish on blockchain technology.

"Blockchain will disrupt industry X", "Y needs p2p and blockchain technology" - such talk is everywhere.

As a newbie who has not even settled into a job yet,

I still have to keep up with the trend somehow.

Digging through GitHub, I found an open-source blockchain project: ebookcoin.

A p2p network plus a distributed database.

Sorting out the idea

Sorting out the idea

If I have understood it correctly, it is:

a robust p2p network + an encrypted transport protocol + a database whose nodes are kept consistent by a special algorithm.

Building the p2p network
The ideal p2p network:

a node fetches the peer list from the seed nodes -> every node keeps discovering and connecting to other nodes.

At the very beginning the p2p network looks like this.

NAT hole punching between nodes
linux mirrors rsync

Syncing mirrors with rsync

Syncing mirrors with rsync

rsync -avlzHP --delete <source> <local dir>


rsync -avlzHP --delete-after


The difference: --delete removes stale files before the transfer, while --delete-after removes them after it.

Deleting first is the emergency option

when disk space is tight.
linux nodejs python ruby

Installing ruby from source on linux

Installing ruby from source on linux

Compile and install into the home directory:

./configure --prefix=/home/free/usr/bin/ruby

make -j8

make install

Then:

nano .bashrc

export RUBY=/home/free/usr/bin/ruby
export PATH=$PATH:$RUBY/bin

If the system already ships a ruby,

put $PATH at the end instead (export PATH=$RUBY/bin:$PATH) so this build is found first.
linux sudo

linux sudo usage

linux sudo usage

Passwordless sudo on linux

nano /etc/sudoers  # better: edit it with visudo, which validates the syntax


free    ALL=(ALL:ALL) NOPASSWD:ALL
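
To verify the rule safely, check the file syntax and list the user's sudo rights:

visudo -c
sudo -l -U free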
linux services system

Notes on writing systemd service files

Notes on writing systemd service files

ExecStart must use an absolute path.

User=nginx
Group=nginx
PIDFile=/run/nginx.pid

Type=forking                               # startup type
#  simple : default; the process started by ExecStart is the main process
#  notify : like simple, but the service notifies systemd when startup finishes, and only then does systemd start dependent units

EnvironmentFile=-/etc/sysconfig/nginx      # environment files; several may be listed
EnvironmentFile=-/etc/default/nginx

ExecStartPre=/usr/bin/rm -f /run/nginx.pid # command(s) run before the service starts
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx                  # start command; if several are given, the last one wins
ExecStartPost=                             # command(s) run after the service starts
ExecReload=/bin/kill -s HUP $MAINPID       # command run on reload
ExecStop=                                  # command run on stop
ExecStopPost=                              # command(s) run after the service stops
TimeoutStopSec=5                           # stop timeout

KillMode=process                           # how processes are killed; see below
#  control-group : default; all child processes in the control group are killed
#  process       : only the main process is killed; the signal can be set as below
#  mixed         : the main process gets SIGTERM, children get SIGKILL
#  none          : no process is killed; only the service's stop command runs
KillSignal=SIGQUIT

Restart=on-failure                         # restart on abnormal exit, not on a clean stop
#  no          : default; never restart
#  on-success  : restart only on clean exit (exit code 0)
#  on-failure  : restart on unclean exit (non-zero exit code), including signals and timeouts
#  on-abnormal : restart only when killed by a signal or timed out
#  on-abort    : restart only when killed by an uncaught signal
#  on-watchdog : restart only on watchdog timeout
#  always      : always restart, whatever the exit reason
RestartSec=10                              # seconds to wait before restarting; default 100ms

PrivateTmp=True                            # give the service its own private /tmp
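
The directives above belong in the [Service] section of a unit. A minimal sketch of a complete unit file, assuming a hypothetical /usr/local/bin/myapp binary (saved as e.g. /etc/systemd/system/myapp.service):

[Unit]
Description=My example service
After=network.target

[Service]
# ExecStart must be an absolute path, as noted above
Type=simple
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target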
javascript linux vue

Quick notes on vue

Quick notes on vue

Quick notes on vue

Vue's documentation may be in Chinese, but it is honestly rather long-winded.

  • Data binding
v-bind:title="message"
{{ message }}

var app2 = new Vue({
  el: '#app-2',
  data: {
    message: 'hello '
  }
})
  • Conditionals
v-if="ok"
v-else
v-show="ok"
  • Loops
Iterate:

v-for="todo in todos"
{{ todo.text }}
var app4 = new Vue({
  el: '#app-4',
  data: {
    todos: [
      { text: 'Learn JavaScript' },
      { text: 'Learn Vue' },
      { text: 'Build an awesome project' }
    ]
  }
})


app4.todos.push({ text: 'New item' })

Getting the array index while iterating:
v-for="(item, index) in items"

Iterating over an object:
v-for="(value, key, index) in object"
  • Event listening
v-on:click="reverseMessage"

var app5 = new Vue({
  el: '#app-5',
  data: {
    message: 'Hello Vue.js!'
  },
  methods: {
    reverseMessage: function () {
      this.message = this.message.split('').reverse().join('')
    }
  }
})


Stop click-event propagation:
v-on:click.stop="doThis"
Submit event no longer reloads the page:
v-on:submit.prevent="onSubmit"
Modifiers can be chained:
v-on:click.stop.prevent="doThat"
Just the modifier:
v-on:submit.prevent
Use capture mode when adding the event listener:
v-on:click.capture="doThis"
Only trigger the handler when the event fires on the element itself (not on a child):
v-on:click.self="doThat"

Key aliases

.enter
.tab
.delete (captures both Delete and Backspace)
.esc
.space
.up
.down
.left
.right
  • Two-way binding
v-model="message"

var app6 = new Vue({
  el: '#app-6',
  data: {
    message: 'Hello Vue!'
  }
})
linux swap zram

Enabling zram

Enabling zram

1. Load the zram module; this creates a new block device, /dev/zram0

modprobe zram num_devices=1

2. Set the device size; here we use 512M of memory

echo 512M > /sys/block/zram0/disksize

3. Format the device as swap

mkswap /dev/zram0

4. Enable the swap device (with priority 100)

swapon -p 100 /dev/zram0
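
To confirm the device is active, list the swap areas; zramctl (from util-linux) additionally shows compression statistics:

cat /proc/swaps
zramctl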
bsd freebsd linux

Post-install setup for BSD

Post-install setup for BSD

sshd_config:
# change how root may log in over ssh
PermitRootLogin no
# deny all users except the one(s) listed
AllowUsers towheel

Change the portsnap mirror

========================

1. Fetch the ports tree

#portsnap fetch -s portsnap1.chinafreebsd.cn
2. Edit the portsnap configuration file

#ee /etc/portsnap.conf
3. Comment out the default server

#SERVERNAME=portsnap.FreeBSD.org
4. Point the server at the chinafreebsd mirror

SERVERNAME=portsnap1.chinafreebsd.cn
5. Download the ports snapshot

#portsnap fetch extract
6. Update

#portsnap update

Change the ports distfiles mirror

=======================

If you do not want sources fetched directly from the official upstream sites, you need to point the ports
system at a remote cache directory - what is usually called "changing the ports mirror". The options:
1. To use the official ports mirror, add the following to /etc/make.conf:

MASTER_SITE_OVERRIDE?=\
http://distcache.FreeBSD.org/ports-distfiles/
The first line overrides the default download addresses used by ports;
the second line is the new address to use.
2. To replace the official mirror with an unofficial third-party one, add to /etc/make.conf:

MASTER_SITE_OVERRIDE?=\
http://ftp2.za.freebsd.org/pub/FreeBSD/ports/distfiles/
3. To use the China FreeBSD Wiki mirror, add to /etc/make.conf:

MASTER_SITE_OVERRIDE?=\
http://ports1.chinafreebsd.cn/distfiles/
4. To use several mirror addresses, add to /etc/make.conf:

MASTER_SITE_OVERRIDE?=\
http://ports1.chinafreebsd.cn/distfiles/ \
http://ftp2.za.freebsd.org/pub/FreeBSD/ports/distfiles/ \
http://distcache.FreeBSD.org/ports-distfiles/
The trailing "\" at the end of lines 2 and 3 is a line continuation.

Change the pkg mirror

1. PKG binary repository file directories
There are two common directories for PKG repository files: the system-level directory /etc/pkg/
and the user-level directory /usr/local/etc/pkg/repos/.
The user-level directory does not exist by default and has to be created by hand.
Both paths are controlled by the REPOS_DIR variable in /usr/local/etc/pkg.conf and can in principle be customized freely.

2. Create the user-level repository directory
#mkdir -p /usr/local/etc/pkg/repos

3. Create the user-level repository files
#cd /usr/local/etc/pkg/repos
Repository files must use the .conf suffix.
It is best to prefix the file names with a digit such as 0. or 1-, because when several PKG sources are enabled at once,
the leading digit of the file name directly determines the source's priority. The files look like this:



First repository file: 0.bme.conf

bme: {
  url: "pkg+http://pkg0.bme.freebsd.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "none",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
Second repository file: 1.nyi.conf

nyi: {
  url: "pkg+http://pkg0.nyi.freebsd.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "none",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
Third repository file: 2.ydx.conf

ydx: {
  url: "pkg+http://pkg0.ydx.freebsd.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "none",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
Fourth repository file: 3.isc.conf

isc: {
  url: "pkg+http://pkg0.isc.freebsd.org/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "none",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
Fifth repository file: 4.chinafreebsd.conf

chinafreebsd: {
  url: "pkg+http://pkg1.chinafreebsd.cn/${ABI}/quarterly",
  mirror_type: "srv",
  signature_type: "none",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}
Important:
the chinafreebsd source is this site's private PKG source and is faster than the addresses above - it is the recommended one!!!
Also, it is best to pick the single fastest PKG source rather than enabling several at once. If several PKG sources are enabled,
use the -r option to specify which source to operate on when installing packages or updating the catalogues.
For example: pkg update -r chinafreebsd, or pkg install -r chinafreebsd -y xxxx
4. Disable the default PKG repository
The point of switching mirrors is to speed up pkg install; if you cannot stand the default mirror's crawl,
you can disable the system-level source file outright. For example:

#echo "FreeBSD: { enabled: no }" > /usr/local/etc/pkg/repos/FreeBSD.conf
Note:
this step is not required; if the system repository file is kept, the default source stays active alongside your own.
5. Update the sources
All sources are best force-updated once before first use. For example:

#pkg update -f 
Updating nyi repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    6 MiB 268.1kB/s    00:22    
Processing entries: 100%
nyi repository update completed. 25828 packages processed.
Updating ydx repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    1 KiB   0.0kB/s    01:00  
......
Or update only the source you intend to use. For example:

#pkg update -r bme 
Updating bme repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01    
Fetching packagesite.txz: 100%    6 MiB 268.1kB/s    00:22    
Processing entries: 100%
bme repository update completed. 25828 packages processed.
6. Verify the active sources
The pkg -vv command prints the current PKG configuration and every active source, for example:

#pkg -vv
Version                 : 1.8.8
PKG_DBDIR = "/var/db/pkg";
PKG_CACHEDIR = "/var/cache/pkg";
PORTSDIR = "/usr/ports";
INDEXDIR = "";
INDEXFILE = "INDEX-11";
HANDLE_RC_SCRIPTS = false;
DEFAULT_ALWAYS_YES = false;
ASSUME_ALWAYS_YES = false;
REPOS_DIR [
    "/etc/pkg/",
    "/usr/local/etc/pkg/repos/",
]
PLIST_KEYWORDS_DIR = "";
SYSLOG = true;
ABI = "FreeBSD:11:amd64";
ALTABI = "freebsd:11:x86:64";
DEVELOPER_MODE = false;
VULNXML_SITE = "http://vuxml.freebsd.org/freebsd/vuln.xml.bz2";
FETCH_RETRY = 3;
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [
]
DEBUG_SCRIPTS = false;
PLUGINS_CONF_DIR = "/usr/local/etc/pkg/";
PERMISSIVE = false;
REPO_AUTOUPDATE = true;
NAMESERVER = "";
HTTP_USER_AGENT = "pkg/1.8.8";
EVENT_PIPE = "";
FETCH_TIMEOUT = 30;
UNSET_TIMESTAMP = false;
SSH_RESTRICT_DIR = "";
PKG_ENV {
}
PKG_SSH_ARGS = "";
DEBUG_LEVEL = 0;
ALIAS {
    all-depends = "query %dn-%dv";
    annotations = "info -A";
    build-depends = "info -qd";
    cinfo = "info -Cx";
    comment = "query -i \"%c\"";
    csearch = "search -Cx";
    desc = "query -i \"%e\"";
    download = "fetch";
    iinfo = "info -ix";
    isearch = "search -ix";
    prime-list = "query -e '%a = 0' '%n'";
    leaf = "query -e '%#r == 0' '%n-%v'";
    list = "info -ql";
    noauto = "query -e '%a == 0' '%n-%v'";
    options = "query -i \"%n - %Ok: %Ov\"";
    origin = "info -qo";
    provided-depends = "info -qb";
    raw = "info -R";
    required-depends = "info -qr";
    roptions = "rquery -i \"%n - %Ok: %Ov\"";
    shared-depends = "info -qB";
    show = "info -f -k";
    size = "info -sq";
}
CUDF_SOLVER = "";
SAT_SOLVER = "";
RUN_SCRIPTS = true;
CASE_SENSITIVE_MATCH = false;
LOCK_WAIT = 1;
LOCK_RETRIES = 5;
SQLITE_PROFILE = false;
WORKERS_COUNT = 0;
READ_LOCK = false;
PLIST_ACCEPT_DIRECTORIES = false;
IP_VERSION = 0;
AUTOMERGE = true;
VERSION_SOURCE = "";
CONSERVATIVE_UPGRADE = true;
PKG_CREATE_VERBOSE = false;
AUTOCLEAN = false;
DOT_FILE = "";
REPOSITORIES {
}
VALID_URL_SCHEME [
    "pkg+http",
    "pkg+https",
    "https",
    "http",
    "file",
    "ssh",
    "ftp",
    "ftps",
    "pkg+ssh",
    "pkg+ftp",
    "pkg+ftps",
]
ALLOW_BASE_SHLIBS = false;
WARN_SIZE_LIMIT = 1048576;


Repositories:
  bme: { 
    url             : "pkg+http://pkg0.bme.freebsd.org/FreeBSD:11:amd64/quarterly",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    fingerprints    : "/usr/share/keys/pkg"
  }
  nyi: { 
    url             : "pkg+http://pkg0.nyi.freebsd.org/FreeBSD:11:amd64/quarterly",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    fingerprints    : "/usr/share/keys/pkg"
  }
  ydx: { 
    url             : "pkg+http://pkg0.ydx.freebsd.org/FreeBSD:11:amd64/quarterly",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    fingerprints    : "/usr/share/keys/pkg"
  }
  isc: { 
    url             : "pkg+http://pkg0.isc.freebsd.org/FreeBSD:11:amd64/quarterly",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    fingerprints    : "/usr/share/keys/pkg"
  }
  chinafreebsd: { 
    url             : "pkg+http://pkg1.chinafreebsd.cn/FreeBSD:11:amd64/quarterly",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    fingerprints    : "/usr/share/keys/pkg"
  }


7. Install packages from any chosen source with -r
When installing binary packages, the -r option selects the source to use;
the source name is the label before the colon on the first line of the repository file, e.g. bme

#pkg install -y -r bme vim-lite
linux mysql php

Installing mysql from a tarball, and running several versions on one machine

Installing mysql from a tarball, and running several versions on one machine

bin/mysqld --initialize --user=mysql 
--basedir=/usr/local/mysql --datadir=/data/mysql
If that fails:
 bin/mysqld --defaults-file=C:\my.ini --initialize  

 Note: record the generated temporary password, e.g. YLi>7ecpe;YP as above.

 cp my-default.cnf /etc/my.cnf

bin/mysqld_safe --defaults-file=my.cnf

my.cnf
basedir=/home/free/sqls
datadir=/home/free/sqls/data
port=9999
server_id=29922
socket=/home/free/sqls/my.socket

# default to utf-8
character_set_server=utf8 
init_connect='SET NAMES utf8'

mysql --socket=xxxx

set password=password('root');
grant all privileges on *.* to root@'%' identified by 'root';

flush privileges;

grant grant option on *.* to 'root'@'%';
base64 linux

base64 usage on linux

base64 usage on linux

base64 usage on linux

Encode

base64 file > file.text

Decode

base64 -d file.text > file
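
Note: GNU base64 wraps its output at 76 columns by default; to emit a single line (handy for embedding in configs), disable wrapping:

base64 -w 0 file > file.text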