Problem

When something is logged to the console in Chrome, there is a VM:xxx or source-file:line jump link on the right-hand side.

However, if you wrap the console functions, or use a library such as vconsole,

the location shown for the output may no longer be the place in your own code that you expect.

Solution

For a simple log wrapper,

most of the solutions you can find look like var newlogfunction = console.log.bind(window.console),

or they turn the original call into a function that returns a function, so the call site becomes something like newconsole()().
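For reference, a minimal sketch of that bind-style wrapper (the names are mine): because the logging function is bound rather than wrapped in another function call, DevTools still attributes the output to the caller's file:line.

// bind preserves the caller's file:line in DevTools; a plain wrapper
// function would report the wrapper's own location instead
const myLog = console.log.bind(window.console, '[app]');
myLog('hello'); // the link on the right points at this line, prefixed with [app]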

However, those approaches don't quite match what I need.

What I want is to add some debug output inside a shared, general-purpose function, and have it report the caller's location.

I also searched around call stacks; console.trace() works in some places, but apparently not inside webpack bundles?

I also found suggestions to use new Error(), but its stack field is a string, and people really do split and parse that string.
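A rough sketch of that string-splitting approach (fragile by design: the stack format is not standardized, differs between browsers, and gets mangled by bundling and minification; the helper names are mine):

// parse new Error().stack to find the caller of the logging helper
function callerLocation() {
  const stack = new Error().stack || '';
  const lines = stack.split('\n');
  // lines[0] is "Error", lines[1] is callerLocation itself,
  // lines[2] is the logging helper, lines[3] is the helper's caller
  return (lines[3] || lines[2] || '').trim();
}

function debugLog(msg) {
  console.log(msg, '@', callerLocation());
}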

A more reasonable approach so far is Chrome's blackbox-script feature. It doesn't help for code inside the webpack bundle itself, but you can blackbox e.g. vconsole, and then the link points at the original call site again.

Following the same idea, you can tell webpack to split the debug build into separate chunks, so that shared code and your own code end up in different files.

References

https://gist.github.com/paulirish/c307a5a585ddbcc17242

https://stackoverflow.com/questions/9559725/extending-console-log-without-affecting-log-line

https://stackoverflow.com/questions/13815640/a-proper-wrapper-for-console-log-with-correct-line-number/32928812

https://developer.mozilla.org/en-US/docs/Web/API/Console/trace

https://developers.google.com/web/tools/chrome-devtools/javascript/reference

https://developer.chrome.com/devtools/docs/blackboxing

ENOSPC

Starting another project fails with ENOSPC: System limit for number of file watchers reached.

VS Code may also complain on its own: "Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC).

Closing VS Code fixed it every time, but I had never pinned down the actual cause, and is it really VS Code's fault?

Most advice online is simply to raise the system limit.

First check cat /proc/sys/fs/inotify/max_user_watches (usually 8192), then edit /etc/sysctl.conf and set fs.inotify.max_user_watches=524288.

As someone who refuses to use NetEase Cloud Music because it won't start without sudo, I'm not going to casually change system settings // although Linux does make them easy to change

I first checked for myself: ps -aux | grep code/code | awk '{print $2}' | xargs -I {} ls -1 /proc/{}/fd | wc. The count was neither tiny nor huge, but still a few (base-2) orders of magnitude below 8192.

Then I searched some more and found the keywords anon_inode and inotify, but nothing that said concretely how to inspect them.

Finally I found this script:

#!/bin/sh

# Get the procs sorted by the number of inotify watchers
#
# From `man find`:
# %h Leading directories of file's name (all but the last element). If the file name contains no slashes (since it
# is in the current directory) the %h specifier expands to `.'.
# %f File's name with any leading directories removed (only the last element).
lines=$(
find /proc/*/fd \
-lname anon_inode:inotify \
-printf '%hinfo/%f\n' 2>/dev/null \
\
| xargs grep -c '^inotify' \
| sort -n -t: -k2 -r \
)

printf "\n%10s\n" "INOTIFY"
printf "%10s\n" "WATCHER"
printf "%10s %5s %s\n" " COUNT " "PID" "CMD"
printf -- "----------------------------------------\n"
for line in $lines; do
watcher_count=$(echo $line | sed -e 's/.*://')
pid=$(echo $line | sed -e 's/\/proc\/\([0-9]*\)\/.*/\1/')
cmdline=$(ps --columns 120 -o command -h -p $pid)
printf "%8d %7d %s\n" "$watcher_count" "$pid" "$cmdline"
done

In other words, it counts the lines starting with inotify in /proc/<pid>/fdinfo/<fd>.

You can also see which descriptors are inotify instances by looking for symbolic links of the form /proc/<pid>/fd/<fd> -> anon_inode:inotify.

It turned out most programs use very few watchers; only idea-IU-193.5662.53/bin/fsnotifier64 and /usr/share/code/code were in the 4000 range.

Then starting one node process added another 1000+.

So in the end it's still: raise the system limit, plus add exclude patterns in VS Code.

inotify

TODO: what these records mean, and how to inspect the specific files

1024 inotify wd:175 ino:a7cc6 sdev:800002 mask:fc6 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c67c0a00f48cb089

TODO: I remember the script showing vscode at 8000+ the first time I ran it. Did I misread? Either way I haven't reproduced it since.

VS Code's suggestions

One is, again, to raise the system limit.

The other is to add glob patterns to the files.watcherExclude setting; after adding them, the count on the next start seemed to drop from 8000+ to 1000+ (a restart is required).
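For illustration, the kind of settings.json entry meant here (the globs are just examples, adjust per project):

// .vscode/settings.json (or the user settings)
{
  "files.watcherExclude": {
    "**/.git/objects/**": true,
    "**/node_modules/**": true,
    "**/dist/**": true
  }
}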

References

https://howchoo.com/g/m2uzodviywm/node-increase-file-watcher-system-limit

https://unix.stackexchange.com/questions/15509/whos-consuming-my-inotify-resources/426001#426001

https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers

https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html

http://man7.org/linux/man-pages/man7/inotify.7.html

Synchronous I/O multiplexing

CODE

A demo: five child processes connecting to local port 2000 with SOCK_STREAM

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <wait.h>
#include <signal.h>
#include <errno.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>
#include <string.h>
#include <arpa/inet.h>

#include <poll.h>

#include <sys/epoll.h>
#include <stdbool.h> /* for `true` in the while loops */

#define MAXBUF 256
#define CHILD 5

void child_process(void) {
    sleep(2);
    char msg[MAXBUF];
    struct sockaddr_in addr = {0};
    int sockfd, num = 1;
    srandom(getpid());
    /* Create socket and connect to server */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    connect(sockfd, (struct sockaddr*)&addr, sizeof(addr));

    printf("child {%d} connected \n", getpid());
    while (true) {
        int sl = (random() % 10) + 1;
        num++;
        sleep(sl);
        sprintf(msg, "Test message %d from client %d", num, getpid());
        write(sockfd, msg, strlen(msg)); /* Send message -> 127.0.0.1:2000 */
    }
}

void selectDemo(int sockfd) {
    int fds[CHILD];
    fd_set rset;
    int maxfd = 0;
    for (int i = 0; i < CHILD; i++) {
        fds[i] = accept(sockfd, NULL, NULL);
        printf("fds[%d]=%d\n", i, fds[i]);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }

    while (true) {
        FD_ZERO(&rset);
        for (int i = 0; i < CHILD; i++) {
            FD_SET(fds[i], &rset);
        }

        puts("round again");
        select(maxfd + 1, &rset, NULL, NULL, NULL); // the return value is the number of ready fds, >= 0

        for (int i = 0; i < CHILD; i++) {
            if (FD_ISSET(fds[i], &rset)) {
                char buffer[MAXBUF];
                int n = read(fds[i], buffer, MAXBUF - 1);
                buffer[n] = '\0';
                puts(buffer); // the parent prints whatever the children sent
            }
        }
    }
}

void pollDemo(int sockfd) {
    struct pollfd pollfds[CHILD]; // now an array, so the count is no longer limited
    for (int i = 0; i < CHILD; i++) {
        pollfds[i].fd = accept(sockfd, NULL, NULL);
        pollfds[i].events = POLLIN;
    }
    sleep(1);
    while (true) {
        puts("round again");
        poll(pollfds, CHILD, 50000);

        for (int i = 0; i < CHILD; i++) { // checking readiness is still a linear scan
            if (pollfds[i].revents & POLLIN) {
                pollfds[i].revents = 0; // no need to rebuild the whole set each round; just reset the fds that became ready
                char buffer[MAXBUF];
                int n = read(pollfds[i].fd, buffer, MAXBUF - 1);
                buffer[n] = '\0';
                puts(buffer);
            }
        }
    }
}

void epollDemo(int sockfd) {
    int epfd = epoll_create(233); // create a context in the kernel; the docs say any positive value works, the exact size is ignored
    for (int i = 0; i < CHILD; i++) {
        static struct epoll_event ev; // note that this is static
        ev.data.fd = accept(sockfd, NULL, NULL); // data can also carry a custom value instead of the fd
        ev.events = EPOLLIN; // you could also OR in the EPOLLET bit here to enable edge-triggered mode
        epoll_ctl(epfd, EPOLL_CTL_ADD, ev.data.fd, &ev); // add and remove file descriptors to/from the context using epoll_ctl
    }
    while (true) {
        puts("round again");
        struct epoll_event events[CHILD];
        int nfds = epoll_wait(epfd, events, CHILD, 50000); // the ready fds are written into events; nfds of them

        for (int i = 0; i < nfds; i++) { // we only iterate over the ready ones
            char buffer[MAXBUF];
            int n = read(events[i].data.fd, buffer, MAXBUF - 1);
            buffer[n] = '\0';
            puts(buffer);
        }
    }
}


int main() {
    int sockfd;
    struct sockaddr_in addr;
    for (int i = 0; i < CHILD; i++) {
        if (fork() == 0) {
            child_process(); // child process
            exit(0);
        }
    }
    // parent process

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(sockfd, (struct sockaddr*)&addr, sizeof(addr));
    listen(sockfd, CHILD);

    // uncomment one of the three demos to run it
    // selectDemo(sockfd);
    // pollDemo(sockfd);
    // epollDemo(sockfd);
    return 0;
}

select

man 2 select

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);

void FD_CLR(int fd, fd_set *set);   // remove fd from the set
int  FD_ISSET(int fd, fd_set *set); // test whether fd is in the set
void FD_SET(int fd, fd_set *set);   // add fd to the set
void FD_ZERO(fd_set *set);          // clear the set

int pselect(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, const struct timespec *timeout, const sigset_t *sigmask);

nfds should be set to the highest-numbered file descriptor in any of the three sets, plus 1. The indicated file descriptors in each set are checked, up to this limit (but see BUGS).

Three independent sets of file descriptors are watched.

The file descriptors listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file).

The file descriptors in writefds will be watched to see if space is available for write (though a large write may still block).

The file descriptors in exceptfds will be watched for exceptional conditions. (For examples of some exceptional conditions, see the discussion of POLLPRI in poll(2).)

The time structures involved are defined in <sys/time.h> and look like

struct timeval {
    long tv_sec;  /* seconds */
    long tv_usec; /* microseconds */
};

and

struct timespec {
    long tv_sec;  /* seconds */
    long tv_nsec; /* nanoseconds */
};

We can see that sometimes several fds become ready at the same time,

and after every select call you have to go through the FD_ZERO -> FD_SET dance all over again.

When select returns (ready or timed out), you have to loop over all fds and test each one with FD_ISSET.

There is a hard limit on how many file descriptors a single process can monitor this way: it is set by FD_SETSIZE, usually 1024; raising it means changing the macro or even recompiling the kernel.

select.h (where FD_SETSIZE is defined)

Kernel/user-space copying: select has to pass a data structure holding a large number of fds, so copying it between user space and the kernel on every call is expensive.

Polling scan: i.e. the for loop plus FD_ISSET.

Level-triggered: if the application does not finish the I/O on an fd that was reported ready, subsequent select calls will keep reporting that fd to the process.

poll

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

struct pollfd {
    int   fd;      /* file descriptor */
    short events;  /* requested events */
    short revents; /* returned events */
};

The arguments are an array pointer plus a length.

Return value

On success, a positive number is returned; this is the number of
structures which have nonzero revents fields (in other words, those
descriptors with events or errors reported). A value of 0 indicates
that the call timed out and no file descriptors were ready. On
error, -1 is returned, and errno is set appropriately.

Level-triggered.

The difference from select: the fd_set structure is replaced by an array, so there is no limit on the number of connections.

It is still a linear scan, but in the case of a single fd whose value happens to be large, poll is more efficient than select.

As shown above, you only initialize once and reset the fds that became ready; there is no need to rebuild everything every round.

Portability: select() is more portable, as some Unix systems do not support poll().

epoll

int epoll_create(int size); // Since Linux 2.6.8, the size argument is ignored, but must be greater than zero;
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

typedef union epoll_data {
    void     *ptr;
    int       fd;
    uint32_t  u32;
    uint64_t  u64;
} epoll_data_t;

struct epoll_event {
    uint32_t     events; /* Epoll events */
    epoll_data_t data;   /* User data variable */
};

With the previous two, everything is set up in user space, and each select/poll call crosses into the kernel.

  • create a context in the kernel using epoll_create
  • add and remove file descriptors to/from the context using epoll_ctl
  • wait for events in the context using epoll_wait (reportedly with memory-mapping optimizations here)

epoll_ctl takes an fd argument, and epoll_event also contains an epoll_data, which in turn has an fd member.

However, epoll_data is a union, so the caller can store whatever is convenient there: the fd, or any of the other members (ptr/u32/u64).

Level-triggered (the default) and edge-triggered

LT mode: if a ready event is not fully handled in one pass, it keeps being reported; the unfinished event is put back on the ready queue (the linked list inside the kernel) until it has been dealt with.

ET mode: a ready event is reported only once; whatever was not handled will only be processed the next time some other event becomes ready, and if no further event ever arrives, the remaining data is effectively lost.

So ET mode is considerably more efficient than LT mode, but with ET you must make sure each pass handles the data completely or you lose it, which puts a higher burden on the programmer.
Note: ET mode should only be used with non-blocking reads/writes, to keep the data intact.

Summary

All of the calls above have signal-atomic variants, e.g. pselect and epoll_pwait.

For example, epoll_pwait() is equivalent to:

sigset_t origmask;

pthread_sigmask(SIG_SETMASK, &sigmask, &origmask);
ready = epoll_wait(epfd, &events, maxevents, timeout);
pthread_sigmask(SIG_SETMASK, &origmask, NULL);

Some sources say:

On paper epoll performs best, but with few connections that are all very active, select and poll may actually beat epoll, since epoll's notification mechanism involves a lot of callbacks.

reference

man 2 listen

man 2 read

man 2 select

man 2 poll

man 7 epoll

man 2 epoll_create

man 2 epoll_ctl

http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/

https://devarea.com/linux-io-multiplexing-select-vs-poll-vs-epoll/

Using poll() instead of select()

Example: Using asynchronous I/O

Example: Nonblocking I/O and select()

The method to epoll’s madness

disable ping

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

Append to /etc/sysctl.conf:

net.ipv4.icmp_echo_ignore_all = 1

list running programs

ps -ef | awk '$8~/^\// {for(i=8;i<=NF;i++)printf $i" "; print "" }' | sort | uniq

port and net

lsof -i

ifconfig: check the TX bytes value

nethogs

clamav

clamscan -r -i /home/ -l /var/log/clamscan.log

run freshclam before use

clamtk for ui

Related

awk

antivirus

const JSEncrypt = require('node-jsencrypt');
let encrypt;
function rsa(pwd, publicKey) {
  if (!encrypt) {
    encrypt = new JSEncrypt();
  }
  if (!pwd || typeof pwd !== 'string') {
    return '';
  }
  let newPwd = pwd;
  if (newPwd.length > 230) {
    newPwd = newPwd.substr(0, 230);
  }
  encrypt.setPublicKey(publicKey);
  let result = encrypt.encrypt(newPwd);
  let tryTimes = 0;
  while (result.length !== 344) {
    // if the encrypted string is not 344 characters long, the backend is guaranteed to fail to decrypt it
    result = encrypt.encrypt(newPwd);
    if (tryTimes > 10) {
      // retry at most ten times
      return '';
    }
    tryTimes += 1;
  }
  return result;
}
const pk = 'publicKey';
const pwd = 'password';
console.log(rsa(pwd, pk));

tcpdump -S -e -vv -i wlo1 host xx.xx.xx.xx

Steps

yarn add protobuf cors protobufjs

Generate the JS:

pbjs -t static-module -w es6 -o ./proto/msgProto.js ./proto/message.proto

Run the server: node index.js

Code listing

proto/message.proto

message Message {
  required string text = 1;
  required string lang = 2;
}

Server code: index.js

Note: adjust let protoFolderName = '../' to point at the directory containing message.proto

// https://protobufjs.github.io/protobuf.js/

const path = require('path')
const express = require('express')
const cors = require('cors')
const app = express()
app.use(cors())
const messages = [
{text: 'hey', lang: 'english'},
{text: 'isänme', lang: 'tatar'},
{text: 'hej', lang: 'swedish'}
];
let protoFolderName = '../'
app.use (function(req, res, next) {
if (!req.is('application/octet-stream')){
return next()
}
var data = [] // List of Buffer objects
req.on('data', function(chunk) {
data.push(chunk) // Append Buffer object
})
req.on('end', function() {
if (data.length <= 0 ) return next()
data = Buffer.concat(data) // Make one large Buffer of it
console.log('Received buffer', data)
req.raw = data
next()
})
})


let ProtoBuf = require('protobufjs')
let root = ProtoBuf.loadSync(
path.join(__dirname,
protoFolderName,
'message.proto')
)

let Message = root.lookupType("Message");

app.get('/api/messages', (req, res, next)=>{
let msg = Message.create(messages[Math.round(Math.random()*2)])
console.log('Encode and decode: ', Message.decode(Message.encode(msg).finish()))
console.log('Buffer we are sending: ', Message.encode(msg).finish())
// res.send(msg.encode().toBuffer(), 'binary') // alternative
res.send(Message.encode(msg).finish())
// res.send(Buffer.from(msg.toArrayBuffer()), 'binary') // alternative
})

app.post('/api/messages', (req, res, next)=>{
if (req.raw) {
try {
// Decode the Message
let msg = Message.decode(req.raw)
console.log('Received "%s" in %s', msg.text, msg.lang)
console.log('Received :',msg);

msg = Message.create(messages[Math.round(Math.random()*2)])
console.log('Encode and decode: ', Message.decode(Message.encode(msg).finish()))
console.log('Buffer we are sending: ', Message.encode(msg).finish())
// res.send(msg.encode().toBuffer(), 'binary') // alternative
res.send(Message.encode(msg).finish())
} catch (err) {
console.log('Processing failed:', err)
next(err)
}
} else {
console.log("Not binary data")
}
})

app.all('*', (req, res)=>{
res.status(400).send('Not supported')
})

const PORT=3001;
app.listen(PORT,()=>{
console.log(`app listening on port ${PORT}!`);
});

Vue code

<template>
  <div class="container">
    {{ msg }}
    <button @click="postProtobuf()">
      post
    </button>
    <button @click="getProtobuf()">
      get
    </button>
    <code>
      proto to js: "proto2js": "pbjs -t static-module -w es6 -o ./proto/msgProto.js ./proto/message.proto",
    </code>
    <code>
      js to ts: "js2ts": "pbts -o ./proto/msgProto.d.ts ./proto/msgProto.js",
    </code>
  </div>
</template>

<script>
/* eslint-disable new-cap */
import * as msgProto from '~/proto/msgProto.js'

export default {
  components: {},
  data() {
    return {
      val: process.env.baseUrl,
      msg: {}
    }
  },
  mounted() {
  },
  methods: {
    getProtobuf() {
      this.$axios.get(
        'http://127.0.0.1:3001/api/messages',
        { responseType: 'arraybuffer' }
      ).then((response) => {
        console.log('Response from the server: ', response)
        const data = new Uint8Array(response.data) // important !
        const ret = msgProto.Message.decode(data)
        console.log('Decoded message', ret)
        this.msg = ret.toJSON()
      })
    },
    postProtobuf() {
      const msg = new msgProto.Message({ text: 'yx xr', lang: 'slang' })
      const buffer = msgProto.Message.encode(msg).finish()
      console.log('de(en(msg))', msgProto.Message.decode(buffer))
      console.log('send:', msg)
      console.log('send buffer:', buffer)
      this.$axios.$post(
        'http://127.0.0.1:3001/api/messages',
        buffer,
        {
          headers: { 'Content-Type': 'application/octet-stream' },
          responseType: 'arraybuffer'
        }
      ).then((response) => {
        console.log('Response from the server: ', response)
        const data = new Uint8Array(response) // important !
        const ret = msgProto.Message.decode(data)
        console.log('Decoded message', ret)
        this.msg = ret.toJSON()
      }).catch(function(response) {
        console.log(response)
      })
    }
  }
}
</script>

<style>
.container {
  margin: 0 auto;
  min-height: 100vh;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
  text-align: center;
}

.title {
  font-family: 'Quicksand', 'Source Sans Pro', -apple-system, BlinkMacSystemFont,
  'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
  display: block;
  font-weight: 300;
  font-size: 100px;
  color: #35495e;
  letter-spacing: 1px;
}

.subtitle {
  font-weight: 300;
  font-size: 42px;
  color: #526488;
  word-spacing: 5px;
  padding-bottom: 15px;
}

.links {
  padding-top: 15px;
}
</style>

Learning from the docs

There's a Chinese translation, so read that :-)

Getting started

First check the node and related versions:

node --version
v11.8.0
npm --version
6.12.0
npx --version
10.2.0

Install

npm install --global gulp-cli

npx mkdirp my-project && cd my-project

npm init && npm install --save-dev gulp

Check versions

gulp --version
CLI version: 2.2.0
Local version: 4.0.2

Create the config file: touch gulpfile.js

Write:

function defaultTask(cb) {
  // place code for your default task here
  cb();
}

exports.default = defaultTask

Run gulp and you're done.

Tasks

List tasks: gulp --tasks

Multiple tasks can be combined with series() and parallel(); series runs them sequentially, parallel runs them concurrently.

Example

const { series } = require('gulp');

// The `clean` function is not exported, so it is considered a private task.
// It can still be used inside a `series()` composition.
function clean(cb) {
  // body omitted
  cb();
}

// The `build` function is exported, so it is a public task and can be run
// directly with the `gulp` command. It can also still be used in `series()`.
function build(cb) {
  // body omitted
  cb();
}

exports.build = build;
exports.default = series(clean, build);

Compositions can be nested to any depth!

Async completion

A task can return a stream, promise, event emitter, child process or observable; otherwise, use the callback.

stream

const { src, dest } = require('gulp');

function streamTask() {
  return src('*.js')
    .pipe(dest('output'));
}

exports.default = streamTask;

promise

function promiseTask() {
  return Promise.resolve('the value is ignored');
}

exports.default = promiseTask;

event emitter

const { EventEmitter } = require('events');

function eventEmitterTask() {
  const emitter = new EventEmitter();
  // Emit has to happen async otherwise gulp isn't listening yet
  setTimeout(() => emitter.emit('finish'), 250);
  return emitter;
}

exports.default = eventEmitterTask;

child process

const { exec } = require('child_process');

function childProcessTask() {
  return exec('date');
}

exports.default = childProcessTask;

observable

const { Observable } = require('rxjs');

function observableTask() {
  return Observable.of(1, 2, 3);
}

exports.default = observableTask;

callback

function callbackError(cb) {
  // `cb()` should be called by some async work
  cb(new Error('kaboom'));
}

exports.default = callbackError;

gulp no longer supports synchronous tasks, because they often led to subtle, hard-to-debug errors, such as forgetting to return the stream from a task.
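Besides the callback, an async function works as well; a small sketch:

// an async function signals completion when its returned promise resolves
async function asyncTask() {
  // await some asynchronous work here
}

exports.default = asyncTask;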

Streams

Typical stream usage: src(), dest(), pipe()

const { src, dest } = require('gulp');
const babel = require('gulp-babel');
const uglify = require('gulp-uglify');
const rename = require('gulp-rename');

exports.default = function() {
  return src('src/*.js')
    .pipe(babel())
    .pipe(src('vendor/*.js'))
    .pipe(dest('output/'))
    .pipe(uglify())
    .pipe(rename({ extname: '.min.js' }))
    .pipe(dest('output/'));
}

Globs explained

A glob is a string of literal and/or wildcard characters used to match file paths; one or more globs can be used to locate files on the file system.

There must be at least one match.

The separator is always /, regardless of operating system.

\\ is used for escaping, e.g. \\* matches a literal asterisk rather than acting as a wildcard.

Avoid building globs with path, __dirname or __filename.

A single * does not match across directory levels.

A double ** matches any number of directory levels.

Negation with !

['scripts/**/*.js', '!scripts/vendor/', 'scripts/vendor/react.js']

['**/*.js', '!node_modules/']

Note that how you write and order the globs affects performance.
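A small sketch combining the rules above (the paths are illustrative; negations remove matches produced by the globs before them):

const { src, dest } = require('gulp');

// ** crosses directory levels; the ! entry strips node_modules matches
// that the first glob would otherwise pick up
exports.default = function() {
  return src(['**/*.js', '!node_modules/**'])
    .pipe(dest('output/'));
}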

Plugins

https://gulpjs.com/plugins/

They are essentially stream transformers. A plugin should always transform files; everything else should be done with plain (non-plugin) Node modules or libraries.

Conditional plugin: gulp-if
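The usual gulp-if pattern looks roughly like this (assuming gulp-if and gulp-uglify are installed; the condition is just an example):

const { src, dest } = require('gulp');
const gulpif = require('gulp-if');
const uglify = require('gulp-uglify');

const isProd = process.env.NODE_ENV === 'production';

// uglify only runs when the condition is true; otherwise files pass through unchanged
exports.default = function() {
  return src('src/*.js')
    .pipe(gulpif(isProd, uglify()))
    .pipe(dest('output/'));
}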

File watching: watch

const { watch, series } = require('gulp');

function clean(cb) {
  // body omitted
  cb();
}

function javascript(cb) {
  // body omitted
  cb();
}

function css(cb) {
  // body omitted
  cb();
}

// you can associate a single task
watch('src/*.css', css);
// or a composition of tasks
watch('src/*.js', series(clean, javascript));

The events to watch can be specified: 'add', 'addDir', 'change', 'unlink', 'unlinkDir', 'ready', 'error'. There is also an 'all' event, which stands for everything except 'ready' and 'error'.

const { watch } = require('gulp');

// all events will be watched
watch('src/*.js', { events: 'all' }, function(cb) {
  // body omitted
  cb();
});

To run before the first file change (i.e. immediately after watch() is called), set the ignoreInitial option to false:

watch(..., { ignoreInitial: false }, ...)

Delay: { delay: 500 }
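Combining the two options, roughly:

const { watch } = require('gulp');

// run once right away, then debounce bursts of changes by 500 ms
watch('src/*.js', { ignoreInitial: false, delay: 500 }, function(cb) {
  // body omitted
  cb();
});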

API

Vinyl

Vinyl objects (the virtual file objects)

File metadata is provided as a Node fs.Stats instance. It is the stat property of the Vinyl instance and is used internally to determine whether the Vinyl object represents a directory or a symbolic link. When written to the file system, permissions and time values are synchronized from the Vinyl object's stat property.
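A minimal sketch of building a Vinyl object by hand (the paths and contents are made up):

const Vinyl = require('vinyl');

// a virtual file: path metadata plus contents as a Buffer
const file = new Vinyl({
  cwd: '/',
  base: '/src/',
  path: '/src/hello.js',
  contents: Buffer.from('console.log("hello");')
});

console.log(file.relative);       // 'hello.js'
console.log(Vinyl.isVinyl(file)); // true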

src()

dest()

symlink()

lastRun()

series()

parallel()

watch()

task()

registry()

tree()

Vinyl

Vinyl.isVinyl()

Vinyl.isCustomProp()

References

https://www.gulpjs.com.cn/docs/getting-started/quick-start/

What

For reasons that shall not be described, Dropbox works here, but the speed is touchingly bad.

What I need now is something that syncs across Android + Linux + iPadOS.

Documents on the iPad plus ssh covers Linux + iPadOS sync, // but Android apparently can't do that

Google Drive has the same problem as Dropbox.

Baidu Yun Pan??? Only if you don't mind your personal files one day turning into "removed in accordance with relevant laws and regulations" // though, to be fair, it now supports Linux, apparently as a browser shell, so no automatic backup/sync anyway

Then I looked at the open-source options ownCloud and nextCloud; judging by the descriptions they basically come from the same family.

Some people suggest doing it with svn/git: are you sure you enjoy typing commands on a phone and an iPad?

Getting started

I'll skip the clients; just download and install them on each platform. Let's talk about the server.

My current machine: Ubuntu 18.04, x86_64, Linux 4.15.0-65-generic, bash 4.4.20, 15925MiB RAM.

Warning: the first command drops you into root, where you have every permission, so behave yourself.

sudo -s
apt install apache2
service apache2 start
apt install mariadb-server
mysql_secure_installation
apt install php libapache2-mod-php php-mysql php-gd php-json php-mysql php-curl php-mbstring php-intl php-imagick php-xml php-zip
apt install phpmyadmin
ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf-available/phpmyadmin.conf
a2enconf phpmyadmin
a2enmod ssl
a2ensite default-ssl
service apache2 reload
mariadb
MariaDB [(none)]> CREATE DATABASE nextcloud;
MariaDB [(none)]> CREATE USER nextcloud IDENTIFIED BY 'p@ssc0de';
MariaDB [(none)]> grant usage on *.* TO nextcloud@localhost IDENTIFIED BY 'p@ssc0de';
MariaDB [(none)]> GRANT ALL privileges ON nextcloud.* TO nextcloud@localhost;
MariaDB [(none)]> FLUSH PRIVILEGES;

Visit https://localhost/phpmyadmin/

(username, password) = (nextcloud, p@ssc0de)

Download the latest zip from https://download.nextcloud.com/server/releases/nextcloud-17.0.0.zip and extract it to /var/www/html/nextcloud

chown -R www-data:www-data nextcloud/

Then open https://localhost/nextcloud, set the admin password, and fill in the database settings from above.

Connecting from a phone will be rejected as untrusted; follow the hint and edit the config in /var/www/html/nextcloud/config/config.php accordingly.

Failed attempts

Download: wget https://raw.githubusercontent.com/nextcloud/vm/master/nextcloud_install_production.sh

then sudo bash nextcloud_install_production.sh

Getting the source or a release from https://github.com/nextcloud/server

Extra nextCloud apps

Extract the release tarballs you find into /var/www/html/nextcloud/apps

and enable them under index.php/settings/apps

For example:

calendar.tar.gz
checksum.tar.gz
files_readmemd.tar.gz
richdocuments.tar.gz
spreed-7.0.2.tar.gz

References

https://docs.nextcloud.com/server/17/admin_manual/installation/source_installation.html#example-installation-on-ubuntu-18-04-lts-server

https://github.com/nextcloud/server.git

https://nextcloud.com/install/#instructions-server

https://askubuntu.com/questions/387062/how-to-solve-the-phpmyadmin-not-found-issue-after-upgrading-php-and-apache

https://www.youtube.com/watch?v=QXfsi0pwgYw

Summary

The main problem, in hindsight, seems to be that the snap install conflicts in one way or another with the pile of services already running locally?? Checking the status showed everything running; checking the ports showed nothing listening.

The other point is that the directory ownership has to be www-data:www-data.

And then there's creating the MySQL database and user; I'd only ever done CRUD and had long since forgotten the rest.

The clients are trivial.

emmmmmmmm it seems you can't really host this on iPadOS either… the web UI barely works there.

Docker

  1. Create a local directory /data/nextcloudserver/ for it to store data in
  2. One docker command: docker run -d -p 8080:80 -v /data/nextcloudserver/:/var/www/html --name=nextcloudserver nextcloud ; note the docker convention that the left side of the colon is the host and the right side is inside the container, so this maps both a port and a directory

With that it's running; local access is all UI clicks, nothing worth noting.

  3. Configure trusted_domains in config/config.php (the lazy way is to exec in as root, or start a busybox container to connect). Note that the syntax here uses asterisks, e.g. 192.168.*.*, not 192.168.0.0/24. Save and you're done.

Alternatively, use the official repo:

https://github.com/nextcloud/docker/blob/master under the directory .examples/docker-compose/insecure/mariadb/apache

Run docker-compose up -d

Self-signed https certificate from the official repo

Worth considering for internal use; after all, without a real CA, whoever runs the internal network can still get you XD

.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm

If you're fluent with sed you can do the substitutions with sed:

diff --git a/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/db.env b/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/db.env
index a436605..e9872a4 100644
--- a/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/db.env
+++ b/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/db.env
@@ -1,3 +1,3 @@
-MYSQL_PASSWORD=
+MYSQL_PASSWORD=123
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
diff --git a/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/docker-compose.yml b/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/docker-compose.yml
index 3d60f7e..0d9fa81 100644
--- a/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/docker-compose.yml
+++ b/.examples/docker-compose/with-nginx-proxy-self-signed-ssl/mariadb/fpm/docker-compose.yml
@@ -8,7 +8,7 @@ services:
volumes:
- db:/var/lib/mysql
environment:
- - MYSQL_ROOT_PASSWORD=
+ - MYSQL_ROOT_PASSWORD=123
env_file:
- db.env

@@ -30,7 +30,7 @@ services:
volumes:
- nextcloud:/var/www/html:ro
environment:
- - VIRTUAL_HOST=
+ - VIRTUAL_HOST=nextcloud.cromarmot.com
depends_on:
- app
networks:
@@ -59,11 +59,11 @@ services:
volumes:
- certs:/certs
environment:
- - SSL_SUBJECT=servhostname.local
- - CA_SUBJECT=my@example.com
- - SSL_KEY=/certs/servhostname.local.key
- - SSL_CSR=/certs/servhostname.local.csr
- - SSL_CERT=/certs/servhostname.local.crt
+ - SSL_SUBJECT=nextcloud.cromarmot.com
+ - CA_SUBJECT=cromarmot@example.com
+ - SSL_KEY=/certs/nextcloud.cromarmot.com.key
+ - SSL_CSR=/certs/nextcloud.cromarmot.com.csr
+ - SSL_CERT=/certs/nextcloud.cromarmot.com.crt
networks:
- proxy-tier

Then, on the machine you connect from, add a hosts entry mapping the IP to nextcloud.cromarmot.com.

Run docker-compose up -d

Removal

Run docker-compose down

Note that this only removes the containers and the network, not the volumes; remove those manually if you want them gone.

What

In short, this is about throttle and debounce.

Both exist to avoid handling rapid, repeated triggers,

for example a user clicking a button very quickly.

Say the interval is set to 1 s:

throttle means the function runs at most once per second;

debounce also enforces a minimum 1 s gap, but if the user keeps clicking continuously the function never runs at all; it only runs 1 s after the last click.

throttle:

function throttle(func, wait, options) {
  let leading = true
  let trailing = true

  if (typeof func !== 'function') {
    throw new TypeError('Expected a function')
  }
  if (isObject(options)) {
    leading = 'leading' in options ? !!options.leading : leading
    trailing = 'trailing' in options ? !!options.trailing : trailing
  }
  return debounce(func, wait, {
    leading,
    trailing,
    'maxWait': wait
  })
}

export default throttle

debounce:

function debounce(func, wait, options) {
let lastArgs,
lastThis,
maxWait,
result,
timerId,
lastCallTime

let lastInvokeTime = 0
let leading = false
let maxing = false
let trailing = true

// Bypass `requestAnimationFrame` by explicitly setting `wait=0`.
const useRAF = (!wait && wait !== 0 && typeof root.requestAnimationFrame === 'function')

if (typeof func !== 'function') {
throw new TypeError('Expected a function')
}
wait = +wait || 0
if (isObject(options)) {
leading = !!options.leading
maxing = 'maxWait' in options
maxWait = maxing ? Math.max(+options.maxWait || 0, wait) : maxWait
trailing = 'trailing' in options ? !!options.trailing : trailing
}

function invokeFunc(time) {
const args = lastArgs
const thisArg = lastThis

lastArgs = lastThis = undefined
lastInvokeTime = time
result = func.apply(thisArg, args)
return result
}

function startTimer(pendingFunc, wait) {
if (useRAF) {
root.cancelAnimationFrame(timerId)
return root.requestAnimationFrame(pendingFunc)
}
return setTimeout(pendingFunc, wait)
}

function cancelTimer(id) {
if (useRAF) {
return root.cancelAnimationFrame(id)
}
clearTimeout(id)
}

function leadingEdge(time) {
// Reset any `maxWait` timer.
lastInvokeTime = time
// Start the timer for the trailing edge.
timerId = startTimer(timerExpired, wait)
// Invoke the leading edge.
return leading ? invokeFunc(time) : result
}

function remainingWait(time) {
const timeSinceLastCall = time - lastCallTime
const timeSinceLastInvoke = time - lastInvokeTime
const timeWaiting = wait - timeSinceLastCall

return maxing
? Math.min(timeWaiting, maxWait - timeSinceLastInvoke)
: timeWaiting
}

function shouldInvoke(time) {
const timeSinceLastCall = time - lastCallTime
const timeSinceLastInvoke = time - lastInvokeTime

// Either this is the first call, activity has stopped and we're at the
// trailing edge, the system time has gone backwards and we're treating
// it as the trailing edge, or we've hit the `maxWait` limit.
return (lastCallTime === undefined || (timeSinceLastCall >= wait) ||
(timeSinceLastCall < 0) || (maxing && timeSinceLastInvoke >= maxWait))
}

function timerExpired() {
const time = Date.now()
if (shouldInvoke(time)) {
return trailingEdge(time)
}
// Restart the timer.
timerId = startTimer(timerExpired, remainingWait(time))
}

function trailingEdge(time) {
timerId = undefined

// Only invoke if we have `lastArgs` which means `func` has been
// debounced at least once.
if (trailing && lastArgs) {
return invokeFunc(time)
}
lastArgs = lastThis = undefined
return result
}

function cancel() {
if (timerId !== undefined) {
cancelTimer(timerId)
}
lastInvokeTime = 0
lastArgs = lastCallTime = lastThis = timerId = undefined
}

function flush() {
return timerId === undefined ? result : trailingEdge(Date.now())
}

function pending() {
return timerId !== undefined
}

function debounced(...args) {
const time = Date.now()
const isInvoking = shouldInvoke(time)

lastArgs = args
lastThis = this
lastCallTime = time

if (isInvoking) {
if (timerId === undefined) {
return leadingEdge(lastCallTime)
}
if (maxing) {
// Handle invocations in a tight loop.
timerId = startTimer(timerExpired, wait)
return invokeFunc(lastCallTime)
}
}
if (timerId === undefined) {
timerId = startTimer(timerExpired, wait)
}
return result
}
debounced.cancel = cancel
debounced.flush = flush
debounced.pending = pending
return debounced
}

export default debounce

The direct difference in parameters:

throttle

let leading = true
let maxing = true    // with 'maxWait': wait
let trailing = true

debounce

let leading = false
let maxing = false
let trailing = true
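In other words, judging from the source above, a throttle(fn, 1000) call behaves like this debounce call (a sketch):

import debounce from 'lodash/debounce';

const fn = () => console.log('tick');

// maxWait guarantees an invocation at least once per wait interval,
// even under a constant stream of calls, which is effectively throttling
const throttled = debounce(fn, 1000, { leading: true, trailing: true, maxWait: 1000 });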

The Vue guide has an official example of using this:

https://cn.vuejs.org/v2/guide/computed.html#%E4%BE%A6%E5%90%AC%E5%99%A8

I had previously written one that computed a debounced function inside methods, but there the returned/computed function can't be an arrow function, because the defining and calling this differ; the IDE complained, XD, though it did work.
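Roughly the pattern from the Vue guide (a sketch; creating the debounced function in created() keeps this bound to the component without arrow-function issues):

import debounce from 'lodash/debounce';

export default {
  data() {
    return { question: '', answer: '' };
  },
  created() {
    // a per-instance debounced method, so `this` is the component
    this.debouncedGetAnswer = debounce(this.getAnswer, 500);
  },
  watch: {
    question() {
      this.debouncedGetAnswer();
    }
  },
  methods: {
    getAnswer() {
      this.answer = 'thinking about "' + this.question + '"...';
    }
  }
};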

References

https://github.com/lodash/lodash/blob/master/throttle.js

https://github.com/lodash/lodash/blob/master/debounce.js
