Run

yarn global add jasmine-core karma karma-chrome-launcher karma-jasmine karma-jasmine-html-reporter
karma init # accept all defaults
vim karma.conf.js

A few changes

@@ -21,8 +21,12 @@ module.exports = function (config) {

// list of files / patterns to load in the browser
files: [
+ 'src/**/*.js'
],

+ client: {
+ clearContext: false, // leave Jasmine Spec Runner output visible in browser
+ },


// list of files / patterns to exclude
@@ -37,7 +41,7 @@ module.exports = function (config) {
// test results reporter to use
// possible values: 'dots', 'progress'
// available reporters: https://npmjs.org/browse/keyword/karma-reporter
- reporters: ['progress'],
+ reporters: ['progress', 'kjhtml'],


// web server port

Write your test code; see the Jasmine documentation for the syntax

Launch

karma start

karma-jasmine-html-reporter has very few GitHub stars yet a huge weekly download count, so most of those downloads are probably transitive

All kinds of version-compatibility problems

A combination that currently works

{
"jasmine-core": "^3.5.0",
"karma": "^5.0.1",
"karma-chrome-launcher": "^3.1.0",
"karma-jasmine": "^3.1.1",
"karma-jasmine-html-reporter": "^1.5.3"
}

cnpm

No idea what cnpm does under the hood: the same command works with npm but fails with cnpm. For now I install globally with yarn and keep a karma.conf.js inside the project (which doesn't feel like the right approach)

import

The error roughly says the code is not inside a module

Most search results suggest adding plugins or invoking webpack as a preprocessor; I haven't adopted either yet

One workable approach for now is to write the files config as

[{pattern:'src/**/*.js',type:'module'}]

files

They need to include both the source code and the test code
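A sketch of such a files entry in karma.conf.js, assuming sources live under src/ and specs under spec/ (both patterns and the spec naming are illustrative, not from the original config):

```javascript
// karma.conf.js fragment: load both source and spec files as ES modules
files: [
  { pattern: 'src/**/*.js', type: 'module' },
  { pattern: 'spec/**/*.spec.js', type: 'module' }
],
```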

Repo

https://github.com/CroMarmot/karma.jasmine.demo

refs

http://karma-runner.github.io/4.0/config/files.html

http://karma-runner.github.io/4.0/intro/how-it-works.html

A generic variadic-template function

// (strange, isn't there a proper way to handle args?)

#include<iostream>
void showall() { return; }

template <typename R1 ,typename... Args>
void showall(R1 var, Args...args) {
    std::cout << var << std::endl;
    showall(args...);
}

int main(int argc, char * args[]) {
    showall(1, 2, 3, 4, 5);
    showall("gxjun","dadw","dasds");
    showall(1.0,2,"3");
    return 0;
}

Functors

#include<iostream>
#include<functional>
using namespace std;
using namespace std::placeholders;

template <typename R1 , typename R2>
struct Calc
{
    void add(R1 a) {
        cout << a << endl;
    };
    void add_1(R1 a, R1 b) {
        cout << a + b << endl;
    }
};

int main(int argc, char * args[]) {

    // member-function pointer
    void(Calc<int, double>::*fc)(int a) = &Calc<int, double >::add;
    // fc(25);
    // clearly the form above is cumbersome

    Calc < int, int> calc;
    auto fun = bind(&Calc<int, int >::add, &calc, _1);
    auto fun_2 = bind(&Calc<int, int >::add_1, &calc, _1,_2);
    fun(123);
    fun_2(12,24);
    cin.get();
    return 0;
}

refs

yield

yield*

function*

function*

demo

Just drop the code into a browser console or Node and run it

Value production and resumed execution


function* countAppleSales () {
    let saleList = [3, 7, 5]
    for (let i = 0; i < saleList.length; i++) {
        console.log('in func before yield');
        yield saleList[i];
        console.log('in func after yield');
    }
}

let appleStore = countAppleSales() // Generator { }
setInterval(()=>{
    console.log(appleStore.next());
},2000);

Nesting

function* g1() {
    yield 2;
    yield 3;
    yield 4;
}

function* g2() {
    yield 1;
    yield* g1(); // here
    yield 5;
}

const iterator = g2();

console.log(iterator.next()); // {value: 1, done: false}
console.log(iterator.next()); // {value: 2, done: false}
console.log(iterator.next()); // {value: 3, done: false}
console.log(iterator.next()); // {value: 4, done: false}
console.log(iterator.next()); // {value: 5, done: false}
console.log(iterator.next()); // {value: undefined, done: true}

Iterating over the values

function* foo() {
    yield 'a';
    yield 'b';
    yield 'c';
}

let str = '';
for (const val of foo()) { // here
    str = str + val;
}

console.log(str);

Arguments can be passed when calling the generator function, and arguments can also be passed to next()
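A small sketch of that two-way value passing (the function name and numbers are illustrative): the argument given to next(v) becomes the value of the yield expression that was paused.

```javascript
// The generator both produces values (yield total) and consumes them:
// next(v) resumes the paused yield expression with the value v.
function* adder(start) {
  let total = start;
  while (true) {
    const inc = yield total; // resumed with the argument of next()
    total += inc;
  }
}

const it = adder(10);
console.log(it.next().value);  // 10 (the first next() only starts the body)
console.log(it.next(5).value); // 15
console.log(it.next(2).value); // 17
```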

The problem

When there is console output in Chrome, a jump link appears on the right, either VM:xxx or source filename:line.

However, if you wrap the console functions or use a library such as vconsole,

the reported location may no longer be your own code as you would expect

Solutions

If it is just a simple log wrapper:

most of the solutions found look like var newlogfunction = console.log.bind(window.console)

or they turn the original call into a function returning a function, so the call becomes something like newconsole()()
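A minimal sketch of the bind approach (the '[app]' prefix is purely illustrative): because the bound function is still console.log itself, devtools attributes the output to the line that calls it rather than to a wrapper body.

```javascript
// `log` IS console.log with a pre-bound prefix argument; no wrapper
// function appears on the call stack, so the devtools source link
// points at the caller's line.
const log = console.log.bind(console, '[app]');

log('hello'); // devtools links to this line
```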

However, that does not quite match my need.

What I want is to add some debug output inside a shared utility function and have it report the caller's location.

I also looked into call-stack approaches; console.trace() works in some places, but apparently not under webpack?

There is also new Error(), but its stack field is a string, and some people really do split and parse that string.

The most reasonable approach so far is Chrome's blackbox-script feature. It does not help inside a webpack bundle, but you can blackbox e.g. vconsole so the console shows the original call site.

Following the same idea, in debug builds you can control how webpack splits chunks, so that shared code and app-specific code land in separate chunks.

References

https://gist.github.com/paulirish/c307a5a585ddbcc17242

https://stackoverflow.com/questions/9559725/extending-console-log-without-affecting-log-line

https://stackoverflow.com/questions/13815640/a-proper-wrapper-for-console-log-with-correct-line-number/32928812

https://developer.mozilla.org/en-US/docs/Web/API/Console/trace

https://developers.google.com/web/tools/chrome-devtools/javascript/reference

https://developer.chrome.com/devtools/docs/blackboxing

ENOSPC

Starting other projects failed with ENOSPC: System limit for number of file watchers reached

VS Code itself may also report "Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC)

Closing vscode fixed it every time, but I never located the problem precisely, and was it really vscode's fault?

Most advice online is simply to raise the system limit.

First check cat /proc/sys/fs/inotify/max_user_watches, usually 8192, then edit /etc/sysctl.conf and set fs.inotify.max_user_watches=524288

As someone who refuses to use NetEase Cloud Music because it won't start without sudo, I wasn't going to casually change system settings // although on Linux it really is easy to change

I first checked myself with ps -aux | grep code/code | awk '{print $2}' | xargs -I {} ls -1 /proc/{}/fd | wc; the fd count was neither tiny nor huge, but still several powers of two short of 8192

Further searching turned up the keywords anon_inode and inotify, but nothing concrete on how to inspect them

Finally I found this script

#!/bin/sh

# Get the procs sorted by the number of inotify watchers
#
# From `man find`:
# %h Leading directories of file's name (all but the last element). If the file name contains no slashes (since it
# is in the current directory) the %h specifier expands to `.'.
# %f File's name with any leading directories removed (only the last element).
lines=$(
    find /proc/*/fd \
        -lname anon_inode:inotify \
        -printf '%hinfo/%f\n' 2>/dev/null \
        \
        | xargs grep -c '^inotify' \
        | sort -n -t: -k2 -r \
)

printf "\n%10s\n" "INOTIFY"
printf "%10s\n" "WATCHER"
printf "%10s %5s %s\n" " COUNT " "PID" "CMD"
printf -- "----------------------------------------\n"
for line in $lines; do
    watcher_count=$(echo $line | sed -e 's/.*://')
    pid=$(echo $line | sed -e 's/\/proc\/\([0-9]*\)\/.*/\1/')
    cmdline=$(ps --columns 120 -o command -h -p $pid)
    printf "%8d %7d %s\n" "$watcher_count" "$pid" "$cmdline"
done

That is, it counts the lines beginning with inotify in /proc/<pid>/fdinfo/<fd>

You can also check which symbolic links under /proc/<pid>/fd point to anon_inode:inotify, i.e. /proc/<pid>/fd/<fd> -> anon_inode:inotify

Other programs use very few watchers; only idea-IU-193.5662.53/bin/fsnotifier64 and /usr/share/code/code were in the 4000 range

Then starting a node process adds another 1000+

So in the end: raise the system limit and add exclude patterns in vscode

inotify

TODO: what each field in these records means, and how to inspect a specific entry

1024 inotify wd:175 ino:a7cc6 sdev:800002 mask:fc6 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c67c0a00f48cb089

TODO: I remember the script showing vscode at 8000+ the first time I ran it. Did I misread? It hasn't reproduced since.

vscode's suggestions

One is, again, changing the system limit

The other is adding glob patterns to the files.watcherExclude setting; after adding them, the count dropped from 8000+ to 1000+ on the next start (a restart is required)
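A hypothetical settings.json fragment along those lines (the glob patterns are illustrative, not the ones I used):

```json
{
  "files.watcherExclude": {
    "**/.git/**": true,
    "**/node_modules/**": true,
    "**/dist/**": true
  }
}
```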

References

https://howchoo.com/g/m2uzodviywm/node-increase-file-watcher-system-limit

https://unix.stackexchange.com/questions/15509/whos-consuming-my-inotify-resources/426001#426001

https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers

https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html

http://man7.org/linux/man-pages/man7/inotify.7.html

Synchronous I/O multiplexing

CODE

Demo: five child processes connecting over SOCK_STREAM to local port 2000

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <wait.h>
#include <signal.h>
#include <errno.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>
#include <string.h>
#include <arpa/inet.h>

#include <poll.h>

#include <sys/epoll.h>

#include <stdbool.h> /* for `true` when compiled as C */

#define MAXBUF 256
#define CHILD 5

void child_process(void) {
    sleep(2);
    char msg[MAXBUF];
    struct sockaddr_in addr = {0};
    int sockfd,num=1;
    srandom(getpid());
    /* Create socket and connect to server */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    connect(sockfd, (struct sockaddr*)&addr, sizeof(addr));

    printf("child {%d} connected \n", getpid());
    while(true){
        int sl = (random() % 10 ) + 1;
        num++;
        sleep(sl);
        sprintf (msg, "Test message %d from client %d", num, getpid());
        write(sockfd, msg, strlen(msg)); /* Send message -> 127.0.0.1:2000 */
    }
}

void selectDemo(int sockfd){
    int fds[CHILD];
    fd_set rset;
    socklen_t maxfd=0;
    for (int i=0;i<CHILD;i++) {
        fds[i] = accept(sockfd,NULL,NULL);
        printf("fds[%d]=%d\n",i,fds[i]);
        if(fds[i] > maxfd)
            maxfd = fds[i];
    }

    while(true){
        FD_ZERO(&rset);
        for (int i = 0; i< CHILD; i++ ) {
            FD_SET(fds[i],&rset);
        }

        puts("round again");
        select(maxfd+1, &rset, NULL, NULL, NULL); // the return value is the number of ready fds, >= 0

        for(int i=0;i<CHILD;i++) {
            if (FD_ISSET(fds[i], &rset)){
                char buffer[MAXBUF];
                int n = read(fds[i], buffer, MAXBUF);
                buffer[n] = '\0';
                puts(buffer); // the parent prints what the children sent
            }
        }
    }
}

void pollDemo(int sockfd){
    struct pollfd pollfds[CHILD]; // now an array, so the count is no longer limited
    for(int i=0;i<CHILD;i++){
        pollfds[i].fd = accept(sockfd,NULL,NULL);
        pollfds[i].events = POLLIN;
    }
    sleep(1);
    while(true){
        puts("round again");
        poll(pollfds, CHILD, 50000);

        for(int i=0;i<CHILD;i++) { // but checking readiness is still a linear scan
            if (pollfds[i].revents & POLLIN){
                pollfds[i].revents = 0; // no need to re-add everything each round; just clear the ready flag
                char buffer[MAXBUF];
                int n = read(pollfds[i].fd, buffer, MAXBUF);
                buffer[n] = '\0';
                puts(buffer);
            }
        }
    }
}

void epollDemo(int sockfd){
    int epfd = epoll_create(233); // create a context in the kernel; the docs say any positive value works, the exact value is ignored
    for(int i=0;i<CHILD;i++) {
        static struct epoll_event ev; // note: static
        ev.data.fd = accept(sockfd,NULL,NULL); // other custom data values could be stored here instead
        ev.events = EPOLLIN; // the EPOLLET bit could also be set here to enable edge-triggered mode
        epoll_ctl(epfd, EPOLL_CTL_ADD, ev.data.fd, &ev); // add and remove file descriptors to/from the context using epoll_ctl
    }
    while(true){
        puts("round again");
        struct epoll_event events[CHILD];
        int nfds = epoll_wait(epfd, events, CHILD, 50000); // writes the ready descriptors into events, nfds of them

        for(int i=0;i<nfds;i++) { // only ready descriptors are iterated
            char buffer[MAXBUF];
            int n = read(events[i].data.fd, buffer, MAXBUF);
            buffer[n] = '\0';
            puts(buffer);
        }
    }
}


int main() {
    int sockfd;
    struct sockaddr_in addr;
    for(int i=0;i<CHILD;i++) {
        if(fork() == 0) {
            child_process(); // child process
            exit(0);
        }
    }
    // parent process

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof (addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(sockfd,(struct sockaddr*)&addr ,sizeof(addr));
    listen (sockfd, CHILD);

    // uncomment one of the three demos to use it
    // selectDemo(sockfd);
    // pollDemo(sockfd);
    // epollDemo(sockfd);
    return 0;
}

select

man 2 select

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);

void FD_CLR(int fd, fd_set *set); // remove fd from set
int FD_ISSET(int fd, fd_set *set); // test whether fd is in set
void FD_SET(int fd, fd_set *set); // add fd to set
void FD_ZERO(fd_set *set); // clear set

int pselect(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, const struct timespec *timeout, const sigset_t *sigmask);

nfds should be set to the highest-numbered file descriptor in any of the three sets, plus 1. The indicated file descriptors in each set are checked, up to this limit (but see BUGS).

Three independent sets of file descriptors are watched.

The file descriptors listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file).

The file descriptors in writefds will be watched to see if space is available for write (though a large write may still block).

The file descriptors in exceptfds will be watched for exceptional conditions. (For examples of some exceptional conditions, see the discussion of POLLPRI in poll(2).)

The time structures involved are defined in <sys/time.h> and look like

struct timeval {
    long tv_sec; /* seconds */
    long tv_usec; /* microseconds */
};

and

struct timespec {
    long tv_sec; /* seconds */
    long tv_nsec; /* nanoseconds */
};

We can see that sometimes several fds become ready at once,

and after every select you have to go through FD_ZERO -> FD_SET again

When it returns (ready or timed out), you loop over all fds and test each with FD_ISSET

A single process can only monitor a limited number of file descriptors; the limit is FD_SETSIZE (in <sys/select.h>), usually 1024. It can be raised by redefining the macro, or even by recompiling the kernel.

Kernel/user-space copying: select keeps a data structure holding a large number of fds, and copying it between user space and the kernel on every call is expensive.

Polling scan: i.e. the for loop + FD_ISSET

Level-triggered: if the application has not finished the I/O on an already-ready file descriptor, subsequent select calls keep reporting it to the process.

poll

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

struct pollfd {
    int fd; /* file descriptor */
    short events; /* requested events */
    short revents; /* returned events */
};

Takes an array pointer plus a length as parameters

Return value

On success, a positive number is returned; this is the number of
structures which have nonzero revents fields (in other words, those
descriptors with events or errors reported). A value of 0 indicates
that the call timed out and no file descriptors were ready. On
error, -1 is returned, and errno is set appropriately.

Level-triggered

Compared with select: it replaces the fd_set data structure with an array, removing the limit on the number of connections.

It still polls, but even with a single fd, if that fd's numeric value is large, poll is more efficient than select (which must scan every value up to nfds)

As seen above, only one initialization is needed; you only clear the revents of ready fds instead of rebuilding the whole set each time

Portability: select( ) is more portable, as some Unix systems do not support poll( )

epoll

int epoll_create(int size); // Since Linux 2.6.8, the size argument is ignored, but must be greater than zero;
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events,int maxevents, int timeout);

typedef union epoll_data {
    void *ptr;
    int fd;
    uint32_t u32;
    uint64_t u64;
} epoll_data_t;

struct epoll_event {
    uint32_t events; /* Epoll events */
    epoll_data_t data; /* User data variable */
};

With the previous two, everything is set up in user space, and each select/poll call crosses into the kernel

  • create a context in the kernel using epoll_create
  • add and remove file descriptors to/from the context using epoll_ctl
  • wait for events in the context using epoll_wait (reportedly optimized with memory mapping)

epoll_ctl takes an fd parameter, and struct epoll_event contains an epoll_data which also has an fd field

However, epoll_data is a union, so callers can store whatever they like in it, the fd above or anything else

Level-triggered (the default) and edge-triggered

LT mode: if a ready event is not fully handled, it keeps being delivered; unhandled events are put back onto the ready queue (the linked list inside the kernel) until they are processed.

ET mode: a ready event is delivered only once; anything left unhandled is processed only when some other event next becomes ready, and if no further event ever arrives, the remaining data is effectively lost.

So ET mode is considerably more efficient than LT mode, but with ET you must fully drain the data on every wakeup to avoid losing it, which demands more of the programmer.
Note: ET mode only supports non-blocking reads and writes, to guarantee data completeness.

Summary

The calls above all have signal-atomic variants, e.g. pselect and epoll_pwait

For example, epoll_pwait() is equivalent to

sigset_t origmask;

pthread_sigmask(SIG_SETMASK, &sigmask, &origmask);
ready = epoll_wait(epfd, &events, maxevents, timeout);
pthread_sigmask(SIG_SETMASK, &origmask, NULL);

Some sources say:

epoll looks best on paper, but with few and very active connections, select and poll may outperform epoll, since epoll's notification mechanism involves many callbacks

reference

man 2 listen

man 2 read

man 2 select

man 2 poll

man 7 epoll

man 2 epoll_create

man 2 epoll_ctl

http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/

https://devarea.com/linux-io-multiplexing-select-vs-poll-vs-epoll/

Using poll() instead of select()

Example: Using asynchronous I/O

Example: Nonblocking I/O and select()

The method to epoll’s madness

disable ping

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

append to /etc/sysctl.conf

net.ipv4.icmp_echo_ignore_all = 1

list running programs

ps -ef | awk '$8~/^\// {for(i=8;i<=NF;i++)printf $i" "; print "" }' | sort | uniq

port and net

lsof -i

ifconfig: check the TX bytes value

nethogs

clamav

clamscan -r -i /home/ -l /var/log/clamscan.log

run freshclam before use

clamtk for ui

related

awk

anti virus

const JSEncrypt = require('node-jsencrypt');
let encrypt;
function rsa(pwd, publicKey) {
    if (!encrypt) {
        encrypt = new JSEncrypt();
    }
    if (!pwd || typeof pwd !== 'string') {
        return '';
    }
    let newPwd = pwd;
    if (newPwd.length > 230) {
        newPwd = newPwd.substr(0, 230);
    }
    encrypt.setPublicKey(publicKey);
    let result = encrypt.encrypt(newPwd);
    let tryTimes = 0;
    while (result.length !== 344) {
        // if the encrypted string's length is not 344, the backend is guaranteed to fail to decrypt it
        result = encrypt.encrypt(newPwd);
        if (tryTimes > 10) {
            // retry at most ten times
            return '';
        }
        tryTimes += 1;
    }
    return result;
}
const pk = 'publicKey';
const pwd = 'password';
console.log(rsa(pwd,pk));

tcpdump -S -e -vv -i wlo1 host xx.xx.xx.xx

Steps

yarn add express cors protobufjs

Generate the JS

pbjs -t static-module -w es6 -o ./proto/msgProto.js ./proto/message.proto

Run the server: node index.js

Code listings

proto/message.proto

message Message {
    required string text = 1;
    required string lang = 2;
}

Server code, index.js

Note: adjust let protoFolderName = '../' as needed

// https://protobufjs.github.io/protobuf.js/

const path = require('path')
const express = require('express')
const cors = require('cors')
const app = express()
app.use(cors())
const messages = [
    {text: 'hey', lang: 'english'},
    {text: 'isänme', lang: 'tatar'},
    {text: 'hej', lang: 'swedish'}
];
let protoFolderName = '../'
app.use (function(req, res, next) {
    if (!req.is('application/octet-stream')){
        return next()
    }
    var data = [] // List of Buffer objects
    req.on('data', function(chunk) {
        data.push(chunk) // Append Buffer object
    })
    req.on('end', function() {
        if (data.length <= 0 ) return next()
        data = Buffer.concat(data) // Make one large Buffer of it
        console.log('Received buffer', data)
        req.raw = data
        next()
    })
})


let ProtoBuf = require('protobufjs')
let root = ProtoBuf.loadSync(
    path.join(__dirname,
        protoFolderName,
        'message.proto')
)

let Message = root.lookupType("Message");

app.get('/api/messages', (req, res, next)=>{
    let msg = Message.create(messages[Math.round(Math.random()*2)])
    console.log('Encode and decode: ', Message.decode(Message.encode(msg).finish()))
    console.log('Buffer we are sending: ', Message.encode(msg).finish())
    // res.send(msg.encode().toBuffer(), 'binary') // alternative
    res.send(Message.encode(msg).finish())
    // res.send(Buffer.from(msg.toArrayBuffer()), 'binary') // alternative
})

app.post('/api/messages', (req, res, next)=>{
    if (req.raw) {
        try {
            // Decode the Message
            let msg = Message.decode(req.raw)
            console.log('Received "%s" in %s', msg.text, msg.lang)
            console.log('Received :',msg);

            msg = Message.create(messages[Math.round(Math.random()*2)])
            console.log('Encode and decode: ', Message.decode(Message.encode(msg).finish()))
            console.log('Buffer we are sending: ', Message.encode(msg).finish())
            // res.send(msg.encode().toBuffer(), 'binary') // alternative
            res.send(Message.encode(msg).finish())
        } catch (err) {
            console.log('Processing failed:', err)
            next(err)
        }
    } else {
        console.log("Not binary data")
    }
})

app.all('*', (req, res)=>{
    res.status(400).send('Not supported')
})

const PORT=3001;
app.listen(PORT,()=>{
    console.log(`app listening on port ${PORT}!`);
});

Vue code

<template>
  <div class="container">
    {{ msg }}
    <button @click="postProtobuf()">
      post
    </button>
    <button @click="getProtobuf()">
      get
    </button>
    <code>
      generate js from proto: "proto2js": "pbjs -t static-module -w es6 -o ./proto/msgProto.js ./proto/message.proto",
    </code>
    <code>
      generate ts from proto: "js2ts": "pbts -o ./proto/msgProto.d.ts ./proto/msgProto.js",
    </code>
  </div>
</template>

<script>
/* eslint-disable new-cap */
import * as msgProto from '~/proto/msgProto.js'

export default {
  components: {},
  data() {
    return {
      val: process.env.baseUrl,
      msg: {}
    }
  },
  mounted() {
  },
  methods: {
    getProtobuf() {
      this.$axios.get(
        'http://127.0.0.1:3001/api/messages',
        { responseType: 'arraybuffer' }
      ).then((response) => {
        console.log('Response from the server: ', response)
        const data = new Uint8Array(response.data) // important !
        const ret = msgProto.Message.decode(data)
        console.log('Decoded message', ret)
        this.msg = ret.toJSON()
      })
    },
    postProtobuf() {
      const msg = new msgProto.Message({ text: 'yx xr', lang: 'slang' })
      const buffer = msgProto.Message.encode(msg).finish()
      console.log('de(en(msg))', msgProto.Message.decode(buffer))
      console.log('send:', msg)
      console.log('send buffer:', buffer)
      this.$axios.$post(
        'http://127.0.0.1:3001/api/messages',
        buffer,
        {
          headers: { 'Content-Type': 'application/octet-stream' },
          responseType: 'arraybuffer'
        }
      ).then((response) => {
        console.log('Response from the server: ', response)
        const data = new Uint8Array(response) // important !
        const ret = msgProto.Message.decode(data)
        console.log('Decoded message', ret)
        this.msg = ret.toJSON()
      }).catch(function(response) {
        console.log(response)
      })
    }
  }
}
</script>

<style>
.container {
  margin: 0 auto;
  min-height: 100vh;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
  text-align: center;
}

.title {
  font-family: 'Quicksand', 'Source Sans Pro', -apple-system, BlinkMacSystemFont,
  'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
  display: block;
  font-weight: 300;
  font-size: 100px;
  color: #35495e;
  letter-spacing: 1px;
}

.subtitle {
  font-weight: 300;
  font-size: 42px;
  color: #526488;
  word-spacing: 5px;
  padding-bottom: 15px;
}

.links {
  padding-top: 15px;
}
</style>

Studying the docs

Read the Chinese version when there is one :-)

Getting started

First check the versions of node and friends

node --version
v11.8.0
npm --version
6.12.0
npx --version
10.2.0

Install

npm install --global gulp-cli

npx mkdirp my-project && cd my-project

npm init && npm install --save-dev gulp

Check versions

gulp --version
CLI version: 2.2.0
Local version: 4.0.2

Create the config file: touch gulpfile.js

Write:

function defaultTask(cb) {
    // place code for your default task here
    cb();
}

exports.default = defaultTask

Run gulp and you are done

Tasks

List tasks with gulp --tasks

Multiple tasks can be combined with series() and parallel(); series runs them sequentially, parallel runs them concurrently.

Example

const { series } = require('gulp');

// The `clean` function is not exported, so it is considered a private task.
// It can still be used inside a `series()` composition.
function clean(cb) {
    // body omitted
    cb();
}

// The `build` function is exported, so it is a public task and can be invoked
// directly by the `gulp` command. It can also be used inside a `series()` composition.
function build(cb) {
    // body omitted
    cb();
}

exports.build = build;
exports.default = series(clean, build);

Compositions can be nested to any depth!

Asynchronous completion

A task may return a stream, promise, event emitter, child process, or observable; otherwise, use the callback

stream

const { src, dest } = require('gulp');

function streamTask() {
    return src('*.js')
        .pipe(dest('output'));
}

exports.default = streamTask;

promise

function promiseTask() {
    return Promise.resolve('the value is ignored');
}

exports.default = promiseTask;

event emitter

const { EventEmitter } = require('events');

function eventEmitterTask() {
    const emitter = new EventEmitter();
    // Emit has to happen async otherwise gulp isn't listening yet
    setTimeout(() => emitter.emit('finish'), 250);
    return emitter;
}

exports.default = eventEmitterTask;

child process

const { exec } = require('child_process');

function childProcessTask() {
    return exec('date');
}

exports.default = childProcessTask;

observable

const { Observable } = require('rxjs');

function observableTask() {
    return Observable.of(1, 2, 3);
}

exports.default = observableTask;

callback

function callbackError(cb) {
    // `cb()` should be called by some async work
    cb(new Error('kaboom'));
}

exports.default = callbackError;

Gulp no longer supports synchronous tasks. They often led to subtle, hard-to-debug errors, such as forgetting to return a stream from a task.
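As a sketch of one of the asynchronous alternatives (the task name is illustrative): an async function works as a task because gulp treats the implicitly returned promise as the completion signal.

```javascript
// An async-function task: resolving the implicit promise signals
// completion; a throw rejects it and fails the task.
async function build() {
  // real build work would go here
  return 'build finished';
}

// nothing above requires gulp at load time; gulp only consumes the export
exports.default = build;

build().then((msg) => console.log(msg)); // prints "build finished"
```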

Streams

Common stream usage: src, dest, pipe()

const { src, dest } = require('gulp');
const babel = require('gulp-babel');
const uglify = require('gulp-uglify');
const rename = require('gulp-rename');

exports.default = function() {
    return src('src/*.js')
        .pipe(babel())
        .pipe(src('vendor/*.js'))
        .pipe(dest('output/'))
        .pipe(uglify())
        .pipe(rename({ extname: '.min.js' }))
        .pipe(dest('output/'));
}

Globs in detail

A glob is a string of ordinary and/or wildcard characters used to match file paths. One or more globs can be used to locate files on the filesystem.

At least one match is required

The separator is always /, regardless of operating system

\\ is used for escaping, e.g. \\* matches a literal asterisk instead of acting as a wildcard

Avoid building globs with path, __dirname, or __filename

A single asterisk does not match across directory levels

A double asterisk matches any number of directory levels

Negation with !

['script/**/*.js', '!scripts/vendor/', 'scripts/vendor/react.js']

['**/*.js', '!node_modules/']

Note that how globs are written affects execution speed

Plugins

https://gulpjs.com/plugins/

They are essentially stream transformers. Plugins should always be used to transform files; anything else should be done with plain (non-plugin) Node modules or libraries.

Conditional plugin: gulp-if

File watching: watch()

const { watch, series } = require('gulp');

function clean(cb) {
    // body omitted
    cb();
}

function javascript(cb) {
    // body omitted
    cb();
}

function css(cb) {
    // body omitted
    cb();
}

// a watcher can be associated with a single task
watch('src/*.css', css);
// or with a composed task
watch('src/*.js', series(clean, javascript));

The events to monitor can be specified: 'add', 'addDir', 'change', 'unlink', 'unlinkDir', 'ready', 'error'. There is also an 'all' event, meaning every event except 'ready' and 'error'.

const { watch } = require('gulp');

// all events will be watched
watch('src/*.js', { events: 'all' }, function(cb) {
    // body omitted
    cb();
});

To run the task before the first file change, i.e. immediately after watch() is called, set the ignoreInitial option to false.

watch(...,{ignoreInitial:false},...)

Delay: { delay: 500 }

API

Vinyl

Vinyl objects (the virtual file objects)

File metadata is provided as a Node fs.Stats instance. It is the stat property of a Vinyl instance and is used internally to determine whether the Vinyl object represents a directory or a symbolic link. When writing to the filesystem, permissions and time values are synchronized from the Vinyl object's stat property.

src()

dest()

symlink()

lastRun()

series()

parallel()

watch()

task()

registry()

tree()

Vinyl

Vinyl.isVinyl()

Vinyl.isCustomProp()

References

https://www.gulpjs.com.cn/docs/getting-started/quick-start/
