Problem

In Chrome, console output comes with a jump link on the right, either VM:xxx or source-file:line.

However, if you wrap the console functions, or use a library such as vconsole,

the reported location may not be the place in your own code that you expect.

Solution

If it is just a simple log wrapper:

most solutions found are of the form var newlogfunction = console.log.bind(window.console),

or they turn the original call into one that returns a function which is then invoked, roughly a newconsole()() style.

However, that does not quite match my need:

inside a generic function, I want to add some debug output of parameters and have it show the caller's location.

I also looked into call-stack approaches; console.trace() works in some places, but apparently not under webpack?

Some use new Error(), but its stack field is a string, and people really do split and parse that string.

For now the sanest approach is Chrome's blackbox-script feature. It does not help inside a webpack bundle, but you can blackbox e.g. vconsole, so the link points back to the original call site.

Following the same idea, in debug builds you can control how files are chunked, letting webpack put shared code and app-specific code into separate chunks.

References

https://gist.github.com/paulirish/c307a5a585ddbcc17242

https://stackoverflow.com/questions/9559725/extending-console-log-without-affecting-log-line

https://stackoverflow.com/questions/13815640/a-proper-wrapper-for-console-log-with-correct-line-number/32928812

https://developer.mozilla.org/en-US/docs/Web/API/Console/trace

https://developers.google.com/web/tools/chrome-devtools/javascript/reference

https://developer.chrome.com/devtools/docs/blackboxing

ENOSPC

Starting another project failed with ENOSPC: System limit for number of file watchers reached.

VS Code itself may also report "Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC).

Every time, closing VS Code fixed it, but I had never located the problem precisely, nor confirmed it was really VS Code's fault.

Most advice online is simply to raise the system limit:

first check cat /proc/sys/fs/inotify/max_user_watches, usually 8192, then edit /etc/sysctl.conf and set fs.inotify.max_user_watches=524288.

As someone who refuses to use NetEase Cloud Music because it won't start without sudo, I am not about to change system settings lightly // although on Linux they really are easy to change.

I first looked for myself with ps -aux | grep code/code | awk '{print $2}' | xargs -I {} ls -1 /proc/{}/fd | wc. The usage was neither large nor small, but still several orders of magnitude (base 2) below 8192.

Further searching turned up the two keywords anon_inode and inotify, but nothing concrete on how to inspect them.

Finally I found this script:

#!/bin/sh

# Get the procs sorted by the number of inotify watchers
#
# From `man find`:
#  %h  Leading directories of file's name (all but the last element). If the file name contains no slashes (since it
#      is in the current directory) the %h specifier expands to `.'.
#  %f  File's name with any leading directories removed (only the last element).
lines=$(
  find /proc/*/fd \
    -lname anon_inode:inotify \
    -printf '%hinfo/%f\n' 2>/dev/null \
    \
    | xargs grep -c '^inotify' \
    | sort -n -t: -k2 -r \
)

printf "\n%10s\n" "INOTIFY"
printf "%10s\n" "WATCHER"
printf "%10s %5s %s\n" " COUNT " "PID" "CMD"
printf -- "----------------------------------------\n"
for line in $lines; do
  watcher_count=$(echo $line | sed -e 's/.*://')
  pid=$(echo $line | sed -e 's/\/proc\/\([0-9]*\)\/.*/\1/')
  cmdline=$(ps --columns 120 -o command -h -p $pid)
  printf "%8d %7d %s\n" "$watcher_count" "$pid" "$cmdline"
done

That is, the lines starting with inotify under /proc/<pid>/fdinfo/<fd>.

You can also see which fds are inotify instances by checking which symbolic links under /proc/<pid>/fd point to anon_inode:inotify (i.e. /proc/<pid>/fd/<fd> -> anon_inode:inotify).

It turned out every other program used very few; only idea-IU-193.5662.53/bin/fsnotifier64 and /usr/share/code/code were in the 4000 range.

Then starting a node process added another 1000+.

So in the end it is still: raise the system limit and configure VS Code's excluded files.

inotify

TODO: the meaning of these records, and which file to inspect.

1024 inotify wd:175 ino:a7cc6 sdev:800002 mask:fc6 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c67c0a00f48cb089
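Each such fdinfo record corresponds to one registered watch. A minimal C sketch of how these records come into being (add_one_watch is a made-up helper name; assumes a Linux system where the watched path exists):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

// Create an inotify instance and register one watch on `path`.
// While the fd stays open, /proc/<pid>/fdinfo/<fd> gains one
// "inotify wd:... ino:..." line per watch, and each watch counts
// against fs.inotify.max_user_watches.
// Returns the watch descriptor (>= 1), or -1 on error.
int add_one_watch(const char *path) {
    int fd = inotify_init1(IN_CLOEXEC);
    if (fd < 0)
        return -1;
    int wd = inotify_add_watch(fd, path, IN_CREATE | IN_DELETE | IN_MODIFY);
    if (wd < 0) {
        close(fd);
        return -1;
    }
    printf("watch %d live, inspect /proc/%d/fdinfo/%d\n", wd, getpid(), fd);
    return wd;
}
```

Editors like VS Code do essentially this recursively over the whole workspace, one watch per directory, which is presumably where the thousands of records come from.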

TODO: I remember the script showing vscode at 8000+ the first time I ran it; did I misread? It has not reproduced since.

VS Code's suggestions

One is, again, to change the system limit.

The other is to add glob patterns to files.watcherExclude in the settings; after adding them, a restart seemed to drop it from 8000+ to 1000+ (restart required).

References

https://howchoo.com/g/m2uzodviywm/node-increase-file-watcher-system-limit

https://unix.stackexchange.com/questions/15509/whos-consuming-my-inotify-resources/426001#426001

https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers

https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html

http://man7.org/linux/man-pages/man7/inotify.7.html

Synchronous I/O multiplexing

CODE

A demo: five child processes talking to local port 2000 over SOCK_STREAM.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <sys/select.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#include <poll.h>

#include <sys/epoll.h>

#define MAXBUF 256
#define CHILD 5

void child_process(void) {
    sleep(2);
    char msg[MAXBUF];
    struct sockaddr_in addr = {0};
    int sockfd, num = 1;
    srandom(getpid());
    /* Create socket and connect to server */
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    connect(sockfd, (struct sockaddr *)&addr, sizeof(addr));

    printf("child {%d} connected \n", getpid());
    while (true) {
        int sl = (random() % 10) + 1;
        num++;
        sleep(sl);
        sprintf(msg, "Test message %d from client %d", num, getpid());
        write(sockfd, msg, strlen(msg)); /* Send message -> 127.0.0.1:2000 */
    }
}

void selectDemo(int sockfd) {
    int fds[CHILD];
    fd_set rset;
    int maxfd = 0;
    for (int i = 0; i < CHILD; i++) {
        fds[i] = accept(sockfd, NULL, NULL);
        printf("fds[%d]=%d\n", i, fds[i]);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }

    while (true) {
        FD_ZERO(&rset);
        for (int i = 0; i < CHILD; i++) {
            FD_SET(fds[i], &rset);
        }

        puts("round again");
        select(maxfd + 1, &rset, NULL, NULL, NULL); // returns the number of ready fds (>= 0)

        for (int i = 0; i < CHILD; i++) {
            if (FD_ISSET(fds[i], &rset)) {
                char buffer[MAXBUF];
                int n = read(fds[i], buffer, MAXBUF - 1);
                if (n <= 0)
                    continue;
                buffer[n] = '\0';
                puts(buffer); // the parent prints what the child sent
            }
        }
    }
}

void pollDemo(int sockfd) {
    struct pollfd pollfds[CHILD]; // now an array, so the count is no longer limited by fd_set
    for (int i = 0; i < CHILD; i++) {
        pollfds[i].fd = accept(sockfd, NULL, NULL);
        pollfds[i].events = POLLIN;
    }
    sleep(1);
    while (true) {
        puts("round again");
        poll(pollfds, CHILD, 50000);

        for (int i = 0; i < CHILD; i++) { // but checking readiness is still a linear scan
            if (pollfds[i].revents & POLLIN) {
                pollfds[i].revents = 0; // no need to re-add everything each round; just clear the fds that fired
                char buffer[MAXBUF];
                int n = read(pollfds[i].fd, buffer, MAXBUF - 1);
                if (n <= 0)
                    continue;
                buffer[n] = '\0';
                puts(buffer);
            }
        }
    }
}

void epollDemo(int sockfd) {
    int epfd = epoll_create(233); // create a context in the kernel; the man page says any positive value works, the exact value is ignored
    for (int i = 0; i < CHILD; i++) {
        struct epoll_event ev; // no need for static here: epoll_ctl copies *ev
        ev.data.fd = accept(sockfd, NULL, NULL); // data is a union, so you could store custom data instead
        ev.events = EPOLLIN; // could also set the EPOLLET bit here to enable edge-triggered mode
        epoll_ctl(epfd, EPOLL_CTL_ADD, ev.data.fd, &ev); // add and remove file descriptors to/from the context using epoll_ctl
    }
    while (true) {
        puts("round again");
        struct epoll_event events[CHILD];
        int nfds = epoll_wait(epfd, events, CHILD, 50000); // fills events with the ready descriptors, nfds of them

        for (int i = 0; i < nfds; i++) { // only ready fds are visited
            char buffer[MAXBUF];
            int n = read(events[i].data.fd, buffer, MAXBUF - 1);
            if (n <= 0)
                continue;
            buffer[n] = '\0';
            puts(buffer);
        }
    }
}


int main() {
    int sockfd;
    struct sockaddr_in addr;
    for (int i = 0; i < CHILD; i++) {
        if (fork() == 0) {
            child_process(); // child process
            exit(0);
        }
    }
    // parent process

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2000);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(sockfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(sockfd, CHILD);

    // uncomment one of the three demos to try it
    // selectDemo(sockfd);
    // pollDemo(sockfd);
    // epollDemo(sockfd);
    return 0;
}

select

man 2 select

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);

void FD_CLR(int fd, fd_set *set); // remove fd from set
int FD_ISSET(int fd, fd_set *set); // test whether fd is in set
void FD_SET(int fd, fd_set *set); // add fd to set
void FD_ZERO(fd_set *set); // clear all fds from set

int pselect(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, const struct timespec *timeout, const sigset_t *sigmask);

nfds should be set to the highest-numbered file descriptor in any of the three sets, plus 1. The indicated file descriptors in each set are checked, up to this limit (but see BUGS).

Three independent sets of file descriptors are watched.

The file descriptors listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file).

The file descriptors in writefds will be watched to see if space is available for write (though a large write may still block).

The file descriptors in exceptfds will be watched for exceptional conditions. (For examples of some exceptional conditions, see the discussion of POLLPRI in poll(2).)

The time structures involved are defined in <sys/time.h> and look like

struct timeval {
    long tv_sec;  /* seconds */
    long tv_usec; /* microseconds */
};

and

struct timespec {
    long tv_sec;  /* seconds */
    long tv_nsec; /* nanoseconds */
};
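A quick sketch of how the timeout plugs in (wait_readable is a hypothetical helper, not part of the demo above):

```c
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>

// Wait up to `ms` milliseconds for `fd` to become readable.
// Returns 1 if readable, 0 on timeout, -1 on error.
int wait_readable(int fd, long ms) {
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(fd, &rset);
    struct timeval tv = { .tv_sec = ms / 1000,
                          .tv_usec = (ms % 1000) * 1000 }; // split ms into s + us
    // select may modify both rset and tv, so both must be rebuilt per call
    return select(fd + 1, &rset, NULL, NULL, &tv);
}
```

Passing NULL instead of &tv, as the demo above does, blocks indefinitely.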

We can see that sometimes several fds become ready at the same time,

and that after every select you have to repeat the FD_ZERO -> FD_SET pass.

When it returns (ready or timed out), you loop over all fds with FD_ISSET to find which are ready.

There is a hard limit on how many descriptors a single process can monitor, set by FD_SETSIZE, usually 1024; raising it means redefining the macro or even recompiling the kernel.

select.h
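The limit is visible directly from the type: fd_set is a fixed-size bitmap, so its capacity is baked in at compile time (fd_set_capacity is just an illustrative name):

```c
#include <sys/select.h>

// fd_set holds one bit per descriptor, FD_SETSIZE bits in total
// (1024 on glibc). Descriptors whose value is >= FD_SETSIZE cannot
// be monitored with select() without redefining the macro.
int fd_set_capacity(void) {
    return (int)(sizeof(fd_set) * 8); // equals FD_SETSIZE
}
```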

Kernel/user-space copying: select has to keep a structure holding a large number of fds, and copying it between user space and the kernel on every call is expensive.

Polling scan: i.e. the for + FD_ISSET loop.

Level-triggered: if the application does not finish the I/O on a ready descriptor, subsequent select calls will keep reporting that descriptor to the process.

poll

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

struct pollfd {
    int fd;        /* file descriptor */
    short events;  /* requested events */
    short revents; /* returned events */
};

Takes an array pointer plus a length as its parameters.

Return value:

On success, a positive number is returned; this is the number of
structures which have nonzero revents fields (in other words, those
descriptors with events or errors reported). A value of 0 indicates
that the call timed out and no file descriptors were ready. On
error, -1 is returned, and errno is set appropriately.

Level-triggered.

Difference from select: an array replaces the fd_set data structure, so there is no longer a limit on the number of connections.

It is still a scan, but in the case of a single fd whose numeric value is large, poll is more efficient than select (which must scan positions up to the highest fd number).

As shown above, only one initialization is needed; you just clear revents on the fds that fired, instead of rebuilding the whole set each round.

Portability: select() is more portable, as some Unix systems do not support poll().

epoll

int epoll_create(int size); // Since Linux 2.6.8, the size argument is ignored, but must be greater than zero;
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

typedef union epoll_data {
    void *ptr;
    int fd;
    uint32_t u32;
    uint64_t u64;
} epoll_data_t;

struct epoll_event {
    uint32_t events;   /* Epoll events */
    epoll_data_t data; /* User data variable */
};

With the previous two, everything is set up in user space, and each use means a select/poll call crossing into kernel mode. With epoll:

  • create a context in the kernel using epoll_create
  • add and remove file descriptors to/from the context using epoll_ctl
  • wait for events in the context using epoll_wait (reportedly optimized here with memory mapping)

epoll_ctl takes an fd parameter, and the epoll_event it registers carries an epoll_data which also has an fd field.

However, epoll_data is a union, so callers store whatever they like in it, be it the fd or, say, a pointer to their own data.
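For instance, a common pattern is to stash a pointer to a per-connection object in data.ptr and get it back verbatim from epoll_wait (struct conn and make_event here are made-up names, not part of the API):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>

// A hypothetical per-connection object.
struct conn {
    int fd;
    char name[16];
};

// Build an epoll_event whose data field carries a pointer instead of a
// bare fd; epoll stores the union verbatim and returns it unchanged.
struct epoll_event make_event(int fd, const char *name) {
    struct conn *c = malloc(sizeof(*c));
    c->fd = fd;
    strncpy(c->name, name, sizeof(c->name) - 1);
    c->name[sizeof(c->name) - 1] = '\0';

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.ptr = c; // recover it later as (struct conn *)events[i].data.ptr
    return ev;
}
```

The catch is that once you use data.ptr, the fd is no longer in the event itself, so it has to live inside your own struct.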

Level-triggered (the default) and edge-triggered

LT mode: if a ready event is not fully handled, it keeps being delivered; the unfinished event is put back on the ready queue (that linked list in the kernel) and reported again until it is dealt with.

ET mode: a ready event is delivered only once. If it was not handled completely, the remainder gets handled only when some other event next becomes ready on that fd; and if nothing ever becomes ready again, the leftover data is effectively lost to the handler.

So ET mode can be much more efficient than LT mode, but with ET you must fully process the data on every delivery so that none is lost, which puts a higher demand on whoever writes the code.
Note: ET mode should only be used with non-blocking reads and writes, to guarantee completeness.
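A sketch of what "fully process" means in ET mode (drain_fd is a made-up helper; it assumes the fd was already set O_NONBLOCK):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

// After an EPOLLET notification, keep reading until the kernel buffer is
// empty (read returns -1 with EAGAIN); stopping earlier would strand the
// leftover bytes until the next edge. Returns total bytes drained, or -1.
long drain_fd(int fd) {
    char buf[256];
    long total = 0;
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) {
            total += n;   // hand this chunk to the application here
        } else if (n == 0) {
            return total; // EOF: the peer closed the connection
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            return total; // drained: safe to go back to epoll_wait
        } else {
            return -1;    // a real error
        }
    }
}
```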

Summary

All of the functions above have signal-atomic variants, e.g. pselect, epoll_pwait.

For example, epoll_pwait() is equivalent to:

sigset_t origmask;

pthread_sigmask(SIG_SETMASK, &sigmask, &origmask);
ready = epoll_wait(epfd, &events, maxevents, timeout);
pthread_sigmask(SIG_SETMASK, &origmask, NULL);

Some places say:

epoll looks best on the surface, but with few and very active connections, select and poll may perform better than epoll, since epoll's notification mechanism involves a lot of function callbacks.

reference

man 2 listen

man 2 read

man 2 select

man 2 poll

man 7 epoll

man 2 epoll_create

man 2 epoll_ctl

http://www.ulduzsoft.com/2014/01/select-poll-epoll-practical-difference-for-system-architects/

https://devarea.com/linux-io-multiplexing-select-vs-poll-vs-epoll/

Using poll() instead of select()

Example: Using asynchronous I/O

Example: Nonblocking I/O and select()

The method to epoll’s madness

disable ping

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

append to /etc/sysctl.conf:

net.ipv4.icmp_echo_ignore_all = 1

list running program

ps -ef | awk '$8~/^\// {for(i=8;i<=NF;i++)printf $i" "; print "" }' | sort | uniq

port and net

lsof -i

ifconfig: check the TX bytes value

nethogs

clamav

clamscan -r -i /home/ -l /var/log/clamscan.log

freshclam before use

clamtk for ui

related

awk

anti virus

const JSEncrypt = require('node-jsencrypt');
let encrypt;
function rsa(pwd, publicKey) {
  if (!encrypt) {
    encrypt = new JSEncrypt();
  }
  if (!pwd || typeof pwd !== 'string') {
    return '';
  }
  let newPwd = pwd;
  if (newPwd.length > 230) {
    newPwd = newPwd.substr(0, 230);
  }
  encrypt.setPublicKey(publicKey);
  let result = encrypt.encrypt(newPwd);
  let tryTimes = 0;
  while (!result || result.length !== 344) {
    // if the encrypted string is not 344 chars, the backend is guaranteed to fail decrypting it
    result = encrypt.encrypt(newPwd);
    if (tryTimes > 10) {
      // retry at most ten times
      return '';
    }
    tryTimes += 1;
  }
  return result;
}
const pk = 'publicKey';
const pwd = 'password';
console.log(rsa(pwd, pk));

tcpdump -S -e -vv -i wlo1 host xx.xx.xx.xx