
Both NetworkManager and systemd-networkd were present and managing the network; they conflict, so stop and disable one of them:

systemctl stop NetworkManager
systemctl disable NetworkManager
root@cyber-aib:~/android_docker# brctl show
bridge name    bridge id        STP enabled    interfaces
docker0        8000.5ebab53304ca    no    

The interfaces column for docker0 is empty, so attach the veth manually.

root@cyber-aib:~/android_docker# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 2e:42:2f:7e:37:0b brd ff:ff:ff:ff:ff:ff
    altname enP4p65s0
    inet 192.168.2.118/24 metric 100 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 77250sec preferred_lft 77250sec
    inet6 fe80::2c42:2fff:fe7e:370b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether de:2d:49:53:8e:e4 brd ff:ff:ff:ff:ff:ff
    altname enP3p49s0
20: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 5e:ba:b5:33:04:ca brd ff:ff:ff:ff:ff:ff
    inet 172.17.10.1/24 brd 172.17.10.255 scope global docker0
       valid_lft forever preferred_lft forever
22: vethfa386c9@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 5e:ba:23:97:5c:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5cba:23ff:fe97:5c97/64 scope link 
       valid_lft forever preferred_lft forever
       
# The following line is the key step
root@cyber-aib:~/android_docker# sudo ip link set vethfa386c9 master docker0
# or: brctl addif docker0 vethfa386c9
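If several containers are affected, a small loop can attach every veth that has no master. This is a sketch that assumes docker0 is the only bridge in play; it parses `ip -o link` output, which puts each interface on one line:

```shell
#!/bin/sh
# Attach every masterless veth interface to docker0.
ip -o link show type veth | awk -F': ' '!/ master /{print $2}' |
while read -r dev; do
    dev=${dev%%@*}                        # drop the "@ifNN" peer suffix
    ip link set "$dev" master docker0
done
```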

root@cyber-aib:~/android_docker# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 2e:42:2f:7e:37:0b brd ff:ff:ff:ff:ff:ff
    altname enP4p65s0
    inet 192.168.2.118/24 metric 100 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 77226sec preferred_lft 77226sec
    inet6 fe80::2c42:2fff:fe7e:370b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether de:2d:49:53:8e:e4 brd ff:ff:ff:ff:ff:ff
    altname enP3p49s0
20: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:ba:b5:33:04:ca brd ff:ff:ff:ff:ff:ff
    inet 172.17.10.1/24 brd 172.17.10.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::5cba:b5ff:fe33:4ca/64 scope link 
       valid_lft forever preferred_lft forever
22: vethfa386c9@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 5e:ba:23:97:5c:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5cba:23ff:fe97:5c97/64 scope link 
       valid_lft forever preferred_lft forever

After running this, docker0, which had been stuck in DOWN, immediately came UP.

To make the fix permanent, rebuild the docker0 bridge and pin its subnet in Docker's config:

service docker stop
ip link set dev docker0 down  
brctl delbr docker0  
iptables -t nat -F POSTROUTING  


brctl addbr docker0  
ip addr add 172.17.10.1/24 dev docker0  
ip link set dev docker0 up  

vim /etc/docker/daemon.json
{
  "insecure-registries":["x.x.x"],
  "bip": "172.17.10.1/24"
} 


systemctl restart docker
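After the restart it is worth confirming that the bridge came back with the right subnet and that containers actually have connectivity. A quick sanity check (the busybox image and the 8.8.8.8 target are just examples):

```shell
brctl show docker0                            # interfaces column should list the veths
ip addr show docker0                          # should show 172.17.10.1/24
docker run --rm busybox ping -c 2 8.8.8.8     # basic outbound connectivity test
```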

It's alive.

A vim regex quirk worth remembering:

When searching:
\n matches a newline, \r matches a CR (carriage return = Ctrl-M = ^M)

When replacing:
\r inserts a newline, \n inserts a null byte (0x00)
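Concretely, the asymmetry looks like this in vim:

```vim
" Searching: \n matches the end-of-line, so this joins lines with commas
:%s/\n/,/
" Replacing: \r inserts a newline (\n would insert a NUL byte);
" this splits a comma-separated line back into lines
:%s/,/\r/g
" Strip DOS line endings: search for the trailing CR and delete it
:%s/\r$//
```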

key@home:~$ sudo fdisk -l
Device        Boot     Start        End    Sectors   Size Id Type
/dev/sdb1             2048    4982527    4980480   2.4G fd Linux raid autodetect
/dev/sdb2          4982528    9176831    4194304     2G fd Linux raid autodetect
/dev/sdb3          9437184 1953320351 1943883168 926.9G fd Linux raid autodetect
key@home:~$ sudo mount /dev/sdb3 /mnt
mount: /tmp/cs: unknown filesystem type "linux_raid_member".

This is a RAID member partition, so mdadm and lvm2 are required:

apt install mdadm lvm2
mdadm --assemble --run /dev/md2 /dev/sdb3
lvdisplay

--- Logical volume ---
  LV Path                /dev/vg2/volume_2
  LV Name                volume_2
  VG Name                vg2
  LV UUID                9hU40Y-3J1V-3uiF-vcQp-GoUj-UPEY-FLP0cG
  LV Write Access        read/write
  LV Creation host, time , 
  LV Status              NOT available    <<== the key point: the LV is not activated
  LV Size                926.00 GiB
  Current LE             237056
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

Command to activate the volume group:

vgchange -ay /dev/vg2

After that, lvdisplay should show LV Status as available.

Then mount it:

mount /dev/vg2/volume_2 /mnt
To inspect the RAID array:
mdadm --detail /dev/md2

To stop the RAID array:
mdadm --stop /dev/md2
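Putting the whole recovery together as one sketch (the device names /dev/sdb3 and /dev/md2 and the vg2/volume_2 names come from the output above and will differ on other disks):

```shell
#!/bin/sh
apt install mdadm lvm2
mdadm --assemble --run /dev/md2 /dev/sdb3   # --run: start even if the array is degraded
vgchange -ay vg2                            # activate all LVs in the volume group
mount /dev/vg2/volume_2 /mnt
# ...use the data, then tear down in reverse order:
umount /mnt
vgchange -an vg2
mdadm --stop /dev/md2
```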

I have a verification website from many years ago, written in ASP with MSSQL, that had been running on a Baidu Cloud server. The instance is about to expire and renewal is too expensive (honestly, I'm broke). I have no spare Windows server, so I decided to move it to my Linux server. Frankly, it was a bumpy road; it took three or four days to get everything working.

I searched all over Google/Bing/Baidu and found no recent articles on running ASP + MSSQL on Linux; everything dates from the CentOS 6 / Ubuntu 10 era ten years ago. My server runs Debian 11 with other applications on it, and I did not want to reinstall an ancient Ubuntu 10 just to host ASP/MSSQL.

First, install iASP. This too is a relic of the last century; development stopped long ago, and the final release, iasp2.1.01.tar.gz, is a decade old.

Because iASP supports at most Apache 2.0 (don't install anything newer, it will not be compatible), I chose httpd-2.0.59.tar.gz.

Step 1: Apache

This one is straightforward: extract the source and do the usual build and install
./configure && make && make install
By default it installs to /usr/local/apache2.

Step 2: Install Java

Why Java? Because iASP runs on the JVM. This step is critical and cost me the most time; I tried several versions, because the Java version you need depends on the database version you use.
The oldest MSSQL available for Ubuntu/Debian is SQL Server 2017, and the oldest compatible JDBC driver that can connect to SQL Server 2017 is sqljdbc_6.2.2.1_cht.tar.gz (mssql-jdbc-6.2.2.jre7.jar / mssql-jdbc-6.2.2.jre8.jar), so you must install Java 7 or Java 8. I started with Java 8, but once the site was running, any ASP code that called date/time functions failed; the exception was "java.lang.ClassNotFoundException: sun.io.ByteToCharConverter".
A Stack Overflow post found via Google explained it: Java 8 removed sun.io entirely, while Java 7 merely deprecated it. Installing Java 7 solved it; I chose jdk-7u80-linux-i586.tar.gz (remember to install the 32-bit build).
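For reference, unpacking and activating the JDK is just the following (the /usr install path is my choice, matching the install.sh answers below; adjust to taste):

```shell
#!/bin/sh
apt install lib32z1                            # 32-bit runtime support for the i586 JDK
tar -xzf jdk-7u80-linux-i586.tar.gz -C /usr    # creates /usr/jdk1.7.0_80
export JAVA_HOME=/usr/jdk1.7.0_80
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                  # should report 1.7.0_80
```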

Step 3: Install iASP

Extract the iasp2.1.01.tar.gz package and run install.sh, answering the prompts with:
/usr/jdk1.7.0_80/bin
/usr/local/iasp
Instant ASP native servlet interface
Apache 2.X
/usr/local/apache2/conf

Recompile mod_iasp.so:

cd /usr/local/iasp/iasp21/bin/apache/source/2.0
/usr/local/apache2/bin/apxs -i -c *.c

Edit the Apache config file
/usr/local/apache2/conf/httpd.conf
and near the end of the file change /usr/local/iasp/iasp21/bin/apache/linux/2.0/mod_iasp.so to
/usr/local/apache2/modules/mod_iasp.so

Install SQL Server 2017

Just follow the steps on Microsoft's official site; nothing difficult.
One important point: Ubuntu 18+ ships OpenSSL 1.1 by default (and so, of course, does my Debian 11), while SQL Server 2017 needs the OpenSSL 1.0 shared libraries, so they have to be pointed at manually.
First stop the mssql-server service, then create two symlinks:

sudo ln -s /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 /opt/mssql/lib/libssl.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /opt/mssql/lib/libcrypto.so

Edit the mssql-server systemd unit, /etc/systemd/system/multi-user.target.wants/mssql-server.service, and add:
[Service]
Environment="LD_LIBRARY_PATH=/opt/mssql/lib"
Then restart the mssql-server service.
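Editing the symlinked unit directly works, but package upgrades can overwrite it; a systemd drop-in override is the more durable way to set the same variable. A sketch with the same Environment line as above:

```shell
#!/bin/sh
mkdir -p /etc/systemd/system/mssql-server.service.d
cat > /etc/systemd/system/mssql-server.service.d/override.conf <<'EOF'
[Service]
Environment="LD_LIBRARY_PATH=/opt/mssql/lib"
EOF
systemctl daemon-reload
systemctl restart mssql-server
```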

Start Apache

/usr/local/apache2/bin/apachectl start

Start iASP

ASP on Linux cannot use an ODBC driver to connect to SQL Server; JDBC has to be used instead.
That means editing iASP's startup script so that the Java classpath includes mssql-jdbc-6.2.2.jre7.jar. Open /usr/local/iasp/iasp21/start-server.sh and change it as follows:

/opt/jdk1.7.0_80/bin/java -mx32768000 -classpath /opt/jdk1.7.0_80/bin/../lib/classes.zip:/opt/jdk1.7.0_80/bin/../lib/tools.jar:/usr/local/iasp/iasp21/lib/jsdk.jar:/usr/local/iasp/iasp21/lib/iasplib.jar:/usr/local/iasp/iasp21/lib/activation.jar:/usr/local/iasp/iasp21/lib/asp2j.jar:/usr/local/iasp/iasp21/lib/rjax.jar:/usr/local/iasp/iasp21/lib/ejbcorba.jar:/usr/local/iasp/iasp21/lib/jndi.jar:/usr/local/iasp/iasp21/lib/samples.jar:/usr/local/iasp/iasp21/lib/hado.jar:/usr/local/iasp/iasp21/lib/pop3.jar:/usr/local/iasp/iasp21/lib/mail.jar:/usr/local/iasp/iasp21/lib/iasp_fileup.jar:/usr/local/iasp/iasp21/lib/iasp_mail.jar:/usr/local/iasp/iasp21/lib/iasp_grid.jar:/usr/local/iasp/iasp21/lib/iasp_nntp.jar:/usr/local/iasp/iasp21/lib/iasp_exec.jar:/usr/local/iasp/iasp21/lib/iasp_inet.jar:/usr/local/iasp/iasp21/lib/iasp_upload.jar:/usr/local/iasp/iasp21/lib/iasp_http.jar:/usr/local/iasp/iasp21/lib/iasp_pop3.jar:/usr/local/iasp/iasp21/lib/iasp_chart.jar:/usr/local/iasp/iasp21/lib/iasp_image.jar:/usr/local/iasp/iasp21/lib/iasp_sock.jar:/usr/local/iasp/iasp21/lib/iasp_xmldom.jar:/usr/local/iasp/iasp21/servlets:/usr/local/iasp/iasp21/lib/rjaxADODB.jar:/opt/msSQLjdbc/mssql-jdbc-6.2.2.jre7.jar: servlet.http.HttpServer

Then start it:
/usr/local/iasp/iasp21/start-server.sh

A few problems I ran into along the way, and how I solved them:

  1. If you are also on a 64-bit system, you have to patch the source before compiling mod_iasp.so; this is an iASP bug, presumably because only 32-bit was considered at the time. In utils.c, apr_pstrcat is called without a prototype in scope, so the compiler assumes it returns int; an int is only 4 bytes on 64-bit, so casting that truncated value back to char * and running strlen on it naturally crashes.
     Reading the mod_iasp source (the beauty of open source: years later someone who needs your code can still fix it), I found that apr_strings.h declares apr_pstrcat as returning char *.
     The fix is to add #include "apr_strings.h" at the top of utils.c.
  2. Installing 32-bit Java needs 32-bit library support:
     apt install lib32z1 (the replacement for ia32-libs)
  3. response.Redirect in ASP code has no effect; use instead
     response.write "<script type='text/javascript'>location.href='URL';</script>"

My environment is a Tencent Cloud Lighthouse "worry-free" instance (1 vCPU, 2 GB RAM).
Overall speed, stability, and CPU/memory usage are all quite satisfactory:
debian11 + apache + iasp + sqlserver2017
together use just about 1 GB (the sqlserver process alone takes 800 MB =_=!)
CPU is nearly 0 (with the Tencent Cloud monitoring agent removed via /usr/local/qcloud/YunJing/uninst.sh)

root@VM-4-6-debian:~# uptime
 00:01:08 up 1 day,  5:20,  1 user,  load average: 0.00, 0.00, 0.00

Build naiveproxy's caddy with xcaddy. First install Go:

wget https://go.dev/dl/go1.22.2.linux-amd64.tar.gz
tar -zxvf go1.22.2.linux-amd64.tar.gz -C /usr/local/

cp /etc/profile /etc/profile.bak
echo export GOROOT=/usr/local/go >> /etc/profile
echo export PATH=/usr/local/go/bin:$PATH >> /etc/profile
source /etc/profile
go version

go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
~/go/bin/xcaddy build --with github.com/caddyserver/forwardproxy@caddy2=github.com/klzgrad/forwardproxy@naive

If the second command fails, try running go env -w GO111MODULE=on and retry; if it still fails, search for how to upgrade your Go version.

Request a certificate with acme.sh.

Convert the password: caddy adapt base64-encodes the basic_auth credentials into auth_credentials in config.json.

Caddyfile:

{
  order forward_proxy before file_server
}
:443, na.daehub.com {
  tls /etc/letsencrypt/live/daehub.com/fullchain.pem /etc/letsencrypt/live/daehub.com/privkey.pem
  forward_proxy {
    basic_auth user password
    hide_ip
    hide_via
    probe_resistance
  }
  file_server {
    root /usr/share/nginx/html
  }
}
caddy fmt --overwrite Caddyfile
caddy adapt Caddyfile > config.json

config.json

{
 "apps": {
   "http": {
     "servers": {
       "srv0": {
         "listen": [
           ":57418"
         ],
         "routes": [
           {
             "handle": [
               {
                 "auth_credentials": [
                                        "ZFhObGNqcHdZWE56"
                  ],
                 "handler": "forward_proxy",
                 "hide_ip": true,
                 "hide_via": true,
                 "probe_resistance": {}
               }
             ]
           },
           {
             "handle": [
               {
                 "handler": "reverse_proxy",
                 "headers": {
                   "request": {
                     "set": {
                       "Host": [
                         "{http.reverse_proxy.upstream.hostport}"
                       ],
                       "X-Forwarded-Host": [
                         "{http.request.host}"
                       ]
                     }
                   }
                 },
                 "transport": {
                   "protocol": "http",
                   "tls": {}
                 },
                 "upstreams": [
                   {
                     "dial": "www.cloudreve.org:443"
                   }
                 ]
               }
             ]
           }
         ],
         "tls_connection_policies": [
           {
             "match": {
               "sni": [
                 "1199.eu.org"
               ]
             },
             "certificate_selection": {
               "any_tag": [
                 "cert0"
               ]
             }
           }
         ],
         "automatic_https": {
           "disable": true
         }
       }
     }
   },
   "tls": {
     "certificates": {
       "load_files": [
         {
           "certificate": "/root/.acme.sh/1199.eu.org/fullchain.cer",
           "key": "/root/.acme.sh/1199.eu.org/1199.eu.org.key",
           "tags": [
             "cert0"
           ]
         }
       ]
     }
   }
 }
}
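The ZFhObGNqcHdZWE56 string in auth_credentials above is the credentials base64-encoded twice: forwardproxy stores base64("user:pass"), and Caddy's JSON config marshals that byte string as base64 again. Assuming the credentials were user:pass, the value can be reproduced with:

```shell
#!/bin/sh
inner=$(printf '%s' 'user:pass' | base64)    # dXNlcjpwYXNz
printf '%s' "$inner" | base64                # ZFhObGNqcHdZWE56
```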
ln -s /root/caddy /usr/bin/caddy

/etc/systemd/system/naive.service

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target

[Service]
Type=notify
User=root
Group=root
ExecStart=/usr/bin/caddy run --environ --config /root/config.json
ExecReload=/usr/bin/caddy reload --config /root/config.json
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

systemctl enable naive.service
systemctl start naive.service
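To verify the proxy end-to-end (hypothetical credentials and hostname; a real naiveproxy client speaks padded HTTP/2 CONNECT, but plain curl through the HTTPS proxy also works for a smoke test, since curl has supported HTTPS proxies since 7.52):

```shell
curl --proxy https://user:pass@1199.eu.org:57418 \
     -o /dev/null -w '%{http_code}\n' https://example.com
```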

If you're feeling lazy, there is also a one-click script:

wget -N https://gitlab.com/rwkgyg/naiveproxy-yg/raw/main/naiveproxy.sh && bash naiveproxy.sh