
Satish's questions

Satish
Asked: 2022-06-17 19:39:07 +0800 CST

Ceph storage: MDS reports slow metadata IOs

  • 0

I'm playing with Ceph storage in my lab. I only have one server, so I want to run all the services, such as MON, OSD, and MDS, on a single box.

I created two disks using loop devices (this server has SSDs, so the speed is really good):

root@ceph2# losetup -a
/dev/loop1: [64769]:26869770 (/root/100G-2.img)
/dev/loop0: [64769]:26869769 (/root/100G-1.img)

This is what my ceph -s output looks like:

root@ceph2# ceph -s
  cluster:
    id:     1106ae5c-e5bf-4316-8185-3e559d246ac5
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 65 pgs inactive
            Degraded data redundancy: 65 pgs undersized

  services:
    mon: 1 daemons, quorum ceph2 (age 8m)
    mgr: ceph2(active, since 9m)
    mds: 1/1 daemons up
    osd: 2 osds: 2 up (since 20m), 2 in (since 38m)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 65 pgs
    objects: 0 objects, 0 B
    usage:   11 MiB used, 198 GiB / 198 GiB avail
    pgs:     100.000% pgs not active
             65 undersized+peered

I don't know where the slow MDS IO error is coming from, and mds stat is stuck in creating:

root@ceph2# ceph mds stat
cephfs:1 {0=ceph2=up:creating}

This is what the health detail looks like:

root@ceph2# ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; Reduced data availability: 65 pgs inactive; Degraded data redundancy: 65 pgs undersized
[WRN] MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
    mds.ceph2(mds.0): 31 slow metadata IOs are blocked > 30 secs, oldest blocked for 864 secs
[WRN] PG_AVAILABILITY: Reduced data availability: 65 pgs inactive
    pg 1.0 is stuck inactive for 22m, current state undersized+peered, last acting [1]
    pg 2.0 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.2 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.3 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.4 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.5 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.6 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.7 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.8 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.c is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.d is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.e is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.f is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.10 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.11 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.12 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.13 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.14 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.15 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.16 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.17 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.18 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.19 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1a is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1b is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.0 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.2 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.3 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.4 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.5 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.6 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.7 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.9 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.c is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.d is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.e is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.f is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.10 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.11 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.12 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.13 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.14 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.15 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.16 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.17 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.18 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.19 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1a is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1b is stuck inactive for 14m, current state undersized+peered, last acting [0]
[WRN] PG_DEGRADED: Degraded data redundancy: 65 pgs undersized
    pg 1.0 is stuck undersized for 22m, current state undersized+peered, last acting [1]
    pg 2.0 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.2 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.3 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.4 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.5 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.6 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.7 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.8 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.c is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.d is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.e is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.f is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.10 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.11 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.12 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.13 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.14 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.15 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.16 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.17 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.18 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.19 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1a is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1b is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.0 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.2 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.3 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.4 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.5 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.6 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.7 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.9 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.c is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.d is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.e is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.f is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.10 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.11 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.12 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.13 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.14 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.15 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.16 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.17 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.18 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.19 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1a is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1b is stuck undersized for 14m, current state undersized+peered, last acting [0]

What is wrong here? Do you think it's because I have a single server and only 2 OSDs?
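In a single-host lab like this, the usual cause is that the pools were created with the default replicated size of 3 and the default CRUSH rule, which wants each replica on a different host; with only one host the PGs can never become active, and that in turn blocks the MDS journal writes. Below is a minimal sketch of one way to relax this for a lab-only setup (the pool names cephfs_data and cephfs_metadata are assumptions, check ceph osd pool ls for the real ones):

# inspect the replication settings of each pool
ceph osd pool ls detail

# lab only: let replicas land on different OSDs of the same host
ceph osd crush rule create-replicated replicated_osd default osd
for p in cephfs_data cephfs_metadata; do
    ceph osd pool set "$p" crush_rule replicated_osd
    ceph osd pool set "$p" size 2
    ceph osd pool set "$p" min_size 1
done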

linux storage
  • 1 answer
  • 335 Views
Satish
Asked: 2021-11-04 09:42:56 +0800 CST

Ubuntu netplan arp-ip-target issue

  • 1

I have Ubuntu 20.04 running netplan version 0.102-0ubuntu1~20.04.2, and I'm trying to configure an active-backup bond using the arp-ip-target option:

  bonds:
        bond0:
          dhcp4: no
          interfaces:
            - eno49
            - eno50
          parameters:
            mode: active-backup
            arp-ip-targets: [ 10.64.0.1 ]
            arp-interval: 3000

This is my bond output:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno49
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
ARP Polling Interval (ms): 3000
ARP IP target/s (n.n.n.n form): 10.64.0.1

Slave Interface: eno50
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 5c:b9:01:9d:ac:ad
Slave queue ID: 0

Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 5c:b9:01:9d:ac:ac
Slave queue ID: 0

To test it, I disabled the upstream switch port to see my bond fail over, but it doesn't look like it works. What else should I troubleshoot?

linux ubuntu
  • 1 answer
  • 375 Views
Satish
Asked: 2019-11-14 08:57:02 +0800 CST

grep: extract a range of numbers

  • 5

I'm looking for the dates 2019-09-XX and 2019-10-XX, but somehow my grep isn't helping. I'm sure I'm missing something here.

  Last Password Change: 2019-10-30
  Last Password Change: 2017-02-07
  Last Password Change: 2019-10-29
  Last Password Change: 2019-11-03
  Last Password Change: 2019-10-31
  Last Password Change: 2018-09-27
  Last Password Change: 2018-09-27
  Last Password Change: 2019-06-27

This is what I'm using, but it doesn't work:

grep "2019\-[09,10]\-" file 也试过grep "2019\-{09,10}\-" file

linux
  • 3 answers
  • 253 Views
Satish
Asked: 2019-11-11 18:28:45 +0800 CST

How to find the maximum limit of /proc/sys/fs/file-max

  • 1

I'm running Jenkins with a lot of jobs that need open files, so I increased the file-max limit to 3 million. Sometimes it still hits 3 million, so I want to know how far I can go. Can I set /proc/sys/fs/file-max to 10 million?

How do I know what the hard limit on file-max is?

I'm running CentOS 7.7 (3.10.x kernel).
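fs.file-max is a system-wide counter whose boot-time default is derived from the amount of RAM, so on a 64-bit kernel the practical ceiling is memory (very roughly 1 KB of kernel memory per open file) rather than the sysctl itself. A quick sketch of how to inspect current usage and raise the limit; the 10000000 figure is just the value from the question:

# current limit, and allocated / free / max file handles
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# raise it at runtime, and persist it for the next boot (file name is arbitrary)
sysctl -w fs.file-max=10000000
echo 'fs.file-max = 10000000' > /etc/sysctl.d/99-file-max.conf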

linux
  • 1 answer
  • 538 Views
Satish
Asked: 2019-11-02 10:44:05 +0800 CST

Change a date format in a file using sed or awk

  • 3

I have a file in the following format:

----------------------------------------
  Name: cust foo
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170721085748Z
----------------------------------------
  Name: cust xyz
  mail: [email protected]
  Account Lock: TRUE
  Last Password Change: 20181210131249Z
----------------------------------------
  Name: cust bar
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170412190854Z
----------------------------------------
  Name: cust abc
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20191030080405Z
----------------------------------------

I want to change the Last Password Change value to the YYYY-MM-DD format, but I'm not sure how to do it with sed or awk, or whether there is some other way. I could loop over it and use the date -d option, but I'm not sure whether there's an easier way with a regex.
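Since the timestamp is fixed-width (YYYYMMDDhhmmssZ), one possible approach is a sed capture-and-reassemble, a sketch assuming GNU sed:

sed -E '/Last Password Change/ s/([0-9]{4})([0-9]{2})([0-9]{2})[0-9]{6}Z/\1-\2-\3/' file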

linux
  • 3 answers
  • 1760 Views
Satish
Asked: 2019-10-23 18:30:57 +0800 CST

Send a command's output to /dev/null

  • 0

I have a very simple command that produces output I want to send to /dev/null, but somehow it isn't working, or I'm missing something here.

$ ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" | grep krbPasswordExpiration | tail -n1 | awk '{print $2}'
SASL/GSSAPI authentication started
SASL username: [email protected]
SASL SSF: 256
SASL data security layer installed.
20200608022954Z     <---- This is my krbPasswordExpiration value.

However, as you can see above, the SASL lines are exactly the output I want to send to /dev/null, so I tried the following, but it doesn't seem to work:

$ ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" | grep krbPasswordExpiration | tail -n1 | awk '{print $2}' 2> /dev/null

What else can I do to get rid of it?
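Those SASL lines are written by ldapsearch to stderr, so a 2> placed after awk only redirects awk's stderr. A sketch that redirects stderr on the ldapsearch stage itself (alternatively, ldapsearch -Q enables SASL quiet mode, which should also suppress them):

ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" 2>/dev/null \
    | grep krbPasswordExpiration | tail -n1 | awk '{print $2}'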

linux
  • 1 answer
  • 356 Views
Satish
Asked: 2019-09-11 07:32:39 +0800 CST

Extract lines from the bottom until a regex matches

  • 5

I have this output:

[root@linux ~]# cat /tmp/file.txt
virt-top time  11:25:14 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM   TIME    NAME
    1 R    0    0    0    0  0.0  0.0  96:02:53 instance-0000036f
    2 R    0    0    0    0  0.0  0.0  95:44:07 instance-00000372
virt-top time  11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM   TIME    NAME
    1 R    0    0    0    0  0.6 12.0  96:02:53 instance-0000036f
    2 R    0    0    0    0  0.2 12.0  95:44:08 instance-00000372

You can see it has two blocks, and I want to extract the last one (the first block has all-zero CPU values, which I don't care about). In short, I want to extract the last few lines (note: sometimes I have more than two instance-* lines, otherwise I could just use "tail -n 2"):

1 R    0    0    0    0  0.6 12.0  96:02:53 instance-0000036f
2 R    0    0    0    0  0.2 12.0  95:44:08 instance-00000372

I've tried sed/awk/grep and every approach I could think of, but nothing comes close to the desired result.
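One possible sketch: read the file bottom-up with tac, stop at the first virt-top header seen (which belongs to the last block), keep only the instance lines, and restore the original order:

tac /tmp/file.txt | awk '/^virt-top time/{exit} /instance-/{print}' | tac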

linux
  • 5 answers
  • 1262 Views
Satish
Asked: 2019-09-09 11:37:40 +0800 CST

libvirt KVM CPU/memory statistics collection

  • 3

We run virtual machines on KVM, and I'm trying to collect metrics and send them to InfluxDB + Grafana for graphing.

I can see the CPU statistics using virsh, but the time is reported in seconds. How do I convert this value into a proper % or some human-readable metric?

[root@kvm01 ~]# virsh cpu-stats --total instance-0000047a
Total:
    cpu_time     160808730.755660547 seconds
    user_time       148000.880000000 seconds
    system_time   85012531.050000000 seconds
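cpu_time is a monotonically increasing counter of seconds consumed, so a percentage only makes sense as a delta over a sampling interval. A rough sketch assuming the instance name from the question and a 10-second interval (divide by the number of vCPUs if you want 100% to mean "all vCPUs busy"):

t1=$(virsh cpu-stats --total instance-0000047a | awk '/cpu_time/{print $2}')
sleep 10
t2=$(virsh cpu-stats --total instance-0000047a | awk '/cpu_time/{print $2}')
awk -v t1="$t1" -v t2="$t2" -v interval=10 'BEGIN{printf "%.2f%% CPU\n", (t2 - t1) / interval * 100}'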
linux kvm monitoring
  • 1 answer
  • 1241 Views
Satish
Asked: 2019-08-20 13:49:20 +0800 CST

Multiple [[collectd]] sections in influx.conf

  • 1

I have the following collectd instance running in my influx.conf file and everything is fine, but now I want to set up another instance that is fully isolated from the existing one. How do I do that? Can I just add the following to the influx.conf file?

[[collectd]]
  enabled = true
  bind-address = "0.0.0.0:8096"
  database = "database-1"

[[collectd]]
  enabled = true
  bind-address = "0.0.0.0:8097"
  database = "database-2"
linux database
  • 1 answer
  • 504 Views
Satish
Asked: 2019-08-15 19:57:04 +0800 CST

rsyslog severity filter not working

  • 2

I have the following rsyslog configuration to ship logs to a remote server. The problem is that it sends a lot of INFO messages to the remote server, and I don't want that noise. I'm trying to configure a filter so that it sends logs of every severity except info.

# Ansible managed

$WorkDirectory /var/spool/rsyslog
$template RFC3164fmt,"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"

# Log shipment rsyslog target servers
$ActionQueueFileName ostack-log-01_rsyslog_container-04cb9e3a
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount 250
local7.* @172.28.1.205:514;RFC3164fmt

This is what I did, but it didn't work:

local7.*;local7.!=info @172.28.1.205:514;RFC3164fmt

My OS is CentOS 7.5 Linux.
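For what it's worth, an alternative to the classic selector syntax is a RainerScript filter, which makes the "everything from local7 except info" intent explicit. A hedged sketch, untested against this exact setup:

if $syslogfacility-text == 'local7' and $syslogseverity-text != 'info' then {
    action(type="omfwd" target="172.28.1.205" port="514" protocol="udp" template="RFC3164fmt")
}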

linux logs
  • 2 answers
  • 1122 Views
Satish
Asked: 2018-12-06 15:21:25 +0800 CST

wget: download files matching a pattern from a remote URL

  • 0

I want to download all *httpd* RPM files from a remote CentOS mirror. I'm trying the following command, but it doesn't seem to work:

[root@yum foo]# wget -r --no-parent -A "*httpd*" https://mirrors.edge.kernel.org/centos/7.5.1804/os/x86_64/Packages/

I see that it creates a directory structure, but there are no files in the directory:

[root@yum foo]# ls
mirrors.edge.kernel.org

What am I doing wrong?
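One thing that commonly bites with recursive downloads is robots.txt: wget -r honours it by default, and many mirrors disallow crawlers, so the index page is fetched but nothing is followed. A hedged sketch worth trying (-np stays below the given path, -nd flattens the directory tree, -e robots=off ignores robots.txt):

wget -r -np -nd -e robots=off -A 'httpd*.rpm' \
    https://mirrors.edge.kernel.org/centos/7.5.1804/os/x86_64/Packages/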

linux regular-expression
  • 1 answer
  • 1162 Views
Satish
Asked: 2018-10-26 09:23:49 +0800 CST

Extract fields from a file using sed or awk

  • 0

I have a bash script that collects all the hardware information, but it's missing the following memory information, so this is what I'm trying to do.

The following command gives you the status of the DIMM memory modules:

[root@Linux ~]# hpasmcli -s 'show dimm'

DIMM Configuration
------------------
Processor #:                     1
Module #:                     1
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     1
Module #:                     12
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     2
Module #:                     1
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     2
Module #:                     12
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       DIMM is degraded

I want to extract Size: and Status: and need them on a single line, as shown below.

The final output would look like the following. I could use another language like Python or Perl, but my script is written in bash, so I need something in bash. I could make it work with multiple for loops and variables, but I'd like something simple like sed/awk. How can I achieve this with sed/awk?

8192MB - Ok
8192MB - OK
8192MB - OK 
8192MB - DIMM is degraded
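One possible awk sketch: treat ':' plus the following whitespace as the field separator, remember the most recent Size value, and print it whenever a Status line appears:

hpasmcli -s 'show dimm' | awk -F': +' '/^Size/{size=$2} /^Status/{print size " - " $2}'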
linux awk
  • 2 answers
  • 718 Views
Satish
Asked: 2018-09-13 06:43:19 +0800 CST

Regex to match a fixed string pattern in a hostname

  • 0

My hostnames look like the following:

www-foo-1001-1-1.example.com

I'm writing a script that should deploy an application on hosts matching the string 1001-<any digit>-<any digit>.

Example: the script should match the following hostnames.

www-foo-1001-1-49
www-foo-1001-4-37
www-foo-1001-2-12
www-foo-1001-8-4

And ignore hostnames with these patterns.

www-foo-1001-1-2-49
www-foo-1001-1-1-49
www-foo-1001-1
www-foo-1001

It must match the pattern 1001-N-N and ignore anything else.

To give more detail on what I want to do: an if/then that returns an exit status code ($?) so I can throw an error when the hostname doesn't match the standard.
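A minimal bash sketch using the built-in [[ =~ ]] matcher, with the pattern anchored so an extra -N group or a missing group fails; the hostname value and the deploy step are placeholders:

host=www-foo-1001-1-49
re='^www-foo-1001-[0-9]+-[0-9]+($|\.)'
if [[ $host =~ $re ]]; then
    echo "hostname matches, deploying"
else
    echo "hostname does not match the standard" >&2
    exit 1
fi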

linux awk
  • 3 answers
  • 1015 Views
Satish
Asked: 2018-08-09 14:35:20 +0800 CST

Regex to format file output

  • 1

I have a file with the following content:

   foo-6-25.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 49)
    --
    foo-5-4.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 19)
    --
    foo-8-28.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 43)
    --
    foo-9-7.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 91)
    --
    foo-5-19.idmz.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 19)
    --
    foo-7-3.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 20)

I want to format it as follows: the server name, followed by the fan speed inside the () parentheses.

foo-6-25.example.com: ( 49)
foo-5-4.example.com:  ( 19)

I'm not sure how to do this with awk or any other tool.
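One possible awk sketch: remember the hostname from the lines that end in ':' and, on the following data line, print it together with the parenthesized fan reading:

awk '/:$/{host=$1} /System Board/{match($0, /\(.*\)/); print host, substr($0, RSTART, RLENGTH)}' file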

linux awk
  • 6 answers
  • 153 Views
Satish
Asked: 2018-08-04 20:30:55 +0800 CST

LXC container network speed issue

  • 2

I'm running OpenStack on LXC containers, and I've found that the network inside my LXC container is very slow, while from the host it is very fast.

Host:

[root@ostack-infra-01 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:09--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   23.1MB/s   in 0.2s

2018-08-04 00:24:09 (23.1 MB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    0m0.209s
user    0m0.008s
sys     0m0.014s

LXC container on the same host:

[root@ostack-infra-01 ~]# lxc-attach -n ostack-infra-01_neutron_server_container-fbf14420
[root@ostack-infra-01-neutron-server-container-fbf14420 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:32--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   43.4KB/s   in 1m 58s

2018-08-04 00:26:31 (37.3 KB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    1m59.121s
user    0m0.002s
sys     0m0.361s

I don't have any fancy configuration setting limits on the network, and I have other hosts that run fine at full speed. What do you think is wrong here?

kernel version Linux ostack-infra-01 3.10.0-862.3.3.el7.x86_64 #1 SMP

CentOS 7.5

linux networking
  • 1 answer
  • 1299 Views
Satish
Asked: 2018-07-26 11:47:50 +0800 CST

OpenStack live migration issue

  • 0

I have two compute nodes, node 1 and node 2, with Ceph shared storage (RBD). I'm trying to set up live migration, but it fails with the following error, and I'm not sure what's wrong.

I'm using OpenStack Pike 16.0.16.

[root@compute-01 instances]# cat /etc/libvirt/libvirtd.conf
# Ansible managed

listen_tls = 0
listen_tcp = 1
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
auth_tcp = "none"

The following error appears in nova.log.

If it's the first live migration of the VM, it works, but it raises the following error and the VM goes into Error state.

C1 ----> C2 (works the first time, but with errors)

lt] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-unplugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [req-e0cd3865-151e-4d07-8b94-3a8943dafb57 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Live migration failed.: AttributeError: 'Guest' object has no attribute 'migrate_configure_max_speed'
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Traceback (most recent call last):
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5580, in _do_live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     block_migration, migrate_data)
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6436, in live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     migrate_data)
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6944, in _live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     guest.migrate_configure_max_speed(
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] AttributeError: 'Guest' object has no attribute 'migrate_configure_max_speed'
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]
2018-07-25 17:32:41.646 2833 WARNING nova.compute.manager [req-eb9e883f-08c3-427d-89c3-cdcf012e7c8b 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-plugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance
2018-07-25 17:32:49.516 2833 WARNING nova.compute.manager [req-d70c12f9-42fc-43be-ae8e-6dd6b21b1b1f 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-plugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance

To fix the VM's error state, I have to do:

[root@ostack-infra-01-utility-container-a8dbff46 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| 4f4009ee-902d-4ee9-ae99-e9bc55267b32 | d1   | ERROR  | -          | NOSTATE     | net-vlan31=10.31.1.10 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+

nova reset-state --active 4f4009ee-902d-4ee9-ae99-e9bc55267b32

Even after a successful migration, my Horizon dashboard still shows C1.

Now the VM is running on C2, and when I try to move it back to C1 I get the following error. It looks like nova didn't clean up the files after migrating the VM, or maybe because it was in an error state it didn't clean up the previous files.

2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [req-285a7f42-99a1-47a2-86c1-79140fcfca31 040960f6067d42c2b52c3fcac9ebde6d 2349c3efbf8a4c6ba6dc3b961160c81b - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Pre live migration failed at ostack-compute-02.v1v0x.net: DestinationDiskExists_Remote: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
Traceback (most recent call last):

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
    res = self.dispatcher.dispatch(message)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
    result = func(ctxt, **new_args)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped
    function_name, call_dict, binary)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped
    return f(self, context, *args, **kw)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/utils.py", line 880, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in decorated_function
    kwargs['instance'], e, sys.exc_info())

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5507, in pre_live_migration
    migrate_data)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7072, in pre_live_migration
    raise exception.DestinationDiskExists(path=instance_dir)

DestinationDiskExists: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] Traceback (most recent call last):
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5562, in _do_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     block_migration, disk, dest, migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 745, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     disk=disk, migrate_data=migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     retry=self.retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/transport.py", line 123, in _send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     timeout=timeout, retry=retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 566, in send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     retry=retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 557, in _send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     raise result
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] DestinationDiskExists_Remote: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] Traceback (most recent call last):
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     res = self.dispatcher.dispatch(message)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return self._do_dispatch(endpoint, method, ctxt, args)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     result = func(ctxt, **new_args)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     function_name, call_dict, binary)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     self.force_reraise()
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     six.reraise(self.type_, self.value, self.tb)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return f(self, context, *args, **kw)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/utils.py", line 880, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return function(self, context, *args, **kwargs)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     kwargs['instance'], e, sys.exc_info())
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     self.force_reraise()
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     six.reraise(self.type_, self.value, self.tb)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return function(self, context, *args, **kwargs)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5507, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7072, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     raise exception.DestinationDiskExists(path=instance_dir)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] DestinationDiskExists: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:06.908 8785 WARNING nova.compute.manager [req-cda02056-75c7-463c-ac7e-2925ba2cd29c 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-unplugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
2018-07-25 19:42:07.821 8785 ERROR nova.virt.libvirt.driver [req-cda02056-75c7-463c-ac7e-2925ba2cd29c 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Live Migration failure: Domain not found: no domain with matching name 'instance-00000056': libvirtError: Domain not found: no domain with matching name 'instance-00000056'
2018-07-25 19:42:08.294 8785 WARNING nova.compute.manager [req-5d3abd38-c8a9-499d-b97f-1d9e748a19d3 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-plugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
2018-07-25 19:42:14.934 8785 WARNING nova.compute.manager [req-e9442ba9-1700-4221-a495-31d76a2b5bf0 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-plugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
linux kvm
  • 1 answer
  • 779 Views
Satish
Asked: 2018-07-07 07:53:00 +0800 CST

Linux bonding with VLAN issue

  • 3

Do you think the following configuration makes sense? Is BONDING_OPTS supported on VLAN interfaces? I want to make sure my interface fails over when the upstream device goes down.

ifcfg-bond0

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
NAME=bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"

ifcfg-bond0.10

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0.10
NAME=bond0.10
DEVICE=bond0.10
ONPARENT=yes
BOOTPROTO=dhcp
VLAN=yes
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=10.10.0.1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"
NM_CONTROLLED=no

ifcfg-bond0.20

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0.20
NAME=bond0.20
DEVICE=bond0.20
ONPARENT=yes
BOOTPROTO=dhcp
VLAN=yes
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=74.xx.xx.1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"
NM_CONTROLLED=no
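For what it's worth, the bonding driver is configured on the bond device itself, not on its VLAN children, so BONDING_OPTS in ifcfg-bond0.10/ifcfg-bond0.20 has no effect; the kernel bonding documentation also advises against mixing miimon with arp_interval on the same bond, so you would normally pick one monitoring method and put it in ifcfg-bond0 only. A hedged sketch of that layout, reusing the values from the question:

# /etc/sysconfig/network-scripts/ifcfg-bond0
NAME=bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"

# /etc/sysconfig/network-scripts/ifcfg-bond0.10 (no BONDING_OPTS on the VLAN)
NAME=bond0.10
DEVICE=bond0.10
ONPARENT=yes
BOOTPROTO=dhcp
VLAN=yes
NM_CONTROLLED=no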
linux vlan
  • 1 answer
  • 8174 Views
Satish
Asked: 2018-06-16 07:54:15 +0800 CST

systemd-networkd dhcp_hostname option

  • 2

I've configured systemd-networkd to set up my network and I've created vlan10. I want the client to send its hostname to DHCP so it gets registered in my DDNS server, so the question is: does networkd support the DHCP_HOSTNAME= option?

[root@localhost network]# cat vlan10.network
[Match]
Name=vlan10

[Network]
DHCP=yes

I have multiple VLANs, and I want to send two different VLAN hostnames to the DHCP server so it registers them in DNS.

vlan10 would send the hostname foo.vlan10.example.com

vlan20 would send the hostname foo.vlan20.example.com
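networkd has no DHCP_HOSTNAME= key, but the .network file offers an equivalent: a [DHCP] section (called [DHCPv4] in newer systemd releases) with SendHostname= and Hostname=. A hedged sketch for vlan10; the vlan20 file would be the same with its own Hostname= value:

[Match]
Name=vlan10

[Network]
DHCP=yes

[DHCP]
SendHostname=true
Hostname=foo.vlan10.example.com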

linux networking
  • 1 answer
  • 3482 Views
Satish
Asked: 2017-12-17 11:35:16 +0800 CST

sed: extract the first field and move it to a specific position

  • 2

I have this file:

10.1.1.1    www1           
10.1.1.2    www2           
10.1.1.3    www3            

I want to extract the first field (the IP address) and place it in the following position: http://www.foo.com=10.1.1.1/test.php

10.1.1.1    www1           # http://www.foo.com=10.1.1.1/test.php
10.1.1.2    www2           # http://www.foo.com=10.1.1.2/test.php
10.1.1.3    www3           # http://www.foo.com=10.1.1.3/test.php

I could do this with a for loop, but I want to do it with a sed one-liner trick.
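One possible sed one-liner: capture the leading IP address and append a comment built from the same capture at the end of the line (GNU sed with -E assumed):

sed -E 's|^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)(.*)|\1\2# http://www.foo.com=\1/test.php|' file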

linux sed
  • 2 answers
  • 1076 Views
