Satish's questions

Satish
Asked: 2022-06-17 19:39:07 +0800 CST

Ceph storage: MDSs report slow metadata IOs

  • 0

I am playing with Ceph storage in the lab and I have a single server, so I thought I would install all the services on one box: MON, OSD, MDS, and so on.

I created two disks using loop devices (this server has SSD disks, so the speed is very good).

root@ceph2# losetup -a
/dev/loop1: [64769]:26869770 (/root/100G-2.img)
/dev/loop0: [64769]:26869769 (/root/100G-1.img)

This is what my ceph -s output looks like:

root@ceph2# ceph -s
  cluster:
    id:     1106ae5c-e5bf-4316-8185-3e559d246ac5
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Reduced data availability: 65 pgs inactive
            Degraded data redundancy: 65 pgs undersized

  services:
    mon: 1 daemons, quorum ceph2 (age 8m)
    mgr: ceph2(active, since 9m)
    mds: 1/1 daemons up
    osd: 2 osds: 2 up (since 20m), 2 in (since 38m)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 65 pgs
    objects: 0 objects, 0 B
    usage:   11 MiB used, 198 GiB / 198 GiB avail
    pgs:     100.000% pgs not active
             65 undersized+peered

I do not know where the slow MDS metadata I/O error comes from, and the MDS status is stuck in creating:

root@ceph2# ceph mds stat
cephfs:1 {0=ceph2=up:creating}

This is what the health detail looks like:

root@ceph2# ceph health detail
HEALTH_WARN 1 MDSs report slow metadata IOs; Reduced data availability: 65 pgs inactive; Degraded data redundancy: 65 pgs undersized
[WRN] MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
    mds.ceph2(mds.0): 31 slow metadata IOs are blocked > 30 secs, oldest blocked for 864 secs
[WRN] PG_AVAILABILITY: Reduced data availability: 65 pgs inactive
    pg 1.0 is stuck inactive for 22m, current state undersized+peered, last acting [1]
    pg 2.0 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.2 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.3 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.4 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.5 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.6 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.7 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.8 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.c is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.d is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.e is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.f is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.10 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.11 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.12 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.13 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.14 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.15 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.16 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.17 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 2.18 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.19 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1a is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 2.1b is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.0 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.2 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.3 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.4 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.5 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.6 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.7 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.9 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.c is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.d is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.e is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.f is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.10 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.11 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.12 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.13 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.14 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.15 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.16 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.17 is stuck inactive for 14m, current state undersized+peered, last acting [0]
    pg 3.18 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.19 is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1a is stuck inactive for 14m, current state undersized+peered, last acting [1]
    pg 3.1b is stuck inactive for 14m, current state undersized+peered, last acting [0]
[WRN] PG_DEGRADED: Degraded data redundancy: 65 pgs undersized
    pg 1.0 is stuck undersized for 22m, current state undersized+peered, last acting [1]
    pg 2.0 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.2 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.3 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.4 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.5 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.6 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.7 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.8 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.c is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.d is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.e is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.f is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.10 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.11 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.12 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.13 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.14 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.15 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.16 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.17 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 2.18 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.19 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1a is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 2.1b is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.0 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.2 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.3 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.4 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.5 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.6 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.7 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.9 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.c is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.d is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.e is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.f is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.10 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.11 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.12 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.13 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.14 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.15 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.16 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.17 is stuck undersized for 14m, current state undersized+peered, last acting [0]
    pg 3.18 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.19 is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1a is stuck undersized for 14m, current state undersized+peered, last acting [1]
    pg 3.1b is stuck undersized for 14m, current state undersized+peered, last acting [0]

What could be wrong here? Do you think this is because I have a single server and only 2 OSDs?
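
A hedged guess at the cause, based on the output above: with a single host, Ceph's default CRUSH rule places each replica on a different host, so a pool with the default size of 3 can never go active, every PG stays undersized+peered, and that in turn blocks the MDS metadata writes. A minimal lab-only sketch of a workaround (the pool name below is an assumption; check the real names with ceph osd pool ls):

# create a CRUSH rule whose failure domain is "osd" instead of "host"
ceph osd crush rule create-replicated replicated_osd default osd

# point each pool at it and shrink replication to match the two OSDs (lab only)
ceph osd pool set cephfs_data crush_rule replicated_osd
ceph osd pool set cephfs_data size 2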

linux storage
  • 1 answer
  • 335 Views
Satish
Asked: 2021-11-04 09:42:56 +0800 CST

Ubuntu netplan arp-ip-target issue

  • 1

I have Ubuntu 20.04 running netplan version 0.102-0ubuntu1~20.04.2 and I am trying to configure an active-backup bond using the arp-ip-targets option.

  bonds:
        bond0:
          dhcp4: no
          interfaces:
            - eno49
            - eno50
          parameters:
            mode: active-backup
            arp-ip-targets: [ 10.64.0.1 ]
            arp-interval: 3000

Here is my bonding output:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eno49
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
ARP Polling Interval (ms): 3000
ARP IP target/s (n.n.n.n form): 10.64.0.1

Slave Interface: eno50
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 5c:b9:01:9d:ac:ad
Slave queue ID: 0

Slave Interface: eno49
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 5c:b9:01:9d:ac:ac
Slave queue ID: 0

To test it, I disable the upstream switch port to watch my bond fail over, but it does not seem to work. What else should I do to troubleshoot?
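
One hedged observation from the output above: ARP Polling Interval shows 3000 and the target is listed, so the driver did accept the options. A minimal troubleshooting sketch would be to watch the kernel's live view of the bond while the switch port is disabled, and to confirm the ARP target actually answers:

# watch the active slave and link states change (or not) in real time
watch -n1 cat /proc/net/bonding/bond0

# confirm what the bonding driver actually applied
cat /sys/class/net/bond0/bonding/arp_interval
cat /sys/class/net/bond0/bonding/arp_ip_target

# the ARP monitor only works if this address really responds
ping -c3 10.64.0.1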

linux ubuntu
  • 1 answer
  • 375 Views
Satish
Asked: 2019-11-14 08:57:02 +0800 CST

grep: extract a range of numbers

  • 5

I am grepping for the dates 2019-09-XX and 2019-10-XX, and somehow my grep is not helping. I am sure I am missing something here.

  Last Password Change: 2019-10-30
  Last Password Change: 2017-02-07
  Last Password Change: 2019-10-29
  Last Password Change: 2019-11-03
  Last Password Change: 2019-10-31
  Last Password Change: 2018-09-27
  Last Password Change: 2018-09-27
  Last Password Change: 2019-06-27

I am trying the following and it does not work:

grep "2019\-[09,10]\-" file também tenteigrep "2019\-{09,10}\-" file

linux
  • 3 answers
  • 253 Views
Satish
Asked: 2019-11-11 18:28:45 +0800 CST

How to find the maximum limit of /proc/sys/fs/file-max

  • 1

I am running Jenkins with many jobs that need a lot of open files, so I raised the file-max limit to 3 million. It still hits 3 million sometimes, so I am wondering how far I can go. Can I set /proc/sys/fs/file-max to 10 million?

How do I know what the hard limit of file-max is?

I am running CentOS 7.7 (kernel 3.10.x).
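
As a side note, a small sketch for watching the current pressure against the limit: /proc/sys/fs/file-nr reports three numbers on one line, the allocated handles, the allocated-but-unused handles, and the maximum:

# allocated, allocated-but-unused, maximum
cat /proc/sys/fs/file-nr

# the configured limit itself
cat /proc/sys/fs/file-max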

linux
  • 1 answer
  • 538 Views
Satish
Asked: 2019-11-02 10:44:05 +0800 CST

Change the date format with sed or awk in a file

  • 3

I have a file in the following format:

----------------------------------------
  Name: cust foo
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170721085748Z
----------------------------------------
  Name: cust xyz
  mail: [email protected]
  Account Lock: TRUE
  Last Password Change: 20181210131249Z
----------------------------------------
  Name: cust bar
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20170412190854Z
----------------------------------------
  Name: cust abc
  mail: [email protected]
  Account Lock: FALSE
  Last Password Change: 20191030080405Z
----------------------------------------

I want to change the Last Password Change date format to YYYY-MM-DD, but I am not sure how to do it with sed or awk, or whether there is some other method. I could loop and use the date -d option, but I am not sure whether there is an easier way to do it with a regex.
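
A minimal sed sketch, assuming GNU sed and that the timestamps are always YYYYMMDDhhmmssZ as in the sample: capture the date digits and drop the time-of-day part.

sed -E 's/(Last Password Change: )([0-9]{4})([0-9]{2})([0-9]{2})[0-9]{6}Z/\1\2-\3-\4/' file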

linux
  • 3 answers
  • 1760 Views
Satish
Asked: 2019-10-23 18:30:57 +0800 CST

command stdout to /dev/null

  • 0

I have a very simple command that generates output I want to send to /dev/null, but somehow it is not working, or I am missing something here.

$ ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" | grep krbPasswordExpiration | tail -n1 | awk '{print $2}'
SASL/GSSAPI authentication started
SASL username: [email protected]
SASL SSF: 256
SASL data security layer installed.
20200608022954Z     <---- This is my krbPasswordExpiration value.

But you can see the SASL lines in the command output above, and that is what I want to send to /dev/null. I tried the following, but it does not seem to work.

$ ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" | grep krbPasswordExpiration | tail -n1 | awk '{print $2}' 2> /dev/null

What other way can I get rid of it?
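
A hedged explanation with a sketch: the SASL lines are, as far as I can tell, written to stderr by ldapsearch itself, while the 2> /dev/null above is attached to awk, the last command in the pipeline, whose stderr is already quiet. Redirecting stderr on the first command should silence them:

ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" "uid=foo" 2>/dev/null \
  | grep krbPasswordExpiration | tail -n1 | awk '{print $2}'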

linux
  • 1 answer
  • 356 Views
Satish
Asked: 2019-09-11 07:32:39 +0800 CST

Extract lines from the bottom up to a regex match

  • 5

I have this output:

[root@linux ~]# cat /tmp/file.txt
virt-top time  11:25:14 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM   TIME    NAME
    1 R    0    0    0    0  0.0  0.0  96:02:53 instance-0000036f
    2 R    0    0    0    0  0.0  0.0  95:44:07 instance-00000372
virt-top time  11:25:17 Host foo.example.com x86_64 32/32CPU 1200MHz 65501MB
   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM   TIME    NAME
    1 R    0    0    0    0  0.6 12.0  96:02:53 instance-0000036f
    2 R    0    0    0    0  0.2 12.0  95:44:08 instance-00000372

You can see it has two blocks, and I want to extract the last block (the first block has all-zero CPU values, which I do not care about). In short, I want to extract the last lines shown below. (Note: sometimes I have more than two instance-* lines, otherwise I could just use "tail -n 2".)

1 R    0    0    0    0  0.6 12.0  96:02:53 instance-0000036f
2 R    0    0    0    0  0.2 12.0  95:44:08 instance-00000372

I tried sed/awk/grep in every way I could think of, but got nowhere near the desired result.
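
One possible awk sketch, assuming every block starts with a "virt-top time" header followed by a column-header line: keep refilling a buffer on each header and print whatever remains at the end.

awk '/^virt-top time/ {n=0; next}   # new block: reset the buffer
     /^ *ID +S /      {next}        # skip the column-header line
     {buf[++n]=$0}                  # collect data lines
     END {for (i=1; i<=n; i++) print buf[i]}' /tmp/file.txt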

linux
  • 5 answers
  • 1262 Views
Satish
Asked: 2019-09-09 11:37:40 +0800 CST

libvirt KVM CPU/memory stats collection

  • 3

We are running virtual machines on KVM and I am trying to collect metrics and send them to InfluxDB + Grafana for graphing.

I can see the CPU stats using virsh, but time is reported as seconds spent. How do I convert that value into a proper % usage or other human-readable metrics?

[root@kvm01 ~]# virsh cpu-stats --total instance-0000047a
Total:
    cpu_time     160808730.755660547 seconds
    user_time       148000.880000000 seconds
    system_time   85012531.050000000 seconds
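
A sketch of the usual approach, assuming a fixed sampling interval: cpu_time is a monotonically increasing counter, so its rate of change over a window gives utilization (divide by the number of vCPUs for a 0-100% per-guest scale).

dom=instance-0000047a; interval=10
t1=$(virsh cpu-stats --total "$dom" | awk '/cpu_time/ {print $2}')
sleep "$interval"
t2=$(virsh cpu-stats --total "$dom" | awk '/cpu_time/ {print $2}')
# percentage of a single CPU consumed over the window
echo "scale=2; ($t2 - $t1) / $interval * 100" | bc
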
linux kvm monitoring
  • 1 answer
  • 1241 Views
Satish
Asked: 2019-08-20 13:49:20 +0800 CST

Multiple [[collectd]] blocks in influx.conf

  • 1

I have the following collectd instance running in my influx.conf file and everything is OK, but now I want to configure another instance, fully isolated from the existing one. How do I do that? Can I do the following in the influx.conf file?

[[collectd]]
  enabled = true
  bind-address = "0.0.0.0:8096"
  database = "database-1"

[[collectd]]
  enabled = true
  bind-address = "0.0.0.0:8097"
  database = "database-2"
linux database
  • 1 answer
  • 504 Views
Satish
Asked: 2019-08-15 19:57:04 +0800 CST

rsyslog severity filter not working

  • 2

I have the following rsyslog configuration to send logs to remote servers. The problem is that it sends a lot of INFO messages to the remote server, and I do not want that noise. I am trying to configure the filter to send logs of every severity except INFO.

# Ansible managed

$WorkDirectory /var/spool/rsyslog
$template RFC3164fmt,"<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg%"

# Log shipment rsyslog target servers
$ActionQueueFileName ostack-log-01_rsyslog_container-04cb9e3a
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount 250
local7.* @172.28.1.205:514;RFC3164fmt

This is what I tried, and it did not work:

local7.*;local7.!=info @172.28.1.205:514;RFC3164fmt

My OS is CentOS 7.5 Linux.
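
An alternative sketch in rsyslog's RainerScript syntax (assuming the rsyslog shipped with CentOS 7.5 is v7 or newer, which supports it), since property-based filters make the intent explicit:

if ($syslogfacility-text == "local7") and ($syslogseverity-text != "info") then {
    action(type="omfwd" target="172.28.1.205" port="514" protocol="udp" template="RFC3164fmt")
}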

linux logs
  • 2 answers
  • 1122 Views
Satish
Asked: 2018-12-06 15:21:25 +0800 CST

wget: download files matching a pattern from a remote URL

  • 0

I want to download all the *httpd* RPM files from a remote CentOS mirror, and I am trying the following command, but it does not seem to work.

[root@yum foo]# wget -r --no-parent -A "*httpd*" https://mirrors.edge.kernel.org/centos/7.5.1804/os/x86_64/Packages/

I can see it created a directory structure, but there are no files in the directories.

[root@yum foo]# ls
mirrors.edge.kernel.org

What am I doing wrong?
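
A hedged guess with a sketch: recursive wget honors robots.txt, which on many mirrors disallows crawling, so the recursion can stop right after the index page; the accept pattern can also be narrowed to the .rpm files themselves:

wget -r --no-parent -e robots=off -A "*httpd*.rpm" \
  https://mirrors.edge.kernel.org/centos/7.5.1804/os/x86_64/Packages/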

linux regular-expression
  • 1 answer
  • 1162 Views
Satish
Asked: 2018-10-26 09:23:49 +0800 CST

Extract fields from a file using sed or awk

  • 0

I have a bash script that collects all the hardware information, but it is missing the following memory information, so this is what I am trying to do.

The following command gives the status of the DIMM memory modules:

[root@Linux ~]# hpasmcli -s 'show dimm'

DIMM Configuration
------------------
Processor #:                     1
Module #:                     1
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     1
Module #:                     12
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     2
Module #:                     1
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       Ok

Processor #:                     2
Module #:                     12
Present:                      Yes
Form Factor:                  9h
Memory Type:                  DDR3(18h)
Size:                         8192 MB
Speed:                        1333 MHz
Supports Lock Step:           No
Configured for Lock Step:     No
Status:                       DIMM is degraded

I want to extract Size: and Status: and need them together on a single line, as follows.

The final output would look like the following. I could use another language like Python or Perl, but I wrote the script in bash, so I need something in bash. I could do several for loops and play with variables to make it work, but I need something easy and short like sed/awk. How can I achieve this with sed/awk?

8192MB - Ok
8192MB - OK
8192MB - OK 
8192MB - DIMM is degraded
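
A short awk sketch, assuming the hpasmcli output always lists Size: before Status: within each module block: remember the size, then emit one line whenever a status appears.

hpasmcli -s 'show dimm' | awk -F': +' '/^Size:/ {size=$2} /^Status:/ {print size " - " $2}'
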
linux awk
  • 2 answers
  • 718 Views
Satish
Asked: 2018-09-13 06:43:19 +0800 CST

Regex: match a fixed string in a hostname

  • 0

I have hostnames like the following:

www-foo-1001-1-1.example.com

I am writing a script that should deploy the application only on hosts whose name contains the string match 1001-<any digit>-<any digit>.

Example: the script should match the following hostnames.

www-foo-1001-1-49
www-foo-1001-4-37
www-foo-1001-2-12
www-foo-1001-8-4

It should ignore these patterns in the hostname:

www-foo-1001-1-2-49
www-foo-1001-1-1-49
www-foo-1001-1
www-foo-1001

It should match the pattern 1001-N-N and ignore anything else.

In more detail: I want an if/then that returns an exit status code (checked with $?), throwing an error when the hostname does not match the pattern.
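
A bash sketch of such a check, assuming the 1001-N-N group must be followed by either a dot (the FQDN form) or the end of the name, which is what rules out the longer 1001-1-2-49 style names (hostname -s is an assumption; substitute however the script receives the name):

host=$(hostname -s)   # e.g. www-foo-1001-1-49
if [[ $host =~ -1001-[0-9]+-[0-9]+($|\.) ]]; then
    echo "hostname matches, deploying"
else
    echo "hostname does not match the 1001-N-N pattern" >&2
    exit 1
fi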

linux awk
  • 3 answers
  • 1015 Views
Satish
Asked: 2018-08-09 14:35:20 +0800 CST

Regex to format file output

  • 1

I have a file with the following content:

   foo-6-25.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 49)
    --
    foo-5-4.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 19)
    --
    foo-8-28.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 43)
    --
    foo-9-7.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 91)
    --
    foo-5-19.idmz.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 19)
    --
    foo-7-3.example.com:
         1  Var. Speed   System Board    Normal     Yes     Normal   ( 20)

I want to format it as follows: the server name, then the fan speed that appears between the () brackets.

foo-6-25.example.com: ( 49)
foo-5-4.example.com:  ( 19)

I am not sure how to do this with awk or any other tool.
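
A possible awk sketch, assuming the hostname lines are exactly the ones ending in a colon: remember the name, then print it together with the parenthesized field of the following data line.

awk '/:$/ {host=$1}
     match($0, /\([^)]*\)/) {print host, substr($0, RSTART, RLENGTH)}' file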

linux awk
  • 6 answers
  • 153 Views
Satish
Asked: 2018-08-04 20:30:55 +0800 CST

LXC container network speed problem

  • 2

I am running OpenStack in LXC containers and I found that the network inside my LXC container is very slow, while from the host it is very fast.

HOST

[root@ostack-infra-01 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:09--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   23.1MB/s   in 0.2s

2018-08-04 00:24:09 (23.1 MB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    0m0.209s
user    0m0.008s
sys     0m0.014s

LXC container on the same host

[root@ostack-infra-01 ~]# lxc-attach -n ostack-infra-01_neutron_server_container-fbf14420
[root@ostack-infra-01-neutron-server-container-fbf14420 ~]# time wget http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
--2018-08-04 00:24:32--  http://mirror.cc.columbia.edu/pub/linux/centos/7.5.1804/updates/x86_64/repodata/0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2
Resolving mirror.cc.columbia.edu (mirror.cc.columbia.edu)... 128.59.59.71
Connecting to mirror.cc.columbia.edu (mirror.cc.columbia.edu)|128.59.59.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4515677 (4.3M) [application/x-bzip2]
Saving to: ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’

100%[===========================================================================================================================================>] 4,515,677   43.4KB/s   in 1m 58s

2018-08-04 00:26:31 (37.3 KB/s) - ‘0d7e660988dcc434ec5dec72067655f9b0ef44e6164d3fb85bda2bd1b09534db-primary.sqlite.bz2’ saved [4515677/4515677]


real    1m59.121s
user    0m0.002s
sys     0m0.361s

I do not have any fancy configuration or limits set for the network, and I have another host that works fine at full speed. What do you think is wrong here?

kernel version Linux ostack-infra-01 3.10.0-862.3.3.el7.x86_64 #1 SMP

CentOS 7.5
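
A hedged first check: this host-fast/container-slow asymmetry is often caused by NIC offload settings (checksum, GSO/TSO) behaving differently across the veth/bridge path, so comparing the offload flags on the host NIC and inside the container is a cheap starting point:

# run on the host and inside the container (interface names are examples)
ethtool -k eth0 | egrep 'segmentation|checksum|offload'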

linux networking
  • 1 answer
  • 1299 Views
Satish
Asked: 2018-07-26 11:47:50 +0800 CST

OpenStack live migration problem

  • 0

I have two compute nodes, node1 and node2, with shared Ceph storage (RBD). I am trying to configure live migration, but it is failing with the following error, and I am not sure what is wrong.

I am using OpenStack Pike 16.0.16.

[root@compute-01 instances]# cat /etc/libvirt/libvirtd.conf
# Ansible managed

listen_tls = 0
listen_tcp = 1
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
auth_tcp = "none"

The following error appears in nova.log.

If I do the first live migration of the VM it works, but it throws the following error and the VM goes into Error status.

C1 ----> C2 (works the first time, but with an error)

lt] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-unplugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [req-e0cd3865-151e-4d07-8b94-3a8943dafb57 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Live migration failed.: AttributeError: 'Guest' object has no attribute 'migrate_configure_max_speed'
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Traceback (most recent call last):
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5580, in _do_live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     block_migration, migrate_data)
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6436, in live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     migrate_data)
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6944, in _live_migration
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]     guest.migrate_configure_max_speed(
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] AttributeError: 'Guest' object has no attribute 'migrate_configure_max_speed'
2018-07-25 17:32:39.573 2833 ERROR nova.compute.manager [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32]
2018-07-25 17:32:41.646 2833 WARNING nova.compute.manager [req-eb9e883f-08c3-427d-89c3-cdcf012e7c8b 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-plugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance
2018-07-25 17:32:49.516 2833 WARNING nova.compute.manager [req-d70c12f9-42fc-43be-ae8e-6dd6b21b1b1f 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: 4f4009ee-902d-4ee9-ae99-e9bc55267b32] Received unexpected event network-vif-plugged-251b70a9-2118-4f95-8b35-e9e52f4392e7 for instance

To fix the VM's error state, I have to do:

[root@ostack-infra-01-utility-container-a8dbff46 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| ID                                   | Name | Status | Task State | Power State | Networks              |
+--------------------------------------+------+--------+------------+-------------+-----------------------+
| 4f4009ee-902d-4ee9-ae99-e9bc55267b32 | d1   | ERROR  | -          | NOSTATE     | net-vlan31=10.31.1.10 |
+--------------------------------------+------+--------+------------+-------------+-----------------------+

nova reset-state --active 4f4009ee-902d-4ee9-ae99-e9bc55267b32

Even after the successful migration, my Horizon dashboard shows it is still active on C1.

Now the VM is running on C2 and I have to move it back to C1, but I am getting the following error. It looks like nova does not clean up the files after the VM migration, or maybe it is because the VM was in Error state and the previous files were not cleaned up.
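
A hedged sketch of how one might confirm that theory before retrying (the UUID below is the one from the logs; moving the directory aside is only sensible if no domain on that host is actually using it):

# on the destination compute node: is the leftover directory really stale?
virsh list --all
ls -l /var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d

# if nothing references it, move it out of the way and retry the migration
mv /var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d{,.bak}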

2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [req-285a7f42-99a1-47a2-86c1-79140fcfca31 040960f6067d42c2b52c3fcac9ebde6d 2349c3efbf8a4c6ba6dc3b961160c81b - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Pre live migration failed at ostack-compute-02.v1v0x.net: DestinationDiskExists_Remote: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
Traceback (most recent call last):

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
    res = self.dispatcher.dispatch(message)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
    result = func(ctxt, **new_args)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped
    function_name, call_dict, binary)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped
    return f(self, context, *args, **kw)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/utils.py", line 880, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in decorated_function
    kwargs['instance'], e, sys.exc_info())

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in decorated_function
    return function(self, context, *args, **kwargs)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5507, in pre_live_migration
    migrate_data)

  File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7072, in pre_live_migration
    raise exception.DestinationDiskExists(path=instance_dir)

DestinationDiskExists: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] Traceback (most recent call last):
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5562, in _do_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     block_migration, disk, dest, migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 745, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     disk=disk, migrate_data=migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     retry=self.retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/transport.py", line 123, in _send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     timeout=timeout, retry=retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 566, in send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     retry=retry)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 557, in _send
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     raise result
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] DestinationDiskExists_Remote: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] Traceback (most recent call last):
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     res = self.dispatcher.dispatch(message)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return self._do_dispatch(endpoint, method, ctxt, args)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     result = func(ctxt, **new_args)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     function_name, call_dict, binary)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     self.force_reraise()
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     six.reraise(self.type_, self.value, self.tb)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return f(self, context, *args, **kw)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/utils.py", line 880, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return function(self, context, *args, **kwargs)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     kwargs['instance'], e, sys.exc_info())
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     self.force_reraise()
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     six.reraise(self.type_, self.value, self.tb)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in decorated_function
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     return function(self, context, *args, **kwargs)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/compute/manager.py", line 5507, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     migrate_data)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]   File "/openstack/venvs/nova-16.0.16/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7072, in pre_live_migration
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]     raise exception.DestinationDiskExists(path=instance_dir)
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d] DestinationDiskExists: The supplied disk path (/var/lib/nova/instances/aa58095d-7027-488e-901e-f3259353de0d) already exists, it is expected not to exist.
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:04.143 8785 ERROR nova.compute.manager [instance: aa58095d-7027-488e-901e-f3259353de0d]
2018-07-25 19:42:06.908 8785 WARNING nova.compute.manager [req-cda02056-75c7-463c-ac7e-2925ba2cd29c 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-unplugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
2018-07-25 19:42:07.821 8785 ERROR nova.virt.libvirt.driver [req-cda02056-75c7-463c-ac7e-2925ba2cd29c 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Live Migration failure: Domain not found: no domain with matching name 'instance-00000056': libvirtError: Domain not found: no domain with matching name 'instance-00000056'
2018-07-25 19:42:08.294 8785 WARNING nova.compute.manager [req-5d3abd38-c8a9-499d-b97f-1d9e748a19d3 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-plugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
2018-07-25 19:42:14.934 8785 WARNING nova.compute.manager [req-e9442ba9-1700-4221-a495-31d76a2b5bf0 78ca592a663a487d982a4c412ce4d52e d0d9c227e2d34d3a941e3cd16dea06ed - default default] [instance: aa58095d-7027-488e-901e-f3259353de0d] Received unexpected event network-vif-plugged-7a356ab1-e0d3-4a69-9aa9-f71329caa17f for instance
linux kvm
  • 1 answer
  • 779 Views
Satish
Asked: 2018-07-07 07:53:00 +0800 CST

Linux bonding question with VLANs

  • 3

Do you think the following configuration makes sense? Is BONDING_OPTS supported on a VLAN interface? I want to make sure my interface fails over when the upstream device is down. (See the note after the config files below.)

ifcfg-bond0

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
NAME=bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"

ifcfg-bond0.10

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0.10
NAME=bond0.10
DEVICE=bond0.10
ONPARENT=yes
BOOTPROTO=dhcp
VLAN=yes
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=10.10.0.1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"
NM_CONTROLLED=no

ifcfg-bond0.20

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0.20
NAME=bond0.20
DEVICE=bond0.20
ONPARENT=yes
BOOTPROTO=dhcp
VLAN=yes
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=74.xx.xx.1 miimon=500 downdelay=1000 primary=eno1 primary_reselect=always"
NM_CONTROLLED=no
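
One hedged note on this setup: as far as I know, bonding options are a property of the bond device itself, so BONDING_OPTS would normally live only in ifcfg-bond0, and the VLAN sub-interfaces bond0.10/bond0.20 simply inherit the bond's failover behavior. The kernel bonding documentation also says MII monitoring and ARP monitoring cannot be used at the same time, so the bond would pick one scheme, e.g.:

# ifcfg-bond0 (sketch): ARP monitoring only, watching the upstream gateway
BONDING_OPTS="mode=1 arp_interval=1000 arp_ip_target=10.10.0.1 primary=eno1 primary_reselect=always"
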
linux vlan
  • 1 answer
  • 8174 Views
Satish
Asked: 2018-06-16 07:54:15 +0800 CST

systemd-networkd dhcp_hostname option

  • 2

I set up systemd-networkd to configure my network and created vlan10, and I want the client to send its hostname to the DHCP server so it gets registered with my DDNS server. So the question is: does networkd support an equivalent of DHCP_HOSTNAME=?

[root@localhost network]# cat vlan10.network
[Match]
Name=vlan10

[Network]
DHCP=yes

I have multiple VLANs and I want to send two different VLAN hostnames to the DHCP server so they get registered in DNS, for example:

vlan10 will send the hostname foo.vlan10.example.com

vlan20 will send the hostname foo.vlan20.example.com
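
A sketch of what this could look like, assuming a systemd version recent enough that the systemd.network [DHCP] section supports the SendHostname= and Hostname= settings (both documented in the systemd.network man page):

# /etc/systemd/network/vlan10.network (sketch)
[Match]
Name=vlan10

[Network]
DHCP=yes

[DHCP]
SendHostname=true
Hostname=foo.vlan10.example.com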

linux networking
  • 1 answer
  • 3482 Views
Satish
Asked: 2017-12-17 11:35:16 +0800 CST

sed: extract the first field and move it to a specific location

  • 2

I have this file:

10.1.1.1    www1           
10.1.1.2    www2           
10.1.1.3    www3            

I want to extract the first field (the IP address) and move it to the end of the line as http://www.foo.com=10.1.1.1/test.php

10.1.1.1    www1           # http://www.foo.com=10.1.1.1/test.php
10.1.1.2    www2           # http://www.foo.com=10.1.1.2/test.php
10.1.1.3    www3           # http://www.foo.com=10.1.1.3/test.php

I can do this with a for loop, but I want to do it with a one-line sed trick.
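
A minimal sed sketch, assuming GNU sed: capture the leading IP address and reuse it at the end of the line.

sed -E 's|^([0-9.]+)(.*)|\1\2# http://www.foo.com=\1/test.php|' file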

linux sed
  • 2 answers
  • 1076 Views
