AskOverflow.Dev


Tombart's questions

Tombart
Asked: 2024-09-25 17:39:31 +0800 CST

Terribly slow I/O on an mdadm NVMe RAID array

  • 9

I have an AMD EPYC 7502P 32-Core Linux server (kernel 6.10.6) with 6 NVMe drives, where I/O performance suddenly dropped. Every operation takes a very long time; installing package updates takes hours instead of seconds (or at most minutes).

I tried running fio on a filesystem on the RAID5 array. There is a huge difference in the clat metric:

    clat (nsec): min=190, max=359716k, avg=16112.91, stdev=592031.05

The stdev value is extreme.

Full output:

$ fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --runtime=60 --time_based --end_fsync=1
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.33
Starting 1 process
random-write: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [F(1)][100.0%][w=53.3MiB/s][w=13.6k IOPS][eta 00m:00s]
random-write: (groupid=0, jobs=1): err= 0: pid=48391: Wed Sep 25 09:17:02 2024
  write: IOPS=45.5k, BW=178MiB/s (186MB/s)(10.6GiB/61165msec); 0 zone resets
    slat (nsec): min=552, max=123137, avg=2016.89, stdev=468.03
    clat (nsec): min=190, max=359716k, avg=16112.91, stdev=592031.05
     lat (usec): min=10, max=359716, avg=18.13, stdev=592.03
    clat percentiles (usec):
     |  1.00th=[   11],  5.00th=[   12], 10.00th=[   14], 20.00th=[   15],
     | 30.00th=[   15], 40.00th=[   15], 50.00th=[   15], 60.00th=[   16],
     | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   18],
     | 99.00th=[   20], 99.50th=[   22], 99.90th=[   42], 99.95th=[  119],
     | 99.99th=[  186]
   bw (  KiB/s): min=42592, max=290232, per=100.00%, avg=209653.41, stdev=46502.99, samples=105
   iops        : min=10648, max=72558, avg=52413.32, stdev=11625.75, samples=105
  lat (nsec)   : 250=0.01%, 500=0.01%, 1000=0.01%
  lat (usec)   : 10=0.01%, 20=99.15%, 50=0.76%, 100=0.03%, 250=0.06%
  lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 500=0.01%
  cpu          : usr=12.62%, sys=30.97%, ctx=2800981, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2784519,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=10.6GiB (11.4GB), run=61165-61165msec

Disk stats (read/write):
    md1: ios=0/710496, merge=0/0, ticks=0/12788992, in_queue=12788992, util=23.31%, aggrios=319833/649980, aggrmerge=0/0, aggrticks=118293/136983, aggrin_queue=255276, aggrutil=14.78%
  nvme1n1: ios=318781/638009, merge=0/0, ticks=118546/131154, in_queue=249701, util=14.71%
  nvme5n1: ios=321508/659460, merge=0/0, ticks=118683/138996, in_queue=257679, util=14.77%
  nvme2n1: ios=320523/647922, merge=0/0, ticks=120634/134284, in_queue=254918, util=14.71%
  nvme3n1: ios=320809/651642, merge=0/0, ticks=118823/135985, in_queue=254808, util=14.73%
  nvme0n1: ios=316267/642934, merge=0/0, ticks=116772/143909, in_queue=260681, util=14.75%
  nvme4n1: ios=321110/659918, merge=0/0, ticks=116300/137570, in_queue=253870, util=14.78%

Probably one of the disks is failing. Is there a way to determine which disk is the slow one?
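Short of pulling drives, the fio disk stats above already contain a hint: ticks/ios gives the average time each member spent per request. A quick awk pass over the pasted numbers (a sketch, nothing authoritative) makes the comparison explicit:

```shell
# Average ticks per I/O for each md member, taken from the fio
# "Disk stats" lines above.  A drive whose ratio sits far above
# its siblings would be the one stalling the array.
awk '/nvme/ {
    name = $1; sub(/:$/, "", name)
    split($2, io, "[=/,]")     # ios=reads/writes,
    split($4, tk, "[=/,]")     # ticks=read_ticks/write_ticks,
    printf "%s %.3f\n", name, (tk[2] + tk[3]) / (io[2] + io[3])
}' <<'EOF'
nvme1n1: ios=318781/638009, merge=0/0, ticks=118546/131154, in_queue=249701, util=14.71%
nvme5n1: ios=321508/659460, merge=0/0, ticks=118683/138996, in_queue=257679, util=14.77%
nvme2n1: ios=320523/647922, merge=0/0, ticks=120634/134284, in_queue=254918, util=14.71%
nvme3n1: ios=320809/651642, merge=0/0, ticks=118823/135985, in_queue=254808, util=14.73%
nvme0n1: ios=316267/642934, merge=0/0, ticks=116772/143909, in_queue=260681, util=14.75%
nvme4n1: ios=321110/659918, merge=0/0, ticks=116300/137570, in_queue=253870, util=14.78%
EOF
```

Here nvme0n1 comes out marginally higher (about 0.272 vs roughly 0.26 for its siblings), but nothing dramatic; a read-only fio job against each member in turn (e.g. `--filename=/dev/nvme0n1 --rw=randread`) would give a cleaner per-disk answer.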

All the disks have similar SMART attributes, nothing out of the ordinary. SAMSUNG 7T:

Model Number:                       SAMSUNG MZQL27T6HBLA-00A07
Firmware Version:                   GDC5902Q
Data Units Read:                    2,121,457,831 [1.08 PB]
Data Units Written:                 939,728,748 [481 TB]
Controller Busy Time:               40,224
Power Cycles:                       5
Power On Hours:                     6,913

Write performance looks very similar across the drives:

iostat -xh
Linux 6.10.6+bpo-amd64 (ts01b)  25/09/24        _x86_64_        (64 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.0%    0.0%    4.3%    0.6%    0.0%   90.2%

     r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz Device
    0.12      7.3k     0.00   0.0%    0.43    62.9k md0
 6461.73    548.7M     0.00   0.0%    0.22    87.0k md1
 3583.93     99.9M     9.60   0.3%    1.13    28.5k nvme0n1
 3562.77     98.9M     0.80   0.0%    1.15    28.4k nvme1n1
 3584.54     99.8M     9.74   0.3%    1.18    28.5k nvme2n1
 3565.96     98.8M     1.06   0.0%    1.16    28.4k nvme3n1
 3585.04     99.9M     9.78   0.3%    1.16    28.5k nvme4n1
 3577.56     99.0M     0.86   0.0%    1.17    28.3k nvme5n1

     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz Device
    0.00      0.0k     0.00   0.0%    0.00     4.0k md0
  366.41    146.5M     0.00   0.0%   14.28   409.4k md1
 8369.26     32.7M     1.18   0.0%    3.73     4.0k nvme0n1
 8364.63     32.7M     1.12   0.0%    3.63     4.0k nvme1n1
 8355.48     32.6M     1.10   0.0%    3.56     4.0k nvme2n1
 8365.23     32.7M     1.10   0.0%    3.46     4.0k nvme3n1
 8365.37     32.7M     1.25   0.0%    3.37     4.0k nvme4n1
 8356.70     32.6M     1.06   0.0%    3.29     4.0k nvme5n1

     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz Device
    0.00      0.0k     0.00   0.0%    0.00     0.0k md0
    0.00      0.0k     0.00   0.0%    0.00     0.0k md1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme0n1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme1n1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme2n1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme3n1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme4n1
    0.00      0.0k     0.00   0.0%    0.00     0.0k nvme5n1

     f/s f_await  aqu-sz  %util Device
    0.00    0.00    0.00   0.0% md0
    0.00    0.00    6.68  46.8% md1
    0.00    0.00   35.24  14.9% nvme0n1
    0.00    0.00   34.50  14.6% nvme1n1
    0.00    0.00   33.98  14.9% nvme2n1
    0.00    0.00   33.06  14.6% nvme3n1
    0.00    0.00   32.33  14.8% nvme4n1
    0.00    0.00   31.72  14.6% nvme5n1

The problem seems to be interrupt-related:

$ dstat -tf --int24 60
----system---- -------------------------------interrupts------------------------------
     time     | 120   128   165   199   213   342   LOC   PMI   IWI   RES   CAL   TLB 
25-09 10:53:45|2602  2620  2688  2695  2649  2725   136k   36  1245  2739   167k  795 
25-09 10:54:45|  64    64    65    64    66    65  2235     1    26    16  2156     3 
25-09 10:55:45|  33    31    32    32    32    30  2050     1    24    10  2162    20 
25-09 10:56:45|  31    31    30    35    30    33  2303     1    26    63  2245     9 
25-09 10:57:45|  36    29    27    34    35    35  2016     1    23    72  2645    10 
25-09 10:58:45|   9     8     9     8     7     8  1766     0    27     4  1892    15 
25-09 10:59:45|  59    62    59    58    60    60  1585     1    22    20  1704     9 
25-09 11:00:45|  25    21    21    26    26    26  1605     0    26    10  1862    10 
25-09 11:01:45|  34    32    32    33    36    31  1515     0    23    24  1948    10 
25-09 11:02:45|  21    23    23    25    22    24  1772     0    27    27  1781     9 

The columns with elevated interrupt counts map to the 9-edge queues of all the NVMe drives, nvme[0-5]q9; for example:

$ cat /proc/interrupts | grep 120:
IR-PCI-MSIX-0000:01:00.0    9-edge      nvme2q9

EDIT: the 9-edge interrupts probably belong to the Metadisk (software RAID) devices.

linux
  • 2 answers
  • 75 Views
Tombart
Asked: 2021-10-01 00:48:22 +0800 CST

How do I generate certificates for a (secondary) compile puppetserver?

  • 0

I'm trying to scale puppetserver for redundancy using round-robin DNS. The secondary puppetserver (version 7.4.0) is configured to use the CA of the primary puppetserver:

/etc/puppetlabs/puppet/puppet.conf:

[main]
ca_name = Puppet CA: puppet-ca-master.company.com
ca_server = puppet-ca-master.company.com
[agent]
server = puppet-ca-master.company.com
runinterval=1800

On the secondary server I disabled the CA service in /etc/puppetlabs/puppetserver/services.d/ca.cfg, since there can only be a single certificate authority:

# To enable the CA service, leave the following line uncommented
# puppetlabs.services.ca.certificate-authority-service/certificate-authority-service
# To disable the CA service, comment out the above line and uncomment the line below
puppetlabs.services.ca.certificate-authority-disabled-service/certificate-authority-disabled-service
puppetlabs.trapperkeeper.services.watcher.filesystem-watch-service/filesystem-watch-service

I removed the certificates from the secondary so that it would fetch a signed certificate from the CA master:

rm -rf /etc/puppetlabs/puppet/ssl && mkdir -p /etc/puppetlabs/puppet/ssl/certs
chmod 0700 /etc/puppetlabs/puppet/ssl
chown -R puppet /etc/puppetlabs/puppet/ssl

However, the puppetserver service refuses to start because of the missing certificate:

2021-09-30T09:06:18.220+02:00 ERROR [async-dispatch-2] [p.t.internal] Error during service start!!!
java.lang.IllegalArgumentException: Unable to open 'ssl-cert' file: /etc/puppetlabs/puppet/ssl/certs/secondary-puppetserver.company.com.pem

When I try to run puppet agent -t on the secondary puppetserver, it fails to get the certificate signed:

Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (secondary-puppetserver.company.com)

Moreover, the private key is generated, but the public one is not:

ll /etc/puppetlabs/puppet/ssl/public_keys/
total 0
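For what it's worth, on Puppet 6 and later the usual way to bootstrap a freshly wiped agent or compile-server certificate is the built-in SSL subcommands rather than hand-deleting directories (a sketch, hostnames taken from the config above; not verified against this exact setup):

```shell
# On the secondary (with its old certificates already removed):
# submit a CSR to the CA master and wait for it to be signed.
puppet ssl bootstrap --server puppet-ca-master.company.com

# On the CA master: list pending requests and sign the secondary's.
puppetserver ca list
puppetserver ca sign --certname secondary-puppetserver.company.com
```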
puppet ssl-certificate puppetmaster
  • 1 answer
  • 387 Views
Tombart
Asked: 2021-09-07 13:29:58 +0800 CST

How do I debug a PostgreSQL segmentation fault?

  • 3

I have a PostgreSQL 13 instance that keeps crashing:

LOG:  server process (PID 10722) was terminated by signal 11: Segmentation fault
DETAIL:  Failed process was running: COMMIT
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

I updated /etc/postgresql/13/main/pg_ctl.conf to enable core dumps

pg_ctl_options = '--core-files'

and restarted the postgresql service. Core dumps now seem to be allowed:

$ for f in `pgrep postgres`; do cat /proc/$f/limits | grep core; done
Max core file size        unlimited            unlimited            bytes 

A gdb backtrace gives the following output:

$ gdb /usr/lib/postgresql/13/bin/postgres 13/main/core.postgres.12264

Program terminated with signal SIGSEGV, Segmentation fault.
#0  slot_deform_heap_tuple (natts=5, offp=0x557cc2e60720, tuple=<optimized out>, slot=0x557cc2e606d8) at ./build/../src/backend/executor/execTuples.c:930
930     ./build/../src/backend/executor/execTuples.c: No such file or directory.
(gdb) bt
#0  slot_deform_heap_tuple (natts=5, offp=0x557cc2e60720, tuple=<optimized out>, slot=0x557cc2e606d8) at ./build/../src/backend/executor/execTuples.c:930
#1  tts_buffer_heap_getsomeattrs (slot=0x557cc2e606d8, natts=5) at ./build/../src/backend/executor/execTuples.c:695
#2  0x0000557cc1d3998c in slot_getsomeattrs_int (slot=slot@entry=0x557cc2e606d8, attnum=5) at ./build/../src/backend/executor/execTuples.c:1912
#3  0x0000557cc1d28fba in slot_getsomeattrs (attnum=<optimized out>, slot=0x557cc2e606d8) at ./build/../src/include/executor/tuptable.h:344
#4  ExecInterpExpr (state=0x557cc2e620a8, econtext=0x557cc2ea1768, isnull=<optimized out>) at ./build/../src/backend/executor/execExprInterp.c:482
#5  0x0000557cc1d5548d in ExecEvalExprSwitchContext (isNull=0x7ffdd2599507, econtext=0x557cc2ea1768, state=0x557cc2e620a8) at ./build/../src/include/executor/executor.h:322
#6  ExecQual (econtext=0x557cc2ea1768, state=0x557cc2e620a8) at ./build/../src/include/executor/executor.h:391
#7  MJFillInner (node=0x557cc2ea1558) at ./build/../src/backend/executor/nodeMergejoin.c:494
#8  0x0000557cc1d55ce8 in ExecMergeJoin (pstate=0x557cc2ea1558) at ./build/../src/backend/executor/nodeMergejoin.c:1353
#9  0x0000557cc1d2cc83 in ExecProcNode (node=0x557cc2ea1558) at ./build/../src/include/executor/executor.h:248
#10 ExecutePlan (execute_once=<optimized out>, dest=0x557cc2e1a630, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x557cc2ea1558, 
    estate=0x557cc2ea12f8) at ./build/../src/backend/executor/execMain.c:1632
#11 standard_ExecutorRun (queryDesc=0x557cc2e1a5a0, direction=<optimized out>, count=0, execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:350
#12 0x00007f0ec05ae09d in pgss_ExecutorRun (queryDesc=0x557cc2e1a5a0, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:1045
#13 0x0000557cc1cdbcd4 in PersistHoldablePortal (portal=portal@entry=0x557cc2d44b78) at ./build/../src/backend/commands/portalcmds.c:407
#14 0x0000557cc1ff95f9 in HoldPortal (portal=portal@entry=0x557cc2d44b78) at ./build/../src/backend/utils/mmgr/portalmem.c:642
#15 0x0000557cc1ff9e7d in PreCommit_Portals (isPrepare=isPrepare@entry=false) at ./build/../src/backend/utils/mmgr/portalmem.c:738
#16 0x0000557cc1c001c4 in CommitTransaction () at ./build/../src/backend/access/transam/xact.c:2087
#17 0x0000557cc1c015d5 in CommitTransactionCommand () at ./build/../src/backend/access/transam/xact.c:3085
#18 0x0000557cc1ea211d in finish_xact_command () at ./build/../src/backend/tcop/postgres.c:2662
#19 0x0000557cc1ea4703 in exec_simple_query (query_string=0x557cc2c9cd28 "COMMIT") at ./build/../src/backend/tcop/postgres.c:1264
#20 0x0000557cc1ea6143 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x557cc2cf6c68, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4339
#21 0x0000557cc1e25bcd in BackendRun (port=0x557cc2ce94d0, port=0x557cc2ce94d0) at ./build/../src/backend/postmaster/postmaster.c:4526
#22 BackendStartup (port=0x557cc2ce94d0) at ./build/../src/backend/postmaster/postmaster.c:4210
#23 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1739
#24 0x0000557cc1e26b41 in PostmasterMain (argc=5, argv=<optimized out>) at ./build/../src/backend/postmaster/postmaster.c:1412
#25 0x0000557cc1b70f4f in main (argc=5, argv=0x557cc2c96c30) at ./build/../src/backend/main/main.c:210

Adding log_statement = 'all' to /etc/postgresql/13/main/postgresql.conf doesn't really help, since the postmaster terminates all processes immediately and the query never makes it into the logs.

Here is the strace output from when COMMIT was executed:

[pid 20006] pwrite64(29, "COMMIT", 6, 15936) = 6
[pid 20006] pwrite64(29, "\0", 1, 15942) = 1
[pid 20006] close(29)                   = 0
[pid 20006] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x10} ---
[pid 20006] +++ killed by SIGSEGV (core dumped) +++
<... select resumed> )                  = ? ERESTARTNOHAND (To be restarted if no handler)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_DUMPED, si_pid=20006, si_uid=108, si_status=SIGSEGV, si_utime=0, si_stime=0} ---
wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGSEGV && WCOREDUMP(s)}], WNOHANG, NULL) = 20006
write(2, "2021-09-08 13:38:51.853 UTC [299"..., 198) = 198
write(2, "2021-09-08 13:38:51.853 UTC [299"..., 88) = 88
kill(19324, SIGQUIT)                    = 0
kill(-19324, SIGQUIT)                   = 0
kill(19331, SIGQUIT)                    = 0
kill(-19331, SIGQUIT)                   = 0
kill(19320, SIGQUIT)                    = 0
kill(-19320, SIGQUIT)                   = 0
kill(19319, SIGQUIT)                    = 0
kill(-19319, SIGQUIT)                   = 0
kill(19321, SIGQUIT)                    = 0
kill(-19321, SIGQUIT)                   = 0
kill(19322, SIGQUIT)                    = 0
kill(-19322, SIGQUIT)                   = 0
kill(19323, SIGQUIT)                    = 0
kill(-19323, SIGQUIT)                   = 0
wait4(-1, 0x7ffe90814374, WNOHANG, NULL) = 0
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
rt_sigprocmask(SIG_SETMASK, ~[ILL TRAP ABRT BUS FPE SEGV CONT SYS RTMIN RT_1], NULL, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(7, [5 6], NULL, NULL, {tv_sec=5, tv_usec=0}) = ? ERESTARTNOHAND (To be restarted if no handler)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=19320, si_uid=108, si_status=2, si_utime=14, si_stime=3} ---

Is there a way to trace the exact SQL query that was executed?
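One thing worth trying on the core file itself: the PostgreSQL backend keeps the text of the current query in the global variable debug_query_string, which usually survives in the dump. A sketch, assuming the same binary and core file paths used in the gdb session above:

```shell
# Print the query the crashed backend was executing, straight from the core.
gdb -batch -ex 'print debug_query_string' \
    /usr/lib/postgresql/13/bin/postgres 13/main/core.postgres.12264
```

Since the crash here happens during COMMIT of a holdable cursor, the interesting statement may be an earlier DECLARE ... WITH HOLD in the same session rather than the COMMIT itself.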

debian postgresql segmentation-fault
  • 1 answer
  • 2626 Views
Tombart
Asked: 2021-01-07 15:08:12 +0800 CST

ntpd fails to synchronize: TIME_ERROR: 0x41: Clock Unsynchronized

  • 0

On Debian 10, ntpd [email protected] fails to synchronize with the following error:

kernel reports TIME_ERROR: 0x41: Clock Unsynchronize

Here is ntp.conf:

disable monitor

statsdir /var/log/ntpstats

restrict -4 default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst

server   127.127.1.0
fudge    127.127.1.0 stratum 10
restrict 127.127.1.0

driftfile /var/lib/ntp/drift

ntpq -c sysinfo:

associd=0 status=0614 leap_none, sync_ntp, 1 event, freq_mode,
system peer:        50-205-57-38-static.hfc.comcastbusiness.net:123
system peer mode:   client
leap indicator:     00
stratum:            2
log2 precision:     -23
root delay:         70.634
root dispersion:    3.569
reference ID:       50.205.57.38
reference time:     e3a0c049.c39d770a  Wed, Jan  6 2021 23:03:37.764
system jitter:      0.723169
clock jitter:       1.177
clock wander:       0.000
broadcast delay:    -50.000
symm. auth. delay:  0.000

ntpq -c lpeers:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        .LOCL.          10 l  286   64   20    0.000    0.000   0.000
*50-205-57-38-st .GPS.            1 u   19   64   37   70.631    1.618   1.843
-ns1.backplanedn 173.162.192.156  2 u   14   64   37   84.235   -1.575   2.852
+c-73-239-136-18 74.6.168.73      3 u   11   64   37   48.606    1.598   2.522
+time-d.bbnx.net 252.74.143.178   2 u   14   64   37   92.632    0.623   0.799

timedatectl:

               Local time: Wed 2021-01-06 23:06:44 UTC
           Universal time: Wed 2021-01-06 23:06:44 UTC
                 RTC time: Wed 2021-01-06 23:06:44
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: no
              NTP service: inactive
          RTC in local TZ: no

Any idea what could be wrong?
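One suspect in the ntp.conf above is the undisciplined local clock driver (the 127.127.1.0 lines): it is deprecated, and when ntpd briefly selects it the kernel sync flag can stay unset. The modern fallback is orphan mode; a config sketch (not verified against this setup):

```shell
# In ntp.conf: remove the three 127.127.1.0 lines
# (server / fudge / restrict) and use orphan mode at the
# same stratum instead:
tos orphan 10
```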

ntp ntpd
  • 1 answer
  • 16045 Views
Tombart
Asked: 2019-08-15 04:13:24 +0800 CST

Docker packets are not masqueraded (despite NAT rules)

  • 0

On a Debian 9 machine (Linux kernel 4.9) I have Docker (18.06.1) with a few containers in bridge mode. For some strange reason, some packets from Docker manage to bypass the MASQUERADE rule; enp2s0 is the public interface (Docker uses the docker0 interface with 172.17.0.1).

$ tcpdump -vvlnn -i enp2s0 port 3000 and src net 172.16.0.0/12
tcpdump: listening on enp2s0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:57:49.918655 IP (tos 0x0, ttl 63, id 62271, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.55664 > x.x.x.x.3000: Flags [F.], cksum 0xe40c (correct), seq 9863202, ack 476959401, win 856, options [nop,nop,TS val 1382910659 ecr 2481487487], length 0
11:57:50.126683 IP (tos 0x0, ttl 63, id 62272, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.55664 > x.x.x.x.3000: Flags [F.], cksum 0xe3d8 (correct), seq 0, ack 1, win 856, options [nop,nop,TS val 1382910711 ecr 2481487487], length 0
11:57:50.546660 IP (tos 0x0, ttl 63, id 62273, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.55664 > x.x.x.x.3000: Flags [F.], cksum 0xe36f (correct), seq 0, ack 1, win 856, options [nop,nop,TS val 1382910816 ecr 2481487487], length 0

NAT rules from iptables-save:

*nat
:PREROUTING ACCEPT [11397418:724275374]
:INPUT ACCEPT [39095:3038067]
:OUTPUT ACCEPT [1328340:79997617]
:POSTROUTING ACCEPT [5102467:306147980]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -o enp2s0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 5501 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 5500 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 3000 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 48842 -j DNAT --to-destination 172.17.0.3:5501
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 48841 -j DNAT --to-destination 172.17.0.3:5500
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 13119 -j DNAT --to-destination 172.17.0.2:3000

I tried adding mangle rules to catch these packets, but so far without success:

*mangle
:PREROUTING ACCEPT [44457014385:7315518035795]
:INPUT ACCEPT [404840097:241773793538]
:FORWARD ACCEPT [44052174279:7073744241603]
:OUTPUT ACCEPT [526370610:171137381220]
:POSTROUTING ACCEPT [44578544703:7244881613871]
:bogus - [0:0]
:spoofing - [0:0]
-A PREROUTING -s 192.168.0.0/24 -i enp2s0 -j spoofing
-A PREROUTING -s 10.0.0.0/8 -i enp2s0 -j spoofing
-A PREROUTING -s 172.16.0.0/12 -i enp2s0 -j spoofing
-A PREROUTING -s 127.0.0.0/8 ! -i lo -j spoofing
-A PREROUTING -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j bogus
-A PREROUTING -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j bogus
-A PREROUTING -p tcp -m tcp --tcp-flags FIN,RST FIN,RST -j bogus
-A bogus -j LOG --log-prefix "BOGUS: "
-A bogus -j DROP
-A spoofing -j LOG --log-prefix "IP SPOOF: "
-A spoofing -j DROP
COMMIT

Any idea how I can block these packets?

Forwarded packets:

iptables -vnL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  44G 7074G DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  44G 7074G DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  16G 4358G ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
  54M 3269M DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
  28G 2712G ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            state INVALID
    0     0 ACCEPT     all  --  docker0 enp2s0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  enp2s0 docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 LOG        all  --  *      *       0.0.0.0/0            0.0.0.0/0            LOG flags 0 level 4 prefix "fw forward drop "
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW

Forwarding rules (partially injected by Docker):

-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m state --state INVALID -j DROP
-A FORWARD -i docker0 -o enp2s0 -j ACCEPT
-A FORWARD -i enp2s0 -o docker0 -j ACCEPT

Additionally, the OUTPUT chain should be dropping invalid packets:

-A OUTPUT -m state --state INVALID -j DROP
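As a stopgap, leaked container-sourced packets can be dropped on egress from the DOCKER-USER chain, which Docker reserves for local rules and re-creates on restart. A sketch (this blocks the leak rather than fixing the missed MASQUERADE, which in cases like this is often conntrack classifying late FIN retransmits as INVALID so they never hit NAT):

```shell
# Drop anything leaving the public interface that still carries a
# container source address (it should have been masqueraded).
iptables -I DOCKER-USER -o enp2s0 -s 172.17.0.0/16 -j DROP
```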
iptables
  • 1 answer
  • 1181 Views
Tombart
Asked: 2017-05-25 04:54:45 +0800 CST

hwclock: Cannot access the Hardware Clock via any known method

  • 7

On a Debian server I'm having trouble with hwclock:

$ hwclock --show 
hwclock: Cannot access the Hardware Clock via any known method.
hwclock: Use the --debug option to see the details of our search for an access method.

The system runs the Debian backports kernel 4.9.18-1~bpo8+1 (2017-04-10).

Here is the debug output:

$ hwclock --debug
hwclock from util-linux 2.25.2
hwclock: cannot open /dev/rtc: Device or resource busy
No usable clock interface found.
hwclock: Cannot access the Hardware Clock via any known method.

Clock source:

$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc

Finally, the rtc device does exist:

$ ls -l /dev/rtc*
lrwxrwxrwx 1 root root      4 Apr 29 16:41 /dev/rtc -> rtc0
crw------- 1 root root 253, 0 Apr 29 16:41 /dev/rtc0
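"Device or resource busy" on open() usually means something else is holding /dev/rtc0. A hedged first step is to ask who has it open, and to check how the RTC driver bound:

```shell
# Show any userspace process holding the RTC device (run as root).
fuser -v /dev/rtc0

# Kernel-side users won't show up in fuser; check the driver's own
# messages instead (e.g. whether rtc_cmos registered rtc0 cleanly).
dmesg | grep -i rtc
```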
debian
  • 3 answers
  • 29743 Views
Tombart
Asked: 2015-01-09 00:42:46 +0800 CST

systemd-journal in a Debian Jessie LXC container consumes 100% CPU

  • 3

After creating a new Debian Jessie based LXC container on Ubuntu 14.04, systemd-journal consumes all available CPU.

lxc-create -n jessie -t debian
debian
  • 1 answer
  • 3978 Views

© 2023 AskOverflow.DEV All Rights Reserved