
ElToro1966's questions

ElToro1966
Asked: 2022-01-29 09:52:01 +0800 CST

Faust client not connecting to Kafka

  • 1

I am having a hard time connecting from the client running a Faust script to the machine running Kafka. The script looks like this:

import faust
import logging
from asyncio import sleep


class Test(faust.Record):
    msg: str


app = faust.App('myapp', broker='kafka://10.0.0.20:9092')
topic = app.topic('test', value_type=Test)


@app.agent(topic)
async def hello(messages):
    async for message in messages:
        print(f'Received {message.msg}')


@app.timer(interval=5.0)
async def example_sender():
    await hello.send(
        value=Test(msg='Hello World!'),
    )


if __name__ == '__main__':
    app.main()

When I run the script:

# faust -A myapp worker -l info
┌ƒaµS† v0.8.1─┬─────────────────────────────────────────────────┐
│ id          │ myapp                                           │
│ transport   │ [URL('kafka://10.0.0.20:9092')]                 │
│ store       │ memory:                                         │
│ web         │ http://hubbabubba:6066                          │
│ log         │ -stderr- (info)                                 │
│ pid         │ 260765                                          │
│ hostname    │ hubbabubba                                      │
│ platform    │ CPython 3.8.10 (Linux x86_64)                   │
│ drivers     │                                                 │
│   transport │ aiokafka=0.7.2                                  │
│   web       │ aiohttp=3.8.1                                   │
│ datadir     │ /Git/faust-kafka/myapp-data                     │
│ appdir      │ /Git/faust-kafka/myapp-data/v1                  │
└─────────────┴─────────────────────────────────────────────────┘
[2022-01-28 13:09:57,018] [260765] [INFO] [^Worker]: Starting... 
[2022-01-28 13:09:57,021] [260765] [INFO] [^-App]: Starting... 
[2022-01-28 13:09:57,021] [260765] [INFO] [^--Monitor]: Starting... 
[2022-01-28 13:09:57,021] [260765] [INFO] [^--Producer]: Starting... 
[2022-01-28 13:09:57,022] [260765] [INFO] [^---ProducerBuffer]: Starting... 
[2022-01-28 13:09:57,024] [260765] [ERROR] Unable connect to "10.0.0.20:9092": [Errno 113] Connect call failed ('10.0.0.20', 9092) 
[2022-01-28 13:09:57,025] [260765] [ERROR] [^Worker]: Error: KafkaConnectionError("Unable to bootstrap from [('10.0.0.20', 9092, <AddressFamily.AF_INET: 2>)]") 
Traceback (most recent call last):
  File "/Git/faust-kafka/venv/lib/python3.8/site-packages/mode/worker.py", line 276, in execute_from_commandline
    self.loop.run_until_complete(self._starting_fut)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/Git/faust-kafka/venv/lib/python3.8/site-packages/mode/services.py", line 759, in start
    await self._default_start()
  File "/media/eric/DISK3/Git/faust-kafka/venv/lib/python3.8/site-packages/mode/services.py", line 766, in _default_start
    await self._actually_start()...
  File "/Git/faust-kafka/venv/lib/python3.8/site-packages/aiokafka/client.py", line 249, in bootstrap
    raise KafkaConnectionError(
kafka.errors.KafkaConnectionError: KafkaConnectionError: Unable to bootstrap from [('10.0.0.20', 9092, <AddressFamily.AF_INET: 2>)]
[2022-01-28 13:09:57,027] [260765] [INFO] [^Worker]: Stopping... 
[2022-01-28 13:09:57,027] [260765] [INFO] [^-App]: Stopping... 
[2022-01-28 13:09:57,027] [260765] [INFO] [^-App]: Flush producer buffer... 
[2022-01-28 13:09:57,028] [260765] [INFO] [^--TableManager]: Stopping... 
[2022-01-28 13:09:57,028] [260765] [INFO] [^---Fetcher]: Stopping... 
[2022-01-28 13:09:57,028] [260765] [INFO] [^---Conductor]: Stopping... 
[2022-01-28 13:09:57,028] [260765] [INFO] [^--AgentManager]: Stopping... 
[2022-01-28 13:09:57,029] [260765] [INFO] [^Agent: myapp.hello]: Stopping... 
[2022-01-28 13:09:57,029] [260765] [INFO] [^--ReplyConsumer]: Stopping... 
[2022-01-28 13:09:57,029] [260765] [INFO] [^--LeaderAssignor]: Stopping... 
[2022-01-28 13:09:57,029] [260765] [INFO] [^--Consumer]: Stopping... 
[2022-01-28 13:09:57,030] [260765] [INFO] [^--Web]: Stopping... 
[2022-01-28 13:09:57,030] [260765] [INFO] [^--CacheBackend]: Stopping... 
[2022-01-28 13:09:57,030] [260765] [INFO] [^--Producer]: Stopping... 
[2022-01-28 13:09:57,030] [260765] [INFO] [^---ProducerBuffer]: Stopping... 
[2022-01-28 13:09:57,031] [260765] [INFO] [^--Monitor]: Stopping... 
[2022-01-28 13:09:57,032] [260765] [INFO] [^Worker]: Gathering service tasks... 
[2022-01-28 13:09:57,032] [260765] [INFO] [^Worker]: Gathering all futures... 
[2022-01-28 13:09:58,033] [260765] [INFO] [^Worker]: Closing event loop

Kafka (v2.8.1) is running on 10.0.0.20, port 9092. The Kafka configuration looks like this:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
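
For reference, the two listener settings above divide the work as follows: listeners is the socket the broker binds to, while advertised.listeners is the address handed back to clients after the bootstrap connection, so remote clients end up connecting to whatever is advertised. A minimal sketch of a listener setup aimed at clients on other hosts - an illustration only, not the configuration from the question - could look like this, using the broker's LAN address from the question:

# sketch only: bind on all interfaces, advertise the address remote clients can reach
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.20:9092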

The Kafka broker starts without problems:

$ sudo bin/kafka-server-start.sh -daemon config/server.properties 

I created the topic:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --replication-factor 1 --partitions 1 --topic test

Then I check:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
test
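
As a cross-check (not part of the original commands, and assuming the Kafka scripts are also available on the client machine), the same listing run from the client against the LAN address the Faust app uses would show whether the broker is reachable on 10.0.0.20:9092 from outside the Kafka host:

$ bin/kafka-topics.sh --bootstrap-server 10.0.0.20:9092 --list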

So I am wondering where I messed up. By the way, the server can be reached from the client machine:

$ ping -c 5 10.0.0.20 -p 9092
PATTERN: 0x9092
PING 10.0.0.20 (10.0.0.20) 56(84) bytes of data.
64 bytes from 10.0.0.20: icmp_seq=1 ttl=64 time=0.468 ms
64 bytes from 10.0.0.20: icmp_seq=2 ttl=64 time=0.790 ms
64 bytes from 10.0.0.20: icmp_seq=3 ttl=64 time=0.918 ms
64 bytes from 10.0.0.20: icmp_seq=4 ttl=64 time=0.453 ms
64 bytes from 10.0.0.20: icmp_seq=5 ttl=64 time=0.827 ms

--- 10.0.0.20 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4095ms
rtt min/avg/max/mdev = 0.453/0.691/0.918/0.192 ms
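
One caveat worth noting: ping -p sets the ICMP payload pattern (hence the PATTERN: 0x9092 line); it does not probe a TCP port, so this only shows the host is up, not that 9092 accepts connections. A minimal TCP-level check, assuming netcat on the client and firewalld on the openSUSE broker, might be:

$ nc -vz 10.0.0.20 9092          # from the client: is the broker port reachable?
$ sudo firewall-cmd --list-ports # on the broker: which ports firewalld allows
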
networking python opensuse firewalld kafka
  • 1 answer
  • 550 Views
ElToro1966
Asked: 2015-10-22 00:46:53 +0800 CST

Possible Cisco router hack?

  • 3

We have a Cisco EPC3928AD EuroDocsis 3.0 2-PORT Voice Gateway from our ISP. The router is connected to a firewall (an Ubuntu box running iptables and Wireshark). Our LAN (10.0.0.1/24) sits behind the firewall. No other devices are connected to the router. The router's WiFi is disabled.

A few days ago we started noticing problems when fetching mail or browsing. Connections became slow, and sometimes we had no connectivity at all. The behavior seems to occur at random, for irregular periods (roughly 1-30 minutes). All devices on the LAN are affected. Some services, such as Skype, are not affected.

The ISP checked the router and the connection to the rest of the WAN. They found no problems with the modem itself, the signal strength, or the cabling. They also set up monitoring on the WAN segment the modem is open to, which ran for several days without revealing any problems.

Our LAN does not use DHCP, and we had also turned DHCP off in the modem. The WAN-facing NIC on the firewall is set to 192.168.0.201. Even though our LAN uses static addresses, and the DNS configuration on every NIC is set to the DNS servers recommended by the ISP, they told us that activating DHCP in the router "sometimes helps"...

We went ahead and activated DHCP with a starting address of 192.168.0.201 and a range of 1. We also reserved 192.168.0.201 for the MAC of the modem-facing NIC. What happened next puzzled us: in the router's "Pre-assigned DHCP IP addresses" list, an unknown MAC, 00:11:e6:de:ad:07 (00:11:e6 belongs to Scientific Atlanta, part of Cisco), was occupying 192.168.0.201. Moreover, in the router's "Connected devices summary" the same MAC appeared, but this time with an IP on our LAN (10.0.0.74)!

We rebooted the router, to no avail. The same unknown MAC showed up again, this time with a LAN address (10.0.0.2) that is already used by a workstation on the LAN. Blocking the MAC in iptables makes it disappear from the "Connected devices summary", but it stays in the "Pre-assigned DHCP IP addresses" list. We set the IP range to 2, so it now occupies 192.168.0.202 instead of 192.168.0.201.

Restarting the router or disconnecting it from the firewall does not help. The unknown MAC keeps coming back, and the intermittent connection problems remain. What is going on here? Is this some kind of hack? Any input would be greatly appreciated.

security
  • 1 answer
  • 2765 Views
ElToro1966
Asked: 2015-07-21 04:35:28 +0800 CST

iptables time-based rules have no effect - gateway with 2 NICs

  • 1

I have a network with the following topology:

- WAN modem with a NIC facing the gateway: 192.168.0.1
- Ubuntu 14.04 gateway with two NICs: 1) eth0 (facing the modem): 192.168.0.201, 2) eth1 (facing the LAN): 10.0.0.1

I am trying to use iptables to restrict access to the Internet, and access to the LAN from the Internet, by day of week and time of day, but the rules do not seem to have any effect.

In rc.local I have the following setup:

#!/bin/sh -e
#
# rc.local
# turning on address verification
echo -n "Enabling source address verification..."
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter
echo "done"

    #just for the sake of turning the networks off and on... not sure if it would work turning them back on only at the end of script ? -- Also flushing NICs
    ip addr flush eth0;
    ip addr flush eth1;
    ifconfig eth0 down;
    ifconfig eth1 down;
    ifconfig lo down;
    ifconfig lo up;
    ifconfig eth0 up;
    ifconfig eth1 up;
    ifconfig eth0 192.168.0.201 netmask 255.255.255.0
    ifconfig eth1 10.0.0.1 netmask 255.255.255.0
    #routing table check up :
    route add 127.0.0.1 dev lo;
    route add -net 127.0.0.0/8 dev lo;
    route add -net 10.0.0.0/24 dev eth1;
    route add -net 192.168.0.0/8 dev eth0;
    route add default gw 192.168.0.1;
    # turn fowarding off while configuring iptables :
    sysctl net/ipv4/ip_forward=0
    iptables -F
    iptables -X
    iptables -P INPUT DROP
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -t nat -F
    iptables -t nat -X
    iptables -t mangle -F
    iptables -t mangle -X
    #and on again once the policies are set
    sysctl net/ipv4/ip_forward=1
    #limiting LAN clients
    iptables -A FORWARD -d 10.0.0.74 -m time --timestart 20:00 --timestop 10:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP
    iptables -A FORWARD -d 10.0.0.228 -m time --timestart 20:00 --timestop 10:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP
    iptables -A FORWARD -d 10.0.0.121 -m time --timestart 20:00 --timestop 10:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP
    iptables -A FORWARD -d 10.0.0.221 -m time --timestart 20:00 --timestop 10:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP
    iptables -A FORWARD -d 10.0.0.2 -m time --timestart 10:00 --timestop 20:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP
    #block IPs
    iptables -A INPUT -s 173.194.45.189 -j DROP
    iptables -A INPUT -s 208.92.53.87 -j DROP
    #masquerade on wan card :
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    #accept all packets in lo and protect against spoofing :
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A INPUT -i !lo -s 127.0.0.0/8 -j LOG
    iptables -A INPUT -i !lo -s 127.0.0.0/8 -j DROP
    iptables -A FORWARD -i !lo -s 127.0.0.0/8 -j LOG
    iptables -A FORWARD -i !lo -s 127.0.0.0/8 -j DROP
    #accept only established input but all output on WAN card
    iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth1 -j ACCEPT 
    iptables -A OUTPUT -o eth0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT 
    #just forget the invalid packets :
    iptables -A OUTPUT -o eth0 -m state --state INVALID -j DROP
    iptables -A INPUT -i eth0 -m state --state INVALID -j LOG
    iptables -A INPUT -i eth0 -m state --state INVALID -j DROP
    #not sure whether to put this before or after spoofing protection ?
    iptables -A INPUT -i eth1 -j ACCEPT
    iptables -A OUTPUT -o eth1 -j ACCEPT
    #against spoofing on LAN card input :
    iptables -A INPUT -i !eth1 -s 10.0.0.0/24 -j LOG
    iptables -A INPUT -i !eth1 -s 10.0.0.0/24 -j DROP
exit 0

Listing the rules with iptables -L I get:

Chain INPUT (policy DROP)
target     prot opt source               destination         
DROP       all  --  173.194.45.189       anywhere            
DROP       all  --  208.92.53.87         anywhere            
ACCEPT     tcp  --  10.0.0.0/24          anywhere             ctstate NEW,RELATED,ESTABLISHED tcp dpt:sunrpc
ACCEPT     udp  --  10.0.0.0/24          anywhere             ctstate NEW,RELATED,ESTABLISHED udp dpt:sunrpc
ACCEPT     all  --  anywhere             anywhere            
LOG        all  --  127.0.0.0/8          anywhere             LOG level warning
DROP       all  --  127.0.0.0/8          anywhere            
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
LOG        all  --  anywhere             anywhere             state INVALID LOG level warning
DROP       all  --  anywhere             anywhere             state INVALID
ACCEPT     all  --  anywhere             anywhere            
LOG        all  --  10.0.0.0/24          anywhere             LOG level warning
DROP       all  --  10.0.0.0/24          anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination 
LOG        all  --  127.0.0.0/8          anywhere             LOG level warning
DROP       all  --  127.0.0.0/8          anywhere            
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             state NEW,RELATED,ESTABLISHED
DROP       all  --  anywhere             anywhere             state INVALID
ACCEPT     all  --  anywhere             anywhere

The time-based rules are not there. Can anyone see why? Note: it is currently a day of the week and a time of day at which the following rule should be active:

iptables -A FORWARD -d 10.0.0.2 -m time --timestart 10:00 --timestop 20:00 --days Sun,Mon,Tue,Wed,Thu,Fri -j DROP  
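
One possibility, offered as a sketch rather than a confirmed diagnosis: --days comes from the old ipt_time patch, while the xt_time match in current iptables releases spells the option --weekdays and evaluates times in UTC unless --kerneltz is added, so the rule above would be written roughly as:

iptables -A FORWARD -d 10.0.0.2 -m time --timestart 10:00 --timestop 20:00 \
    --weekdays Sun,Mon,Tue,Wed,Thu,Fri --kerneltz -j DROP
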
iptables
  • 1 answer
  • 4314 Views
