
alexus's questions

alexus
Asked: 2025-03-20 02:31:59 +0800 CST

bind9 | named: setgid(): Operation not permitted

  • 5

I received a notification from Victoria Risk [email protected]:

New BIND releases are available: 9.18.35, 9.20.7, 9.21.6

After I pulled the latest image, bind9 no longer starts, failing with the following new error:

$ docker compose up
[+] Running 2/1
 ✔ Network bind_bind  Created                0.2s
 ✔ Container bind9    Created                0.0s
Attaching to bind9
bind9  | named: setgid(): Operation not permitted
bind9 exited with code 1
$ 

Before I pulled the latest image, bind9 worked just fine.

My environment:

$ uname -a
Linux gamma 6.1.0-26-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.112-1 (2024-09-30) x86_64 GNU/Linux
$ docker --version
Docker version 27.3.1, build ce12230
$ docker compose version
Docker Compose version v2.29.7
$

My docker-compose.yml:

$ cat docker-compose.yml
services:
  bind:
    image: internetsystemsconsortium/bind9:9.20
    container_name: bind9
    env_file: .env
    expose:
      - 80
      - 443
      - 853
    networks:
      - default
    ports:
      - 0.0.0.0:53:53/udp
      - 0.0.0.0:53:53/tcp
      - 0.0.0.0:443:443/tcp
      - 0.0.0.0:853:853/tcp
      - 127.0.0.1:953:953/tcp
    restart: always
    volumes:
      - ./etc/bind:/etc/bind
      - ./var/cache/bind:/var/cache/bind
      - ./var/lib/bind:/var/lib/bind
      - ./var/log:/var/log

networks:
  default:
    name: bind_bind
    driver: bridge

$

named.conf:

$ cat ./etc/bind/named.conf
# Use with the following in named.conf, adjusting the allow list as needed:
key "rndc-key" {
    algorithm hmac-sha256;
    secret "XYZ";
};
#
controls {
    inet 127.0.0.1 port 953
        allow { 127.0.0.1; } keys { "rndc-key"; };
};
# End of named.conf

logging {
        channel stdout {
                stderr;
                severity debug;
#                severity info;
                print-category yes;
                print-severity yes;
                print-time yes;
        };
        category security { stdout; };
        category dnssec   { stdout; };
        category default  { stdout; };
        category queries  { stdout; };
};

tls local-tls {
    key-file "/etc/bind/letsencrypt/X.Y.Z/privkey.pem";
    cert-file "/etc/bind/letsencrypt/X.Y.Z/fullchain.pem";
};

http local-http-server {
    endpoints { "/dns-query";  };
};

options {
    allow-query-cache { any; };
    allow-recursion {
        X.X.X.X/Y;
    };
    allow-transfer {
        X.X.X.X/Y;
    };
    allow-update { none; };
    directory "/var/cache/bind";
    http-port 80;
    https-port 443;
    listen-on port 53 { any; };
    listen-on port 80 tls none http local-http-server { any; };
    listen-on port 443 tls local-tls http local-http-server { any; };
    listen-on-v6 port 53 { any; };
    listen-on-v6 port 80 tls none http local-http-server { any; };
    listen-on-v6 port 443 tls local-tls http local-http-server { any; };

    max-cache-size 100M;
    max-cache-ttl 3600;
    max-ncache-ttl 3600;

    # https://kb.isc.org/docs/bind-best-practices-authoritative#6-prepare-for-abuse-of-any-externalfacing-servers
    rate-limit {
#        slip 2; // Every other response truncated
#        window 15; // Seconds to bucket
        responses-per-second 5; // # of good responses per prefix-length/sec
#        referrals-per-second 5; // referral responses
#        nodata-per-second 5; // nodata responses
#        nxdomains-per-second 5; // nxdomain responses
#        errors-per-second 5; // error responses
#        all-per-second 20; // When we drop all
#        log-only no; // Debugging mode
#        pps-scale 250; // x / 1000 * per-second
#        // = new drop limit
#        exempt-clients { 127.0.0.1; 192.153.154.0/24; 192.160.238.0/24 };
#        ipv4-prefix-length 24; // Define the IPv4 block size
#        ipv6-prefix-length 56; // Define the IPv6 block size
#        max-table-size 20000; // 40 bytes * this number = max memory
#        min-table-size 500; // pre-allocate to speed startup
    };

    version none;
};

$

Please advise.


Unfortunately, the internetsystemsconsortium user/team publishes no third-level, version-specific tags, which leaves me unable to roll back: https://hub.docker.com/r/internetsystemsconsortium/bind9/tags
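
One workaround when tags can't be pinned is pinning by digest instead; a sketch, assuming a pre-update digest is still available locally or in the registry (the sha256 value below is a placeholder):

$ docker images --digests internetsystemsconsortium/bind9
$ grep 'image:' docker-compose.yml
    image: internetsystemsconsortium/bind9@sha256:<OLD_DIGEST>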

docker
  • 1 answer
  • 51 Views
alexus
Asked: 2024-11-14 06:44:10 +0800 CST

bind9 - response rate limiting

  • 5

I'm running BIND9 inside a Docker container and noticed a lot of messages like this:

bind9  | 13-Nov-2024 22:39:13.792 queries: info: client @0x7f81a0077000 200.55.244.14#7459 (.): query: . IN ANY +E(0) (172.19.0.2)

I'm now trying to implement Response Rate Limiting (RRL), following BIND Best Practices - Authoritative and/or Using the Response Rate Limiting (RRL) Feature. However, even with the following configuration added, it doesn't seem to make any difference:

options {
    …
    rate-limit {
        responses-per-second 5;
    };
};
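
One rough way to check whether the limiter engages at all (assuming a zone served authoritatively by this instance, since RRL is aimed at authoritative responses) is to hammer it from a single source and watch for drops/truncation:

$ for i in $(seq 1 50); do dig @127.0.0.1 example.com A +short +tries=1 +time=1; done

Temporarily adding log-only yes inside rate-limit makes would-be drops visible in the logs without actually dropping traffic.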

My environment:

$ docker --version
Docker version 27.3.1, build ce12230
$ docker compose version
Docker Compose version v2.29.7
$ cat /etc/debian_version
12.7
$ uname -a
Linux X 6.1.0-26-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.112-1 (2024-09-30) x86_64 GNU/Linux
$ docker images internetsystemsconsortium/bind9
REPOSITORY                        TAG       IMAGE ID       CREATED       SIZE
internetsystemsconsortium/bind9   9.21      2fe7f58e77a3   4 weeks ago   371MB
$

Please advise) Thank you in advance!

docker
  • 1 answer
  • 56 Views
alexus
Asked: 2024-02-08 05:28:08 +0800 CST

Copying a Kubernetes Secret between namespaces

  • 6

I'm trying to copy the secret dd-es-remote-ca from the default namespace into kube-system. Although I don't run into any errors along the way, the secret never actually gets copied.

% kubectl --namespace default get secret dd-es-remote-ca -o yaml | sed 's/namespace: default/namespace: kube-system/'
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNekNDQWh1Z0F3SUJBZ0lRYlJYNlJ6amFSQ0gwcWZ1VTNldFNSakFOQmdrcWhraUc5dzBCQVFzRkFEQWsKTVFzd0NRWURWUVFMRXdKa1pERVZNQk1HQTFVRUF4TU1aR1F0ZEhKaGJuTndiM0owTUI0WERUSTBNREl3TnpJdwpORGMxT0ZvWERUSTFNREl3TmpJd05UYzFPRm93SkRFTE1Ba0dBMVVFQ3hNQ1pHUXhGVEFUQmdOVkJBTVRER1JrCkxYUnlZVzV6Y0c5eWREQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUs4VzdNUEIKOU1YTU9KTGlqbnB3czYwR3gySm0vNzdiNmlxUGN1Y3lPZEVnZkVyZFJWcU13OWtyUlY5ZG92UFFLcHhxdXpZMgpLemFpYnpLb0ZVVWZpSEVJL0p2N1Y4c3ZWNkJSK3FudXFMbE9XcG90cWNiU205WGNUWmo4Mm5xU3o4UytSSFgrCmNMM1phTG0rczdlWHRIWDF1YzdYTEFMdDFyc1dFQkYrbWFoUmxXclR2VW1PdFdsL25wQ0FTY001MlROVW0yVTUKNUI0eklxa29OeXQrdG9IS0ZZYjZpZnVPNWlBUGtwVDBNQjRFbE1KNk45b1ZPTmllNDdGV0ZJUStqVmxyNXcrbQpEQzVnbVN4eE9oaCs0MHRoc1FBN1hLSkUvY3hCaVlxMmUyckxDM1JDdTUrWXJEUUpiaFpaT2dOb2QwT2syQVpqCis4QkNSaExIOEdlcldTVUNBd0VBQWFOaE1GOHdEZ1lEVlIwUEFRSC9CQVFEQWdLRU1CMEdBMVVkSlFRV01CUUcKQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSdQp4dEVqL3Rxenl5RWFNZ0F4NFc2SkpNeWRYVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbFFIbEtPcm1TUEZFCjNZR3p1Zm1ZaWVacUlZTXBBclJ2alR6SUFtams5SnFqbVYxK2Y2QXVFdE9CYy9NeHdwSEtuTHNZZVhRNUd1QVAKT0tsVTBiWGNiTmFocnd0Smg5YjVxWThSU1lMbmlrdTZxbHloK2taZHdZSTRxNW16dlVIeEJtUmpnaWJSbk05UQpLSE82QVkvbGtyNkQ5K0dJOHNDTlNLU2N5VUxVaVg0Um9sblpkUEdabmplbGFXNFBMMTVCRXNiMkRGTTV6WmVaCjBrR1p5RXhDWXdoWVZXKzlQVEo5aTRhcThYYmkvMDF4YlpxOWRIekhzd0E5SHBFcXpBVW1zeGJLQVJJUDhoNlAKdzRPQ21xbWFmY1MyaUtvdGtjQ0JrL1BiN0kzU0lmc09mczExTnVnNUFxOUYwUmd1a1FNUTJmT3pqM1FEbG1WYgpYcjZ5bE5ENGtRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
kind: Secret
metadata:
  creationTimestamp: "2024-02-07T20:57:58Z"
  labels:
    elasticsearch.k8s.elastic.co/cluster-name: dd
  name: dd-es-remote-ca
  namespace: kube-system
  ownerReferences:
  - apiVersion: elasticsearch.k8s.elastic.co/v1
    blockOwnerDeletion: true
    controller: true
    kind: Elasticsearch
    name: dd
    uid: 21e623d1-8711-4273-8d55-82d7f85ea5eb
  resourceVersion: "84366"
  uid: ec59fa6f-910c-4934-bdca-9f5105a67512
type: Opaque
%
% kubectl get secret dd-es-remote-ca --namespace default -o yaml | sed 's/namespace: default/namespace: kube\-system/g' | kubectl apply -f -
secret/dd-es-remote-ca created
% kubectl --namespace kube-system get secrets
No resources found in kube-system namespace.
%

What am I doing wrong? Please advise) Thank you!


% kubectl --namespace default get secret dd-es-remote-ca -o yaml | yq 'del ( .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid )' | sed 's/namespace: default/namespace: kube-system/' | kubectl apply -f -
secret/dd-es-remote-ca created
% kubectl --namespace kube-system get secrets
No resources found in kube-system namespace.
% kubectl --namespace default get secret dd-es-remote-ca -o yaml | yq 'del ( .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid )' | sed 's/namespace: default/namespace: kube-system/'
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNekNDQWh1Z0F3SUJBZ0lRYlJYNlJ6amFSQ0gwcWZ1VTNldFNSakFOQmdrcWhraUc5dzBCQVFzRkFEQWsKTVFzd0NRWURWUVFMRXdKa1pERVZNQk1HQTFVRUF4TU1aR1F0ZEhKaGJuTndiM0owTUI0WERUSTBNREl3TnpJdwpORGMxT0ZvWERUSTFNREl3TmpJd05UYzFPRm93SkRFTE1Ba0dBMVVFQ3hNQ1pHUXhGVEFUQmdOVkJBTVRER1JrCkxYUnlZVzV6Y0c5eWREQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUs4VzdNUEIKOU1YTU9KTGlqbnB3czYwR3gySm0vNzdiNmlxUGN1Y3lPZEVnZkVyZFJWcU13OWtyUlY5ZG92UFFLcHhxdXpZMgpLemFpYnpLb0ZVVWZpSEVJL0p2N1Y4c3ZWNkJSK3FudXFMbE9XcG90cWNiU205WGNUWmo4Mm5xU3o4UytSSFgrCmNMM1phTG0rczdlWHRIWDF1YzdYTEFMdDFyc1dFQkYrbWFoUmxXclR2VW1PdFdsL25wQ0FTY001MlROVW0yVTUKNUI0eklxa29OeXQrdG9IS0ZZYjZpZnVPNWlBUGtwVDBNQjRFbE1KNk45b1ZPTmllNDdGV0ZJUStqVmxyNXcrbQpEQzVnbVN4eE9oaCs0MHRoc1FBN1hLSkUvY3hCaVlxMmUyckxDM1JDdTUrWXJEUUpiaFpaT2dOb2QwT2syQVpqCis4QkNSaExIOEdlcldTVUNBd0VBQWFOaE1GOHdEZ1lEVlIwUEFRSC9CQVFEQWdLRU1CMEdBMVVkSlFRV01CUUcKQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSdQp4dEVqL3Rxenl5RWFNZ0F4NFc2SkpNeWRYVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbFFIbEtPcm1TUEZFCjNZR3p1Zm1ZaWVacUlZTXBBclJ2alR6SUFtams5SnFqbVYxK2Y2QXVFdE9CYy9NeHdwSEtuTHNZZVhRNUd1QVAKT0tsVTBiWGNiTmFocnd0Smg5YjVxWThSU1lMbmlrdTZxbHloK2taZHdZSTRxNW16dlVIeEJtUmpnaWJSbk05UQpLSE82QVkvbGtyNkQ5K0dJOHNDTlNLU2N5VUxVaVg0Um9sblpkUEdabmplbGFXNFBMMTVCRXNiMkRGTTV6WmVaCjBrR1p5RXhDWXdoWVZXKzlQVEo5aTRhcThYYmkvMDF4YlpxOWRIekhzd0E5SHBFcXpBVW1zeGJLQVJJUDhoNlAKdzRPQ21xbWFmY1MyaUtvdGtjQ0JrL1BiN0kzU0lmc09mczExTnVnNUFxOUYwUmd1a1FNUTJmT3pqM1FEbG1WYgpYcjZ5bE5ENGtRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
kind: Secret
metadata:
  labels:
    elasticsearch.k8s.elastic.co/cluster-name: dd
  name: dd-es-remote-ca
  namespace: kube-system
  ownerReferences:
    - apiVersion: elasticsearch.k8s.elastic.co/v1
      blockOwnerDeletion: true
      controller: true
      kind: Elasticsearch
      name: dd
      uid: 21e623d1-8711-4273-8d55-82d7f85ea5eb
type: Opaque
%
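
For what it's worth, the copy still carries the ownerReferences into the new namespace, and Kubernetes garbage-collects namespaced objects whose owner reference points across namespaces; a sketch that strips them as well (an assumption worth testing):

% kubectl --namespace default get secret dd-es-remote-ca -o yaml | yq 'del ( .metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid, .metadata.ownerReferences )' | sed 's/namespace: default/namespace: kube-system/' | kubectl apply -f -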
kubernetes
  • 1 answer
  • 127 Views
alexus
Asked: 2023-06-30 03:06:52 +0800 CST

jq: error: X/0 is not defined at <top-level>, line 1:

  • 5

I'm trying to extract a value from the following JSON:

% export test='{"a-b-c":"x-y-z"}'
% echo $test
{"a-b-c":"x-y-z"}
% echo $test | jq .a-b-c
jq: error: b/0 is not defined at <top-level>, line 1:
.a-b-c
jq: error: c/0 is not defined at <top-level>, line 1:
.a-b-c
jq: 2 compile errors
% echo $test | jq '."a-b-c"'
"x-y-z"
%

While the last command "works", my end goal is a shell script where the "a-b-c" key comes from a variable; but since I have to use single quotes, the actual value isn't passed in:

% export var1=a-b-c
% echo $var1
a-b-c
% echo $test | jq '."$var1"'
null
%

Please advise)
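
A minimal sketch of passing the key in through jq's --arg, which sidesteps the quoting problem (the variable becomes $k inside the filter):

% echo $test | jq --arg k "$var1" '.[$k]'
"x-y-z"
%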

bash
  • 2 answers
  • 667 Views
alexus
Asked: 2023-05-23 19:32:53 +0800 CST

ansible - couldn't resolve module/action 'amazon.aws.s3_object' / module amazon.aws.s3_object not found in configured module paths

  • 5

As required, I did install all the necessary packages, yet amazon.aws.s3_object:

  • couldn't be resolved/found
  • was not found in configured module paths

The steps I used to reproduce my issue:

% docker run -it debian:stable-slim bash
root@6140e6e2c06c:/# apt-get -qq update && apt-get -yqq install ansible python3-boto3 python3-botocore
root@6140e6e2c06c:/# uname -a
Linux 6140e6e2c06c 5.15.49-linuxkit #1 SMP PREEMPT Tue Sep 13 07:51:32 UTC 2022 aarch64 GNU/Linux
root@6140e6e2c06c:/# cat /etc/debian_version
11.7
root@6140e6e2c06c:/# ansible --version
ansible 2.10.8
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
root@6140e6e2c06c:/# python3 --version
Python 3.9.2
root@6140e6e2c06c:/#
root@6140e6e2c06c:/# ansible localhost -m amazon.aws.s3_object
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | FAILED! => {
    "msg": "The module amazon.aws.s3_object was not found in configured module paths"
}
root@6140e6e2c06c:/#...

Thank you in advance!
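
One thing worth ruling out (an assumption: Debian's ansible 2.10.8 package may bundle an amazon.aws collection too old to contain s3_object) is checking, and if need be installing, the collection from Galaxy:

root@6140e6e2c06c:/# ansible-galaxy collection list | grep amazon.aws
root@6140e6e2c06c:/# ansible-galaxy collection install amazon.aws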

amazon-web-services
  • 2 answers
  • 48 Views
alexus
Asked: 2023-05-08 10:13:22 +0800 CST

You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions

  • 5

My environment:

# cat /etc/debian_version
11.7
# ansible-playbook --version
ansible-playbook [core 2.13.1]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
  jinja version = 3.1.2
  libyaml = True
#

The following task is part of my Ansible role:

- name: _file - state:absent
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ find.files | flatten }}"
  register: file
  when: find.matched is defined and find.matched != 0

However, I noticed a warning message:

[WARNING]: TASK: XYZ : _file - state:absent: The loop variable 'item' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior.

After some reading:

  • Defining inner and outer variable names with loop_var
  • Adding controls to loops

I modified the task to look like this:

- name: _file - state:absent
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ find.files | flatten }}"
  loop_control:
    label: "{{ item.path }}"
    loop_var: find
  register: file
  when: find.matched is defined and find.matched != 0

This task is invoked twice in my role; the first invocation works fine, the second fails:

TASK [XYZ : _file - state:absent] ******************************************************************************************************************************************************************************
[WARNING]: TASK: XYZ : _file - state:absent: The loop variable 'find' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable
collisions and unexpected behavior.
failed: [127.0.0.1] (item=None) => {"ansible_loop_var": "find", "changed": false, "find": {"atime": 1683511713.8529496, "checksum": "ecd34202c34bf761e4c2c9800a39e18dffad5d9e", "ctime": 1683511714.972948, "dev": 2049, "gid": 0, "gr_name": "root", "inode": 150677, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1683511714.972948, "nlink": 1, "path": "tmp/FILE.CSV", "pw_name": "root", "rgrp": true, "roth": true, "rusr": true, "size": 642, "uid": 0, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}, "msg": "Failed to template loop_control.label: 'item' is undefined", "skip_reason": "Conditional result was False"}

Please advise.
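
A sketch of the same task, under the assumption that the collision comes from reusing find (already registered by an earlier find task) as the loop variable; renaming it to an unused name and referencing that name in label should avoid both warnings (found_file is a hypothetical name):

- name: _file - state:absent
  ansible.builtin.file:
    path: "{{ found_file.path }}"
    state: absent
  loop: "{{ find.files | flatten }}"
  loop_control:
    label: "{{ found_file.path }}"
    loop_var: found_file
  register: file
  when: find.matched is defined and find.matched != 0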

ansible
  • 1 answer
  • 36 Views
alexus
Asked: 2020-06-27 13:03:50 +0800 CST

It should be set to 65000 to avoid operational disruption

  • 6

I'm following Taking Solr to Production (Apache Solr Reference Guide 8.5), but I can't get past the warnings when restarting the solr service:

# service solr restart
*** [WARN] *** Your open file limit is currently 1024.
 It should be set to 65000 to avoid operational disruption.
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
*** [WARN] ***  Your Max Processes Limit is currently 1024.
 It should be set to 65000 to avoid operational disruption.
 If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your profile or solr.in.sh
Sending stop command to Solr running on port 8983 ... waiting up to 180 seconds to allow Jetty process 16065 to stop gracefully.
Waiting up to 180 seconds to see Solr running on port 8983 [\]
Started Solr server on port 8983 (pid=16320). Happy searching!

# 

My system:

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.10 (Santiago)
# uname -a
Linux X.X.X 2.6.32-754.30.2.el6.x86_64 #1 SMP Fri May 29 04:45:43 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
# 

After man 5 limits.conf:

# cat /etc/security/limits.d/498-solr.conf
solr    hard    nofile  65000
solr    hard    nproc   65000
#

However, I still get that warning message when restarting the solr service.

Please advise)

Thanks in advance!


@MirceaVutcovici:

# grep 'Max processes' /proc/$(pgrep solr)/limits
grep: /proc//limits: No such file or directory
# pgrep solr
# echo $?
1
# ps ax | grep solr
 1926 ?        Sl     2:37 java -server -Xms512m -Xmx512m -XX:+UseG1GC -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=250 -XX:+UseLargePages -XX:+AlwaysPreTouch -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/solr/logs/solr_gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M -Dsolr.jetty.inetaccess.includes= -Dsolr.jetty.inetaccess.excludes= -Dsolr.log.dir=/var/solr/logs -Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Dhost=rlos.uftwf.local -Duser.timezone=UTC -Djetty.home=/opt/solr/server -Dsolr.solr.home=/var/solr/data -Dsolr.data.home= -Dsolr.install.dir=/opt/solr -Dsolr.default.confdir=/opt/solr/server/solr/configsets/_default/conf -Dlog4j.configurationFile=/var/solr/log4j2.xml -Xss256k -Dsolr.jetty.https.port=8983 -Dsolr.log.muteconsole -XX:OnOutOfMemoryError=/opt/solr/bin/oom_solr.sh 8983 /var/solr/logs -jar start.jar --module=http
 9029 pts/0    S+     0:00 grep solr
# grep 'Max processes' /proc/1926/limits
Max processes             1024                 65000                processes
#
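
That output (soft limit 1024, hard limit 65000) is consistent with setting only hard limits: the startup check reads the soft value (ulimit -n). A sketch that raises the soft limits too, assuming pam_limits applies when the solr service starts:

# cat /etc/security/limits.d/498-solr.conf
solr    soft    nofile  65000
solr    hard    nofile  65000
solr    soft    nproc   65000
solr    hard    nproc   65000
#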
ulimit solr rhel6
  • 1 answer
  • 9048 Views
alexus
Asked: 2020-02-08 09:20:24 +0800 CST

Grafana - configuration overridden from environment variables

  • 0

I'm trying to configure root_url via an environment variable:

# grep GF_ROOT_URL docker-compose.override.yml 
                        - GF_ROOT_URL=https://g.x.com/
# 

However, when I invite users to Grafana, the invitation points them at localhost (rather than root_url)…

Please advise)
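
For reference, Grafana derives environment overrides as GF_<SectionName>_<KeyName>, and root_url lives in the [server] section of grafana.ini, so the variable presumably needs to be GF_SERVER_ROOT_URL rather than GF_ROOT_URL:

# grep GF_SERVER_ROOT_URL docker-compose.override.yml
                        - GF_SERVER_ROOT_URL=https://g.x.com/
#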

environment-variables docker grafana
  • 1 answer
  • 1211 Views
alexus
Asked: 2019-11-23 08:22:17 +0800 CST

The value True (type bool) in a string field was converted to u'True' (type string)

  • 18

I'm trying to follow the parameters/examples, but I run into the following warning message while executing an Ansible playbook:

TASK [apt (pre)] ********************************************************************************************
[WARNING]: The value True (type bool) in a string field was converted to u'True' (type string). If this does
not look like what you expect, quote the entire value to ensure it does not change.

The relevant part of the playbook:

- name: apt (pre)
  apt:
    update_cache: yes
    upgrade: yes

Please advise.
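
Following the warning's own advice, quoting the value keeps it a string; a sketch, given that the apt module's upgrade parameter takes string choices ('yes', 'safe', 'full', 'dist') while update_cache is a genuine boolean:

- name: apt (pre)
  apt:
    update_cache: yes
    upgrade: 'yes'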

ansible
  • 1 answer
  • 6619 Views
alexus
Asked: 2019-08-10 08:21:53 +0800 CST

Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s

  • 0

I'm trying to deploy an Elasticsearch cluster, but I run into the following error (Timeout):

$ cat <<EOF | kubectl apply -f -
> apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
> kind: Elasticsearch
> metadata:
>   name: quickstart
> spec:
>   version: 7.2.0
>   nodes:
>   - nodeCount: 1
>     config:
>       node.master: true
>       node.data: true
>       node.ingest: true
> EOF
Error from server (Timeout): error when creating "STDIN": Timeout: request did not complete within requested timeout 30s
$ echo $?
1
$ time kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.19", GitCommit:"bebe882824db5431820e3d59851c8fb52cb41675", GitTreeState:"clean", BuildDate:"2019-07-26T00:09:47Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

real    0m2.085s
user    0m0.061s
sys 0m0.019s
$ 

Please advise.


Via a file instead of stdin:

$ kubectl apply -f ./elasticsearch.yaml
Error from server (Timeout): error when creating "./elasticsearch.yaml": Timeout: request did not complete within requested timeout 30s
$

kubectl get pods --all-namespaces=true:

$ kubectl get pods --all-namespaces=true
NAMESPACE             NAME                                                     READY   STATUS    RESTARTS   AGE
elastic-system        elastic-operator-0                                       1/1     Running   1          2m20s
gitlab-managed-apps   certmanager-cert-manager-6df979599b-njkwf                1/1     Running   0          4m33s
gitlab-managed-apps   ingress-nginx-ingress-controller-7cf6944677-n4bkj        1/1     Running   0          5m58s
gitlab-managed-apps   ingress-nginx-ingress-default-backend-7f7bf55777-rv699   1/1     Running   0          5m58s
gitlab-managed-apps   prometheus-kube-state-metrics-5d5958bc-qzm2n             1/1     Running   0          4m9s
gitlab-managed-apps   prometheus-prometheus-server-5c476cc89-2j4vw             2/2     Running   0          4m9s
gitlab-managed-apps   runner-gitlab-runner-7f886d8cbb-5lpfh                    1/1     Running   0          3m25s
gitlab-managed-apps   tiller-deploy-5c85978967-2hdcb                           1/1     Running   0          6m29s
kube-system           calico-node-j6gq9                                        2/2     Running   0          21m
kube-system           calico-node-vertical-autoscaler-579467d76c-7vgcn         1/1     Running   4          23m
kube-system           calico-typha-65bfd5544b-dp8bk                            1/1     Running   0          21m
kube-system           calico-typha-horizontal-autoscaler-847fc7bc8d-vwz6b      1/1     Running   0          23m
kube-system           calico-typha-vertical-autoscaler-dc95cc498-qzfm2         1/1     Running   4          23m
kube-system           event-exporter-v0.2.4-5f88c66fb7-8c52j                   2/2     Running   0          23m
kube-system           fluentd-gcp-scaler-59b7b75cd7-82jzd                      1/1     Running   0          23m
kube-system           fluentd-gcp-v3.2.0-cq2sr                                 2/2     Running   0          22m
kube-system           heapster-v1.6.1-7447959494-pdvl5                         3/3     Running   0          22m
kube-system           ip-masq-agent-wwff4                                      1/1     Running   0          23m
kube-system           kube-dns-6987857fdb-67fjq                                4/4     Running   0          23m
kube-system           kube-dns-autoscaler-bb58c6784-kk8nv                      1/1     Running   0          23m
kube-system           kube-proxy-gke-test-default-pool-56270fe6-k846           1/1     Running   0          23m
kube-system           l7-default-backend-fd59995cd-9bt9g                       1/1     Running   0          23m
kube-system           metrics-server-v0.3.1-57c75779f-vxn2g                    2/2     Running   0          22m
kube-system           prometheus-to-sd-zwcr5                                   1/1     Running   0          23m
$ 

kubectl -n elastic-system logs statefulset.apps/elastic-operator:

$ kubectl -n elastic-system logs statefulset.apps/elastic-operator
{"level":"info","ts":1566587049.3942742,"logger":"manager","msg":"Setting up client for manager"}
{"level":"info","ts":1566587049.3945758,"logger":"manager","msg":"Setting up manager"}
{"level":"info","ts":1566587049.3946166,"logger":"manager","msg":"Exposing Prometheus metrics on /metrics","port":8080}
{"level":"info","ts":1566587049.527746,"logger":"manager","msg":"Setting up scheme"}
{"level":"info","ts":1566587049.5363684,"logger":"manager","msg":"Setting up controllers","roles":["all"]}
{"level":"info","ts":1566587049.5364962,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"license-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.536699,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"license-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5368743,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"trial-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5369632,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apmserver-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5370846,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apmserver-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.53719,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apmserver-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5372877,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apmserver-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5373478,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apmserver-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5374265,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apm-es-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.537455,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"apm-es-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.53751,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5376432,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5377114,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5377367,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-association-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5378115,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.537839,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5379903,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5380125,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5380402,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"channel source: 0xc0005bc190"}
{"level":"info","ts":1566587049.538163,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5381901,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5382109,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5382414,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5382638,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566587049.5382783,"logger":"manager","msg":"Setting up webhooks"}
{"level":"info","ts":1566587049.5690362,"logger":"manager","msg":"Starting the manager","uuid":"c80dcec9-c5d8-11e9-944f-8a11f67f61c1","namespace":"elastic-system","version":"0.9.0","build_hash":"8280d41","build_date":"2019-07-29T14:26:01Z","build_snapshot":"false"}
{"level":"info","ts":1566587049.669517,"logger":"kubebuilder.webhook","msg":"installing webhook configuration in cluster"}
{"level":"info","ts":1566587049.669637,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"trial-controller"}
{"level":"info","ts":1566587049.6697185,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"apmserver-controller"}
{"level":"info","ts":1566587049.6696646,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"elasticsearch-controller"}
{"level":"info","ts":1566587049.66953,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"apm-es-association-controller"}
{"level":"info","ts":1566587049.669866,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-controller"}
{"level":"info","ts":1566587049.6695576,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"license-controller"}
{"level":"info","ts":1566587049.669851,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-association-controller"}
{"level":"info","ts":1566587049.7698772,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"apmserver-controller","worker count":1}
{"level":"info","ts":1566587049.7700884,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trial-controller","worker count":1}
{"level":"info","ts":1566587049.770116,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1}
{"level":"info","ts":1566587049.7701266,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-association-controller","worker count":1}
{"level":"info","ts":1566587049.7701535,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"license-controller","worker count":1}
{"level":"info","ts":1566587049.7701898,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"apm-es-association-controller","worker count":1}
{"level":"info","ts":1566587049.7704868,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1}
{"level":"info","ts":1566587049.8052213,"logger":"kubebuilder.webhook","msg":"starting the webhook server."}
$ 
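
Since the request times out server-side, one guess worth ruling out is the ECK validating webhook being unreachable from the API server (a plausible cause of create timeouts on managed clusters); a couple of read-only checks:

$ kubectl get validatingwebhookconfigurations
$ kubectl -n elastic-system get svc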
elasticsearch
  • 1 answer
  • 1182 Views
alexus
Asked: 2019-06-15 11:04:00 +0800 CST

listen tcp 127.0.0.1:9090: bind: address already in use

  • 2

How do I find out which service is using a port?

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.6 (Maipo)
# netstat -natpv | grep 9090
tcp6       0      0 :::9090                 :::*                    LISTEN      1/systemd           
# 

Please advise.


# systemctl status cockpit.service 
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: inactive (dead) since Mon 2019-06-10 12:43:51 EDT; 4 days ago
     Docs: man:cockpit-ws(8)
 Main PID: 15922 (code=exited, status=0/SUCCESS)

Jun 10 12:41:48 X.X.X systemd[1]: Starting Cockpit Web Service...
Jun 10 12:41:48 X.X.X systemd[1]: Started Cockpit Web Service.
Jun 10 12:41:48 X.X.X cockpit-ws[15922]: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert
Jun 10 12:42:05 X.X.X cockpit-session[16311]: pam_ssh_add: Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
Jun 10 12:42:07 X.X.X cockpit-ws[15922]: logged in user session
Jun 10 12:42:07 X.X.X cockpit-ws[15922]: New connection to session from 10.52.208.221
Jun 10 12:42:21 X.X.X cockpit-ws[15922]: WebSocket from 10.52.208.221 for session closed
Jun 10 12:42:36 X.X.X cockpit-ws[15922]: session timed out
# lsof -i :9090
COMMAND PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
systemd   1 root   75u  IPv6 7761202      0t0  TCP *:websm (LISTEN)
# 
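
Since PID 1 (systemd) owns the listener, this looks like socket activation: a .socket unit holds port 9090 and starts the corresponding service (here, Cockpit) on the first connection. Listing socket units ties the port back to a unit:

# systemctl list-sockets | grep 9090
# systemctl status cockpit.socket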
systemd
  • 3 answers
  • 4502 Views
alexus
Asked: 2018-09-25 07:34:20 +0800 CST

Install and Set Up kubectl - Kubernetes

  • 1

I'm trying to follow Install and Set Up kubectl - Kubernetes, but I keep getting an error:

# yum --quiet install kubectl
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
Trying other mirror.
It was impossible to connect to the CentOS servers.
This could mean a connectivity issue in your environment, such as the requirement to configure a proxy,
or a transparent proxy that tampers with TLS security, or an incorrect system clock.
You can try to solve this issue by using the instructions on https://wiki.centos.org/yum-errors
If above article doesn't help to resolve this issue please use https://bugs.centos.org/.



 One of the configured repositories failed (Kubernetes),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=kubernetes ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable kubernetes
        or
            subscription-manager repos --disable=kubernetes

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=kubernetes.skip_if_unavailable=true

failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try.
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's Certificate issuer is not recognized."
# 

I also tried accessing the same URL with curl and got the following:

# curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
# 

Is there a way to use -k/--insecure somehow in yum's repo file?

Please advise.
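
yum's per-repository equivalent of curl -k is sslverify=0, added under the repo's section in its .repo file (with the obvious security trade-off); a sketch, with the existing settings elided:

# grep -v '^#' /etc/yum.repos.d/kubernetes.repo
[kubernetes]
...
sslverify=0
#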

centos
  • 3 answers
  • 3060 Views
alexus
Asked: 2018-02-08 14:16:01 +0800 CST

GCE / VPC = Operation timed out

  • 0

I followed the Using firewall rules | VPC | Google Cloud Platform documentation, but whenever I try to connect I get:

Operation timed out

Client side - verbose mode:

$ sftp -v -P 2222 [email protected]
OpenSSH_6.2p2, OpenSSL 0.9.8y 5 Feb 2013
debug1: Reading configuration data /home/alexus/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to x.x.x [x.x.x.x] port 2222.
debug1: connect to address x.x.x.x port 2222: Operation timed out
ssh: connect to host x.x.x port 2222: Operation timed out
Connection closed
$

Server side:

Since my packets never reach their destination, there's nothing in the logs (or I'm looking in the wrong place).


Quick recap:

  • VPC:
    • Created a firewall rule
    • Name/description (sftp2222/sftp)
    • Specified target tag (tag)
    • Specified source IP range (xxxx/32)
    • Specified protocol and port (tcp:2222)
  • GCE:
    • Network tag on the specific instance

Please advise.
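
Two read-only checks that would confirm the rule and the tag actually line up (names taken from the recap above; INSTANCE_NAME is a placeholder):

$ gcloud compute firewall-rules describe sftp2222
$ gcloud compute instances describe INSTANCE_NAME --format='get(tags.items)'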

firewall
  • 3 answers
  • 94 Views
alexus
Asked: 2017-11-19 17:07:15 +0800 CST

Status: Error response from daemon: node elk12 is ambiguous (2 matches found), Code: 1

  • 0

I'm using the following environment: Debian 9 with Docker CE:

# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
# docker --version
Docker version 17.09.0-ce, build afdb6d4
# 

docker node ls:

# docker node ls | grep elk12
2keku0oj8zhsy6uyvyl4gd4d7     elk12               Down                Active              Reachable
tbwbpkl5qys4wwxbisga3y2oe *   elk12               Ready               Active              Reachable
# docker node inspect elk12
[]
Status: Error response from daemon: node elk12 is ambiguous (2 matches found), Code: 1
#

I can't use docker node rm elk12, since per the output above I have 2 of them.

How do I go about removing the "Down" node from the list, preferably without affecting the working cluster)? I don't believe I even have that node anymore (probably some leftovers from long ago)…

Please advise.


Update:

# docker node rm 2keku0oj8zhsy6uyvyl4gd4d7
Error response from daemon: rpc error: code = FailedPrecondition desc = node 2keku0oj8zhsy6uyvyl4gd4d7 is a cluster manager and is a member of the raft cluster. It must be demoted to worker before removal
# docker node demote 2keku0oj8zhsy6uyvyl4gd4d7
Manager 2keku0oj8zhsy6uyvyl4gd4d7 demoted in the swarm.
# docker node rm 2keku0oj8zhsy6uyvyl4gd4d7
2keku0oj8zhsy6uyvyl4gd4d7
# docker node ls | grep elk12
tbwbpkl5qys4wwxbisga3y2oe     elk12               Ready               Active              Reachable
# 
docker
  • 1 answer
  • 2622 Views
alexus
Asked: 2017-04-04 08:41:40 +0800 CST

curl unable to get local issuer certificate

  • 2

I'm using a Let's Encrypt certificate, and when I visit the server with a browser it reports the page as "secure", yet when I use curl I get the following:

# curl https://X.X.X
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
# 

My system:

# cat /etc/debian_version 
8.7
# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
# 
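
Two usual suspects here are a server that doesn't send the intermediate certificate (browsers often fill that gap themselves; curl doesn't) and a stale CA bundle on jessie; a couple of checks worth trying:

# openssl s_client -connect X.X.X:443 -servername X.X.X </dev/null | grep -A4 'Certificate chain'
# apt-get install --reinstall ca-certificates && update-ca-certificates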
curl
  • 1 answer
  • 3290 Views
alexus
Asked: 2016-11-02 08:21:43 +0800 CST

httpd running as context unconfined_u:system_r:httpd_t:s0 (SELinux is permissive)

  • 0

I'm having trouble starting httpd:

# service httpd status
httpd is stopped
# service httpd start
Starting httpd: [Tue Nov 01 12:02:53 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
                                                           [FAILED]
# tail /var/log/httpd/error_log
[Tue Nov 01 12:59:57 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Nov 01 13:00:11 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[Tue Nov 01 13:00:11 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 13:00:49 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Nov 01 13:05:15 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[Tue Nov 01 13:05:15 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 14:38:56 2016] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Tue Nov 01 14:40:38 2016] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Tue Nov 01 14:59:55 2016] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Tue Nov 01 15:00:40 2016] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
# cat /var/log/httpd/error_log | grep -v 'SELinux policy enabled'
[Tue Nov 01 12:30:07 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:30:52 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:31:17 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:31:35 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:31:43 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:32:10 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 12:38:22 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 13:00:11 2016] [info] Init: Initialized OpenSSL library
[Tue Nov 01 13:05:15 2016] [info] Init: Initialized OpenSSL library
# getenforce 
Permissive
# httpd -t
Syntax OK
# httpd -e debug -k start
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module authz_host_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module log_config_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module setenvif_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module mime_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module autoindex_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module negotiation_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module dir_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module alias_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module rewrite_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module proxy_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module proxy_http_module
[Tue Nov 01 12:32:10 2016] [debug] mod_so.c(246): loaded module ssl_module
# echo $?
1
# run_init service httpd start
Authenticating root.
Password: 
Starting httpd:                                            [FAILED]
# 

My environment:

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.8 (Santiago)
# uname -a
Linux X 2.6.32-642.6.1.el6.x86_64 #1 SMP Thu Aug 25 12:42:19 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
# rpm -q httpd
httpd-2.2.15-54.el6_8.x86_64
#

Please advise.
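
To surface the actual startup error (the error_log above only shows notices), running httpd in single-process foreground mode usually prints the failure straight to the terminal:

# httpd -X -e debug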

redhat httpd selinux rhel6
  • 1 answer
  • 9717 Views
alexus
Asked: 2016-09-26 11:01:51 +0800 CST

repository_exception with elastic's Snapshot And Restore module

  • 1

I'm using elk-docker, trying to follow Snapshot And Restore | Elasticsearch Reference [2.4] | Elastic, and I get the following error:

# curl --request PUT --data '{ "type": "fs", "settings": {"compress": true, "location": "/run/elasticsearch/backups" } }' localhost:9200/_snapshot/my_backup?pretty
{
  "error" : {
    "root_cause" : [ {
      "type" : "repository_exception",
      "reason" : "[my_backup] failed to create repository"
    } ],
    "type" : "repository_exception",
    "reason" : "[my_backup] failed to create repository",
    "caused_by" : {
      "type" : "creation_exception",
      "reason" : "Guice creation errors:\n\n1) Error injecting constructor, RepositoryException[[my_backup] location [/run/elasticsearch/backups] doesn't match any of the locations specified by path.repo because this setting is empty]\n  at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)\n  while locating org.elasticsearch.repositories.fs.FsRepository\n  while locating org.elasticsearch.repositories.Repository\n\n1 error",
      "caused_by" : {
        "type" : "repository_exception",
        "reason" : "[my_backup] location [/run/elasticsearch/backups] doesn't match any of the locations specified by path.repo because this setting is empty"
      }
    }
  },
  "status" : 500
}
# 

Please advise.
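
The nested reason spells out the cause: path.repo is empty, so the fs location isn't whitelisted. A sketch of registering it, assuming the stock config path inside the elk-docker container (Elasticsearch needs a restart afterwards):

# grep path.repo /etc/elasticsearch/elasticsearch.yml
path.repo: ["/run/elasticsearch/backups"]
#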

snapshot elasticsearch elk
  • 1 answer
  • 2056 Views
alexus
Asked: 2016-05-25 13:15:19 +0800 CST

ntlmssp_handle_neg_flags: Got challenge flags[0x60898205] - possible downgrade detected! missing_flags[0x00000010] - NT code 0x80090302

  • 0

I'm trying to access an SMB/CIFS resource on a NetApp using the following, and I get an error:

$ cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
$ rpm -qa | grep ^samba-
samba-client-4.2.10-6.el7_2.x86_64
samba-libs-4.2.10-6.el7_2.x86_64
samba-common-libs-4.2.10-6.el7_2.x86_64
samba-common-4.2.10-6.el7_2.noarch
samba-client-libs-4.2.10-6.el7_2.x86_64
samba-common-tools-4.2.10-6.el7_2.x86_64
$ smbclient //X/Y$ -U DOMAIN/user -L
Enter DOMAIN/user's password: 
ntlmssp_handle_neg_flags: Got challenge flags[0x60898205] - possible downgrade detected! missing_flags[0x00000010] - NT code 0x80090302
  NTLMSSP_NEGOTIATE_SIGN
SPNEGO(ntlmssp) login failed: NT code 0x80090302
session setup failed: NT code 0x80090302
$ 

I'm trying to use the following option with my smbclient:

--option=<name>=<value>

Set the smb.conf(5) option "<name>" to value "<value>" from the command line. This overrides compiled-in defaults and options read from the configuration file.

However, I still can't access the resource on the server, because I'm doing it wrong:

$ smbclient //X/Y$ -U DOMAIN/user -L --option='client ntlmv2 auth'=no
Enter DOMAIN/user's password: 
Connection to --option=client ntlmv2 auth=no failed (Error NT_STATUS_UNSUCCESSFUL)
$ 

What is the correct way to set client ntlmv2 auth=no via --option in smbclient?
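
A sketch of what presumably was intended: the whole name=value pair goes inside the quotes after --option=, and -L takes the host as its argument:

$ smbclient -L X -U DOMAIN/user --option='client ntlmv2 auth=no'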

centos samba centos7
  • 1 answer
  • 1751 Views
alexus
Asked: 2016-04-22 13:34:55 +0800 CST

Cannot start container: failed to create endpoint X on network bridge:

  • 1

I'm using the following system:

[alexus@wcmisdlin02 Desktop]$ rpm -q docker
docker-1.9.1-25.el7.centos.x86_64
[alexus@wcmisdlin02 Desktop]$ cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
[alexus@wcmisdlin02 Desktop]$ uname -a
Linux wcmisdlin02.uftmasterad.org 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[alexus@wcmisdlin02 Desktop]$ 

I'm referring to the Compose File Reference for the docker-compose.yml syntax:

[alexus@wcmisdlin02 Desktop]$ cat docker-compose.yml 
nginx:
  container_name: nginx
  image: nginx
  ports:
    - "80:80"
[alexus@wcmisdlin02 Desktop]$ docker-compose up
Creating nginx

ERROR: for nginx  Cannot start container fcaba40fb21cc64f514d71eb8117ba0f2102482be6e74615e96261667403a236: failed to create endpoint nginx on network bridge: COMMAND_FAILED: '/sbin/iptables -w2 -t nat -A DOCKER -p tcp -d 0/0 --dport 80 -j DNAT --to-destination 172.17.0.2:80 ! -i docker0' failed: iptables: No chain/target/match by that name.
Attaching to 
[alexus@wcmisdlin02 Desktop]$

If I remove the ports section from docker-compose.yml, the container starts, but then the networking obviously isn't set up the way I need either.

I need the nginx container to listen on port 80 on the host.

What am I doing wrong?
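
The underlying iptables error ("No chain/target/match by that name") typically means the DOCKER nat chain is gone, e.g. after firewalld restarted or reloaded underneath a running dockerd; restarting the Docker daemon recreates its chains (an assumption worth testing first):

[alexus@wcmisdlin02 Desktop]$ sudo systemctl restart docker
[alexus@wcmisdlin02 Desktop]$ docker-compose up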

centos7 docker docker-compose
  • 2 answers
  • 7659 Views
alexus
Asked: 2016-04-08 08:05:22 +0800 CST

fprintd: ** Message: No devices in use, exit

  • 0

I keep getting the following messages in /var/log/messages:

4/7/2016, 11:03:49 AM   fprintd[3277]   Launching FprintObject
4/7/2016, 11:03:49 AM   fprintd[3277]   ** Message: D-Bus service launched with name: net.reactivated.Fprint
4/7/2016, 11:03:49 AM   fprintd[3277]   ** Message: entering main loop
4/7/2016, 11:04:20 AM   fprintd[3277]   ** Message: No devices in use, exit

On the following system:

$ cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
$ uname -a
Linux X 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ 

Even though the service is disabled:

$ systemctl status fprintd
● fprintd.service - Fingerprint Authentication Daemon
   Loaded: loaded (/usr/lib/systemd/system/fprintd.service; static; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:fprintd(1)

Apr 07 11:05:27 X fprintd[4871]: Launching FprintObject
Apr 07 11:05:27 X fprintd[4871]: ** Message: D-Bus service launched with name: net.reactivated.Fprint
Apr 07 11:05:27 X fprintd[4871]: ** Message: entering main loop
Apr 07 11:05:58 X fprintd[4871]: ** Message: No devices in use, exit
Apr 07 11:18:22 X systemd[1]: Starting Fingerprint Authentication Daemon...
Apr 07 11:18:22 X systemd[1]: Started Fingerprint Authentication Daemon.
Apr 07 11:18:22 X fprintd[7010]: Launching FprintObject
Apr 07 11:18:22 X fprintd[7010]: ** Message: D-Bus service launched with name: net.reactivated.Fprint
Apr 07 11:18:22 X fprintd[7010]: ** Message: entering main loop
Apr 07 11:18:52 X fprintd[7010]: ** Message: No devices in use, exit
$ 

How do I actually disable this, so it doesn't show up in my logs at all?

$ systemctl status dbus
● dbus.service - D-Bus System Message Bus
   Loaded: loaded (/usr/lib/systemd/system/dbus.service; static; vendor preset: disabled)
   Active: active (running) since Thu 2016-04-07 11:03:16 EDT; 57min ago
 Main PID: 904 (dbus-daemon)
   CGroup: /system.slice/dbus.service
           └─904 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation

Apr 07 11:14:35 X dbus[904]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 07 11:14:35 X dbus-daemon[904]: dbus[904]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 07 11:16:25 X dbus-daemon[904]: dbus[904]: [system] Activating service name='org.freedesktop.problems' (using servicehelper)
Apr 07 11:16:25 X dbus[904]: [system] Activating service name='org.freedesktop.problems' (using servicehelper)
Apr 07 11:16:25 X dbus[904]: [system] Successfully activated service 'org.freedesktop.problems'
Apr 07 11:16:25 X dbus-daemon[904]: dbus[904]: [system] Successfully activated service 'org.freedesktop.problems'
Apr 07 11:18:22 X dbus-daemon[904]: dbus[904]: [system] Activating via systemd: service name='net.reactivated.Fprint' unit='fprintd.service'
Apr 07 11:18:22 X dbus[904]: [system] Activating via systemd: service name='net.reactivated.Fprint' unit='fprintd.service'
Apr 07 11:18:22 X dbus[904]: [system] Successfully activated service 'net.reactivated.Fprint'
Apr 07 11:18:22 X dbus-daemon[904]: dbus[904]: [system] Successfully activated service 'net.reactivated.Fprint'
$ 

I asked the same question on the CentOS board (fprintd: ** Message: No devices in use, exit - CentOS) but got no answer; anyway…
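
Since fprintd is started on demand via D-Bus activation (visible in the dbus log above), disabling isn't enough; masking the unit should keep systemd from activating it at all:

$ sudo systemctl mask fprintd.service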

centos7
  • 2 answers
  • 7341 Views
