The Traefik example is posted at:
a) How can I modify this code to not use Letsencrypt and instead use some valid SSL certificates, using only commands (i.e. under the `command:` section)?
b) How can I force Traefik to listen on a "secure" host port 8443 and forward it to its secure port 443?
Thanks in advance
How do I configure Docker to efficiently develop two dependent Python repositories?
In development: mount the framework into the runtime via a volume for fast iteration (hot reload).
In production: build a clean runtime image that does not depend on host volumes.
dev/
├── framework/
│ └── ...
└── runtime/
├── ...
└── Dockerfile
Addendum: development of the dependency moves slowly (e.g. an open-source project that needs user acceptance). Unit-testing the runtime via a plain venv with `pip install -e` is the obvious choice — but any CI/CD pipeline has to wait until the dependency is released before it can install it.
Before all of that, how do I run Docker locally so I can move fast, break things, and iterate?
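One common pattern, sketched under the assumption that `framework/` is a pip-installable package and the runtime image has Python (all names below are illustrative): in dev, mount the framework repo into the container and install it editable, so host edits are picked up immediately; in production, the Dockerfile installs a pinned, released version instead and no volumes are used.

```shell
# Dev-loop sketch, assuming the directory tree above and a pip-installable
# framework/ package (image name and test command are illustrative).
docker build -t runtime-dev ./runtime
docker run --rm -it \
  -v "$PWD/framework:/opt/framework" \
  -v "$PWD/runtime:/app" \
  runtime-dev \
  sh -c "pip install -e /opt/framework && pytest /app"
# Production: no volumes; the runtime Dockerfile installs a pinned,
# released version of the framework instead of the editable checkout.
```

This keeps the production image independent of the host while the dev loop stays fast; CI can still wait for released dependency versions.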
I dockerized an Airflow DAG that basically copies a CSV file to GCP, loads it into BigQuery, and runs a simple transformation. When I run `docker-compose run`, my DAG gets executed twice. I can't figure out which part of the code causes this. When I trigger the DAG manually from the UI, it only runs once.
My DAG:
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.bash import BashOperator

from scripts import extract_and_gcpload, load_to_BQ

default_args = {
    'owner': 'shweta',
    'start_date': datetime(2025, 4, 24),
    'retries': 0
}

with DAG(
    'spacex_etl_dag',
    default_args=default_args,
    schedule=None,  # pass only one of schedule / schedule_interval
    catchup=False  # prevents Airflow from running missed periods
) as dag:
    extract_and_upload = PythonOperator(
        task_id="extract_and_upload_to_gcs",
        python_callable=extract_and_gcpload.load_to_gcp_pipeline,
    )
    load_to_bq = PythonOperator(
        task_id="load_to_BQ",
        python_callable=load_to_BQ.load_csv_to_bigquery
    )
    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="cd '/opt/airflow/dbt/my_dbt' && dbt run --profiles-dir /opt/airflow/dbt"
    )

    extract_and_upload >> load_to_bq >> run_dbt
My entrypoint file, startscript.sh:
#!/bin/bash
set -euo pipefail

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

# Optional: Run DB init & parse DAGs
log "Initializing Airflow DB..."
airflow db upgrade

log "Parsing DAGs..."
airflow scheduler --num-runs 1

DAG_ID="spacex_etl_dag"
log "Unpausing DAG: $DAG_ID"
airflow dags unpause "$DAG_ID" || true

log "Triggering DAG: $DAG_ID"
airflow dags trigger "$DAG_ID" || true

log "Creating admin user (if not exists)..."
airflow users create \
    --username admin \
    --firstname Admin \
    --lastname User \
    --role Admin \
    --email [email protected] \
    --password admin || true

if [[ "$1" == "webserver" || "$1" == "scheduler" ]]; then
    log "Starting Airflow: $1"
    exec airflow "$@"
else
    log "Executing: $@"
    exec "$@"
fi
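A likely source of the double run: this entrypoint is shared by both the airflow-webserver and airflow-scheduler services in the compose file, so `airflow dags trigger` fires once per container, while a manual UI trigger fires only once. A sketch of restricting the one-shot trigger to a single service:

```shell
# Sketch: only trigger the DAG from the scheduler container, so an
# entrypoint shared by webserver and scheduler does not fire the
# trigger twice.
DAG_ID="spacex_etl_dag"
if [ "${1:-}" = "scheduler" ]; then
    airflow dags trigger "$DAG_ID" || true
fi
```

The same guard can be applied to `airflow db upgrade` and user creation, which also only need to run once.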
My docker-compose.yaml file:
services:
  airflow-webserver:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: airflow-webserver
    env_file: .env
    restart: always
    environment:
      AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'false'
      AIRFLOW__LOGGING__REMOTE_LOGGING: 'False'
      AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
      AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
      GOOGLE_APPLICATION_CREDENTIALS: /opt/airflow/secrets/llms-395417-c18ea70a3f54.json
    volumes:
      - ./dags:/opt/airflow/dags
      - ./scripts:/opt/airflow/scripts
      - ./dbt:/opt/airflow/dbt
      - ./secrets:/opt/airflow/secrets
    ports:
      - 8080:8080
    command: webserver

  airflow-scheduler:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: airflow-scheduler
    env_file: .env
    restart: always
    environment:
      AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'false'
      AIRFLOW__LOGGING__REMOTE_LOGGING: 'False'
      AIRFLOW__CORE__LOAD_EXAMPLES: 'false'
      AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
      GOOGLE_APPLICATION_CREDENTIALS: /opt/airflow/secrets/llms-395417-c18ea70a3f54.json
    volumes:
      - ./dags:/opt/airflow/dags
      - ./dbt:/opt/airflow/dbt
      - ./secrets:/opt/airflow/secrets
      - ./scripts:/opt/airflow/scripts
    depends_on:
      - postgres
    command: scheduler

  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data

volumes:
  postgres-db-volume:
I'm fairly new to AWS and Docker.
In my local development environment, I want to upload a file from the app server and place it in LocalStack S3. However, when I try to access LocalStack S3 from the app server, I get the following error.
AggregateError [ECONNREFUSED]:
at internalConnectMultiple (node:net:1139:18)
at afterConnectMultiple (node:net:1712:7) {
code: 'ECONNREFUSED',
'$metadata': { attempts: 3, totalRetryDelay: 105 },
[errors]: [
Error: connect ECONNREFUSED ::1:4566
at createConnectionError (node:net:1675:14)
at afterConnectMultiple (node:net:1705:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 4566
},
Error: connect ECONNREFUSED 127.0.0.1:4566
at createConnectionError (node:net:1675:14)
at afterConnectMultiple (node:net:1705:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 4566
}
]
}
The docker-compose configuration is below. I referred to this link.
version: '3'
services:
  test-app:
    build:
      dockerfile: Dockerfile
      context: .
      args:
        - VARIANT=22-bookworm
    container_name: test-app
    stdin_open: true
    dns:
      # Set the DNS server to be the LocalStack container
      - 10.0.2.20
    networks:
      - ls

  localstack:
    container_name: localstack
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4510-4559:4510-4559"
    environment:
      - SERVICES=s3
    networks:
      ls:
        # Set the container IP address in the 10.0.2.0/24 subnet
        ipv4_address: 10.0.2.20

networks:
  ls:
    ipam:
      config:
        # Specify the subnet range for IP address allocation
        - subnet: 10.0.2.0/24
The application runs on NestJS, and the S3Client is initialized and injected with the following snippet:
import { S3Client } from '@aws-sdk/client-s3';
import type { Provider } from '@nestjs/common';

export const S3_CLIENT_TOKEN = 'S3Client';

export const s3ClientProvider: Provider = {
  provide: S3_CLIENT_TOKEN,
  useValue: new S3Client({
    endpoint: 'https://localhost.localstack.cloud:4566',
    region: 'ap-northeast-1',
    credentials: {
      accessKeyId: '',
      secretAccessKey: '',
    },
  }),
};
The controller uses the following snippet:
@Post()
async uploadFile(
  @AuthorizedUser() _user: User,
  @UploadedFile() file: Express.Multer.File,
): Promise<void> {
  const command = new PutObjectCommand({
    Bucket: 'test',
    Key: file.filename,
    Body: await readFile(file.path),
  });
  const response = await this.s3Client.send(command);
  return;
}
Can someone help me with this? Thanks in advance :)
I expected either a "bucket does not exist" error rather than a network error, or for the file upload to succeed.
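A diagnostic sketch for the error above (service names are taken from the compose file; the ::1/127.0.0.1 addresses in the stack trace suggest the hostname is resolving to the app container itself rather than to LocalStack):

```shell
# If localhost.localstack.cloud does not resolve to 10.0.2.20 inside
# test-app, the custom dns: entry is not taking effect and the SDK falls
# back to localhost — which would explain ECONNREFUSED on ::1/127.0.0.1.
docker compose exec test-app getent hosts localhost.localstack.cloud
# The compose service name always resolves on the shared network, so this
# is a simple connectivity fallback check:
docker compose exec test-app curl -s http://localstack:4566/_localstack/health
```

If the first command does not print 10.0.2.20, switching the SDK endpoint to `http://localstack:4566` (the service name), together with `forcePathStyle: true` in the S3Client options, is a common fallback with LocalStack.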
I tried the steps from this link, but did not get the result I expected.
Building the docker image:
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ export MYREPO=raphaelcollab/stunner
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ sudo docker build -t $MYREPO/nexus .
[+] Building 71.6s (29/29) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.79kB 0.0s
=> [internal] load metadata for docker.io/library/debian:bookworm-20240701-slim 0.0s
=> [internal] load metadata for docker.io/hexpm/elixir:1.17.2-erlang-27.0.1-debian-bookworm-20240701-slim 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 1.31kB 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 158.99kB 0.0s
=> [stage-1 1/6] FROM docker.io/library/debian:bookworm-20240701-slim 0.0s
=> [builder 1/17] FROM docker.io/hexpm/elixir:1.17.2-erlang-27.0.1-debian-bookworm-20240701-slim 0.1s
=> [stage-1 2/6] RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales ca-certificates && apt-get clean && rm -f /var/lib/apt/lists/*_* 9.4s
=> [builder 2/17] RUN apt-get update -y && apt-get install -y build-essential git pkg-config libssl-dev && apt-get clean && rm -f /var/lib/apt/lists/*_* 28.4s
=> [stage-1 3/6] RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen 1.5s
=> [stage-1 4/6] WORKDIR /app 0.1s
=> [stage-1 5/6] RUN chown nobody /app 0.2s
=> [builder 3/17] WORKDIR /app 0.0s
=> [builder 4/17] RUN mix local.hex --force && mix local.rebar --force 2.1s
=> [builder 5/17] COPY mix.exs mix.lock ./ 0.1s
=> [builder 6/17] RUN mix deps.get --only prod 2.9s
=> [builder 7/17] RUN mkdir config 0.2s
=> [builder 8/17] COPY config/config.exs config/prod.exs config/ 0.1s
=> [builder 9/17] RUN mix deps.compile 27.5s
=> [builder 10/17] COPY priv priv 0.0s
=> [builder 11/17] COPY lib lib 0.1s
=> [builder 12/17] COPY assets assets 0.1s
=> [builder 13/17] RUN mix assets.deploy 6.6s
=> [builder 14/17] RUN mix compile 0.7s
=> [builder 15/17] COPY config/runtime.exs config/ 0.1s
=> [builder 16/17] COPY rel rel 0.0s
=> [builder 17/17] RUN mix release 1.4s
=> [stage-1 6/6] COPY --from=builder --chown=nobody:root /app/_build/prod/rel/nexus ./ 0.4s
=> exporting to image 0.6s
=> => exporting layers 0.5s
=> => writing image sha256:a0e0d292a593181591b1e354c7c5364717e1043cd2fd21acb8f7d3673ed8372d 0.0s
=> => naming to docker.io/raphaelcollab/stunner/nexus
Logging out and back in:
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker logout
Removing login credentials for https://index.docker.io/v1/
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker login -u raphaelcollab
Creating the docker tag (docker push errors with "denied: requested access to the resource is denied"):
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker tag docker.io/raphaelcollab/stunner/nexus:latest docker.io/raphaelcollab/stunner/nexus
Attempting to push to the docker repo:
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker push docker.io/raphaelcollab/stunner/nexus
Using default tag: latest
The push refers to repository [docker.io/raphaelcollab/stunner/nexus]
0603097560bf: Preparing
b5a39b1ed0d7: Preparing
6ecbac9c9860: Preparing
49d482126ff0: Preparing
c5a36b2dc9a4: Preparing
32148f9f6c5a: Waiting
denied: requested access to the resource is denied
OS: Ubuntu 24.04
Addendum 1: output of docker login -u:
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker login -u raphaelcollab
i Info → A Personal Access Token (PAT) can be used instead.
To create a PAT, visit https://app.docker.com/settings
Password:
Login Succeeded
(base) raphy@raohy:~/.talos/stunner/apps/nexus$
Addendum 2:
This is the repo I created on Docker Hub:
Addendum 3:
I can only create a repo named stunner, not one named stunner/nexus.
When I push to docker.io/raphaelcollab/stunner/nexus:latest, access is denied:
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker tag raphaelcollab/stunner/nexus:latest raphaelcollab/stunner/nexus:latest
(base) raphy@raohy:~/.talos/stunner/apps/nexus$ docker push docker.io/raphaelcollab/stunner/nexus:latest
The push refers to repository [docker.io/raphaelcollab/stunner/nexus]
0603097560bf: Preparing
b5a39b1ed0d7: Preparing
6ecbac9c9860: Preparing
49d482126ff0: Preparing
c5a36b2dc9a4: Preparing
32148f9f6c5a: Waiting
denied: requested access to the resource is denied
How can I get this to work?
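If the observation in Addendum 3 is the root cause — Docker Hub repository names have exactly two path components, namespace/repository, so stunner/nexus cannot exist under raphaelcollab — then the image has to be renamed before pushing. A sketch (the new repo and tag names below are illustrative choices):

```shell
# Option 1: encode the app in the tag of the existing stunner repo.
docker tag raphaelcollab/stunner/nexus:latest raphaelcollab/stunner:nexus
docker push raphaelcollab/stunner:nexus
# Option 2: one flat repo per app (create it on Docker Hub first).
docker tag raphaelcollab/stunner/nexus:latest raphaelcollab/stunner-nexus:latest
docker push raphaelcollab/stunner-nexus:latest
```

Registries that do support nested paths (e.g. GitLab's) would accept the three-part name, but Docker Hub is not one of them.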
I've been trying to deploy Airflow on a Kubernetes cluster hosted in my homelab, using the Helm chart (as a subchart).
I've run into some problems migrating the DAGs I tested locally (on docker-compose) to the Kubernetes cluster. Can I use the CeleryExecutor with Airflow on Kubernetes? If so, how should I change the DockerOperators in my DAGs? While searching for an answer I stumbled on the following stackoverflow post, which suggests that DockerOperator is not a good choice when running on Kubernetes — is that right?
Does that mean I should not use the CeleryExecutor?
What are the best practices when migrating DAGs from docker-compose to Kubernetes? Thanks
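For what it's worth, a minimal sketch assuming the official apache-airflow Helm chart: the executor is an ordinary chart value, so CeleryExecutor is a supported choice on Kubernetes. The executor question is separate from the operator question — the usual replacement for DockerOperator on Kubernetes is KubernetesPodOperator, since tasks generally should not talk to a Docker daemon on the node.

```shell
# Sketch, assuming the official apache-airflow Helm chart.
helm repo add apache-airflow https://airflow.apache.org
helm upgrade --install airflow apache-airflow/airflow \
  --namespace airflow --create-namespace \
  --set executor=CeleryExecutor
```

Migrating a DockerOperator task then typically means pointing a KubernetesPodOperator at the same image and command, rather than changing executors.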
I'm trying to sort out the Docker terminology around image IDs, manifests, and digests, so I can understand how Docker maintains the integrity of distributed images. And there is one (of many) unexpected points. I just saved an existing image and unpacked it, but I can't make sense of the contents of its manifest.json file:
$ docker save 000908da321f > a.tar
$ tar -xvf ./a.tar -C a
$ ls a
blobs index.json manifest.json oci-layout
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:dbe15f62d97cfdb1271a9612e4df8bd5d79b11404dcaed42b82e4cf796338f37",
      "size": 1011
    }
  ]
}
I can find the dbe15… blob in a/blobs/sha256/. That also makes sense: it has the manifest mediaType, with config and layers inside. Re-hashing the blobs with sha256sum matches their names.
But manifest.json does not follow the image spec schema. It is a list, and has no schemaVersion and mediaType:
[
  {
    "Config": "blobs/sha256/000908da321ffa9418f599f3476fece162f3903f3f2e9fdd508c8b91ee9008ff",
    "Layers": [
      "blobs/sha256/08000c18d16dadf9553d747a58cf44023423a9ab010aab96cf263d2216b8b350",
      ...
    ],
    "LayerSources": { ... },
    "RepoTags": null
  }
]
It looks somewhat like one, but actually isn't. What is this file? And what is the reason `docker image inspect <image ID>` relates to a manifest.json that doesn't follow the schema of an image manifest?
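To my understanding, the two files serve different consumers: index.json, blobs/, and oci-layout follow the OCI image layout spec, while the top-level manifest.json is Docker's older `docker save`/`docker load` archive format, kept for backwards compatibility; it is read by `docker load` and is not governed by the OCI image-manifest schema. The integrity property both layouts rely on — content addressing — can be sketched self-containedly:

```shell
# Toy sketch of the content-addressing the OCI layout relies on: every
# blob is stored under its own sha256 digest, so re-hashing the stored
# file must reproduce its filename. In the real archive, the digest in
# index.json points at blobs/sha256/<digest> the same way.
mkdir -p layout/blobs/sha256
printf '{"schemaVersion":2}' > /tmp/blob
digest=$(sha256sum /tmp/blob | cut -d' ' -f1)
mv /tmp/blob "layout/blobs/sha256/$digest"
# Verify: recomputing the hash of the stored blob matches its filename.
sha256sum "layout/blobs/sha256/$digest" | grep -q "$digest" && echo OK
```

Tampering with any blob changes its hash, which breaks the chain from index to manifest to config/layers — that is how distribution stays verifiable.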
I'm providing Poetry authentication using build secrets, but they are not being mounted into the environment. I copied and modified the example from the Docker docs.
export DOCKER_BUILDKIT=1
docker build \
  --secret id=poetry_johndoe_auth_username,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_USERNAME \
  --secret id=poetry_johndoe_auth_password,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_PASSWORD \
  -t example:latest .
In the Dockerfile, I mount them into the environment:
RUN --mount=type=secret,id=poetry_johndoe_auth_username,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_USERNAME \
    --mount=type=secret,id=poetry_johndoe_auth_password,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_PASSWORD \
    pip install -r /tmp/requirements.txt
CI returns the following error:
Dockerfile:17
--------------------
16 |
17 | >>> RUN --mount=type=secret,id=poetry_johndoe_auth_username,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_USERNAME \
18 | >>> --mount=type=secret,id=poetry_johndoe_auth_password,env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_PASSWORD \
19 | >>> pip install -r /tmp/requirements.txt
20 |
--------------------
ERROR: failed to solve: unexpected key 'env' in 'env=POETRY_HTTP_BASIC_JOHNDOE_GITLAB_USERNAME'
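"unexpected key 'env'" usually means the Dockerfile frontend parsing the build is too old to know the `env` option on secret mounts — as far as I know it only appeared in newer docker/dockerfile releases (around 1.10). Two workarounds, sketched under that assumption:

```shell
# Option 1: pin a frontend that understands env= by making this the very
# first line of the Dockerfile:
#   # syntax=docker/dockerfile:1.10
# Option 2: mount the secrets as files (the default target is
# /run/secrets/<id>, which works on older frontends) and export them
# inside the RUN shell:
cat > Dockerfile.snippet <<'EOF'
RUN --mount=type=secret,id=poetry_johndoe_auth_username \
    --mount=type=secret,id=poetry_johndoe_auth_password \
    export POETRY_HTTP_BASIC_JOHNDOE_GITLAB_USERNAME="$(cat /run/secrets/poetry_johndoe_auth_username)" && \
    export POETRY_HTTP_BASIC_JOHNDOE_GITLAB_PASSWORD="$(cat /run/secrets/poetry_johndoe_auth_password)" && \
    pip install -r /tmp/requirements.txt
EOF
```

Either way the secret values stay out of image layers; only the env-variable convenience differs between frontend versions.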
I'm trying to connect to an FTP server deployed from a Docker image, using the Apache Commons Net client:
import org.apache.commons.net.ftp.FTPClient
import java.io.IOException

object FtpClient {
  def main(args: Array[String]): Unit = {
    val ftpClient = new FTPClient()
    try {
      ftpClient.connect("localhost", 21)
      val success = ftpClient.login("one", "1234")
      if (success) {
        println("Success connect")
      }
    } catch {
      case e: IOException => throw new RuntimeException(e)
    } finally {
      //
    }
  }
}
The FTP server is started with:
docker run -d -p 21:21 -p 21000-21010:21000-21010 -e USERS="one|1234" -e ADDRESS=ftp.site.domain delfer/alpine-ftp-server
I get the following error:
Exception in thread "main" java.lang.RuntimeException: org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed without indication.
at ru.spb.client.FtpClient$.main(FtpClient.scala:23)
at ru.spb.client.FtpClient.main(FtpClient.scala)
Caused by: org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed without indication.
at org.apache.commons.net.ftp.FTP.getReply(FTP.java:568)
at org.apache.commons.net.ftp.FTP.getReply(FTP.java:556)
Edit:
The docker logs return:
Changing password for one
New password:
Bad password: too short
Retype password:
adduser: /ftp/one: No such file or directory
passwd: password for one changed by root
seems like pidfd_open syscall does not work, falling back to polling
failed to watch for direct child exit (pidfd_open error): Operation not permitted
Maybe I'm missing some important connection setting?
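Two leads stand out in the log above: the "one" account may not actually exist ("adduser: /ftp/one: No such file or directory"), which could make the server close the control connection at login; and ADDRESS=ftp.site.domain is the address the server advertises in passive-mode replies, so it should be something the client can reach. A sketch assuming the delfer/alpine-ftp-server image from the question:

```shell
# Recreate the container advertising an address reachable from the
# client; ADDRESS is what passive-mode replies will point at.
docker rm -f ftp 2>/dev/null || true
docker run -d --name ftp \
  -p 21:21 -p 21000-21010:21000-21010 \
  -e USERS="one|1234" -e ADDRESS=localhost \
  delfer/alpine-ftp-server
# Watch whether adduser succeeds this time:
docker logs ftp
# Quick login/transfer check outside the JVM:
curl -v --ftp-pasv ftp://one:1234@localhost/
```

On the client side, calling `ftpClient.enterLocalPassiveMode()` after `connect` is the usual Commons Net setting once login works, so data connections go through the published 21000-21010 range.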