Peter M
Asked: 2015-08-06 10:16:36 +0800 CST

Importing CloudFront logs into Logstash: Error: is not a legal argument to this wrapper, cause it doesn't respond to "read"


Logstash version 1.5.0.1

I am trying to use the Logstash S3 input plugin to download CloudFront logs and the CloudFront codec plugin to filter the stream.

I installed the codec with bin/plugin install logstash-codec-cloudfront.

I then get the following: Error: Object: #Version: 1.0 is not a legal argument to this wrapper, cause it doesn't respond to "read".

Here is the full error message from /var/logs/logstash/logstash.log:

 {:timestamp=>"2015-08-05T13:35:20.809000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::S3 bucket=>\"[BUCKETNAME]\", prefix=>\"cloudfront/\", region=>\"us-east-1\", type=>\"cloudfront\", secret_access_key=>\"[SECRETKEY]/1\", access_key_id=>\"[KEYID]\", sincedb_path=>\"/opt/logstash_input/s3/cloudfront/sincedb\", backup_to_dir=>\"/opt/logstash_input/s3/cloudfront/backup\", temporary_directory=>\"/var/lib/logstash/logstash\">\n  Error: Object: #Version: 1.0\n is not a legal argument to this wrapper, cause it doesn't respond to \"read\".", :level=>:error}

My Logstash config file: /etc/logstash/conf.d/cloudfront.conf

input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "cloudfront"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}

I am successfully using a similar S3 input stream to get my CloudTrail logs into Logstash, based on an answer in a stackoverflow post.

The CloudFront log file from S3 (I have only included the header of the file):

 #Version: 1.0
 #Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type

Based on lines 26-29 of cloudfront_spec.rb in the cloudfront plugin's github repo and the official AWS CloudFront Access Logs documentation, the header appears to be in basically the correct format.

Any ideas? Thanks!

[Update 2015-09-09]

Based on this post, I tried the gzip_lines codec plugin, installed it with bin/plugin install logstash-codec-gzip_lines, and parsed the file with a filter; unfortunately I got exactly the same error. It looks like the problem is the first character of the log file, #.

For the record, here is the new attempt, including an updated pattern for parsing the CloudFront log file to account for the four new fields:

/etc/logstash/conf.d/cloudfront.conf

input {
  s3 {
    bucket => "[BUCKETNAME]"
    delete => false
    interval => 60 # seconds
    prefix => "cloudfront/"
    region => "us-east-1"
    type => "cloudfront"
    codec => "gzip_lines"
    secret_access_key => "[SECRETKEY]"
    access_key_id => "[KEYID]"
    sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
    backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
    use_ssl => true
  }
}
filter {
  grok {
    type => "cloudfront"
    pattern => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}"
  }

  mutate {
    type => "cloudfront"
    add_field => [ "listener_timestamp", "%{date} %{time}" ]
  }

  date {
    type => "cloudfront"
    match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
  }
}
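
To check whether the grok pattern itself is at fault, separately from the S3 input and the codec, it can be exercised with the stock stdin/stdout plugins by pasting a single tab-separated log line by hand. A minimal sketch, reusing the pattern above:

# Throwaway test pipeline (assumes only the stock stdin/stdout plugins).
# Paste one CloudFront log line on stdin and inspect the parsed fields.
input {
  stdin { }
}
filter {
  grok {
    # Same pattern as in the filter above.
    match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
  }
}
output {
  # rubydebug prints every field, so a misfiring pattern is easy to spot.
  stdout { codec => rubydebug }
}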
amazon-s3

3 Answers

  1. Best Answer
    Daniel
    2015-10-24T08:11:45+08:00

    I had the same problem. Changing from

        codec => "gzip_lines"
    

    to

        codec => "plain"
    

    in the input fixed it for me. It looks like the S3 input decompresses gzipped files automatically. https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L13
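
    For reference, a minimal sketch of the input block with the plain codec (the bucket is the placeholder from the question; credentials are omitted here and can also come from the environment or an IAM role):

        # Sketch only: placeholder bucket, stanza trimmed to the relevant options.
        input {
          s3 {
            bucket => "[BUCKETNAME]"
            prefix => "cloudfront/"
            region => "us-east-1"
            type   => "cloudfront"
            # "plain" is enough because the s3 input gunzips the objects itself.
            codec  => "plain"
          }
        }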

  2. Morgan Christiansson
    2020-02-09T02:01:44+08:00

    This is reported as a bug here: https://github.com/logstash-plugins/logstash-codec-cloudfront/issues/2

    It has remained unfixed since 2016.

  3. Peter M
    2015-12-31T14:47:09+08:00

    FTR, here is the full config that works for me:

    input {
      s3 {
        bucket => "[BUCKET NAME]"
        delete => false
        interval => 60 # seconds
        prefix => "CloudFront/"
        region => "us-east-1"
        type => "cloudfront"
        codec => "plain"
        secret_access_key => "[SECRETKEY]"
        access_key_id => "[KEYID]"
        sincedb_path => "/opt/logstash_input/s3/cloudfront/sincedb"
        backup_to_dir => "/opt/logstash_input/s3/cloudfront/backup"
        use_ssl => true
      }
    }
    
    filter {
            if [type] == "cloudfront" {
                    if ( ("#Version: 1.0" in [message]) or ("#Fields: date" in [message])) {
                            drop {}
                    }
    
                    grok {
                            match => { "message" => "%{DATE_EU:date}\t%{TIME:time}\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes}|-)\t%{IPORHOST:c_ip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:User_Agent}\t%{GREEDYDATA:cs_uri_stem}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes}\t%{GREEDYDATA:time_taken}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}" }
                    }
    
                    mutate {
                            add_field => [ "received_at", "%{@timestamp}" ]
                            add_field => [ "listener_timestamp", "%{date} %{time}" ]
                    }
    
                    date {
                            match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
                    }
    
                    date {
                            locale => "en"
                            timezone => "UCT"
                            match => [ "listener_timestamp", "yy-MM-dd HH:mm:ss" ]
                            target => "@timestamp"
                            add_field => { "debug" => "timestampMatched"}
                    }
            }
    }
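
    The output section is not shown above. As a sketch of what could be appended (not part of the original config; option names follow the Logstash 1.5-era elasticsearch output):

    output {
      elasticsearch {
        # Hypothetical destination; adjust host and index naming as needed.
        host  => "localhost"
        index => "cloudfront-%{+YYYY.MM.dd}"
      }
      # Handy while debugging: print each parsed event to the console.
      # stdout { codec => rubydebug }
    }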
    
