Alex F's questions

Alex F
Asked: 2014-09-25 10:16:55 +0800 CST

ES slowly fills up the heap over time and hangs at 14GB, while the largest index is 164MB?

  • Score: 2

I have a problem with Elasticsearch: sometimes it ends up running GC back to back, which frees nothing because the heap, with min and max both set to 14GB, is reported as fully allocated:

(...)
[2014-09-18 13:43:45,984][INFO ][monitor.jvm              ] [staging02.onldev] [gc][old][1128185][65590] duration [7.1s], collections [1]/[7.2s], total [7.1s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [49.9mb]->[49.6mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}
[2014-09-18 13:43:53,307][INFO ][monitor.jvm              ] [staging02.onldev] [gc][old][1128186][65591] duration [7.2s], collections [1]/[7.3s], total [7.2s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [49.6mb]->[49.7mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}
[2014-09-18 13:43:58,647][INFO ][monitor.jvm              ] [staging02.onldev] [gc][old][1128187][65592] duration [5.2s], collections [1]/[5.3s], total [5.2s]/[9.3h], memory [13.9gb]->[13.9gb]/[13.9gb], all_pools {[young] [532.5mb]->[532.5mb]/[532.5mb]}{[survivor] [49.7mb]->[49.8mb]/[66.5mb]}{[old] [13.3gb]->[13.3gb]/[13.3gb]}

At that point ES is unresponsive and we restart it.

When I watch the ES heap while our application workers use ES, heap usage grows, GC runs every few minutes and the heap is almost, but never completely, emptied again. Slowly, over many days, the heap seems to end up with no free memory at all. It looks like a memory leak, but how could it come from our Ruby code (which uses the Tire gem) when we are talking about the ES heap? Can some usage pattern of ES cause ES itself to leak memory?
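To see where the heap actually goes, the node and index stats APIs break usage down by pool and by cache. A minimal sketch, assuming ES listens on localhost:9200 (the exact flags may differ between 0.90.x releases):

# Per-node heap pools plus filter cache and field data sizes.
# Assumes ES on localhost:9200; verify the flags against your 0.90.x docs.
curl 'http://localhost:9200/_nodes/stats?all=true&pretty=true'

# Per-index memory and cache breakdown:
curl 'http://localhost:9200/_stats?pretty=true'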

Basically, ES runs on a dedicated server with 16GB of RAM, no replicas, 5 indices and 1 shard per index. It runs on java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 with mlockall and min and max heap both set to 14GB. Nothing else runs on the server. We use Elasticsearch 0.90.x because the development team cannot afford to replace the Tire gem they use to connect their Ruby workers.
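Side note on sizing: a 14GB heap on a 16GB box leaves almost nothing for the OS page cache that Lucene depends on; the usual guidance is to give ES at most half the RAM. A sketch of the change, assuming an init script that reads ES_HEAP_SIZE (the file path is an assumption and varies by install):

# /etc/sysconfig/elasticsearch (path varies by install)
# ES_HEAP_SIZE sets both -Xms and -Xmx, i.e. min = max as above
ES_HEAP_SIZE=8g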

products
size: 164Mi (164Mi)
docs: 98,760 (157,138)

product_brands
size: 4.52Mi (4.52Mi)
docs: 5,123 (5,123)

product_categories
size: 358ki (358ki)
docs: 538 (538)

store_company_categories
size: 389ki (389ki)
docs: 4,028 (4,028)

stores
size: 1.44Mi (1.44Mi)
docs: 1,090 (1,090)

The largest index is products, shown as 164MB in Bigdesk. How does ES work its way up to 14GB over time?

Is there something wrong with the index metadata?

{
state: open
settings: {
index.analysis.filter.french_stop.stopwords.0: alors
index.analysis.filter.french_stop.stopwords.1: au
index.analysis.filter.french_stop.stopwords.4: autre
index.analysis.filter.french_stop.stopwords.5: avant
index.analysis.filter.french_stop.stopwords.2: aucuns
index.analysis.filter.french_stop.stopwords.3: aussi
index.analysis.filter.french_stop.stopwords.22: dehors
index.analysis.filter.french_stop.stopwords.8: bon
index.analysis.filter.french_stop.stopwords.23: depuis
index.analysis.filter.french_stop.stopwords.9: car
index.analysis.filter.french_stop.stopwords.20: du
index.analysis.filter.french_stop.stopwords.6: avec
index.analysis.filter.french_stop.stopwords.21: dedans
index.analysis.filter.french_stop.stopwords.7: avoir
index.analysis.filter.french_stop.stopwords.29: droite
index.analysis.filter.french_stop.stopwords.28: dos
index.analysis.filter.french_stop.stopwords.27: donc
index.analysis.filter.french_stop.stopwords.26: doit
index.analysis.filter.french_stop.stopwords.25: devrait
index.analysis.filter.french_stop.stopwords.24: deux
index.analysis.analyzer.nGram_analyzer.type: custom
index.analysis.filter.nGram_filter.token_chars.0: letter
index.analysis.analyzer.product_analyzer.type: custom
index.analysis.filter.nGram_filter.token_chars.1: digit
index.analysis.filter.nGram_filter.token_chars.2: punctuation
index.analysis.filter.french_stemmer.type: stemmer
index.analysis.filter.nGram_filter.type: nGram
index.analysis.filter.french_stop.stopwords.10: ce
index.analysis.filter.french_stop.stopwords.11: cela
index.analysis.filter.french_stop.stopwords.12: ces
index.analysis.analyzer.product_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.91: sans
index.analysis.filter.french_stop.stopwords.18: dans
index.analysis.analyzer.product_analyzer.filter.1: french_stemmer
index.analysis.filter.french_stop.stopwords.92: ses
index.analysis.filter.french_stop.stopwords.17: comment
index.analysis.analyzer.product_analyzer.filter.2: asciifolding
index.analysis.analyzer.product_analyzer.filter.3: unique
index.analysis.filter.french_stop.stopwords.90: sa
index.analysis.filter.french_stop.stopwords.19: des
index.analysis.filter.french_stop.stopwords.14: chaque
index.analysis.analyzer.product_analyzer.filter.4: french_stop
index.analysis.filter.french_stop.stopwords.13: ceux
index.analysis.filter.nGram_filter.min_gram: 2
index.analysis.filter.french_stop.stopwords.16: comme
index.analysis.analyzer.category_analyzer.type: custom
index.analysis.filter.french_stop.stopwords.15: ci
index.analysis.filter.french_stop.stopwords.99: soyez
index.analysis.filter.french_stop.stopwords.97: sont
index.analysis.filter.french_stop.stopwords.98: sous
index.analysis.filter.french_stop.stopwords.95: sien
index.analysis.filter.french_stop.stopwords.96: son
index.analysis.filter.french_stop.stopwords.93: seulement
index.analysis.filter.french_stop.stopwords.94: si
index.analysis.analyzer.nGram_analyzer.tokenizer: whitespace
index.analysis.filter.french_stop.stopwords.80: plupart
index.analysis.filter.french_stop.stopwords.81: pour
index.number_of_replicas: 0
index.analysis.filter.french_stop.stopwords.82: pourquoi
index.analysis.filter.french_stop.stopwords.83: quand
index.analysis.filter.french_stop.stopwords.84: que
index.analysis.filter.french_stop.stopwords.85: quel
index.analysis.filter.french_stop.stopwords.86: quelle
index.analysis.filter.french_stop.stopwords.87: quelles
index.analysis.filter.french_stop.stopwords.88: quels
index.analysis.filter.french_stop.stopwords.89: qui
index.analysis.analyzer.product_analyzer.tokenizer: standard
index.analysis.filter.french_stop.stopwords.79: pièce
index.analysis.filter.french_stop.stopwords.70: ou
index.analysis.filter.french_stop.stopwords.73: parce
index.analysis.filter.french_stop.stopwords.74: parole
index.uuid: B_JF7UG5R6S_ZC0L0IMFYw
index.analysis.filter.french_stop.stopwords.71: où
index.analysis.filter.french_stop.stopwords.72: par
index.analysis.filter.french_stop.stopwords.77: peut
index.analysis.filter.french_stop.stopwords.78: peu
index.analysis.filter.french_stop.stopwords.75: pas
index.analysis.filter.french_stop.stopwords.76: personnes
index.analysis.filter.french_stop.stopwords.68: nous
index.analysis.filter.french_stop.stopwords.69: nouveaux
index.analysis.filter.french_stop.stopwords.65: ni
index.analysis.analyzer.category_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.64: même
index.analysis.filter.french_stop.stopwords.67: notre
index.analysis.filter.french_stop.stopwords.66: nommés
index.analysis.filter.french_stop.stopwords.61: moins
index.analysis.filter.french_stop.stopwords.60: mine
index.analysis.analyzer.category_analyzer.filter.1: french_stemmer
index.analysis.filter.french_stop.stopwords.63: mot
index.analysis.analyzer.category_analyzer.filter.2: french_stop
index.analysis.filter.french_stop.stopwords.62: mon
index.analysis.filter.french_stop.stopwords.120: ça
index.analysis.filter.french_stop.stopwords.121: étaient
index.analysis.filter.french_stop.stopwords.122: état
index.analysis.filter.french_stop.stopwords.123: étions
index.analysis.filter.french_stop.stopwords.124: été
index.analysis.filter.french_stop.stopwords.125: être
index.analysis.filter.nGram_filter.max_gram: 20
index.analysis.filter.french_stop.stopwords.126: rayon
index.analysis.filter.french_stop.stopwords.127: rayons
index.analysis.filter.french_stop.stopwords.128: root
index.number_of_shards: 1
index.analysis.filter.french_stop.stopwords.129: roots
index.analysis.filter.french_stop.stopwords.59: mes
index.analysis.filter.french_stop.stopwords.57: maintenant
index.analysis.filter.french_stop.stopwords.58: mais
index.analysis.filter.french_stop.stopwords.56: ma
index.analysis.filter.french_stop.stopwords.55: là
index.analysis.analyzer.whitespace_analyzer.tokenizer: whitespace
index.analysis.filter.french_stop.stopwords.54: leur
index.analysis.filter.french_stop.stopwords.53: les
index.analysis.filter.french_stop.stopwords.52: le
index.analysis.filter.french_stop.stopwords.51: la
index.analysis.analyzer.whitespace_analyzer.type: custom
index.analysis.filter.french_stop.stopwords.50: juste
index.analysis.analyzer.whitespace_analyzer.filter.1: french_stemmer
index.analysis.analyzer.whitespace_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.type: stop
index.analysis.analyzer.whitespace_analyzer.filter.2: asciifolding
index.analysis.filter.french_stop.stopwords.114: voie
index.analysis.filter.french_stop.stopwords.115: voient
index.analysis.filter.french_stop.stopwords.112: tu
index.analysis.filter.french_stop.stopwords.113: valeur
index.analysis.filter.french_stop.stopwords.110: trop
index.analysis.filter.french_stop.stopwords.111: très
index.version.created: 901399
index.analysis.filter.french_stop.stopwords.46: ici
index.analysis.filter.french_stop.stopwords.47: il
index.analysis.filter.french_stop.stopwords.48: ils
index.analysis.filter.french_stop.stopwords.49: je
index.analysis.filter.french_stop.stopwords.118: vous
index.analysis.filter.french_stop.stopwords.119: vu
index.analysis.filter.french_stop.stopwords.116: vont
index.analysis.filter.french_stop.stopwords.117: votre
index.analysis.filter.french_stop.stopwords.41: fois
index.analysis.filter.nGram_filter.token_chars.3: symbol
index.analysis.filter.french_stop.stopwords.40: faites
index.analysis.analyzer.category_analyzer.tokenizer: standard
index.analysis.filter.french_stop.stopwords.43: force
index.analysis.filter.french_stop.stopwords.42: font
index.analysis.filter.french_stop.stopwords.45: hors
index.analysis.filter.french_stop.stopwords.44: haut
index.analysis.filter.french_stop.stopwords.101: sur
index.analysis.filter.french_stop.stopwords.102: ta
index.analysis.analyzer.nGram_analyzer.filter.3: nGram_filter
index.analysis.filter.french_stop.stopwords.103: tandis
index.analysis.analyzer.nGram_analyzer.filter.2: french_stemmer
index.analysis.filter.french_stop.stopwords.104: tellement
index.analysis.filter.french_stemmer.name: minimal_french
index.analysis.filter.french_stop.stopwords.100: sujet
index.analysis.filter.french_stop.stopwords.37: et
index.analysis.filter.french_stop.stopwords.109: tout
index.analysis.filter.french_stop.stopwords.38: eu
index.analysis.filter.french_stop.stopwords.35: essai
index.analysis.filter.french_stop.stopwords.36: est
index.analysis.analyzer.nGram_analyzer.filter.1: asciifolding
index.analysis.filter.french_stop.stopwords.105: tels
index.analysis.analyzer.nGram_analyzer.filter.0: lowercase
index.analysis.filter.french_stop.stopwords.106: tes
index.analysis.filter.french_stop.stopwords.39: fait
index.analysis.filter.french_stop.stopwords.107: ton
index.analysis.filter.french_stop.stopwords.108: tous
index.analysis.filter.french_stop.stopwords.30: début
index.analysis.filter.french_stop.stopwords.34: encore
index.analysis.filter.french_stop.stopwords.33: en
index.analysis.filter.french_stop.stopwords.32: elles
index.analysis.filter.french_stop.stopwords.31: elle
}

mappings: {
product_category: {
properties: {
tags: {
analyzer: category_analyzer
type: string
}
ancestry_path: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
leaf?: {
type: boolean
}
category_depth_0: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
name_suggest: {
index_analyzer: nGram_analyzer
search_analyzer: whitespace_analyzer
type: string
}
category_depth_3: {
properties: {
name: {
type: string
}
}
}
self_and_ancestors_ids: {
type: string
}
depth: {
type: integer
}
category_depth_1: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
category_depth_2: {
properties: {
tags: {
type: string
}
name: {
analyzer: product_analyzer
type: string
}
}
}
}
}
}

aliases: [ ]
}

I tried a 6GB min/max heap, but it showed the same behavior, only it became unresponsive sooner.
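One pattern consistent with this (an assumption, not something confirmed here) is the 0.90 field data cache, which backs sorting and faceting and is unbounded by default, so heap usage tracks query traffic rather than index size. A sketch of capping it; verify the setting name against the docs for the exact 0.90.x release:

# elasticsearch.yml — cap field data so sorting/faceting cannot
# grow the heap without bound (verify the name for your 0.90.x)
indices.fielddata.cache.size: 40%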

java
  • 1 Answer
  • 1052 Views
Alex F
Asked: 2014-03-22 05:58:24 +0800 CST

Poor MongoDB and ZFS performance: disks are always busy with reads while doing only writes

  • Score: 5

I am having huge performance problems running MongoDB (which I believe is an mmapped DB) on ZFSonLinux.

Our MongoDB does almost nothing but writes. On the replicas without ZFS, the disks are completely busy for about 5 seconds each time the application writes to the database every 30 seconds, with no disk activity in between, so I take that as the baseline behavior to compare against.
On the replicas with ZFS, the disks are busy all the time and the replicas struggle to keep up with the MongoDB primary. I have lz4 compression enabled on all replicas and the space savings are big, so much less data should be hitting the disks.

So on these ZFS servers I first had the default recordsize=128k. Then I wiped the data and set recordsize=8k before resyncing the Mongo data. Then I wiped again and tried recordsize=1k. I also tried recordsize=8k with checksums disabled.
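For reference, each attempt looked roughly like this (dataset names as in the listing further down; recordsize only applies to files written after the change, which is why the data had to be wiped and resynced every time):

# recordsize only affects newly written files, hence wipe + resync
zfs set recordsize=8k zfs/mongo_data-rum_a
zfs get recordsize zfs/mongo_data-rum_a    # verify
zfs set checksum=off zfs/mongo_data-rum_a  # the no-checksum variant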

Nevertheless, nothing was solved: the disks stayed 100% busy. Only once, on one server with recordsize=8k, were the disks far less busy than on any non-ZFS replica, but after trying different settings and going back to recordsize=8k the disks were at 100% again; I never saw the earlier good behavior again, nor could I see it on any other replica.

Moreover, there should be almost nothing but writes, yet on all replicas, under all the different settings, the disks were completely busy with 75% reads and only 25% writes.
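That split is visible with the usual tools; a sketch, assuming the pool is named zfs as in the listing below:

# pool-level read vs write ops and bandwidth, refreshed every 5 seconds
zpool iostat -v zfs 5

# per-device utilization and r/s vs w/s from the kernel's point of view
iostat -x 5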

(Note: I believe MongoDB is an mmapped DB. I was told to try MongoDB in AIO mode, but I could not find how to set it, and on another server running MySQL InnoDB I realized that ZFSonLinux does not support AIO anyway.)
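One mmap-specific detail: pages MongoDB reads are cached once in the Linux page cache and again in the ZFS ARC, so the two caches duplicate each other and fight over the same RAM. A commonly suggested mitigation (an assumption here, not something tested) is to keep only metadata in the ARC for the Mongo dataset:

# cache only metadata in the ARC for this dataset; data caching is
# left to the page cache that mmap already populates
zfs set primarycache=metadata zfs/mongo_data-rum_a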

My servers are CentOS 6.5, kernel 2.6.32-431.5.1.el6.x86_64, with spl-0.6.2-1.el6.x86_64 and zfs-0.6.2-1.el6.x86_64.

#PROD 13:44:55 root@rum-mongo-backup-1:~: zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
zfs                      216G  1.56T    32K  /zfs
zfs/mongo_data-rum_a    49.5G  1.56T  49.5G  /zfs/mongo_data-rum_a
zfs/mongo_data-rum_old   166G  1.56T   166G  /zfs/mongo_data-rum_old

#PROD 13:45:20 root@rum-mongo-backup-1:~: zfs list -t snapshot
no datasets available

#PROD 13:45:29 root@rum-mongo-backup-1:~: zfs list -o atime,devices,compression,copies,dedup,mountpoint,recordsize,casesensitivity,xattr,checksum
ATIME  DEVICES  COMPRESS  COPIES          DEDUP  MOUNTPOINT               RECSIZE         CASE  XATTR   CHECKSUM
  off       on       lz4       1            off  /zfs                        128K    sensitive     sa        off
  off       on       lz4       1            off  /zfs/mongo_data-rum_a         8K    sensitive     sa        off
  off       on       lz4       1            off  /zfs/mongo_data-rum_old       8K    sensitive     sa        off

What could be going on there? What should I look at to figure out what ZFS is doing or which setting is wrong?

EDIT1:
Hardware: these are rented servers, 8 vcores on a Xeon 1230 or 1240, 16 or 32GB of RAM, with zfs_arc_max=2147483648, on HP hardware RAID1. So the ZFS zpool sits on /dev/sda2 and does not know there is an underlying RAID1. Even if this is a suboptimal setup for ZFS, I still do not understand why the disks are choking on reads while the DB does only writes.
I understand the many reasons, which we do not need to lay out again here, why this is bad for ZFS; I will soon have a JBOD/NORAID server where I can run the same tests with ZFS's own RAID1 implementation on the sda2 partition, with /, /boot and swap on software RAID1 via mdadm.
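For completeness, zfs_arc_max above is a kernel module parameter; a sketch of where it lives (2147483648 bytes = 2GB):

# /etc/modprobe.d/zfs.conf — cap the ARC at 2GB (applied at module load)
options zfs zfs_arc_max=2147483648

# check the value currently in effect:
cat /sys/module/zfs/parameters/zfs_arc_max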

zfs
  • 5 Answers
  • 6859 Views
Alex F
Asked: 2013-12-30 01:53:18 +0800 CST

Datadog: is it possible to create templates that apply to many hosts?

  • Score: 2

When we have many hosts running the same service or filling the same role, is there a way in Datadog to apply a template to those hosts so that they are automatically populated with alerts?

And can I then modify the template and have it update across all the hosts?

Or is there maybe another way to achieve this in Datadog?
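One approach that maps to this (a sketch against Datadog's monitor API, whose current form postdates this question; verify field names against the current docs) is a single multi-alert grouped by host on a tag: every host carrying the tag is covered automatically, and editing the one definition updates it for all of them:

# One multi-alert per role: every host tagged role:web is covered,
# and editing this single monitor updates it for all of them.
curl -X POST "https://api.datadoghq.com/api/v1/monitor" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d '{
        "type": "metric alert",
        "query": "avg(last_5m):avg:system.cpu.idle{role:web} by {host} < 10",
        "name": "Low CPU idle on {{host.name}}",
        "message": "CPU idle below 10% on {{host.name}}"
      }'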

monitoring
  • 1 Answer
  • 1371 Views
