We are planning to upgrade our Cassandra servers from 3.0.11 to 4.0.1, and the documentation says to use JDK 8. I would like to know whether that is still the case at the time of writing; I believe the latest released Oracle version is JDK 17.
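For reference, Cassandra 4.0 is documented to run on JDK 8 or JDK 11; JDK 17 is not a supported runtime for it. A small sketch of an upgrade pre-check (the helper name is made up) that extracts the JDK major version from a `java -version` banner:

```shell
# Extract the JDK major version from a `java -version` banner line.
# Handles both the legacy "1.8.0_51" scheme and the modern "11.0.12" / "17.0.1" scheme.
jdk_major() {
  echo "$1" | awk -F '"' '{print $2}' | awk -F '.' '{ if ($1 == "1") print $2; else print $1 }'
}
jdk_major 'java version "1.8.0_51"'    # -> 8
jdk_major 'openjdk version "17.0.1"'   # -> 17
```

In a real pre-check you would feed it `java -version 2>&1 | head -1` and refuse to proceed unless the result is 8 or 11.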
I am running a Cassandra cluster with 5 nodes, each with 10 × 1 TB disks (JBOD). One of the nodes is currently in a problematic state: large compactions can no longer complete successfully because a single disk has run out of space.
I am trying to figure out what effect adding an extra disk to a JBOD configuration has.
- Will existing data be redistributed automatically to make optimal use of the new disk?
- Or will only new data be written to the newly added disk?
- Can I move SSTables to a different disk manually?
- Is splitting SSTables an option?
The sources I found online are not entirely conclusive:
- https://stackoverflow.com/questions/23110054/cassandra-adding-disks-increase-storage-volume-without-adding-new-nodes seems to suggest that "over time data will be evenly distributed across the disks", but does not specify whether that is due to rebalancing or to the fact that new data will only be written to the new disk (it is also an old link, so I am not sure it still applies).
- http://mail-archives.apache.org/mod_mbox/cassandra-user/201610.mbox/%3cCAMy13tA3cZ++LaVnUsuwkwbR5tvBdhMEOqWij9nrWRODq42rLQ@mail.gmail.com%3e seems to suggest that with Cassandra 3.2+ compaction always operates locally, per data disk.
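For context, adding a disk in a JBOD setup is just a matter of appending an entry under data_file_directories and restarting the node; a sketch (the mount paths below are examples, not taken from the question):

```yaml
# cassandra.yaml -- each entry is an independent JBOD data disk
data_file_directories:
    - /mnt/disk01/cassandra/data
    - /mnt/disk02/cassandra/data
    - /mnt/disk11/cassandra/data   # the newly added disk
```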
After a day of research, I am posting this question.
I cannot change the datacenter of my Cassandra nodes. I want to place one node in D1 and another node in D2, both in rack 1.
But this is the result I get:
$ nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.0.0.1 185.58 KiB 256 100.0% 21d6f3db-79d3-4b3b-9fb8-7da42c82610e rack1
UN 10.0.0.2 75.07 KiB 256 100.0% 2bb2e75e-23a6-4dc4-a279-ab28b739255d rack1
Both nodes are in "datacenter1". This is my configuration on the first machine:
cassandra.yaml:
cluster_name: 'kban'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "10.0.0.2"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.0.0.1
start_native_transport: true
native_transport_port: 9042
start_rpc: false
rpc_address: 10.0.0.1
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
internode_encryption: none
keystore: conf/.keystore
keystore_password: cassandra
truststore: conf/.truststore
truststore_password: cassandra
client_encryption_options:
enabled: false
optional: false
keystore: conf/.keystore
keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
enabled: false
chunk_length_kb: 64
cipher: AES/CBC/PKCS5Padding
key_alias: testing:1
key_provider:
- class_name: org.apache.cassandra.security.JKSKeyProvider
parameters:
- keystore: conf/.keystore
keystore_password: cassandra
store_type: JCEKS
key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
- class_name: org.apache.cassandra.net.RateBasedBackPressure
parameters:
- high_ratio: 0.90
factor: 5
flow: FAST
enable_materialized_views: true
enable_sasi_indexes: true
cassandra-rackdc.properties:
dc=D1
rack=RACK1
And on the second machine:
cassandra.yaml:
cluster_name: 'kban'
num_tokens: 256
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: CassandraRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
credentials_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
cdc_enabled: false
disk_failure_policy: stop
commit_failure_policy: stop
prepared_statements_cache_size_mb:
thrift_prepared_statements_cache_size_mb:
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "10.0.0.1"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.0.0.2
start_native_transport: true
native_transport_port: 9042
start_rpc: false
rpc_address: 10.0.0.2
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
column_index_cache_size_in_kb: 2
compaction_throughput_mb_per_sec: 16
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
slow_query_log_timeout_in_ms: 500
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
internode_encryption: none
keystore: conf/.keystore
keystore_password: cassandra
truststore: conf/.truststore
truststore_password: cassandra
client_encryption_options:
enabled: false
optional: false
keystore: conf/.keystore
keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
tracetype_query_ttl: 86400
tracetype_repair_ttl: 604800
enable_user_defined_functions: false
enable_scripted_user_defined_functions: false
windows_timer_interval: 1
transparent_data_encryption_options:
enabled: false
chunk_length_kb: 64
cipher: AES/CBC/PKCS5Padding
key_alias: testing:1
key_provider:
- class_name: org.apache.cassandra.security.JKSKeyProvider
parameters:
- keystore: conf/.keystore
keystore_password: cassandra
store_type: JCEKS
key_password: cassandra
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
unlogged_batch_across_partitions_warn_threshold: 10
compaction_large_partition_warning_threshold_mb: 100
gc_warn_threshold_in_ms: 1000
back_pressure_enabled: false
back_pressure_strategy:
- class_name: org.apache.cassandra.net.RateBasedBackPressure
parameters:
- high_ratio: 0.90
factor: 5
flow: FAST
enable_materialized_views: true
enable_sasi_indexes: true
cassandra-rackdc.properties:
dc=D2
rack=RACK1
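One detail that is relevant to the two configurations above: cassandra-rackdc.properties is only consulted by snitches that actually read it, such as GossipingPropertyFileSnitch. With endpoint_snitch: SimpleSnitch (as in both cassandra.yaml files shown), every node reports datacenter1 / rack1 regardless of what the properties file says:

```yaml
# cassandra.yaml -- required on each node for dc=D1 / dc=D2 to take effect
endpoint_snitch: GossipingPropertyFileSnitch
```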
I have a Cassandra cluster with 3 nodes. I am carrying out some cloud migration work, and for that I added two nodes to the existing cluster, with the following result:
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.1.5 24.07 GB 256 59.4% 804b855f-78f3-42d6-8abf-b9aec73699df rack1
UN 192.168.1.6 24.77 GB 256 59.8% 21f2066f-1794-485c-9c4f-d6d1b286a551 rack1
UN 172.16.2.20 15.96 GB 256 60.3% 2c2f512d-5743-4632-a4b5-cd2cac967897 rack1
UN 172.16.2.21 12.76 GB 256 60.0% 657ff1b6-773a-4782-a506-c4899cdf2a4f rack1
UN 192.168.1.7 17.69 GB 256 60.5% c8c4bc41-4b5c-41e6-bb71-ab90c2ed5eb0 rack1
The Owns field used to be 100% for all nodes; now it shows different numbers. Does this mean that each node no longer holds 100% of the data? And if I take any node down by stopping Cassandra on it, is there a risk of data loss?
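For a rough sanity check on those numbers: with vnodes, the "Owns (effective)" column averages RF / N per node, which matches the ~60% shown above if the keyspace has replication factor 3 (an assumption here; the question does not state the RF):

```shell
# Expected average of the "Owns (effective)" column for RF replicas on N nodes.
rf=3; nodes=5   # RF=3 is assumed, not given in the question
awk -v rf="$rf" -v n="$nodes" 'BEGIN { printf "expected average Owns: %.0f%%\n", 100 * rf / n }'
# prints: expected average Owns: 60%
```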
Am I doing something wrong when installing PHP with the cassandra extension on a CentOS 7 server?
For the dependencies I installed https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm and https://rpms.remirepo.net/enterprise/remi-release-7.rpm. Then I run:
yum install -y --enablerepo=epel,remi,remi-php70 \
php \
php-mbstring \
php-mysqlnd \
php-pecl-apcu \
php-pecl-cassandra
When the command finishes, it always gives me:
Error: Package: php-pecl-cassandra-1.2.2-1.el7.remi.7.0.x86_64 (remi-php70)
Requires: libuv.so.0.10()(64bit)
If I search for libuv, this is the result (version 0.10 is not listed):
yum search --enablerepo=epel,remi,remi-php70 libuv --showduplicates
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
* base: mirror.nbtelecom.com.br
* epel: mirror.globo.com
* extras: centos.xpg.com.br
* remi: remi.xpg.com.br
* remi-php70: remi.xpg.com.br
* remi-safe: remi.xpg.com.br
* updates: centosp4.centos.org
============================== N/S matched: libuv ==============================
1:libuv-devel-1.9.1-1.el7.x86_64 : Development libraries for libuv
1:libuv-1.9.1-1.el7.x86_64 : Platform layer for node.js
1:libuv-static-1.9.1-1.el7.x86_64 : Platform layer for node.js - static library
php-pecl-uv-0.1.0-1.el7.remi.7.0.x86_64 : Libuv wrapper
php-pecl-uv-0.1.1-1.el7.remi.7.0.x86_64 : Libuv wrapper
php70-php-pecl-uv-0.1.0-1.el7.remi.x86_64 : Libuv wrapper
php70-php-pecl-uv-0.1.0-1.el7.remi.x86_64 : Libuv wrapper
php70-php-pecl-uv-0.1.1-1.el7.remi.x86_64 : Libuv wrapper
php70-php-pecl-uv-0.1.1-1.el7.remi.x86_64 : Libuv wrapper
php71-php-pecl-uv-0.1.1-1.el7.remi.x86_64 : Libuv wrapper
php71-php-pecl-uv-0.1.1-1.el7.remi.x86_64 : Libuv wrapper
php71-php-pecl-uv-0.1.1-2.el7.remi.x86_64 : Libuv wrapper
php71-php-pecl-uv-0.1.1-2.el7.remi.x86_64 : Libuv wrapper
Name and summary matches only, use "search all" for everything.
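The search output confirms the mismatch: the repos only ship libuv 1.x, while this build of php-pecl-cassandra requires the old libuv.so.0.10 ABI, as the yum error states. When scripting checks like this, pulling the missing soname out of the "Requires:" line is handy (a throwaway helper, not part of any tool):

```shell
# Extract the required soname from a yum "Requires:" error line.
missing_dep() {
  echo "$1" | sed -n 's/.*Requires: \([^(]*\).*/\1/p'
}
missing_dep 'Requires: libuv.so.0.10()(64bit)'   # -> libuv.so.0.10
```

You can then feed the result to `yum provides` to see whether any repo offers that exact soname.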
I am having a hard time restoring a snapshot on Apache Cassandra (version 3.0.9). As far as I can tell, I am following the procedure described on the DataStax blog, as well as several others (e.g. http://datascale.io/cloning-cassandra-clusters-fast-way/). Yet I must be missing something, because every time I perform a restore, data is lost.
Setup: 6-node cluster (1 DC, 3 racks, 2 nodes per rack), replication factor set to 3. The machines are hosted on AWS.
Backup procedure (on each node):
nodetool snapshot mykeyspace
cqlsh -e 'DESCRIBE KEYSPACE mykeyspace' > /tmp/mykeyspace.cql
nodetool ring | grep "$(ifconfig | awk '/inet /{print $2}' | head -1)" | awk '{print $NF ","}' | xargs > /tmp/tokens
I take the files generated by the nodetool snapshot command and back them up to S3 along with the tokens and the CQL.
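The token line in the backup above is the fiddly part. This reproduces its awk/xargs step on a canned `nodetool ring` excerpt so it is easy to verify in isolation (the sample tokens are made up):

```shell
# Keep the last column (the token) of each matching ring line, comma-separated.
extract_tokens() {
  awk '{print $NF ","}' | xargs
}
printf '10.0.0.1 rack1 Up Normal 1.1 GB 33.3%% 1029384756\n10.0.0.1 rack1 Up Normal 1.1 GB 33.3%% 5647382910\n' | extract_tokens
# -> 1029384756, 5647382910,
```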
Restore procedure (on each node, unless specified otherwise):
(after creating the new VMs)
- download the snapshots, tokens and keyspace definition
- stop the cassandra service
- delete /var/lib/cassandra/commitlog/* and /var/lib/cassandra/system/
- insert the tokens into cassandra.yaml
- start the cassandra service
- restore mykeyspace.cql from one node only, wait for replication, then stop the cassandra service
- delete the .db files in /var/lib/cassandra/data/mykeyspace/
- for each table, copy the snapshot files (.db, .crc32, .txt) into /var/lib/cassandra/data/mykeyspace/$table/
- restart the cassandra service
- run nodetool repair mykeyspace -full, one node at a time
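A sketch of the "insert the tokens into cassandra.yaml" step (the sed pattern and the yaml path are assumptions; adjust them to your packaging):

```shell
# Write the saved token list into cassandra.yaml as initial_token
# before the node's first start, so it claims the same ranges.
set_initial_token() {
  # $1 = comma-separated token list (from /tmp/tokens), $2 = path to cassandra.yaml
  sed -i "s|^#* *initial_token:.*|initial_token: $1|" "$2"
}
# Usage: set_initial_token "$(cat /tmp/tokens)" /etc/cassandra/cassandra.yaml
```

One caveat worth checking in a procedure like this: the token list must come from the same node whose snapshot you restore; mixing token files between nodes silently misplaces data.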
Result:
There are always missing rows, roughly the same number per table, but never exactly the same. I have tried "shuffling" parts of the procedure, such as restoring the keyspace before the tokens, or running nodetool refresh before the repair, but I run into the same problem every time.
Since I am not far from a "good" restore, I assume I am missing something fairly obvious. Analyzing the logs did not really help, as they show no error/failure messages.
Any help is welcome :) I can of course provide more information if needed.
Edit: nobody? I have updated the question with the Cassandra version (3.0.9), which I forgot at the start. I tried the restore again, with no luck. I am really out of ideas :(
I don't know why I cannot enable the Thrift protocol on Cassandra.
In cassandra.yaml:
rpc_start=true
port=9160
OS: CentOS 7
iptables is not running
firewalld is not running
Cassandra 3.7 (DataStax repo) is up and running.
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)
JNA installed
JMX enabled
etc.
Everything in the cassandra configuration is fine, apart from needing a swap file due to a lack of RAM.
SELinux shouldn't matter, but it is permissive.
I can connect remotely, but this is what nodetool enablethrift gives me:
error: Could not create ServerSocket on address /10.10.30.11:9160.
-- StackTrace --
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address /10.10.30.11:9160.
at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:96)
at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:79)
at org.apache.thrift.transport.TNonblockingServerSocket.<init>(TNonblockingServerSocket.java:75)
at org.apache.cassandra.thrift.TCustomNonblockingServerSocket.<init>(TCustomNonblockingServerSocket.java:39)
at org.apache.cassandra.thrift.THsHaDisruptorServer$Factory.buildTServer(THsHaDisruptorServer.java:80)
at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:55)
at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:131)
at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58)
at org.apache.cassandra.service.StorageService.startRPCServer(StorageService.java:408)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1470)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1311)
at java.security.AccessController.doPrivileged(Native Method)
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1410)
at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:832)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$79(TCPTransport.java:683)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$98/201325936.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any ideas?
I can also "connect" to port 9160 via telnet.
netstat output:
tcp 0 0 10.10.30.11:9160 0.0.0.0:* LISTEN 4395/java
nodetool statusthrift
=> says it is not running.
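"Could not create ServerSocket" while netstat already shows a LISTEN on 10.10.30.11:9160 suggests the address/port is already bound, so enablethrift fails to bind a second time. A tiny check over netstat-style output (a hypothetical helper, fed canned input here so it can be verified without a live socket):

```shell
# Report whether addr:port appears as the local address (column 4)
# of `netstat -tln`-style lines read from stdin.
port_state() {
  awk -v a="$1" '$4 == a { found = 1 } END { if (found) print "busy"; else print "free" }'
}
printf 'tcp 0 0 10.10.30.11:9160 0.0.0.0:* LISTEN\n' | port_state 10.10.30.11:9160
# -> busy
```

Against a live box you would pipe `netstat -tln` into it instead of printf.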
When I try to install the agent manually via yum, the latest version of the agent gets installed rather than the version I am currently using. How can I install the agent version I want? Do I need to specify anything for the DataStax repository?
I tried installing through OpsCenter, but it could not connect. To let OpsCenter log in to the nodes, I provided the username and password and pasted the entire private key file (.ppk) into the OpsCenter login credentials. Am I doing something wrong?
Hi, I am running a 5-node DSE Cassandra cluster. Disk usage on each node is around 90%, so I deleted data from my keyspace (I only have one keyspace). But my disk usage is still at 90%. Is there any way to reclaim the disk space of the deleted data?
Has anyone been able to get Cassandra 3.x running on Kubernetes?
Cassandra 2.1.13 runs fine, but 3.3 cannot communicate with the other Cassandra nodes. My YAML is identical apart from the container: I just upgraded gcr.io/google-samples/cassandra:v8 from 2.1.13 to 3.3.
Is there a new setting in 3.x that requires different configuration from 2.1?
Running nodetool status on all 5 nodes shows that they see each other and have a Status/State of UN.
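One low-tech way to look for 2.1-vs-3.x settings drift is to diff the top-level keys of the two images' cassandra.yaml files. The sketch below uses printf fixtures in place of the real files (replace them with the configs pulled from each container image):

```shell
# List the top-level cassandra.yaml keys of each version, then diff them.
config_keys() { grep -oE '^[a-z_]+:' "$1" | sort; }

# Fixtures standing in for the two images' config files:
printf 'cluster_name: x\nbroadcast_address: a\n' > /tmp/cassandra-2.1.yaml
printf 'cluster_name: x\nlisten_interface: eth0\n' > /tmp/cassandra-3.3.yaml

config_keys /tmp/cassandra-2.1.yaml > /tmp/keys-2.1.txt
config_keys /tmp/cassandra-3.3.yaml > /tmp/keys-3.3.txt
diff /tmp/keys-2.1.txt /tmp/keys-3.3.txt || true
```

Lines prefixed with `>` are keys the 3.x config has that 2.1 did not, which is a good starting list for what the new image expects you to set.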