After taking a compressed dump of a database on the original platform (Sybase 15.0.3, Solaris Sparc 64), including all the required steps (even the quiesce steps), I am trying to load it onto my Sybase 15.7 on Solaris x64 (running in VMware). [The page size is the same on both systems!!!] I get this error:
1> load database wfcv2 from "compress::1::/20120412_wfcv2_zdump"
2> go
Backup Server session id is: 28. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::1::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error. This DUMP or LOAD
session must exit.
Backup Server: 6.32.2.3: compress::1::/20120412_wfcv2_zdump::000: volume not
valid or not requested (server: , session id: 28.)
Backup Server: 1.14.2.4: Unrecoverable I/O or volume error. This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 1:
Error encountered by Backup Server. Please refer to Backup Server messages for
details.
Suggestions?
Question from PHIL: Could you post the output of the following: load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with headeronly and load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with listonly=full – Phil
1> load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with headeronly
2> go
Backup Server session id is: 31. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::1::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error. This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 1:
Error encountered by Backup Server. Please refer to Backup Server messages for
details.
1>
2>
3> load database wfcv2 from "compress::1::/20120412_wfcv2_zdump" with listonly=full
4> go
Backup Server session id is: 33. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.22.1.1: Option LISTONLY is not valid for device
'compress::1::/20120412_wfcv2_zdump::000'.
Msg 8009, Level 16, State 1:
Server 'MYSERVER', Line 3:
Error encountered by Backup Server. Please refer to Backup Server messages for
details.
1>
2>
3>
Backup Server messages
-------------------------------------------------------------------------------------------------------------------------
Apr 12 11:38:00 2012: Backup Server: 2.23.1.1: Connection from Server MYSERVER on Host MyMachine with HostProcid 3776.
Apr 12 11:38:00 2012: Backup Server: 4.132.1.1: Attempting to open byte stream device: 'compress::1::/20120412_wfcv2_zdump::000'
Apr 12 11:38:00 2012: Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE is different from the database
page size of -1024 bytes read from the dump header. The LOAD session must exit.
Apr 12 11:38:00 2012: Backup Server: 1.14.2.2: Unrecoverable I/O or volume error. This DUMP or LOAD session must exit.
Apr 12 11:38:18 2012: Backup Server: 2.23.1.1: Connection from Server MYSERVER on Host MyMachine with HostProcid 3776.
Apr 12 11:38:18 2012: Backup Server: 4.22.1.1: Option LISTONLY is not valid for device 'compress::1::/20120412_wfcv2_zdump::000'.
Question/comment from PHIL: Actually, I think it is your syntax. The -1024 block size thing is a red herring. Try: load database wfcv2 from "compress::/20120412_wfcv2_zdump" - Which directory is 20120412_wfcv2_zdump in? Is it really in the root (/) directory of your box? If not, change the path. – Phil
1) I already tried your suggestion and got the same error!
2) Since the machine I am trying to load the dump on is my test machine (and I have no free space left anywhere else...), I am using the / (root) location to hold the dump file for the load. Yes, it is not the right thing to do, but as I said: "no free space!".
Question/comment from PHIL: the LOAD syntax is incorrect.
You should not specify the compression level between the :: character pairs in the LOAD DATABASE command.
Assuming your dump file is on the local filesystem at /20120412_wfcv2_zdump, your load command should be:
1> load database wfcv2 from "compress::/20120412_wfcv2_zdump"
2> go
Sybase recommends using the native "compression = compress_level" option in preference to the older "compress::compression_level" option. If you use the native option when dumping a database, you do not need to use "compress::compression_level" when loading it.
As said before, that is what Sybase recommends!
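For reference, a minimal sketch of that native-option workflow, assuming the same database name and file path as in this thread (the compression level 1 is only an example):
1> dump database wfcv2 to "/20120412_wfcv2_zdump" with compression = 1
2> go
1> load database wfcv2 from "/20120412_wfcv2_zdump"
2> go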
From my personal experience I know the load syntax is correct and works. I was able to load other DBs from the same source server onto MyMachine just yesterday. Only this database, which takes more than 10 GB of space (+/- 2 GB compressed...), is causing the problem...
Question/comment from PHIL: Are you sure you got the same error? Is the filename correct? What is the output of ls -al /20120412_wfcv2_zdump? You may need to chmod 777 /20120412_wfcv2_zdump it – Phil
1) Yes, the name is correct!
2) It is not a permissions problem. I use the root user for everything (yes, not the right thing to do, but as I said, this is my personal test machine!).
Question/comment from PHIL: OK, I read the manual again. The load format for a compressed volume is definitely load database wfcv2 from "compress::/20120412_wfcv2_zdump", not "compress::1::/ ..." - Please post the output of the error that generates so I can see it (I know you said you tried it, but I'd still like to see it). The documentation even states not to set the compression level to "1". One last thing - did you accidentally ftp the file in ASCII mode? - Phil
OK! Here it comes!... And the answer to "did you ftp the file in ASCII mode" is no! Thanks anyway!
1>
2>
3> load database wfcv2 from "compress::/20120412_wfcv2_zdump"
4> go
Backup Server session id is: 35. Use this value when executing the
'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.132.1.1: Attempting to open byte stream device:
'compress::/20120412_wfcv2_zdump::000'
Backup Server: 4.177.2.1: The database page size of 2048 bytes obtained from ASE
is different from the database page size of -1024 bytes read from the dump
header. The LOAD session must exit.
Backup Server: 1.14.2.2: Unrecoverable I/O or volume error. This DUMP or LOAD
session must exit.
Backup Server: 6.32.2.3: compress::/20120412_wfcv2_zdump::000: volume not valid
or not requested (server: , session id: 35.)
Backup Server: 1.14.2.4: Unrecoverable I/O or volume error. This DUMP or LOAD
session must exit.
Msg 8009, Level 16, State 1:
Server 'MYMACHINE', Line 3:
Error encountered by Backup Server. Please refer to Backup Server messages for
details.
I believe the answer to all of these problems could be the simplest one: data corruption...!
Just in case, I will take another dump and then try to load it again!
Phil, thanks for your time! ;-)
1) There definitely was a corruption problem, caused by a VMware Tools issue on Solaris 10. Whenever the network interface was under a heavy transfer load (example: a copy of a 2 GB DB...), it simply stopped working in the middle of the operation. To get the interface working again I had to disconnect and reconnect the network interface (in the VMware interface!). Basically, I had to uninstall VMware Tools on the Solaris virtual machine. That has one drawback: the highest transfer rate that can be achieved is about 300 Kb, so a simple ftp transfer of a 2 GB database can take me several hours, but there is no corruption at all. How to prove/test the presence/absence of any corruption? I simply packed the database dump into a tar file (on the source machine; yes, an extra 20 kb), and once the download completed on the destination server I unpacked it again: tar reports an error if the archive is damaged, so a clean extraction proves the transfer was intact.
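A sketch of that integrity check, assuming the dump sits in / on both machines (the .tar name is just illustrative):
# on the source machine
cd /
tar cvf 20120412_wfcv2_zdump.tar 20120412_wfcv2_zdump
# transfer 20120412_wfcv2_zdump.tar in binary mode (ftp bin / scp)
# on the destination server
cd /
tar xvf 20120412_wfcv2_zdump.tar    # tar reports an error if the archive is damaged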
2) After making sure the dump was OK, I got a different error:
I had to configure some parameters related to the load operation, for example (see the sketch below):
number of large i/o buffers -> 32
max memory
I also had to tune the operating system shared memory for the Sybase engine...!
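A minimal sketch of those changes, assuming an isql session on the ASE server and a Solaris 10 host; the numeric values are illustrative only, not necessarily the ones I used:
1> sp_configure "number of large i/o buffers", 32
2> go
1> sp_configure "max memory", 262144    -- value in 2K units; size it for your machine
2> go
# Solaris 10: raise the shared-memory cap for the project that runs the ASE engine
projmod -s -K "project.max-shm-memory=(privileged,4G,deny)" default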
I was finally able to load the database (size > 2.1 GB)!
;-) Cheers!