I'm reading a guide on network programming that I'm really enjoying: https://beej.us/guide/bgnet/html/split/slightly-advanced-techniques.html#serialization
I'm confused about one thing, though. In the section on serialization, he talks about serializing integers for byte-ordering reasons, which makes sense to me, but he also includes these two functions, pack754 and unpack754, for serializing floating-point numbers in IEEE-754 format:
uint64_t pack754(long double f, unsigned bits, unsigned expbits)
{
    long double fnorm;
    int shift;
    long long sign, exp, significand;
    unsigned significandbits = bits - expbits - 1; // -1 for sign bit

    if (f == 0.0) return 0; // get this special case out of the way

    // check sign and begin normalization
    if (f < 0) { sign = 1; fnorm = -f; }
    else { sign = 0; fnorm = f; }

    // get the normalized form of f and track the exponent
    shift = 0;
    while (fnorm >= 2.0) { fnorm /= 2.0; shift++; }
    while (fnorm < 1.0)  { fnorm *= 2.0; shift--; }
    fnorm = fnorm - 1.0;

    // calculate the binary form (non-float) of the significand data
    significand = fnorm * ((1LL<<significandbits) + 0.5f);

    // get the biased exponent
    exp = shift + ((1<<(expbits-1)) - 1); // shift + bias

    // return the final answer
    return (sign<<(bits-1)) | (exp<<(bits-expbits-1)) | significand;
}

long double unpack754(uint64_t i, unsigned bits, unsigned expbits)
{
    long double result;
    long long shift;
    unsigned bias;
    unsigned significandbits = bits - expbits - 1; // -1 for sign bit

    if (i == 0) return 0.0;

    // pull the significand
    result = (i&((1LL<<significandbits)-1)); // mask
    result /= (1LL<<significandbits); // convert back to float
    result += 1.0f; // add the one back on

    // deal with the exponent
    bias = (1<<(expbits-1)) - 1;
    shift = ((i>>significandbits)&((1LL<<expbits)-1)) - bias;
    while (shift > 0) { result *= 2.0; shift--; }
    while (shift < 0) { result /= 2.0; shift++; }

    // sign it
    result *= (i>>(bits-1))&1? -1.0: 1.0;

    return result;
}
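To make sure I understand the calling convention, here's roughly how I've been testing them (with the two functions above in scope; the pack32/unpack32 wrappers are my own shorthand, and I'm assuming bits=32, expbits=8 is the single-precision layout the guide has in mind):

#include <stdio.h>
#include <stdint.h>

// my own shorthand for the single-precision case: 1 sign + 8 exponent + 23 significand bits
#define pack32(f)   (pack754((f), 32, 8))
#define unpack32(i) (unpack754((i), 32, 8))

int main(void)
{
    float f = 3.14159f;

    uint32_t packed = (uint32_t)pack32(f);      // float -> portable 32-bit pattern
    float roundtrip = (float)unpack32(packed);  // portable pattern -> float

    printf("original : %.7f\n", f);
    printf("packed   : 0x%08x\n", (unsigned)packed);
    printf("unpacked : %.7f\n", roundtrip);
    return 0;
}

This round-trips exactly for me, at least for ordinary values (from reading the code, NaN and infinity clearly aren't handled; only zero gets a special case).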
What confuses me is that these functions seem to work by taking the first bit for the sign, the next X bits for the exponent, and the next Y bits for the significand. Does that mean the floats already have to be in IEEE-754 format on the host for this to work correctly?
Is this just meant to explain the format, or is it something you'd actually do in real life?
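For what it's worth, when I compare pack754's output against the raw bits of the same float on my machine (x86-64, which stores floats in IEEE-754 natively), the two come out identical, which is part of what prompted the question:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 3.14159f;

    // the host's own bit pattern for f
    uint32_t raw;
    memcpy(&raw, &f, sizeof raw);

    // the bit pattern pack754 builds arithmetically (shifts, multiplies, divides)
    uint32_t packed = (uint32_t)pack754(f, 32, 8);

    printf("host bits   : 0x%08x\n", (unsigned)raw);
    printf("pack754 bits: 0x%08x\n", (unsigned)packed);
    return 0;
}

Both lines print 0x40490fd0 for me.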