I'm trying to upload a file in chunks and send it to a server. In the code below I only split the file into chunks; I haven't sent them anywhere yet. My main concern is: when I chunk a 2 GB file, if chunkCount is 1000 or more, am I effectively DDoSing my own server with that many requests? And if chunkCount is below 100, the client side lags badly.
Why don't I just upload the file normally instead of doing this chunking job? Because with chunks you can pause the upload, and if one chunk fails because of a connection problem, you can retry from that chunk instead of starting over.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>File Chunk Processor</title>
<link
href="https://cdn.jsdelivr.net/npm/[email protected]/dist/full.min.css"
rel="stylesheet"
type="text/css"
/>
<script src="https://cdn.tailwindcss.com"></script>
</head>
<body>
<span class="loading loading-spinner loading-lg"></span>
<h1>File Chunk Processor</h1>
<input type="file" id="fileInput" />
<button id="processButton" onclick="DoChunk()">Process File</button>
<script>
function DoChunk() {
let chunkCount = 1700;
let chunkIndex = 0;
processNextChunk(chunkIndex, chunkCount);
}
function processNextChunk(chunkIndex, chunkCount) {
if (chunkIndex < chunkCount) {
processFileChunk('fileInput', chunkIndex, chunkCount, function () {
setTimeout(() => {
processNextChunk(chunkIndex + 1, chunkCount);
}, 100); // Adjust the delay as needed
});
}
}
function processFileChunk(elementId, chunkIndex, chunkCount, callback) {
// Get the file input element
const inputElement = document.getElementById(elementId);
// Check if the input element and file are available
if (!inputElement || !inputElement.files || !inputElement.files[0]) {
console.error('No file selected or element not found');
return;
}
// Get the selected file
const file = inputElement.files[0];
// Calculate the size of each chunk
const chunkSize = Math.ceil(file.size / chunkCount);
const start = chunkIndex * chunkSize;
const end = Math.min(start + chunkSize, file.size);
// Create a Blob for the specific chunk
const chunk = file.slice(start, end);
// Create a FileReader to read the chunk
const reader = new FileReader();
reader.onload = function (event) {
// Get the chunk content as a Base64 string
const base64String = event.target.result.split(',')[1]; // Remove data URL part
// Output or process the chunk as needed
console.log(`Chunk ${chunkIndex + 1} of ${chunkCount}:`);
console.log(base64String);
if (callback) {
callback();
}
};
reader.onerror = function (error) {
console.error('Error reading file chunk:', error);
if (callback) {
callback();
}
};
// Read the chunk as a Data URL (Base64 string)
reader.readAsDataURL(chunk);
}
</script>
</body>
</html>
Here is my API code (ASP.NET). It receives each chunk as a string; the string is the chunk's Base64-encoded content.
public async Task<IActionResult> UploadChunkAsync([FromBody] FileChunkRequest request,
CancellationToken cancellationToken = default)
{
var requestToken = _jwtTokenRepository.GetJwtToken();
var loggedInUser = _jwtTokenRepository.ExtractUserDataFromToken(requestToken);
var blobTableClient = _blobClientFactory.BlobTableClient(TableName.StashChunkDetail);
var stashChunkDetail = blobTableClient
.Query<StashChunkDetail>(x => x.RowKey == request.UploadToken && x.UserId == loggedInUser.id)
.SingleOrDefault();
if (stashChunkDetail != null)
{
var currentChunkSize = request.Data.SizeMB();
var isSizeOutOfDeal = stashChunkDetail.TotalUploadedSizeMb + currentChunkSize >
stashChunkDetail.FileSizeMb;
var containerName = Enum.Parse<ContainerName>(stashChunkDetail.PartitionKey);
if (isSizeOutOfDeal)
{
// delete the tracking entity from the table
// ReSharper disable once MethodSupportsCancellation
await blobTableClient.DeleteEntityAsync(stashChunkDetail.PartitionKey, stashChunkDetail.RowKey);
// delete committed data from blob storage
await _fileUploadService.DeleteObjectAsync(containerName, stashChunkDetail.RowKey);
return BadRequest(
$"The total size of the uploaded chunks exceeds {stashChunkDetail.FileSizeMb} MB; please request a new upload token");
}
var isLast = stashChunkDetail.TotalUploadedSizeMb + currentChunkSize >=
stashChunkDetail.FileSizeMb;
// TotalUploadedChunks already counts the chunks that finished, so it is also
// the zero-based index of the next chunk. (The previous
// `totalUploadedChunks + 1` branch skipped index 1 after the first chunk.)
int currentChunk = stashChunkDetail.TotalUploadedChunks;
var fileChunkDto = new FileChunkDto()
{
FileFormat = stashChunkDetail.FileFormat,
ContainerName = containerName,
FileName = stashChunkDetail.RowKey,
Data = request.Data,
AccessTier = stashChunkDetail.AccessTier,
CurrentChunk = currentChunk,
TotalUploadedChunks = stashChunkDetail.TotalUploadedChunks
};
await _fileUploadService.UploadChunkAsync(fileChunkDto, cancellationToken);
stashChunkDetail.TotalUploadedChunks += 1;
stashChunkDetail.TotalUploadedSizeMb += request.Data.SizeMB();
// ReSharper disable once MethodSupportsCancellation
await blobTableClient.UpdateEntityAsync(stashChunkDetail, ETag.All);
var responseChunkProgress = new ChunkUploadResponse()
{
TotalUploadedChunks = stashChunkDetail.TotalUploadedChunks,
TotalUploadedSizeMB = stashChunkDetail.TotalUploadedSizeMb
};
return StatusCode(StatusCodes.Status201Created, responseChunkProgress);
}
return BadRequest("Please request a new upload token");
}
To avoid overloading the server, you can wait for each request to complete before sending the next chunk of the file. The snippet below illustrates this (by POSTing each chunk to https://httpbin.org):
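A minimal sketch, assuming an `<input id="fileInput">` like the one in your page; the endpoint (`https://httpbin.org/post`), the 4 MB chunk size, and the retry count are placeholder choices, not part of your original code:

```javascript
// Pure helper: compute the [start, end) byte ranges for a file of the given size.
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

// Upload one chunk at a time; `await` serializes the requests, so the server
// only ever sees one in-flight request from this client.
async function uploadFileInChunks(file, chunkSize = 4 * 1024 * 1024, maxRetries = 3) {
  const ranges = chunkRanges(file.size, chunkSize);
  for (let i = 0; i < ranges.length; i++) {
    const [start, end] = ranges[i];
    const chunk = file.slice(start, end);
    // Retry this specific chunk a few times before giving up,
    // which gives you the "resume from the failed chunk" behavior.
    let attempt = 0;
    for (;;) {
      try {
        const response = await fetch('https://httpbin.org/post', {
          method: 'POST',
          body: chunk, // fetch accepts a Blob body directly; no Base64 needed
        });
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        break; // chunk done, move on to the next one
      } catch (err) {
        if (++attempt >= maxRetries) throw err;
        console.warn(`Chunk ${i} failed (attempt ${attempt}), retrying...`, err);
      }
    }
    console.log(`Uploaded chunk ${i + 1} of ${ranges.length}`);
  }
}

// Usage (in the browser):
// const file = document.getElementById('fileInput').files[0];
// uploadFileInChunks(file).then(() => console.log('done'));
```

Two design notes: posting the raw `Blob` avoids the roughly 33% size overhead that Base64 encoding adds, and fixing the chunk *size* (e.g. 4 MB) rather than the chunk *count* keeps both the number of requests and the per-request latency bounded regardless of file size.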