c# - Processing huge data from sql server
Problem Description
I have a stored procedure (SQL Server 2016) that currently returns 100K to 200K rows, depending on the parameters passed to it.
Each row can be 100KB to 200KB in size, so the total payload can be around 10GB to 20GB.
My client (a background job) has to call this SP, process all the rows, and send them to another client.
What is the best approach to handle such a scenario?
Currently I am thinking of using a streaming enumerator with `yield`:
fetch a record whenever `dataReader.Read()` reads a row, process it, and send it to the other client.
dataReader = command.ExecuteReader();
while (dataReader.Read())
{
    var obj = new SomeClass();
    // populate SomeClass from the current row
    yield return obj;
}
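A fuller sketch of that iterator might look like the following (the connection string, the SP name `dbo.MyLargeProc`, and the `SomeClass` columns are placeholders, not from the question). `CommandBehavior.SequentialAccess` tells ADO.NET to stream large columns rather than buffer each whole row, which matters at 100–200KB per row, and the `using`/`yield` combination keeps only one row materialized at a time:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class SomeClass
{
    public int Id { get; set; }
    public string Payload { get; set; }
}

public static class SpStreamer
{
    // Streams rows one at a time; the connection stays open until the
    // caller finishes enumerating (or disposes the enumerator early).
    public static IEnumerable<SomeClass> StreamRows(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.MyLargeProc", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 0; // long-running SP; tune as needed
            connection.Open();

            // SequentialAccess streams large columns instead of buffering
            // the entire row in memory before handing it to the reader.
            using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                while (reader.Read())
                {
                    yield return new SomeClass
                    {
                        Id = reader.GetInt32(0),      // with SequentialAccess,
                        Payload = reader.GetString(1) // read columns in ordinal order
                    };
                }
            }
        }
    }
}
```

Note that with `CommandBehavior.SequentialAccess` the columns must be read in ordinal order; reading column 1 before column 0 would throw.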
Is this approach sufficient to handle such large data?
Is there a better approach (such as multithreading)?
If so, how should I approach it? Any pointers to refer to?
Edit: The SP has multiple joins and runs a couple of times a day.
Solution
Based on your description, I think this is a good scenario for SSIS (SQL Server Integration Services), which can manage the transfer and write the final results to CSV files for the clients to exchange.
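If a full SSIS package feels like overkill, a similar "export the SP's output to CSV" step can be sketched with the `bcp` command-line utility that ships with SQL Server (the server, database, procedure name, and parameter below are all placeholders):

```shell
# Export the stored procedure's result set to a comma-separated file.
# -c   = character mode, -t,  = comma field terminator,
# -S/-d = server and database, -T = trusted (Windows) authentication.
bcp "EXEC dbo.MyLargeProc @SomeParam = 1" queryout results.csv -c -t, -S myserver -d MyDatabase -T
```

For 10–20GB of output, writing to a file first and handing the file to the other client decouples the SQL Server read from the downstream send, so a slow consumer cannot hold database resources open.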