amazon-web-services - Lambda reading file on S3 - flushing S3 cache
Problem Description
I have a problem regarding caching on S3. Basically, I have a Lambda that reads a file on S3 which it uses as configuration. This file is JSON. I am using Python with boto3 to extract the needed info.
Snippet of my code:
import json

import boto3

s3 = boto3.resource('s3')
bucketname = "configurationbucket"
itemname = "conf.json"

# read once, at module import time
obj = s3.Object(bucketname, itemname)
body = obj.get()['Body'].read()
json_parameters = json.loads(body)

def my_handler(event, context):
    # using json_parameters data
The problem is that when I change the JSON content and upload the file to S3 again, my Lambda seems to read the old values, which I suppose is due to S3 caching somewhere.
Now I think that there are two ways to solve this problem:
- to force S3 to invalidate its cache content
- to force my lambda to reload the file from S3 without using the cache
I would prefer the first solution, because I think it will reduce computation time (reloading the file is an expensive operation). So, how can I flush the cache? I couldn't find a simple way to do this in the console or in the AWS documentation.
Solution
The problem is that code outside the function handler is initialized only once. While the Lambda execution environment stays warm, that module-level code is not re-run, so your module-level read keeps returning the configuration from the first cold start; S3 itself is not caching anything. Move the read inside the handler:
def my_handler(event, context):
    # read from S3 on every invocation
    obj = s3.Object(bucketname, itemname)
    body = obj.get()['Body'].read()
    json_parameters = json.loads(body)
    # use json_parameters data
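If re-reading the file on every invocation is too expensive, a middle ground is to cache the parsed configuration in a module-level variable and re-read it only after a time-to-live expires. The sketch below is illustrative, not part of boto3: the `CachedConfig` class and its `loader` argument are hypothetical names, and `loader` stands in for the actual S3 read (e.g. a function that calls `obj.get()['Body'].read()` and parses it with `json.loads`).

```python
import time

class CachedConfig:
    """Re-load a value via `loader` only when the cached copy is older
    than `ttl_seconds`; otherwise return the cached copy."""

    def __init__(self, loader, ttl_seconds=60):
        self._loader = loader          # e.g. reads and parses conf.json from S3
        self._ttl = ttl_seconds
        self._value = None
        self._loaded_at = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._loaded_at > self._ttl:
            self._value = self._loader()
            self._loaded_at = now
        return self._value
```

Create the `CachedConfig` at module level and call `config.get()` inside the handler: warm invocations within the TTL window reuse the parsed JSON, and updated files are picked up at most `ttl_seconds` after upload.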