Lambda reading file on S3 - flushing S3 cache

Problem description

I have a problem regarding caching with S3. I have a Lambda that reads a JSON file from S3 and uses it as configuration. I am using Python with boto3 to extract the needed info.

Snippet of my code:

import json
import boto3

s3 = boto3.resource('s3')
bucketname = "configurationbucket"
itemname = "conf.json"
obj = s3.Object(bucketname, itemname)
body = obj.get()['Body'].read()
json_parameters = json.loads(body)


def my_handler(event, context):
    # using json_parameters data

The problem is that when I change the JSON content and upload the file to S3 again, my Lambda seems to read the old values, which I suppose is due to S3 caching somewhere.

Now I think that there are two ways to solve this problem: find where the S3 cache resides and flush it somehow, or change my code to force the Lambda to reload the file on every invocation.

I would prefer the first solution, because I think it will reduce computation time (reloading the file is an expensive operation). So, how can I flush my cache? I couldn't find a simple way to do this in the console or in the AWS documentation.

Tags: amazon-web-services, caching, amazon-s3, aws-lambda, boto3

Solution


The problem is that code outside the function handler is only initialized once; it is not reinitialized while the Lambda is warm. S3 is not caching anything here: the stale values come from the reused execution environment. Move the read inside the handler so the configuration is fetched on every invocation:

import json
import boto3

s3 = boto3.resource('s3')
bucketname = "configurationbucket"
itemname = "conf.json"

def my_handler(event, context):
    # read from S3 on every invocation so updated values are picked up
    obj = s3.Object(bucketname, itemname)
    body = obj.get()['Body'].read()
    json_parameters = json.loads(body)
    # use json_parameters data
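
If re-downloading and re-parsing the file on every invocation is a concern, a middle ground is to keep the module-level cache but invalidate it when the object actually changes. The following is only a sketch, not part of the original answer; it reuses the same bucketname and itemname and checks the object's ETag with a cheap HEAD request before deciding whether to download the body:

import json
import boto3

s3 = boto3.client('s3')
bucketname = "configurationbucket"
itemname = "conf.json"

# Module-level cache: survives between invocations while the container is warm
_cached_etag = None
_cached_parameters = None

def load_parameters():
    global _cached_etag, _cached_parameters
    # HEAD request returns metadata (including the ETag) without the body
    head = s3.head_object(Bucket=bucketname, Key=itemname)
    if head['ETag'] != _cached_etag:
        # First call in this container, or the object changed: re-download
        obj = s3.get_object(Bucket=bucketname, Key=itemname)
        _cached_parameters = json.loads(obj['Body'].read())
        _cached_etag = head['ETag']
    return _cached_parameters

def my_handler(event, context):
    json_parameters = load_parameters()
    # use json_parameters data

The HEAD request still happens on every invocation, but it avoids transferring and re-parsing the body when the configuration has not changed; whether that saving matters depends on the size of the file.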
