
Scrapy Store Images To Amazon S3

I currently store images on my local server and then upload them to S3. I want to change this so the images are stored directly on Amazon S3, but I get this error: boto.exception.S3ResponseError: S3ResponseError:

Solution 1:

AWS_ACCESS_KEY_ID = "xxxxxx"
AWS_SECRET_ACCESS_KEY = "xxxxxx"
IMAGES_STORE = "s3://bucketname/virtual_path/"

bucketname must be an S3 bucket that already exists in your S3 account; it is where the uploaded images will be stored. If you want to store the images under a virtual_path prefix, create that folder inside your S3 bucket first.
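Putting the pieces together, a minimal settings.py might look like the sketch below. The pipeline priority, bucket name, and path are placeholders to adapt to your project; the dict form of ITEM_PIPELINES assumes a Scrapy version recent enough to support it.

```python
# settings.py -- a minimal sketch; bucket name, virtual path, and
# pipeline priority are placeholders, not values from the question.
ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1}

AWS_ACCESS_KEY_ID = "xxxxxx"          # your AWS access key
AWS_SECRET_ACCESS_KEY = "xxxxxx"      # your AWS secret key

# Scrapy selects the S3 storage backend from the s3:// scheme,
# so no local IMAGES_STORE path is needed.
IMAGES_STORE = "s3://bucketname/virtual_path/"
```

With this in place, images are written straight to S3 instead of the local filesystem.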

Solution 2:

I found that the cause of the problem is the upload policy. The function Key.set_contents_from_string() takes a policy argument, which defaults to S3FilesStore.POLICY. So modify the code in scrapy/contrib/pipeline/files.py, changing

return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                              headers=h, policy=self.POLICY)

to

return threads.deferToThread(k.set_contents_from_string, buf.getvalue(),
                              headers=h)

Maybe you can try it, and share the result here.
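Editing files inside an installed package is fragile; an alternative with the same effect is to override the POLICY class attribute from your own code (for example, S3FilesStore.POLICY = 'private' near the top of settings.py). The mechanism in miniature, using a stand-in class rather than Scrapy's real S3FilesStore:

```python
# A stand-in class illustrating the override; it mimics how Scrapy's
# S3FilesStore reads its POLICY class attribute at upload time.
class FakeS3FilesStore:
    POLICY = 'public-read'  # default canned ACL, as in Scrapy

    def persist_file(self):
        # Scrapy passes policy=self.POLICY to set_contents_from_string();
        # here we just return it so the override is visible.
        return {'policy': self.POLICY}

# Override from user code, without touching the library source:
FakeS3FilesStore.POLICY = 'private'

print(FakeS3FilesStore().persist_file())  # -> {'policy': 'private'}
```

Because Python resolves self.POLICY at call time, the patched value is picked up by every later upload, and the change survives Scrapy upgrades that would overwrite an edited files.py.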

Solution 3:

I don't think the problem is in your code; it is most likely a permissions issue. First check your credentials, and make sure they have permission to read and write to the S3 bucket. You can verify with a quick boto test:

import boto

# Round-trip test: connect, create a key, write to it, then delete it.
s3 = boto.connect_s3('access_key', 'secret_key')
bucket = s3.lookup('bucket_name')
key = bucket.new_key('testkey')
key.set_contents_from_string('This is a test')
key.delete()

If the test runs successfully, your credentials are fine and the problem is in the bucket permissions; see the Amazon S3 permission configuration documentation for how to set them.
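If the credentials belong to an IAM user, that user needs at least the S3 actions below on the bucket. This is a sketch of such a policy; the bucket name is a placeholder, and s3:PutObjectAcl is included because Scrapy uploads with a canned ACL (the POLICY mentioned in Solution 2).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
```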
