AWS S3 Storage Service
Since Camel 3.2
Both producer and consumer are supported
The AWS2 S3 component supports storing objects to and retrieving objects from Amazon's S3 service.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon S3. More information is available at Amazon S3.
URI Format
aws2-s3://bucketNameOrArn[?options]
The bucket will be created if it doesn't already exist.
You can append query options to the URI in the following format:
?option1=value&option2=value&…
For example, in order to read file hello.txt from bucket helloBucket, use the following snippet:
from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
.to("file:/var/downloaded");
URI Options
The AWS S3 Storage Service component supports 50 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonS3Client (common) | Autowired Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | | S3Client |
amazonS3Presigner (common) | Autowired An S3 Presigner for requests, used mainly in the createDownloadLink operation. | | S3Presigner |
autoCreateBucket (common) | Setting the autocreation of the S3 bucket bucketName. This also applies when the moveAfterRead option is enabled: the destinationBucket will be created if it doesn't already exist. | false | boolean |
configuration (common) | The component configuration. | | AWS2S3Configuration |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | boolean |
pojoRequest (common) | Whether to use a POJO request as the body. | false | boolean |
policy (common) | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | | String |
proxyHost (common) | To define a proxy host when instantiating the S3 client. | | String |
proxyPort (common) | Specify a proxy port to be used inside the client definition. | | Integer |
proxyProtocol (common) | To define a proxy protocol when instantiating the S3 client. There are 2 enums and the value can be one of: HTTP, HTTPS. | HTTPS | Protocol |
region (common) | The region in which the S3 client needs to work. When using this parameter, the configuration expects the lowercase name of the region (for example ap-east-1), that is, the value returned by Region.EU_WEST_1.id(). | | String |
trustAllCertificates (common) | Whether to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding URI endpoint. This option needs to be used in combination with the overrideEndpoint option. | | String |
useDefaultCredentialsProvider (common) | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
customerAlgorithm (common) | Define the customer algorithm to use in case CustomerKey is enabled. | | String |
customerKeyId (common) | Define the id of the customer key to use in case CustomerKey is enabled. | | String |
customerKeyMD5 (common) | Define the MD5 of the customer key to use in case CustomerKey is enabled. | | String |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
deleteAfterRead (consumer) | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, the same objects will be retrieved over and over again on each poll. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
destinationBucket (consumer) | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | | String |
destinationBucketPrefix (consumer) | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | | String |
destinationBucketSuffix (consumer) | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | | String |
doneFileName (consumer) | If provided, Camel will only consume files if a done file exists. | | String |
fileName (consumer) | To get the object from the bucket with the given file name. | | String |
ignoreBody (consumer) | If true, the S3 object body will be ignored completely; if false, the S3 object will be put in the body. Setting this to true overrides any behavior defined by the includeBody option. | false | boolean |
includeBody (consumer) | If true, the S3Object body will be consumed, put into the message body and closed. If false, the raw S3Object stream will be put into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. When includeBody is true, the S3Object stream is consumed and therefore also closed; when includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion. | true | boolean |
includeFolders (consumer) | If true, folders/directories will be consumed. If false, they will be ignored and no Exchanges will be created for them. | true | boolean |
moveAfterRead (consumer) | Move objects from the S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
prefix (consumer) | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
autocloseBody (consumer) | If this option is true and includeBody is false, the S3Object.close() method will be called on exchange completion. This option is strongly related to the includeBody option. If includeBody is false and autocloseBody is false, it is up to the caller to close the S3Object stream. Setting autocloseBody to true closes the S3Object stream automatically. | true | boolean |
batchMessageNumber (producer) | The number of messages composing a batch in streaming upload mode. | 10 | int |
batchSize (producer) | The batch size (in bytes) in streaming upload mode. | 1000000 | int |
deleteAfterWrite (producer) | Delete the file object after the S3 file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through the endpoint parameter. | | String |
lazyStartProducer (producer) | Whether the producer should be started lazily (on the first message). By starting lazily you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | false | boolean |
multiPartUpload (producer) | If true, Camel will upload the file with multipart format; the part size is decided by the partSize option. | false | boolean |
namingStrategy (producer) | The naming strategy to use in streaming upload mode. There are 2 enums and the value can be one of: progressive, random. | progressive | AWSS3NamingStrategyEnum |
operation (producer) | The operation to do in case the user doesn't want to do only an upload. There are 8 enums and the value can be one of: copyObject, listObjects, deleteObject, deleteBucket, listBuckets, getObject, getObjectRange, createDownloadLink. | | AWS2S3Operations |
partSize (producer) | Set up the partSize which is used in multipart upload; the default size is 25M. | 26214400 | long |
restartingPolicy (producer) | The restarting policy to use in streaming upload mode. There are 2 enums and the value can be one of: override, lastPart. | override | AWSS3RestartingPolicyEnum |
storageClass (producer) | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | | String |
streamingUploadMode (producer) | When streaming upload mode is true, the upload to the bucket will be done in streaming. | false | boolean |
streamingUploadTimeout (producer) | When streaming upload mode is true, this option sets the timeout to complete the upload. | | long |
awsKMSKeyId (producer) | Define the id of the KMS key to use in case KMS is enabled. | | String |
useAwsKMS (producer) | Define if KMS must be used or not. | false | boolean |
useCustomerKey (producer) | Define if the customer key must be used or not. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | | String |
secretKey (security) | Amazon AWS Secret Key. | | String |
The AWS S3 Storage Service endpoint is configured using URI syntax:
aws2-s3://bucketNameOrArn
with the following path and query parameters:
Path Parameters (1 parameter):
Name | Description | Default | Type |
---|---|---|---|
bucketNameOrArn | Required Bucket name or ARN. | | String |
Query Parameters (68 parameters):
Name | Description | Default | Type |
---|---|---|---|
amazonS3Client (common) | Autowired Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | | S3Client |
amazonS3Presigner (common) | Autowired An S3 Presigner for requests, used mainly in the createDownloadLink operation. | | S3Presigner |
autoCreateBucket (common) | Setting the autocreation of the S3 bucket bucketName. This also applies when the moveAfterRead option is enabled: the destinationBucket will be created if it doesn't already exist. | false | boolean |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | boolean |
pojoRequest (common) | Whether to use a POJO request as the body. | false | boolean |
policy (common) | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | | String |
proxyHost (common) | To define a proxy host when instantiating the S3 client. | | String |
proxyPort (common) | Specify a proxy port to be used inside the client definition. | | Integer |
proxyProtocol (common) | To define a proxy protocol when instantiating the S3 client. There are 2 enums and the value can be one of: HTTP, HTTPS. | HTTPS | Protocol |
region (common) | The region in which the S3 client needs to work. When using this parameter, the configuration expects the lowercase name of the region (for example ap-east-1), that is, the value returned by Region.EU_WEST_1.id(). | | String |
trustAllCertificates (common) | Whether to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding URI endpoint. This option needs to be used in combination with the overrideEndpoint option. | | String |
useDefaultCredentialsProvider (common) | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
customerAlgorithm (common) | Define the customer algorithm to use in case CustomerKey is enabled. | | String |
customerKeyId (common) | Define the id of the customer key to use in case CustomerKey is enabled. | | String |
customerKeyMD5 (common) | Define the MD5 of the customer key to use in case CustomerKey is enabled. | | String |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
deleteAfterRead (consumer) | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, the same objects will be retrieved over and over again on each poll. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
destinationBucket (consumer) | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | | String |
destinationBucketPrefix (consumer) | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | | String |
destinationBucketSuffix (consumer) | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | | String |
doneFileName (consumer) | If provided, Camel will only consume files if a done file exists. | | String |
fileName (consumer) | To get the object from the bucket with the given file name. | | String |
ignoreBody (consumer) | If true, the S3 object body will be ignored completely; if false, the S3 object will be put in the body. Setting this to true overrides any behavior defined by the includeBody option. | false | boolean |
includeBody (consumer) | If true, the S3Object body will be consumed, put into the message body and closed. If false, the raw S3Object stream will be put into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. When includeBody is true, the S3Object stream is consumed and therefore also closed; when includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion. | true | boolean |
includeFolders (consumer) | If true, folders/directories will be consumed. If false, they will be ignored and no Exchanges will be created for them. | true | boolean |
maxConnections (consumer) | Set the maxConnections parameter in the S3 client configuration. | 60 | int |
maxMessagesPerPoll (consumer) | Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. | 10 | int |
moveAfterRead (consumer) | Move objects from the S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
prefix (consumer) | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
autocloseBody (consumer) | If this option is true and includeBody is false, the S3Object.close() method will be called on exchange completion. This option is strongly related to the includeBody option. If includeBody is false and autocloseBody is false, it is up to the caller to close the S3Object stream. Setting autocloseBody to true closes the S3Object stream automatically. | true | boolean |
exceptionHandler (consumer) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. | | ExceptionHandler |
exchangePattern (consumer) | Sets the exchange pattern when the consumer creates an exchange. There are 3 enums and the value can be one of: InOnly, InOut, InOptionalOut. | | ExchangePattern |
pollStrategy (consumer) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel. | | PollingConsumerPollStrategy |
batchMessageNumber (producer) | The number of messages composing a batch in streaming upload mode. | 10 | int |
batchSize (producer) | The batch size (in bytes) in streaming upload mode. | 1000000 | int |
deleteAfterWrite (producer) | Delete the file object after the S3 file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through the endpoint parameter. | | String |
lazyStartProducer (producer) | Whether the producer should be started lazily (on the first message). By starting lazily you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | false | boolean |
multiPartUpload (producer) | If true, Camel will upload the file with multipart format; the part size is decided by the partSize option. | false | boolean |
namingStrategy (producer) | The naming strategy to use in streaming upload mode. There are 2 enums and the value can be one of: progressive, random. | progressive | AWSS3NamingStrategyEnum |
operation (producer) | The operation to do in case the user doesn't want to do only an upload. There are 8 enums and the value can be one of: copyObject, listObjects, deleteObject, deleteBucket, listBuckets, getObject, getObjectRange, createDownloadLink. | | AWS2S3Operations |
partSize (producer) | Set up the partSize which is used in multipart upload; the default size is 25M. | 26214400 | long |
restartingPolicy (producer) | The restarting policy to use in streaming upload mode. There are 2 enums and the value can be one of: override, lastPart. | override | AWSS3RestartingPolicyEnum |
storageClass (producer) | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | | String |
streamingUploadMode (producer) | When streaming upload mode is true, the upload to the bucket will be done in streaming. | false | boolean |
streamingUploadTimeout (producer) | When streaming upload mode is true, this option sets the timeout to complete the upload. | | long |
awsKMSKeyId (producer) | Define the id of the KMS key to use in case KMS is enabled. | | String |
useAwsKMS (producer) | Define if KMS must be used or not. | false | boolean |
useCustomerKey (producer) | Define if the customer key must be used or not. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. | | int |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. | | int |
backoffMultiplier (scheduler) | To let the scheduled polling consumer back off if there have been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use, backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | | int |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. There are 6 enums and the value can be one of: TRACE, DEBUG, INFO, WARN, ERROR, OFF. | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single-threaded thread pool. | | ScheduledExecutorService |
scheduler (scheduler) | To use a cron scheduler from either the camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers. | | Map |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. There are 7 enums and the value can be one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS. | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | | String |
secretKey (security) | Amazon AWS Secret Key. | | String |
Required S3 component options
You have to provide the amazonS3Client in the Registry, or your accessKey and secretKey, to access Amazon S3.
Batch Consumer
This component implements the Batch Consumer.
This allows you, for instance, to know how many messages exist in a batch and, for instance, let the Aggregator aggregate that number of messages.
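As a minimal sketch (not part of the original examples), the batch size can drive an Aggregator via the Exchange.BATCH_SIZE ("CamelBatchSize") exchange property that the Batch Consumer sets on each exchange; the bucket and endpoint options below are placeholders:
// Group one poll's worth of S3 objects into a single aggregated exchange.
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&maxMessagesPerPoll=25")
    .aggregate(constant(true), new GroupedBodyAggregationStrategy())
        .completionSize(exchangeProperty(Exchange.BATCH_SIZE))
    .to("mock:result");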
Usage
Message headers evaluated by the S3 producer
Header | Type | Description |
---|---|---|
 | | The bucket name in which this object will be stored, or which will be used for the current operation |
 | | The bucket destination name which will be used for the current operation |
 | | The content length of this object. |
 | | The content type of this object. |
 | | The content control of this object. |
 | | The content disposition of this object. |
 | | The content encoding of this object. |
 | | The md5 checksum of this object. |
 | | The destination key which will be used for the current operation |
 | | The key under which this object will be stored, or which will be used for the current operation |
 | | The last modified timestamp of this object. |
 | | The operation to perform. Permitted values are copyObject, deleteObject, listBuckets, deleteBucket, listObjects |
 | | The storage class of this object. |
 | | The canned ACL that will be applied to the object. |
 | | A well-constructed Amazon S3 Access Control List object. |
 | | Support to get or set custom objectMetadata headers. |
 | String | Sets the server-side encryption algorithm when encrypting the object using AWS-managed keys. For example use AES256. |
 | | The version Id of the object to be stored or returned from the current operation |
 | | A map of metadata stored with the object in S3. |
Message headers set by the S3 producer
Header | Type | Description |
---|---|---|
 | | The ETag value for the newly uploaded object. |
 | | The optional version ID of the newly uploaded object. |
Message headers set by the S3 consumer
Header | Type | Description |
---|---|---|
 | | The key under which this object is stored. |
 | | The name of the bucket in which this object is contained. |
 | | The hex encoded 128-bit MD5 digest of the associated object according to RFC 1864. This data is used as an integrity check to verify that the data received by the caller is the same data that was sent by Amazon S3. |
 | | The value of the Last-Modified header, indicating the date and time at which Amazon S3 last recorded a modification to the associated object. |
 | | The version ID of the associated Amazon S3 object, if available. Version IDs are only assigned to objects when an object is uploaded to an Amazon S3 bucket that has object versioning enabled. |
 | | The Content-Type HTTP header, which indicates the type of content stored in the associated object. The value of this header is a standard MIME type. |
 | | The base64 encoded 128-bit MD5 digest of the associated object (content - not including headers) according to RFC 1864. This data is used as a message integrity check to verify that the data received by Amazon S3 is the same data that the caller sent. |
 | | The Content-Length HTTP header indicating the size of the associated object in bytes. |
 | | The optional Content-Encoding HTTP header specifying what content encodings have been applied to the object and what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type field. |
 | | The optional Content-Disposition HTTP header, which specifies presentational information such as the recommended filename for the object to be saved as. |
 | | The optional Cache-Control HTTP header which allows the user to specify caching behavior along the HTTP request/reply chain. |
 | String | The server-side encryption algorithm when encrypting the object using AWS-managed keys. |
S3 Producer operations
The Camel AWS2 S3 component provides the following operations on the producer side:
- copyObject
- deleteObject
- listBuckets
- deleteBucket
- listObjects
- getObject (this will return an S3Object instance)
- getObjectRange (this will return an S3Object instance)
- createDownloadLink
If you don't specify an operation explicitly, the producer will do a single file upload, or a multipart upload if the multiPartUpload option is enabled.
Advanced AmazonS3 configuration
If your Camel application is running behind a firewall, or if you need more control over the S3Client instance configuration, you can create your own instance and refer to it in your Camel aws2-s3 component configuration:
from("aws2-s3://MyBucket?amazonS3Client=#client&delay=5000&maxMessagesPerPoll=5")
.to("mock:result");
Use KMS with the S3 component
To use AWS KMS to encrypt/decrypt data using AWS infrastructure, you can use the options introduced in 2.21.x, as in the following example:
from("file:tmp/test?fileName=test.txt")
.setHeader(S3Constants.KEY, constant("testFile"))
.to("aws2-s3://mybucket?amazonS3Client=#client&useAwsKMS=true&awsKMSKeyId=3f0637ad-296a-3dfe-a796-e60654fb128c");
In this way you ask S3 to use the KMS key 3f0637ad-296a-3dfe-a796-e60654fb128c to encrypt the file test.txt. When you download this file, the decryption will be done directly before the download.
Static credentials vs Default Credential Provider
You can avoid the usage of explicit static credentials by setting the useDefaultCredentialsProvider option to true. In that case the credentials will be resolved in the following order:
- Java system properties: aws.accessKeyId and aws.secretKey.
- Environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials, loaded from Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 instance profile credentials.
For more information about this, see the AWS credentials documentation.
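For instance, a minimal sketch of a route relying on the default provider chain instead of URI credentials (bucket, prefix and target directory are placeholders):
// No accessKey/secretKey in the URI: credentials are resolved by the chain listed above.
from("aws2-s3://mycamelbucket?useDefaultCredentialsProvider=true&prefix=reports/")
    .to("file:/var/downloaded");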
S3 Producer Operation examples
- Single Upload: this operation will upload a file to S3 based on the body content:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.KEY, "camel.txt");
exchange.getIn().setBody("Camel rocks!");
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client")
.to("mock:result");
This operation will upload the file camel.txt with the content "Camel rocks!" to the mycamelbucket bucket.
- Multipart Upload: this operation will perform a multipart upload of a file to S3 based on the body content:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(AWS2S3Constants.KEY, "empty.txt");
exchange.getIn().setBody(new File("src/empty.txt"));
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&multiPartUpload=true&autoCreateBucket=true&partSize=1048576")
.to("mock:result");
This operation will perform a multipart upload of the file empty.txt, based on the content of the file src/empty.txt, to the mycamelbucket bucket.
- CopyObject: this operation copies an object from one bucket to another:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.BUCKET_DESTINATION_NAME, "camelDestinationBucket");
exchange.getIn().setHeader(S3Constants.KEY, "camelKey");
exchange.getIn().setHeader(S3Constants.DESTINATION_KEY, "camelDestinationKey");
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=copyObject")
.to("mock:result");
This operation will copy the object with the name expressed in the header camelDestinationKey to the camelDestinationBucket bucket, from the bucket mycamelbucket.
- DeleteObject: this operation deletes an object from a bucket:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.KEY, "camelKey");
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteObject")
.to("mock:result");
This operation will delete the object camelKey from the bucket mycamelbucket.
- ListBuckets: this operation lists the buckets for this account in this region:
from("direct:start")
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listBuckets")
.to("mock:result");
This operation will list the buckets for this account.
- DeleteBucket: this operation deletes the bucket specified as URI parameter or header:
from("direct:start")
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteBucket")
.to("mock:result");
This operation will delete the bucket mycamelbucket.
- ListObjects: this operation lists the objects in a specific bucket:
from("direct:start")
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listObjects")
.to("mock:result");
This operation will list the objects in the mycamelbucket bucket.
- GetObject: this operation gets a single object from a specific bucket:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.KEY, "camelKey");
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObject")
.to("mock:result");
This operation will return an S3Object instance related to the camelKey object in the mycamelbucket bucket.
- GetObjectRange: this operation gets a single object range from a specific bucket:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.KEY, "camelKey");
exchange.getIn().setHeader(S3Constants.RANGE_START, "0");
exchange.getIn().setHeader(S3Constants.RANGE_END, "9");
}
})
.to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObjectRange")
.to("mock:result");
This operation will return an S3Object instance related to the camelKey object in the mycamelbucket bucket, containing the bytes from 0 to 9.
- CreateDownloadLink: this operation will return a download link through the S3 Presigner:
from("direct:start").process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader(S3Constants.KEY, "camelKey");
}
})
.to("aws2-s3://mycamelbucket?accessKey=xxx&secretKey=yyy®ion=region&operation=createDownloadLink")
.to("mock:result");
This operation will return a download link URL for the file camelKey in the bucket mycamelbucket and region region.
Streaming Upload mode
With stream mode enabled, users can upload data to S3 without knowing the size of the data ahead of time, by leveraging multipart upload. The upload is completed when either the batchSize or the batchMessageNumber has been reached. There are two possible naming strategies: progressive and random. With the progressive strategy, each file name is composed of the keyName option, a progressive counter and, if present, the file extension; with the random strategy, a UUID is added after the keyName and the file extension, if present, is appended.
As an example:
from(kafka("topic1").brokers("localhost:9092"))
.log("Kafka Message is: ${body}")
.to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic1}}/{{kafkaTopic1}}.txt"));
from(kafka("topic2").brokers("localhost:9092"))
.log("Kafka Message is: ${body}")
.to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic2}}/{{kafkaTopic2}}.txt"));
The default size for a batch is 1 MB, but you can adjust it according to your requirements.
When you stop the producer route, the producer will take care of flushing the remaining buffered messages and completing the upload.
In streaming upload mode you can restart the producer from the point where it left off. Note that this matters only when using the progressive naming strategy.
By setting the restartingPolicy to lastPart, you will restart uploading files and contents from the last part number the producer left.
For example:
- Start the route with the progressive naming strategy, keyName equal to camel.txt, batchMessageNumber equal to 20, and restartingPolicy equal to lastPart
- Send 70 messages
- Stop the route
- In your S3 bucket you should now see 4 files: camel.txt, camel-1.txt, camel-2.txt and camel-3.txt; the first three will have 20 messages each, while the last one only 10
- Restart the route
- Send 25 messages
- Stop the route
- You'll now have 2 more files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5
- And so on
This won't be needed when using the random naming strategy.
Alternatively, you can specify the override restartingPolicy. In that case you'll be able to override whatever you wrote before (for that particular keyName) in your bucket.
In streaming upload mode, the only keyName option taken into account is the endpoint option. Using the header will throw an NPE; this is by design, because setting the header would mean potentially changing the file name on each exchange, which is against the aim of the streaming upload producer: the keyName needs to be fixed and static. The selected naming strategy will do the rest of the work.
Bucket Autocreation
With the option autoCreateBucket users can avoid the automatic creation of an S3 bucket when it doesn't exist. The default for this option is true. If set to false, any operation on a non-existent bucket in AWS won't be successful and an error will be returned.
Moving stuff between a bucket and another bucket
Some users want to consume content from one bucket and move it to a different bucket without using the copyObject feature of this component. If that is your case, don't forget to remove the bucketName header from the incoming exchange of the consumer; otherwise the file will always be overwritten in the original bucket.
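A sketch of such a route, assuming the source bucket name travels in the AWS2S3Constants.BUCKET_NAME header mentioned earlier:
// Drop the incoming bucket-name header so the producer uses the bucket
// from its own URI instead of the original one.
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client")
    .removeHeader(AWS2S3Constants.BUCKET_NAME)
    .to("aws2-s3://myothercamelbucket?amazonS3Client=#amazonS3Client");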
MoveAfterRead consumer option
In addition to deleteAfterRead, another option has been added: moveAfterRead. With this option enabled, the consumed object will be moved to a target destinationBucket instead of being only deleted. This requires specifying the destinationBucket option. For example:
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket")
.to("mock:result");
In this case the consumed objects will be moved to the myothercamelbucket bucket and deleted from the original one (because deleteAfterRead is set to true by default).
You also have the possibility of using a key prefix/suffix while moving the file to a different bucket. The options are destinationBucketPrefix and destinationBucketSuffix.
Taking the above example, you could do something like:
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket&destinationBucketPrefix=RAW(pre-)&destinationBucketSuffix=RAW(-suff)")
.to("mock:result");
As before, the consumed objects will be moved to the myothercamelbucket bucket and deleted from the original one. So if the file name is test, in myothercamelbucket you should see a file called pre-test-suff.
Using a customer key as encryption
Customer key support (an alternative to using KMS) has also been introduced. The following code shows an example.
String key = UUID.randomUUID().toString();
byte[] secretKey = generateSecretKey();
String b64Key = Base64.getEncoder().encodeToString(secretKey);
String b64KeyMd5 = Md5Utils.md5AsBase64(secretKey);
String awsEndpoint = "aws2-s3://mycamel?autoCreateBucket=false&useCustomerKey=true&customerKeyId=RAW(" + b64Key + ")&customerKeyMD5=RAW(" + b64KeyMd5 + ")&customerAlgorithm=" + AES256.name();
from("direct:putObject")
.setHeader(AWS2S3Constants.KEY, constant("test.txt"))
.setBody(constant("Test"))
.to(awsEndpoint);
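The generateSecretKey() helper is not shown above; a minimal sketch of such a helper, using the JDK KeyGenerator to produce a 256-bit AES key, could look like this (hypothetical, not part of the component API):
import java.security.NoSuchAlgorithmException;
import javax.crypto.KeyGenerator;

// Hypothetical helper: generate a random 256-bit AES key to use as the S3 customer key.
private static byte[] generateSecretKey() throws NoSuchAlgorithmException {
    KeyGenerator generator = KeyGenerator.getInstance("AES");
    generator.init(256);
    return generator.generateKey().getEncoded();
}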
Using a POJO as body
Sometimes building an AWS request can be complex because of multiple options. We introduced the possibility to use a POJO as the body. In AWS S3 there are multiple operations you can submit; as an example, for a ListObjects request you can do something like:
from("direct:aws2-s3") .setBody(ListObjectsRequest.builder().bucket(bucketName).build()) .to("aws2-s3://test?amazonS3Client=#amazonS3Client&operation=listObjects&pojoRequest=true")
In this way you pass the request directly, without the need to set headers and options specific to this operation.
Create S3 client and add component to registry
Sometimes you may want to perform advanced configuration using AWS2S3Configuration, which also allows you to set the S3 client. You can create and set the S3 client in the component configuration as shown in the following example:
String awsBucketAccessKey = "your_access_key";
String awsBucketSecretKey = "your_secret_key";
S3Client s3Client = S3Client.builder().credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(awsBucketAccessKey, awsBucketSecretKey)))
.region(Region.US_EAST_1).build();
AWS2S3Configuration configuration = new AWS2S3Configuration();
configuration.setAmazonS3Client(s3Client);
configuration.setAutoDiscoverClient(true);
configuration.setBucketName("s3bucket2020");
configuration.setRegion("us-east-1");
Now you can configure the S3 component (using the configuration object created above) and add it to the registry in the configure method before initialization of routes.
AWS2S3Component s3Component = new AWS2S3Component(getContext());
s3Component.setConfiguration(configuration);
s3Component.setLazyStartProducer(true);
camelContext.addComponent("aws2-s3", s3Component);
Now your component will be used for all the operations implemented in camel routes.
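For example, a short sketch of a route using the registered component; the key name is a placeholder, while the bucket matches the configuration above:
from("direct:s3Upload")
    .setHeader(AWS2S3Constants.KEY, constant("report.txt"))
    .to("aws2-s3://s3bucket2020");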
Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-aws2-s3</artifactId>
<version>${camel-version}</version>
</dependency>
where ${camel-version} must be replaced by the actual version of Camel.
Spring Boot Auto-Configuration
When using aws2-s3 with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-aws2-s3-starter</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
The component supports 51 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-s3.access-key | Amazon AWS Access Key. | | String |
camel.component.aws2-s3.amazon-s3-client | Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. The option is a software.amazon.awssdk.services.s3.S3Client type. | | S3Client |
camel.component.aws2-s3.amazon-s3-presigner | An S3 Presigner for requests, used mainly in the createDownloadLink operation. The option is a software.amazon.awssdk.services.s3.presigner.S3Presigner type. | | S3Presigner |
camel.component.aws2-s3.auto-create-bucket | Setting the autocreation of the S3 bucket bucketName. This also applies when the moveAfterRead option is enabled: the destinationBucket will be created if it doesn't already exist. | false | Boolean |
camel.component.aws2-s3.autoclose-body | If this option is true and includeBody is false, the S3Object.close() method will be called on exchange completion. This option is strongly related to the includeBody option. If includeBody is false and autocloseBody is false, it is up to the caller to close the S3Object stream. Setting autocloseBody to true closes the S3Object stream automatically. | true | Boolean |
camel.component.aws2-s3.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc. | true | Boolean |
camel.component.aws2-s3.aws-k-m-s-key-id | Define the id of the KMS key to use in case KMS is enabled. | | String |
camel.component.aws2-s3.batch-message-number | The number of messages composing a batch in streaming upload mode. | 10 | Integer |
camel.component.aws2-s3.batch-size | The batch size (in bytes) in streaming upload mode. | 1000000 | Integer |
camel.component.aws2-s3.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.aws2-s3.configuration | The component configuration. The option is a org.apache.camel.component.aws2.s3.AWS2S3Configuration type. | | AWS2S3Configuration |
camel.component.aws2-s3.customer-algorithm | Define the customer algorithm to use in case CustomerKey is enabled. | | String |
camel.component.aws2-s3.customer-key-id | Define the id of the customer key to use in case CustomerKey is enabled. | | String |
camel.component.aws2-s3.customer-key-m-d5 | Define the MD5 of the customer key to use in case CustomerKey is enabled. | | String |
camel.component.aws2-s3.delete-after-read | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, the same objects will be retrieved over and over again on each poll. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | Boolean |
camel.component.aws2-s3.delete-after-write | Delete the file object after the S3 file has been uploaded. | false | Boolean |
camel.component.aws2-s3.delimiter | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
camel.component.aws2-s3.destination-bucket | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | | String |
camel.component.aws2-s3.destination-bucket-prefix | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | | String |
camel.component.aws2-s3.destination-bucket-suffix | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | | String |
camel.component.aws2-s3.done-file-name | If provided, Camel will only consume files if a done file exists. | | String |
camel.component.aws2-s3.enabled | Whether to enable auto configuration of the aws2-s3 component. This is enabled by default. | | Boolean |
camel.component.aws2-s3.file-name | To get the object from the bucket with the given file name. | | String |
camel.component.aws2-s3.ignore-body | If true, the S3 object body will be ignored completely; if false, the S3 object will be put in the body. Setting this to true overrides any behavior defined by the includeBody option. | false | Boolean |
camel.component.aws2-s3.include-body | If true, the S3Object body will be consumed, put into the message body and closed. If false, the raw S3Object stream will be put into the body and the headers will be set with the S3 object metadata. This option is strongly related to the autocloseBody option. When includeBody is true, the S3Object stream is consumed and therefore also closed; when includeBody is false, it is up to the caller to close the S3Object stream. However, setting autocloseBody to true when includeBody is false will schedule the S3Object stream to be closed automatically on exchange completion. | true | Boolean |
camel.component.aws2-s3.include-folders | If true, folders/directories will be consumed. If false, they will be ignored and no Exchanges will be created for them. | true | Boolean |
camel.component.aws2-s3.key-name | Setting the key name for an element in the bucket through the endpoint parameter. | | String |
camel.component.aws2-s3.lazy-start-producer | Whether the producer should be started lazily (on the first message). By starting lazily you can allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | false | Boolean |
camel.component.aws2-s3.move-after-read | Move objects from the S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | Boolean |
camel.component.aws2-s3.multi-part-upload | If true, Camel will upload the file with multipart format; the part size is decided by the partSize option. | false | Boolean |
camel.component.aws2-s3.naming-strategy | The naming strategy to use in streaming upload mode. | | AWSS3NamingStrategyEnum |
camel.component.aws2-s3.operation | The operation to do in case the user doesn't want to do only an upload. | | AWS2S3Operations |
camel.component.aws2-s3.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | Boolean |
camel.component.aws2-s3.part-size | Set up the partSize which is used in multipart upload; the default size is 25M. | 26214400 | Long |
camel.component.aws2-s3.pojo-request | Whether to use a POJO request as the body. | false | Boolean |
camel.component.aws2-s3.policy | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | | String |
camel.component.aws2-s3.prefix | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | | String |
camel.component.aws2-s3.proxy-host | To define a proxy host when instantiating the S3 client. | | String |
camel.component.aws2-s3.proxy-port | Specify a proxy port to be used inside the client definition. | | Integer |
camel.component.aws2-s3.proxy-protocol | To define a proxy protocol when instantiating the S3 client. | | Protocol |
camel.component.aws2-s3.region | The region in which the S3 client needs to work. When using this parameter, the configuration expects the lowercase name of the region (for example ap-east-1), that is, the value returned by Region.EU_WEST_1.id(). | | String |
camel.component.aws2-s3.restarting-policy | The restarting policy to use in streaming upload mode. | | AWSS3RestartingPolicyEnum |
camel.component.aws2-s3.secret-key | Amazon AWS Secret Key. | | String |
camel.component.aws2-s3.storage-class | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | | String |
camel.component.aws2-s3.streaming-upload-mode | When streaming upload mode is true, the upload to the bucket will be done in streaming. | false | Boolean |
camel.component.aws2-s3.streaming-upload-timeout | When streaming upload mode is true, this option sets the timeout to complete the upload. | | Long |
camel.component.aws2-s3.trust-all-certificates | Whether to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-s3.uri-endpoint-override | Set the overriding URI endpoint. This option needs to be used in combination with the overrideEndpoint option. | | String |
camel.component.aws2-s3.use-aws-k-m-s | Define if KMS must be used or not. | false | Boolean |
camel.component.aws2-s3.use-customer-key | Define if the customer key must be used or not. | false | Boolean |
camel.component.aws2-s3.use-default-credentials-provider | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
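For instance, a hypothetical application.properties configuring a few of the options above (all values are placeholders):
camel.component.aws2-s3.region=eu-west-1
camel.component.aws2-s3.use-default-credentials-provider=true
camel.component.aws2-s3.auto-create-bucket=true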