What is Oracle Cloud Infrastructure Object Storage?
Oracle Object Storage is a scalable, fully programmable, durable cloud storage service. Developers and IT administrators can use this service to store and easily access an unlimited amount of data at low cost.
What can you do with Oracle Object Storage?
With Oracle Object Storage, you can safely and securely store and retrieve data directly from applications or from within the cloud platform, at any time. Oracle Object Storage is agnostic to the data content type and enables a wide variety of use cases. You can send backup and archive data offsite, design Big Data Analytics workloads to generate business insights, or build scale-out web applications. The elasticity of the service enables you to start small and scale applications as they evolve, and you always pay for only what you use.
What is unique about Oracle Cloud Infrastructure's approach to object storage?
Oracle Object Storage service is secure, easy to manage, strongly consistent, and scalable. When a read request is made, Oracle Object Storage serves the most recent copy of the data that was written to the system. Oracle Object Storage is connected to a high-performing, high-bandwidth network with compute and object storage resources co-located on the same network. This means that compute instances running in Oracle Cloud Infrastructure get low latency access to object storage.
What are the core components of the Oracle Object Storage service?
Objects: All data, regardless of content type, is stored as objects in Oracle Object Storage. For example, log files, video files, and audio files are all stored as objects.
Bucket: A bucket is a logical container that stores objects. Buckets can serve as a grouping mechanism to store related objects together.
Namespace: A namespace is the logical entity that gives you a private naming scope for your buckets. Oracle Cloud Infrastructure Object Storage bucket names are not global. Bucket names need to be unique within the context of a namespace, but can be repeated across namespaces. Each tenant is associated with one default namespace (tenant name) that spans all compartments.
How do I get started with Oracle Cloud Infrastructure Object Storage?
You can access Oracle Object Storage by using the console, REST API, or SDKs. You can sign up for Oracle Cloud Infrastructure in the following ways:
- Contact your Oracle sales representative.
Your Oracle sales representative can provide you with information about the pricing options available to your company. Your sales representative will collect some information from you and initiate the registration process.
- Go to the Oracle Store and sign up for Oracle Cloud Infrastructure.
How durable is data stored in Oracle Cloud Infrastructure Object Storage?
Oracle Object Storage is designed to be highly durable, providing 99.999999999% (eleven 9s) annual durability. It achieves this by storing each object redundantly across three servers in different availability domains in regions with multiple availability domains, and in different fault domains in regions with a single availability domain. Existing objects can be accessed as long as one of the three copies is accessible, and new objects can be uploaded as long as two copies can be successfully written. Data integrity is actively monitored using checksums, and corrupt data is detected and automatically repaired. Any loss of data redundancy is detected and remedied, without customer intervention or impact.
Do you use erasure coding in the Object Storage service?
We do not currently use erasure coding in Object Storage. Erasure coding is a way to reduce the capacity required for data redundancy, but it's not required for effective data durability. We keep three full copies of each object in isolated fault domains and/or availability domains. We also perform regular integrity checks of both our object metadata and data to ensure there is no data corruption and to maintain redundancy levels with validated data. These practices let us confidently deliver 99.999999999% (11 nines) durability without using erasure coding. In the future, we may decide to introduce erasure coding, which would reduce the storage capacity and cost of operations that our current configuration requires. However, erasure coding would not change the current durability we offer to customers.
How reliable is Oracle Cloud Infrastructure Object Storage?
Oracle Object Storage is highly reliable. The service is designed for 99.9% availability. Multiple safeguards built into the platform monitor the health of the service and guard against unplanned downtime.
How much data can I store in Oracle Cloud Infrastructure Object Storage?
You can store an unlimited amount of data in Oracle Object Storage. You can create thousands of buckets per account and each bucket can host an unlimited number of objects. Stored objects can be as small as 0 bytes or as large as 10 TiB. Oracle recommends that you use multipart uploads to store objects larger than 100 MiB. For more information, see Service Limits in the Oracle Cloud Infrastructure documentation.
Is Oracle Cloud Infrastructure Object Storage specific to an Availability Domain or to a Region?
Oracle Object Storage is a regional service. It can be accessed through a dedicated regional API endpoint. The API endpoints are:
Native Oracle Cloud Infrastructure Object Storage API endpoints
- US West: https://objectstorage.us-phoenix-1.oraclecloud.com
- US East: https://objectstorage.us-ashburn-1.oraclecloud.com
- EMEA UK: https://objectstorage.uk-london-1.oraclecloud.com
- EMEA Germany: https://objectstorage.eu-frankfurt-1.oraclecloud.com
Swift API endpoints
- US West: https://swiftobjectstorage.us-phoenix-1.oraclecloud.com
- US East: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com
- EMEA UK: https://swiftobjectstorage.uk-london-1.oraclecloud.com
- EMEA Germany: https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com
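As an illustration, native API requests address a specific object by appending a path of the form `/n/{namespace}/b/{bucket}/o/{object}` to the regional endpoint. The sketch below builds such a URL; the path convention and all names in it are assumptions for illustration only:

```python
# Sketch: building a native Object Storage request URL from its parts.
# The /n/{namespace}/b/{bucket}/o/{object} path convention is assumed here;
# the namespace, bucket, and object names below are placeholders.
from urllib.parse import quote

def object_url(region: str, namespace: str, bucket: str, object_name: str) -> str:
    """Return the native API URL for a single object in the given region."""
    return (
        f"https://objectstorage.{region}.oraclecloud.com"
        f"/n/{quote(namespace)}/b/{quote(bucket)}/o/{quote(object_name, safe='')}"
    )

url = object_url("us-phoenix-1", "mynamespace", "mybucket", "logs/app.log")
print(url)
```

Note that the object name is percent-encoded in full (including any `/` characters), since it is a single path segment rather than a directory hierarchy.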
Where is my Oracle Cloud Infrastructure Object Storage data stored?
Oracle Object Storage is available in all Oracle Cloud Infrastructure regions and data is stored within those regions. Customers have the flexibility to choose the specific region where data will reside. You can find more information on available regions and Availability Domains here.
How secure is my data in Oracle Cloud Infrastructure Object Storage?
Oracle Object Storage is highly secure. It is tightly integrated with Oracle Cloud Infrastructure Identity and Access Management. By default, only authenticated users that have explicitly been granted access to specific resources can access data stored in Oracle Object Storage. Data is uploaded and downloaded from Oracle Object Storage over SSL endpoints using the HTTPS protocol. All stored data is encrypted, by default. For an additional layer of security, you can encrypt objects prior to sending them to Oracle Object Storage. That gives you total control over not only your data, but also the encryption keys that are used to encrypt the data.
Does Oracle Cloud Infrastructure Object Storage support object-level permission controls?
Oracle decided not to implement object-level access controls because we believe it's more effective and manageable to use compartment- and bucket-level permissions instead. Because there is no restriction on bucket count per compartment and tenancy:
- Customers can create a highly organized, secure, and manageable set of permissions around all the data they store.
- Customers don’t need to co-mingle data in buckets that would otherwise require different permissions within the same bucket.
Our Identity and Access Management (IAM) offers a consistent set of policies across all services. IAM also lets you create, apply, and centrally manage detailed permissions both at the compartment and bucket level.
Can I use Oracle Cloud Infrastructure Object Storage as a primary data storage for big data?
Yes, you can use Oracle Object Storage as the primary data repository for big data. This means you can run big data workloads on Oracle Cloud Infrastructure. The Object Storage HDFS connector provides connectivity to multiple popular big data analytic engines, enabling them to work directly with data stored in Oracle Cloud Infrastructure Object Storage. You can find more information on the HDFS connector here.
Can I access Oracle Cloud Infrastructure Object Storage from anywhere?
You can access Oracle Object Storage from anywhere as long as you have access to an internet connection and the required permissions to access the service. Object storage latency will vary depending on where you are accessing the service from, with higher latency when accessing across a longer distance, all else equal. For example, if data is stored in the US West Region, the latency for accessing data from Nevada will be lower than if the same data were being accessed from London or New York.
Do I need to back up my object storage data?
No, you do not need to back up data stored in Oracle Cloud Infrastructure Object Storage. Oracle Object Storage is an inherently highly durable storage platform. All objects are stored redundantly on multiple storage servers, across multiple Availability Domains, within a region. Data integrity is constantly monitored using checksums, and corrupt data is self-healed. The native object storage durability characteristics virtually eliminate the need for traditional backups.
Can I use Oracle Cloud Infrastructure Object Storage as a destination for my on-premises backups?
You can use Oracle Object Storage as a destination for your backups, regardless of whether the backup originates in the cloud or in an on-premises data center. Oracle Cloud Infrastructure Block Volumes backups are stored by default in Oracle Cloud Infrastructure Object Storage. You can also direct your Oracle RMAN backups to Object Storage via the Swift API integration. For Oracle RMAN, you need to use the following Swift API endpoints:
- US West: https://swiftobjectstorage.us-phoenix-1.oraclecloud.com
- US East: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com
- EMEA: https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com
Can Oracle Cloud Infrastructure Object Storage buckets be mounted as traditional NFS/SMB mount points on the bare metal compute instances?
Exposing buckets as NFS/SMB mount points on the bare metal compute instances is not supported. Currently you can access Oracle Object Storage using the native APIs, SDKs or the HDFS connector.
How will I be metered and billed for Oracle Cloud Infrastructure Object Storage usage?
Oracle Object Storage is available as a pay-as-you-go service and charged on the following usage elements:
- Storage used per month, measured in Timed Storage-Byte Hours, aggregated per month.
- Total number of requests received per month. Delete requests are free.
- Outbound Internet Transfer. The first 10 TB of outbound transfer are free.
Full pricing details for Oracle Cloud Infrastructure Object Storage can be found here.
Where can I find Oracle Cloud Infrastructure Object Storage IP address ranges to add to my on-premises firewall or Oracle Cloud Infrastructure security list?
You can find Object Storage IP address ranges in the Object Storage product documentation.
Does Oracle Cloud Infrastructure Object Storage support server-side encryption?
Yes. Oracle Object Storage supports server-side encryption. All data stored in Oracle Object Storage is automatically encrypted.
How can I enable the Oracle Cloud Infrastructure Object Storage encryption capability?
Encryption is automatically enabled for all data with no action required on the part of customers.
Do I need to do any data decryption on the client?
There is nothing specific that you need to do to decrypt the data. You can continue making normal HTTPS GET requests to retrieve the data.
Are the encryption keys rotated?
Yes. The encryption keys are rotated frequently based on a rigorous internal policy.
Do you support client-side encryption?
Yes, we support client-side encryption. You can encrypt the data prior to sending it to Oracle Object Storage. Sending encrypted data enables you to have full control over your encryption keys and provides a second line of defense against unintended and unauthorized data access.
Do you encrypt both the object data and the user-defined metadata?
Yes. We encrypt both the object data and the user-defined metadata associated with the object.
Which encryption algorithm do you use to encrypt the data?
We use 256-bit Advanced Encryption Standard (AES-256) to encrypt all data and encryption keys. AES-256 is considered one of the strongest encryption algorithms that exist today.
I need to upload large objects to Oracle Cloud Infrastructure Object Storage. How can I optimize the upload process?
To upload large objects to Oracle Object Storage, consider splitting objects into smaller parts and using multipart upload. Uploading smaller parts in parallel is faster and more efficient than uploading a large object in a single request. If an upload fails for any reason, you only need to retry uploading the part that failed, instead of restarting the entire object upload. Consider using multipart upload for all objects greater than 100 MiB in size.
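The split-and-retry flow described above can be sketched as follows. This is a minimal illustration of the part-splitting step only; the actual upload call depends on the client you use and is deliberately left out:

```python
# Sketch: splitting an object into fixed-size parts for a multipart upload.
# The 100 MiB default reflects the recommendation above; the upload request
# itself is a placeholder and is not shown.
PART_SIZE = 100 * 1024 * 1024  # 100 MiB

def split_into_parts(data: bytes, part_size: int = PART_SIZE):
    """Yield (part_number, part_bytes) pairs; part numbers start at 1."""
    for offset in range(0, len(data), part_size):
        yield offset // part_size + 1, data[offset:offset + part_size]

# Example with a tiny part size so the split is visible:
parts = list(split_into_parts(b"abcdefghij", part_size=4))
print(parts)  # [(1, b'abcd'), (2, b'efgh'), (3, b'ij')]
```

Each numbered part can then be uploaded (and retried) independently; reassembling the parts in part-number order reproduces the original object.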
Can I associate customer-defined metadata with an object when uploading using multipart upload?
Yes. When you initiate the multipart upload, you can specify the metadata you want to associate with the object. When the object is committed, after all constituent parts are uploaded, the metadata will be associated with the composed object.
What is the maximum number of parts accepted for a multipart upload?
An object can be divided into a maximum of 10,000 parts. Each part must be at least 10 MiB in size. The upper size limit on an object part is 50 GiB. We recommend that you consider using multipart upload to upload objects greater than 100 MiB in size. Regardless of the total number of parts an object has been divided into, the total size of an object cannot exceed 10 TiB.
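A quick arithmetic check of how these limits interact: at the 10,000-part maximum, a full 10 TiB object needs parts averaging just over 1 GiB, which is well under the 50 GiB per-part ceiling:

```python
# Sketch: checking the multipart limits quoted above against each other.
import math

TIB = 1024 ** 4
GIB = 1024 ** 3
MAX_PARTS = 10_000
MAX_PART_SIZE = 50 * GIB          # per-part upper limit
MAX_OBJECT_SIZE = 10 * TIB        # total object upper limit

# Smallest uniform part size that fits a maximum-size object in 10,000 parts:
min_part_size = math.ceil(MAX_OBJECT_SIZE / MAX_PARTS)
print(min_part_size / GIB)        # ~1.02 GiB per part
assert min_part_size <= MAX_PART_SIZE
```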
Can I retry uploading a part if the upload fails?
Yes, you can retry uploading a part when the upload fails for any reason. You must provide the correct upload ID and part number when reinitiating the upload.
Can I replace a part that has already been uploaded?
Yes, you can replace a part after it has been uploaded, but only if the object has not been committed yet. To replace an object part in a multipart upload, make sure that the correct upload ID and part number are used to reinitiate the upload.
Can I pause and resume an object upload?
Yes, you can pause and resume an object upload. However, if the upload of a constituent part is in progress, you must let Oracle Object Storage finish uploading that part. Oracle Object Storage does not support pausing and resuming in-progress part uploads.
Can I GET or LIST object parts after the object has been composed from its constituent parts and committed?
No, you cannot 'GET' or 'LIST' the uploaded parts of an object once the multipart upload is complete and the object has been committed. To retrieve a part of the object, you will need to use a Range GET request, which is distinct and separate from multipart upload functionality.
Can I reorder parts of an object before composing the object?
No, the object parts cannot be reordered. The part number determines the sequential order in which parts are committed to the object.
Can I use parts of an object to compose another object?
No, you cannot re-purpose parts of an object to compose another object. An object can only be composed of object parts that share an upload ID.
What is the expected behavior if multiple uploaded parts have the same part number?
If multiple object parts are uploaded using the same part number, the part that was uploaded last takes precedence and is used to compose the object.
What happens to object parts if an object is never committed?
If an upload is initiated, but never completed, Oracle Object Storage maintains the parts in its inventory until you explicitly abort the multipart upload. Oracle Object Storage charges for storage of the object parts regardless of whether or not the object has been committed. You can list active uploads and then decide which uploads to abort. Deleting active uploads deletes all uploaded parts and frees storage space.
Can I abort a multipart upload and delete parts that have already been uploaded?
Yes, you can terminate an in-progress multipart upload by aborting the process. Aborting a multipart upload deletes all object parts associated with a specific upload ID.
Can I append a part to an object after the upload has been committed?
No, you cannot append parts to an object after the upload has been committed.
Can I skip part numbers when uploading parts in a multipart upload?
Yes, you can skip part numbers when uploading parts. Part numbers do not need to be contiguous.
Can I manually delete a part and exclude it from the object upload before committing the upload?
No, you cannot specifically delete uploaded parts associated with an active multipart upload. However, you can choose to exclude uploaded parts when committing the object. These excluded parts are automatically deleted.
How can I verify the integrity of an object uploaded using the multipart upload process?
Oracle Object Storage treats the upload of an object part as it would a normal object upload. You can verify that an object was not unintentionally corrupted by sending the MD5 hash of the object part or by capturing the MD5 hash that is returned in the response to the request. When the upload is committed, you will also receive an MD5 hash of the MD5 hashes of the individual parts that constitute the object. This MD5 hash can be used to validate the integrity of the object as a whole.
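A local sketch of that MD5-of-MD5s check follows. The exact encoding used here (binary part digests concatenated, hashed, base64-encoded, with a part-count suffix) is one common construction and is an assumption; verify the exact format against the value the service actually returns on commit:

```python
# Sketch: computing an MD5-of-MD5s for a multipart upload, as described above.
# The encoding details (binary digests, base64, "-<count>" suffix) are assumed
# here for illustration and may differ from the service's exact format.
import base64
import hashlib

def multipart_md5(parts: list[bytes]) -> str:
    """Hash each part, concatenate the binary digests, and hash the result."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    combined = base64.b64encode(hashlib.md5(digests).digest()).decode()
    return f"{combined}-{len(parts)}"

parts = [b"part one", b"part two"]
print(multipart_md5(parts))
```

Recomputing this value locally over the same parts and comparing it to the hash returned when the upload is committed lets you validate the object as a whole.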
Which Oracle Cloud Infrastructure Object Storage clients support multipart uploads?
Multipart upload functionality is supported by the Oracle Object Storage native API and the Java Software Development Kit (SDK). Support for the Command Line Interface (CLI) and Python SDK is coming soon.
What is a public bucket?
A public bucket is a bucket type that enables you to freely share data stored in object storage. Anyone with knowledge of the public bucket name and associated namespace can anonymously read data, list objects, or get object metadata. Anonymous PUT operations to post data to a public bucket are not supported. Buckets are private by default; bucket properties must be explicitly set to make a bucket public.
Because public buckets support anonymous data access, be careful and deliberate when creating them. We encourage you to err on the side of caution and use public buckets only when absolutely necessary. Though public buckets are a powerful means to widely share data, there is a security tradeoff: because anyone can anonymously access data stored in a public bucket, there is no visibility into, or control over, who is accessing your stored data. Often, Oracle Cloud Infrastructure Identity and Access Management rules or pre-authenticated requests can be a good substitute for public buckets.
How do I create a public bucket?
You can create public buckets using the API, SDK, CLI, and the Oracle Cloud Infrastructure console. Public buckets are created like any other bucket, except that you set the attribute 'publicAccessType' to 'ObjectRead'. By default, the value of this attribute is 'NoPublicAccess'. You can set this attribute when creating the bucket, or afterward by updating the bucket.
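Assuming the attribute named above, a CreateBucket request body for a public bucket might look like this sketch (the bucket name and compartment OCID are placeholders):

```json
{
  "name": "my-public-bucket",
  "compartmentId": "ocid1.compartment.oc1..exampleuniqueid",
  "publicAccessType": "ObjectRead"
}
```

Omitting "publicAccessType", or setting it to "NoPublicAccess", leaves the bucket private.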
What Oracle Identity and Access Management (IAM) permissions do I need to possess to create a public bucket?
You need to have been granted the IAM permissions BUCKET_CREATE and BUCKET_UPDATE to create a public bucket.
Can I make a public bucket private and vice versa?
Yes, you can make a public bucket private, and vice versa, by updating the bucket attribute 'publicAccessType'.
What are private buckets?
When you create an Object Storage bucket, it's created as a private bucket by default. To share data stored in a private bucket with other groups of users, you need to define the pertinent IAM permission for the group.
Can I limit Object Storage buckets to be accessible only from specific Virtual Cloud Networks or subnets?
Yes, you can define IAM policies on buckets such that requests are only authorized if they originate from a specific VCN or a CIDR block within that VCN. However, you will need to use Oracle Cloud Infrastructure Service Gateway to access Object Storage and pass through such an IAM policy. Access will be blocked if you try to access Oracle Object Storage from instances with a public IP address through an Internet Gateway, or from instances running in your on-premises network.
Review sample IAM policy documentation to allow only the resources in a specific VCN to write objects to a particular Object Storage bucket. For more information, review the Service Gateway product documents.
What are pre-authenticated requests (PAR)?
Pre-authenticated requests (PARs) offer a mechanism by which you can share data stored in object storage with a third party. PARs eliminate the need to access object storage data using programmatic interfaces like the API, SDK, or CLI; tools like cURL or wget can be used against the PAR URL to access data stored in object storage. You can also use PARs to receive data from anyone. Data received via a PAR is posted to an object storage bucket specified at the time of PAR creation.
PARs can be defined both on buckets and on objects. However, PARs defined on a bucket can only receive data, while PARs defined on objects can be used both to send and receive data.
When you create a PAR, a unique PAR URL is generated. Anyone with access to this URL can access the resources identified in the pre-authenticated request. By default, PARs have an expiration date which determines the length of time the PAR stays active. Once a PAR expires, it can no longer be used. PAR_MANAGE permissions are required to create and manage PARs. Read and/or write privileges are not required for the object storage resource that you are creating a PAR on. Once created, you can list PARs per object storage bucket, and delete them if necessary, to preempt the PAR expiration date.
When should I use pre-authenticated requests?
You should use PARs when you need to share or receive data from a third party. PARs are useful when the third party cannot, or does not wish to, use normal object storage interfaces like the APIs, SDK, or the CLI to access data. They can use off-the-shelf HTTP tools like cURL.
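To illustrate that off-the-shelf access, a PAR URL generally carries its authorizing token in the path, so a third party needs nothing beyond an HTTP client. The token, path shape, and names below are hypothetical; the real URL is returned once, when the PAR is created:

```python
# Sketch: the general shape of a PAR URL. The token and all names here are
# hypothetical placeholders, not values from a real tenancy.
region, token = "us-phoenix-1", "EXAMPLE_PAR_TOKEN"
namespace, bucket, obj = "mynamespace", "mybucket", "report.csv"

par_url = (f"https://objectstorage.{region}.oraclecloud.com"
           f"/p/{token}/n/{namespace}/b/{bucket}/o/{obj}")
print(par_url)

# A third party can then use any HTTP tool, for example:
#   curl -O "<par_url>"           # download via a read PAR on an object
#   curl -T data.bin "<par_url>"  # upload via a write PAR
```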
Be careful when creating and sharing PARs. Once created, anyone who has access to the PAR URL can access the specified object storage resource. There is no obvious way to determine if the PAR usage is being driven by an authorized or unauthorized user.
How can I create pre-authenticated requests?
You can create a PAR using the Oracle Cloud Infrastructure service console, or via the Oracle Cloud Infrastructure SDKs and/or CLI. When creating a PAR, you'll need to specify the object storage resource (object or bucket), actions the end user can take (upload any number of objects to a bucket, upload a specific object to a bucket or read a specific object), and how long the PAR is valid.
Which object storage resources can I define pre-authenticated requests on?
You can define PARs on buckets and objects. You can use PARs defined on a bucket to receive data, however PARs defined on objects can be used both to send and receive data.
What Oracle Identity and Access Management permissions do I need to possess in order to create and manage pre-authenticated requests?
You need to have PAR_MANAGE permissions to create and manage PARs.
How many pre-authenticated requests can I create per bucket or object?
There is no limit on the number of PARs that can be created on a bucket or object.
Can I manage PARs after I generate the PAR URLs?
Yes, once created, PARs can easily be managed. You can list PARs created on buckets and objects. You can also delete PARs, regardless of whether the PAR is active or expired. Once a PAR is deleted, the PAR URL will immediately stop working. PAR URLs will also stop working if permissions of the user that created the PAR change such that they no longer have access to the specified target resource.
Can I update pre-authenticated requests?
No, update operations on PARs are not supported. You cannot extend the expiration date on a PAR or modify the operation defined on the PAR. You will need to create a new PAR if you wish to make any changes.
What happens to a previously created pre-authenticated request when the password of the user who created the PAR changes?
Nothing. One of the benefits of pre-authenticated requests is that they are decoupled from Oracle Cloud Infrastructure user account credentials. Changing passwords has no impact on the validity of the PAR.
What happens to a previously created pre-authenticated request when the user who created the PAR is removed or loses permissions to access the objects or buckets associated with it?
Once the PAR creator gets deleted from the tenancy, the PAR URLs will immediately stop working.
How secure are pre-authenticated requests?
Pre-authenticated requests are generally a secure means of sharing data. Pre-authenticated requests can only be created by users who have PAR_MANAGE permission and can be used through URLs only while the creators have permissions to access objects or buckets associated with PARs.
Be careful when sharing a PAR URL. Make sure that only the intended user gains access to it. Anyone who has access to the PAR URL is automatically granted access to the object storage resource specified in the PAR. There is no obvious way to determine whether the PAR usage came from an authorized or unauthorized user.
Can I create a PAR on a public bucket?
Yes, you can create PARs on a public bucket.
If I create a PAR on a bucket that was initially private and was then updated to become a public bucket, will it continue to work as expected?
Yes, the PAR continues to work if a bucket transitions from being private to public, and vice versa.
Can I retire PARs before they expire?
Yes. You can retire PARs before the expiration date by deleting the PAR. Once deleted, the PAR URL stops working immediately.
How can I create PARs that do not expire?
To create a PAR that theoretically does not expire, set a PAR expiration date that is far out in the future.
How can I track PAR operations?
All PAR create and manage operations are logged to the Audit service. Viewing audit logs provides visibility into all PAR operations performed during a specific time window. Audit logs are maintained for 365 days, after which they are purged. This is in line with how Oracle Audit works across all Oracle Cloud Infrastructure services.
Object Lifecycle Management
What is Object Lifecycle Management?
Object lifecycle management lets you manage the lifecycle of your Object Storage data through automated archiving and deletion, reducing storage costs and saving time. Lifecycle management works by creating a set of rules for a bucket (a lifecycle policy) that archive or delete objects depending on their age. You can narrow the scope of individual lifecycle policy rules by using object name prefix matching criteria. This allows you to create a lifecycle policy that is customized for the needs of different objects within a bucket. For example, you can create a lifecycle policy that automatically migrates objects containing the name prefix "ABC" from standard Object Storage to Archive Storage 30 days after the data was created, and then delete the data 120 days after it was created. If you later decide to keep the archived data for a longer period, you can edit the individual lifecycle policy rule controlling the length of time that qualifying archived objects are retained, while leaving the other lifecycle policy rules unchanged.
How do I create lifecycle policies on my bucket?
You can define lifecycle policies on a bucket using the Oracle Cloud Infrastructure Service Console, CLI, SDK or the API. One lifecycle policy can be defined per bucket, and each lifecycle policy can have up to 1000 rules. Each rule corresponds to an action (archive or delete) that can be executed on objects in the bucket. You can create rules that apply to all objects in the bucket, or only to a subset of objects that use a specific name prefix pattern.
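As a sketch of such a rule, the policy fragment below archives objects with the name prefix "ABC" 30 days after creation. The field names follow the pattern described above but are assumptions here; check them against the lifecycle API reference before use:

```json
{
  "items": [
    {
      "name": "archive-ABC-after-30-days",
      "action": "ARCHIVE",
      "timeAmount": 30,
      "timeUnit": "DAYS",
      "isEnabled": true,
      "objectNameFilter": { "inclusionPrefixes": ["ABC"] }
    }
  ]
}
```

A companion rule with "action": "DELETE" and a larger "timeAmount" would implement the archive-then-delete example from the previous answer.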
Can I define a lifecycle policy on an Archive Storage bucket?
Yes, you can create lifecycle policies on an Archive Storage bucket. However, only 'Delete' rules are supported. Archived objects cannot be migrated from Archive Storage to standard Object Storage using a lifecycle policy. Note, when creating lifecycle policy rules, Archive Storage has a minimum retention requirement of 90 days. If your lifecycle policy deletes archived data that has not met the retention requirement, you may incur a deletion penalty. The deletion penalty is the prorated cost of storing the data for the full 90 days.
Can I disable rules defined in a lifecycle policy?
Yes, you can disable or re-enable rules defined in a lifecycle policy.
Can I add rules to the lifecycle policy after it was created?
Yes, you can add rules to an existing lifecycle policy. When adding, removing, or changing individual lifecycle policy rules using the CLI, SDK or API, you must provide an edited version of the entire lifecycle policy (including the unchanged rules) in your update. See the documentation for more details.
Do lifecycle policies apply retroactively to data stored in the Object Storage bucket, before the policy was defined?
Yes, lifecycle policies apply to data uploaded to the Object Storage bucket before the policy was created. For example, if a lifecycle policy rule is implemented that archives all objects over 30 days old, and the bucket contains objects that are 40 days old, those objects will be identified immediately by the service as candidates for archiving, and the archiving process will begin.
How are conflicting lifecycle rules evaluated for execution?
Rules are evaluated for conflicts at runtime. Rules that delete objects always take priority over rules that would archive the same objects.
What is cross-region copy?
Cross-region copy lets you asynchronously copy objects to other buckets in the same region, to buckets in other regions, or to buckets in other tenancies, within the same region or in other regions. When copying objects, you can keep the same name or modify the object name. The object copied to the destination bucket is considered a new object with a unique ETag value and MD5 hash.
How does cross-region copy work?
You can use the Oracle Cloud Infrastructure service console, CLI, SDK, or Object Storage API to copy objects between regions. You must specify the source object name, destination namespace, destination region, and destination bucket to copy an object. The copy is asynchronous, meaning that the Object Storage service processes copy requests as resources become available, using a queue to manage your copy requests. When you submit a copy request, a work request ID is generated. You can query the work request to monitor the copy status of your object. Work requests can also be canceled using the API, CLI, or an SDK. A canceled work request aborts the copy operation.
Can objects be copied to buckets in any Oracle Cloud Infrastructure region?
Yes, objects can be copied between any two available Oracle Cloud Infrastructure regions. However, the user initiating the copy must have the required IAM permissions to read and write data in both the source and the destination regions.
Will the copy operation preserve the custom metadata defined on the source object?
Yes, when you copy objects, by default the metadata of the source object is preserved. However, using the API, the CLI, or an SDK, you can optionally modify or delete the object metadata as a part of the copy operation. You can also modify or delete the metadata of the destination object using the console, once the copy operation is complete.
Can I copy objects from a standard object storage bucket to an archive storage bucket and vice versa?
Yes, you can copy objects between standard object storage and archive storage buckets. However, before you can copy an object from an archive storage bucket, you must restore the object.
Can objects be copied between buckets in the same region?
Yes, objects can be copied between buckets in the same region.
When an object is copied, will the MD5 hashes of the source and destination objects match?
The MD5 hash of the destination object may not match the MD5 hash of the source object. This is because the Object Storage service may use a chunk size for the destination object that differs from the one used to originally upload the source object.
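The chunk-size effect can be illustrated with a multipart-style "MD5 of part MD5s" composite checksum: the same bytes chunked two different ways produce different values. This is a simplified model of why the hashes can differ, not Object Storage's exact algorithm:

```python
import hashlib

def whole_md5(data: bytes) -> str:
    """MD5 of the object uploaded in a single piece."""
    return hashlib.md5(data).hexdigest()

def chunked_md5(data: bytes, chunk_size: int) -> str:
    """Multipart-style composite checksum: MD5 over the concatenated MD5
    digests of each chunk. Illustrative only, not the service's scheme."""
    digests = b"".join(
        hashlib.md5(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    )
    return hashlib.md5(digests).hexdigest()

data = b"x" * 1000
print(whole_md5(data) == chunked_md5(data, 256))         # False: different schemes
print(chunked_md5(data, 256) == chunked_md5(data, 512))  # False: chunk size matters
```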
Can I use the cross-region copy functionality to copy multiple objects at once?
No, you can use the cross-region copy feature to copy only one object at a time. However, using the CLI, you can script bulk copy operations from the source to the destination bucket.
Amazon S3 Compatibility API
What is the Amazon S3 Compatibility API?
The Amazon S3 Compatibility API is a set of Object Storage APIs that let you build products and services that interoperate with other storage services, such as Amazon S3.
What are the benefits of the Amazon S3 Compatibility API?
The benefits of the Amazon S3 Compatibility API include:
- Not being locked into a single vendor storage service
- The ability to continue using your favorite client, application, or service that leverages the Amazon S3 API with Oracle Object Storage
Do you support all of the available Amazon S3 APIs?
No, not all of the available Amazon S3 APIs are supported. See the Amazon S3 Compatibility API documentation for a complete list of currently supported Amazon APIs.
Will Oracle Object Storage continue to support multiple APIs, or standardize on a single API?
Oracle Object Storage will continue to support both the native Object Storage API and the Amazon S3 Compatibility API. The Amazon S3 Compatibility API exists to promote interoperability with other cloud storage platforms. If you want to use all available Oracle Object Storage features, we recommend using the native Object Storage API.
When should I use the Amazon S3 Compatibility API?
You should consider using the Amazon S3 Compatibility API if you wish to use a specific client or application to access the Object Storage service, while leveraging Amazon S3-like APIs. You should also consider using the Amazon S3 Compatibility API if you need your product or service to interoperate with multiple Amazon S3-like object storage targets.
Will there be feature parity across the native Object Storage API and the Amazon S3 Compatibility API?
No, feature parity is not guaranteed across the two sets of APIs. All new Object Storage features will be supported with the native API first, and then opportunistically with the Amazon S3 Compatibility API.
If I write data using the Amazon S3 Compatibility API, can I read it back using the Object Storage Native API, and vice versa?
Yes, the two API sets are interoperable. Data written to Oracle Object Storage using the Amazon S3 Compatibility API can be read back using the native Object Storage API, and data written with the native API can be read back using the Amazon S3 Compatibility API.
How do the Amazon S3 compatibility APIs incorporate the concept of compartments, a concept unique to Oracle Cloud Infrastructure?
All buckets created using the Amazon S3 Compatibility API are created in the Oracle Cloud Infrastructure "root" compartment. However, if creating buckets in the root compartment is not acceptable, you can use the console or the Command Line Interface (CLI) to create a bucket in a compartment of your choice. You can then operate on that bucket using the Amazon S3 Compatibility API.
How does authentication work with the Amazon S3 compatibility API?
To use the APIs, you need to create an Amazon S3 Compatibility access key/secret key pair using the Oracle Cloud Infrastructure console. This access key/secret key combination can then be used with a client of your choice. Note that Oracle Cloud Infrastructure supports only the Signature Version 4 signing mechanism. Each Oracle Identity and Access Management user can have two active access key/secret key pairs at a time.
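Signature Version 4 derives a request-signing key by chaining HMAC-SHA256 over the date, region, and service name. A minimal sketch of that documented derivation (the secret key and date below are placeholders, not real credentials):

```python
import hashlib
import hmac

def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """AWS Signature Version 4 key derivation:
    HMAC chain over date -> region -> service -> "aws4_request"."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

# Placeholder secret key; for S3-style requests the service name is "s3".
key = signing_key("EXAMPLE_SECRET_KEY", "20240115", "us-ashburn-1", "s3")
print(len(key))  # 32: an HMAC-SHA256 output
```

The derived key is then used to HMAC the "string to sign" built from the canonical request; clients such as the AWS SDKs perform this step automatically.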
Does the Amazon S3 Compatibility API support both the virtual hosted style and path style URLs?
No, the Amazon S3 Compatibility API supports only path style URLs.
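With path-style addressing, the bucket name appears in the URL path rather than in the hostname. Assuming the OCI S3 compatibility endpoint format `<namespace>.compat.objectstorage.<region>.oraclecloud.com` (verify this for your realm), a path-style object URL can be built like this:

```python
def path_style_url(namespace: str, region: str, bucket: str, key: str) -> str:
    """Build a path-style URL for the S3 Compatibility API.
    The endpoint hostname format is an assumption; confirm it against
    the Object Storage documentation for your realm."""
    endpoint = f"https://{namespace}.compat.objectstorage.{region}.oraclecloud.com"
    return f"{endpoint}/{bucket}/{key}"

print(path_style_url("mynamespace", "us-ashburn-1", "my-bucket", "photos/cat.jpg"))
# https://mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com/my-bucket/photos/cat.jpg
```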
Can I reuse buckets I created using the native API or the Oracle Cloud Infrastructure console for work with the Amazon S3 Compatibility API?
Yes, you can reuse buckets created using the native Object Storage API or the console to work with the Amazon S3 Compatibility API.
How does the Oracle Object Storage service handle REST headers that are not supported by Amazon S3 Compatibility API?
If an Amazon S3 API call references unsupported REST headers or header values, those headers or values are ignored while processing the request.
For example, if you specify the x-amz-server-side-encryption header when calling the PUT Object API, the header is ignored because Oracle Object Storage encrypts all objects by default.
How is encryption supported with the Amazon S3 Compatibility API?
All data in Oracle Object Storage is encrypted by default. Encryption headers are ignored when processing the API calls.
Which clients are officially supported with the Amazon S3 Compatibility API?
We have tested the Amazon S3 Compatibility API with the AWS SDK for Java. However, other clients that integrate with an Amazon S3-like API should work with Oracle Object Storage, as long as only the supported APIs are referenced. See the Amazon S3 Compatibility API documentation for a complete list of the Amazon APIs that we currently support.