File: //opt/go/pkg/mod/github.com/aws/[email protected]/models/apis/transfer/2018-11-05/docs-2.json
{
"version": "2.0",
"service": "<p>Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.</p>",
"operations": {
"CreateAccess": "<p>Used by administrators to choose which groups in the directory should have access to upload and download files over the enabled protocols using Transfer Family. For example, a Microsoft Active Directory might contain 50,000 users, but only a small fraction might need the ability to transfer files to the server. An administrator can use <code>CreateAccess</code> to limit the access to the correct set of users who need this ability.</p>",
"CreateAgreement": "<p>Creates an agreement. An agreement is a bilateral trading partner agreement, or partnership, between a Transfer Family server and an AS2 process. The agreement defines the file and message transfer relationship between the server and the AS2 process. To define an agreement, Transfer Family combines a server, local profile, partner profile, certificate, and other attributes.</p> <p>The partner is identified with the <code>PartnerProfileId</code>, and the AS2 process is identified with the <code>LocalProfileId</code>.</p>",
"CreateConnector": "<p>Creates the connector, which captures the parameters for a connection for the AS2 or SFTP protocol. For AS2, the connector is required for sending files to an externally hosted AS2 server. For SFTP, the connector is required when sending files to an SFTP server or receiving files from an SFTP server. For more details about connectors, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/configure-as2-connector.html\">Configure AS2 connectors</a> and <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/configure-sftp-connector.html\">Create SFTP connectors</a>.</p> <note> <p>You must specify exactly one configuration object: either for AS2 (<code>As2Config</code>) or SFTP (<code>SftpConfig</code>).</p> </note>",
"CreateProfile": "<p>Creates the local or partner profile to use for AS2 transfers.</p>",
"CreateServer": "<p>Instantiates an auto-scaling virtual server based on the selected file transfer protocol in Amazon Web Services. When you make updates to your file transfer protocol-enabled server or when you work with users, use the service-generated <code>ServerId</code> property that is assigned to the newly created server.</p>",
"CreateUser": "<p>Creates a user and associates them with an existing file transfer protocol-enabled server. You can only create and associate users with servers that have the <code>IdentityProviderType</code> set to <code>SERVICE_MANAGED</code>. Using parameters for <code>CreateUser</code>, you can specify the user name, set the home directory, store the user's public key, and assign the user's Identity and Access Management (IAM) role. You can also optionally add a session policy, and assign metadata with tags that can be used to group and search for users.</p>",
"CreateWorkflow": "<p>Allows you to create a workflow with specified steps and step details that the workflow invokes after a file transfer completes. After creating a workflow, you can associate it with any transfer server by specifying the <code>workflow-details</code> field in the <code>CreateServer</code> and <code>UpdateServer</code> operations.</p>",
"DeleteAccess": "<p>Allows you to delete the access specified in the <code>ServerId</code> and <code>ExternalId</code> parameters.</p>",
"DeleteAgreement": "<p>Delete the agreement that's specified in the provided <code>AgreementId</code>.</p>",
"DeleteCertificate": "<p>Deletes the certificate that's specified in the <code>CertificateId</code> parameter.</p>",
"DeleteConnector": "<p>Deletes the connector that's specified in the provided <code>ConnectorId</code>.</p>",
"DeleteHostKey": "<p>Deletes the host key that's specified in the <code>HostKeyId</code> parameter.</p>",
"DeleteProfile": "<p>Deletes the profile that's specified in the <code>ProfileId</code> parameter.</p>",
"DeleteServer": "<p>Deletes the file transfer protocol-enabled server that you specify.</p> <p>No response is returned from this operation.</p>",
"DeleteSshPublicKey": "<p>Deletes a user's Secure Shell (SSH) public key.</p>",
"DeleteUser": "<p>Deletes the user belonging to a file transfer protocol-enabled server you specify.</p> <p>No response is returned from this operation.</p> <note> <p>When you delete a user from a server, the user's information is lost.</p> </note>",
"DeleteWorkflow": "<p>Deletes the specified workflow.</p>",
"DescribeAccess": "<p>Describes the access that is assigned to the specific file transfer protocol-enabled server, as identified by its <code>ServerId</code> property and its <code>ExternalId</code>.</p> <p>The response from this call returns the properties of the access that is associated with the <code>ServerId</code> value that was specified.</p>",
"DescribeAgreement": "<p>Describes the agreement that's identified by the <code>AgreementId</code>.</p>",
"DescribeCertificate": "<p>Describes the certificate that's identified by the <code>CertificateId</code>.</p>",
"DescribeConnector": "<p>Describes the connector that's identified by the <code>ConnectorId</code>.</p>",
"DescribeExecution": "<p>You can use <code>DescribeExecution</code> to check the details of the execution of the specified workflow.</p> <note> <p>This API call only returns details for in-progress workflows.</p> <p> If you provide an ID for an execution that is not in progress, or if the execution doesn't match the specified workflow ID, you receive a <code>ResourceNotFound</code> exception.</p> </note>",
"DescribeHostKey": "<p>Returns the details of the host key that's specified by the <code>HostKeyId</code> and <code>ServerId</code>.</p>",
"DescribeProfile": "<p>Returns the details of the profile that's specified by the <code>ProfileId</code>.</p>",
"DescribeSecurityPolicy": "<p>Describes the security policy that is attached to your server or SFTP connector. The response contains a description of the security policy's properties. For more information about security policies, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html\">Working with security policies for servers</a> or <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies-connectors.html\">Working with security policies for SFTP connectors</a>.</p>",
"DescribeServer": "<p>Describes a file transfer protocol-enabled server that you specify by passing the <code>ServerId</code> parameter.</p> <p>The response contains a description of a server's properties. When you set <code>EndpointType</code> to VPC, the response will contain the <code>EndpointDetails</code>.</p>",
"DescribeUser": "<p>Describes the user assigned to the specific file transfer protocol-enabled server, as identified by its <code>ServerId</code> property.</p> <p>The response from this call returns the properties of the user associated with the <code>ServerId</code> value that was specified.</p>",
"DescribeWorkflow": "<p>Describes the specified workflow.</p>",
"ImportCertificate": "<p>Imports the signing and encryption certificates that you need to create local (AS2) profiles and partner profiles.</p>",
"ImportHostKey": "<p>Adds a host key to the server that's specified by the <code>ServerId</code> parameter.</p>",
"ImportSshPublicKey": "<p>Adds a Secure Shell (SSH) public key to a Transfer Family user identified by a <code>UserName</code> value assigned to the specific file transfer protocol-enabled server, identified by <code>ServerId</code>.</p> <p>The response returns the <code>UserName</code> value, the <code>ServerId</code> value, and the name of the <code>SshPublicKeyId</code>.</p>",
"ListAccesses": "<p>Lists the details for all the accesses you have on your server.</p>",
"ListAgreements": "<p>Returns a list of the agreements for the server that's identified by the <code>ServerId</code> that you supply. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for <code>NextToken</code>, you can supply that value to continue listing agreements from where you left off.</p>",
"ListCertificates": "<p>Returns a list of the current certificates that have been imported into Transfer Family. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for the <code>NextToken</code> parameter, you can supply that value to continue listing certificates from where you left off.</p>",
"ListConnectors": "<p>Lists the connectors for the specified Region.</p>",
"ListExecutions": "<p>Lists all in-progress executions for the specified workflow.</p> <note> <p>If the specified workflow ID cannot be found, <code>ListExecutions</code> returns a <code>ResourceNotFound</code> exception.</p> </note>",
"ListHostKeys": "<p>Returns a list of host keys for the server that's specified by the <code>ServerId</code> parameter.</p>",
"ListProfiles": "<p>Returns a list of the profiles for your system. If you want to limit the results to a certain number, supply a value for the <code>MaxResults</code> parameter. If you ran the command previously and received a value for <code>NextToken</code>, you can supply that value to continue listing profiles from where you left off.</p>",
"ListSecurityPolicies": "<p>Lists the security policies that are attached to your servers and SFTP connectors. For more information about security policies, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html\">Working with security policies for servers</a> or <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies-connectors.html\">Working with security policies for SFTP connectors</a>.</p>",
"ListServers": "<p>Lists the file transfer protocol-enabled servers that are associated with your Amazon Web Services account.</p>",
"ListTagsForResource": "<p>Lists all of the tags associated with the Amazon Resource Name (ARN) that you specify. The resource can be a user, server, or role.</p>",
"ListUsers": "<p>Lists the users for a file transfer protocol-enabled server that you specify by passing the <code>ServerId</code> parameter.</p>",
"ListWorkflows": "<p>Lists all workflows associated with your Amazon Web Services account for your current region.</p>",
"SendWorkflowStepState": "<p>Sends a callback for asynchronous custom steps.</p> <p>The <code>ExecutionId</code>, <code>WorkflowId</code>, and <code>Token</code> are passed to the target resource during execution of a custom step of a workflow. You must include these values, along with a status, in the callback.</p>",
"StartDirectoryListing": "<p>Retrieves a list of the contents of a directory from a remote SFTP server. You specify the connector ID, the output path, and the remote directory path. You can also specify the optional <code>MaxItems</code> value to control the maximum number of items that are listed from the remote directory. This API returns a list of all files and directories in the remote directory (up to the maximum value), but does not return files or folders in sub-directories. That is, it only returns a list of files and directories one level deep.</p> <p>After you receive the listing file, you can provide the files that you want to transfer to the <code>RetrieveFilePaths</code> parameter of the <code>StartFileTransfer</code> API call.</p> <p>The naming convention for the output file is <code> <i>connector-ID</i>-<i>listing-ID</i>.json</code>. The output file contains the following information:</p> <ul> <li> <p> <code>filePath</code>: the complete path of a remote file, relative to the directory of the listing request for your SFTP connector on the remote server.</p> </li> <li> <p> <code>modifiedTimestamp</code>: the last time the file was modified, in UTC time format. This field is optional. If the remote file attributes don't contain a timestamp, it is omitted from the file listing.</p> </li> <li> <p> <code>size</code>: the size of the file, in bytes. This field is optional. If the remote file attributes don't contain a file size, it is omitted from the file listing.</p> </li> <li> <p> <code>path</code>: the complete path of a remote directory, relative to the directory of the listing request for your SFTP connector on the remote server.</p> </li> <li> <p> <code>truncated</code>: a flag indicating whether the list output contains all of the items in the remote directory. If the <code>truncated</code> value is <code>true</code>, you can increase the value of the optional <code>max-items</code> input attribute to list more items (up to the maximum allowed list size of 10,000 items).</p> </li> </ul>",
"StartFileTransfer": "<p>Begins a file transfer between local Amazon Web Services storage and a remote AS2 or SFTP server.</p> <ul> <li> <p>For an AS2 connector, you specify the <code>ConnectorId</code> and one or more <code>SendFilePaths</code> to identify the files you want to transfer.</p> </li> <li> <p>For an SFTP connector, the file transfer can be either outbound or inbound. In both cases, you specify the <code>ConnectorId</code>. Depending on the direction of the transfer, you also specify the following items:</p> <ul> <li> <p>If you are transferring files from a partner's SFTP server to Amazon Web Services storage, you specify one or more <code>RetrieveFilePaths</code> to identify the files you want to transfer, and a <code>LocalDirectoryPath</code> to specify the destination folder.</p> </li> <li> <p>If you are transferring files to a partner's SFTP server from Amazon Web Services storage, you specify one or more <code>SendFilePaths</code> to identify the files you want to transfer, and a <code>RemoteDirectoryPath</code> to specify the destination folder.</p> </li> </ul> </li> </ul>",
"StartServer": "<p>Changes the state of a file transfer protocol-enabled server from <code>OFFLINE</code> to <code>ONLINE</code>. It has no impact on a server that is already <code>ONLINE</code>. An <code>ONLINE</code> server can accept and process file transfer jobs.</p> <p>The state of <code>STARTING</code> indicates that the server is in an intermediate state, either not fully able to respond, or not fully online. The values of <code>START_FAILED</code> can indicate an error condition.</p> <p>No response is returned from this call.</p>",
"StopServer": "<p>Changes the state of a file transfer protocol-enabled server from <code>ONLINE</code> to <code>OFFLINE</code>. An <code>OFFLINE</code> server cannot accept and process file transfer jobs. Information tied to your server, such as server and user properties, are not affected by stopping your server.</p> <note> <p>Stopping the server does not reduce or impact your file transfer protocol endpoint billing; you must delete the server to stop being billed.</p> </note> <p>The state of <code>STOPPING</code> indicates that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>STOP_FAILED</code> can indicate an error condition.</p> <p>No response is returned from this call.</p>",
"TagResource": "<p>Attaches a key-value pair to a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.</p> <p>There is no response returned from this call.</p>",
"TestConnection": "<p>Tests whether your SFTP connector is set up successfully. We highly recommend that you call this operation to test your ability to transfer files between local Amazon Web Services storage and a trading partner's SFTP server.</p>",
"TestIdentityProvider": "<p>If the <code>IdentityProviderType</code> of a file transfer protocol-enabled server is <code>AWS_DIRECTORY_SERVICE</code> or <code>API_GATEWAY</code>, tests whether your identity provider is set up successfully. We highly recommend that you call this operation to test your authentication method as soon as you create your server. By doing so, you can troubleshoot issues with the identity provider integration to ensure that your users can successfully use the service.</p> <p>The <code>ServerId</code> and <code>UserName</code> parameters are required. The <code>ServerProtocol</code>, <code>SourceIp</code>, and <code>UserPassword</code> are all optional.</p> <p>Note the following:</p> <ul> <li> <p>You cannot use <code>TestIdentityProvider</code> if the <code>IdentityProviderType</code> of your server is <code>SERVICE_MANAGED</code>.</p> </li> <li> <p> <code>TestIdentityProvider</code> does not work with keys: it only accepts passwords.</p> </li> <li> <p> <code>TestIdentityProvider</code> can test the password operation for a custom identity provider that handles keys and passwords.</p> </li> <li> <p>If you provide any incorrect values for any parameters, the <code>Response</code> field is empty.</p> </li> <li> <p>If you provide a server ID for a server that uses service-managed users, you get an error:</p> <p> <code>An error occurred (InvalidRequestException) when calling the TestIdentityProvider operation: s-<i>server-ID</i> not configured for external auth</code> </p> </li> <li> <p>If you enter a server ID for the <code>--server-id</code> parameter that does not identify an actual Transfer server, you receive the following error:</p> <p> <code>An error occurred (ResourceNotFoundException) when calling the TestIdentityProvider operation: Unknown server</code>.</p> <p>It is possible your server is in a different Region. You can specify a Region by adding the following: <code>--region region-code</code>, such as <code>--region us-east-2</code> to specify a server in <b>US East (Ohio)</b>.</p> </li> </ul>",
"UntagResource": "<p>Detaches a key-value pair from a resource, as identified by its Amazon Resource Name (ARN). Resources are users, servers, roles, and other entities.</p> <p>No response is returned from this call.</p>",
"UpdateAccess": "<p>Allows you to update parameters for the access specified in the <code>ServerId</code> and <code>ExternalId</code> parameters.</p>",
"UpdateAgreement": "<p>Updates some of the parameters for an existing agreement. Provide the <code>AgreementId</code> and the <code>ServerId</code> for the agreement that you want to update, along with the new values for the parameters to update.</p>",
"UpdateCertificate": "<p>Updates the active and inactive dates for a certificate.</p>",
"UpdateConnector": "<p>Updates some of the parameters for an existing connector. Provide the <code>ConnectorId</code> for the connector that you want to update, along with the new values for the parameters to update.</p>",
"UpdateHostKey": "<p>Updates the description for the host key that's specified by the <code>ServerId</code> and <code>HostKeyId</code> parameters.</p>",
"UpdateProfile": "<p>Updates some of the parameters for an existing profile. Provide the <code>ProfileId</code> for the profile that you want to update, along with the new values for the parameters to update.</p>",
"UpdateServer": "<p>Updates the file transfer protocol-enabled server's properties after that server has been created.</p> <p>The <code>UpdateServer</code> call returns the <code>ServerId</code> of the server you updated.</p>",
"UpdateUser": "<p>Assigns new properties to a user. Parameters you pass modify any or all of the following: the home directory, role, and policy for the <code>UserName</code> and <code>ServerId</code> you specify.</p> <p>The response returns the <code>ServerId</code> and the <code>UserName</code> for the updated user.</p> <p>In the console, you can select <i>Restricted</i> when you create or update a user. This ensures that the user can't access anything outside of their home directory. The programmatic way to configure this behavior is to update the user. Set their <code>HomeDirectoryType</code> to <code>LOGICAL</code>, and specify <code>HomeDirectoryMappings</code> with <code>Entry</code> as root (<code>/</code>) and <code>Target</code> as their home directory.</p> <p>For example, if the user's home directory is <code>/test/admin-user</code>, the following command updates the user so that their configuration in the console shows the <i>Restricted</i> flag as selected.</p> <p> <code> aws transfer update-user --server-id <server-id> --user-name admin-user --home-directory-type LOGICAL --home-directory-mappings \"[{\\\"Entry\\\":\\\"/\\\", \\\"Target\\\":\\\"/test/admin-user\\\"}]\"</code> </p>"
},
"shapes": {
"AccessDeniedException": {
"base": "<p>You do not have sufficient access to perform this action.</p>",
"refs": {
}
},
"AddressAllocationId": {
"base": null,
"refs": {
"AddressAllocationIds$member": null
}
},
"AddressAllocationIds": {
"base": null,
"refs": {
"EndpointDetails$AddressAllocationIds": "<p>A list of address allocation IDs that are required to attach an Elastic IP address to your server's endpoint.</p> <p>An address allocation ID corresponds to the allocation ID of an Elastic IP address. This value can be retrieved from the <code>allocationId</code> field from the Amazon EC2 <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Address.html\">Address</a> data type. One way to retrieve this value is by calling the EC2 <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAddresses.html\">DescribeAddresses</a> API.</p> <p>This parameter is optional. Set this parameter if you want to make your VPC endpoint public-facing. For details, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#create-internet-facing-endpoint\">Create an internet-facing endpoint for your server</a>.</p> <note> <p>This property can only be set as follows:</p> <ul> <li> <p> <code>EndpointType</code> must be set to <code>VPC</code> </p> </li> <li> <p>The Transfer Family server must be offline.</p> </li> <li> <p>You cannot set this parameter for Transfer Family servers that use the FTP protocol.</p> </li> <li> <p>The server must already have <code>SubnetIds</code> populated (<code>SubnetIds</code> and <code>AddressAllocationIds</code> cannot be updated simultaneously).</p> </li> <li> <p> <code>AddressAllocationIds</code> can't contain duplicates, and must be equal in length to <code>SubnetIds</code>. For example, if you have three subnet IDs, you must also specify three address allocation IDs.</p> </li> <li> <p>Call the <code>UpdateServer</code> API to set or change this parameter.</p> </li> </ul> </note>"
}
},
"AgreementId": {
"base": null,
"refs": {
"CreateAgreementResponse$AgreementId": "<p>The unique identifier for the agreement. Use this ID for deleting or updating an agreement, as well as in any other API calls that require that you specify the agreement ID.</p>",
"DeleteAgreementRequest$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>",
"DescribeAgreementRequest$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>",
"DescribedAgreement$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>",
"ListedAgreement$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>",
"UpdateAgreementRequest$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>",
"UpdateAgreementResponse$AgreementId": "<p>A unique identifier for the agreement. This identifier is returned when you create an agreement.</p>"
}
},
"AgreementStatusType": {
"base": null,
"refs": {
"CreateAgreementRequest$Status": "<p>The status of the agreement. The agreement can be either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>",
"DescribedAgreement$Status": "<p>The current status of the agreement, either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>",
"ListedAgreement$Status": "<p>The current status of the agreement, either <code>ACTIVE</code> or <code>INACTIVE</code>.</p>",
"UpdateAgreementRequest$Status": "<p>You can update the status for the agreement, either activating an inactive agreement or the reverse.</p>"
}
},
"Arn": {
"base": null,
"refs": {
"DescribedAgreement$Arn": "<p>The unique Amazon Resource Name (ARN) for the agreement.</p>",
"DescribedCertificate$Arn": "<p>The unique Amazon Resource Name (ARN) for the certificate.</p>",
"DescribedConnector$Arn": "<p>The unique Amazon Resource Name (ARN) for the connector.</p>",
"DescribedHostKey$Arn": "<p>The unique Amazon Resource Name (ARN) for the host key.</p>",
"DescribedProfile$Arn": "<p>The unique Amazon Resource Name (ARN) for the profile.</p>",
"DescribedServer$Arn": "<p>Specifies the unique Amazon Resource Name (ARN) of the server.</p>",
"DescribedUser$Arn": "<p>Specifies the unique Amazon Resource Name (ARN) for the user that was requested to be described.</p>",
"DescribedWorkflow$Arn": "<p>Specifies the unique Amazon Resource Name (ARN) for the workflow.</p>",
"ListTagsForResourceRequest$Arn": "<p>Requests the tags associated with a particular Amazon Resource Name (ARN). An ARN is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.</p>",
"ListTagsForResourceResponse$Arn": "<p>The ARN you specified to list the tags of.</p>",
"ListedAgreement$Arn": "<p>The Amazon Resource Name (ARN) of the specified agreement.</p>",
"ListedCertificate$Arn": "<p>The Amazon Resource Name (ARN) of the specified certificate.</p>",
"ListedConnector$Arn": "<p>The Amazon Resource Name (ARN) of the specified connector.</p>",
"ListedHostKey$Arn": "<p>The unique Amazon Resource Name (ARN) of the host key.</p>",
"ListedProfile$Arn": "<p>The Amazon Resource Name (ARN) of the specified profile.</p>",
"ListedServer$Arn": "<p>Specifies the unique Amazon Resource Name (ARN) for a server to be listed.</p>",
"ListedUser$Arn": "<p>Provides the unique Amazon Resource Name (ARN) for the user that you want to learn about.</p>",
"ListedWorkflow$Arn": "<p>Specifies the unique Amazon Resource Name (ARN) for the workflow.</p>",
"StructuredLogDestinations$member": null,
"TagResourceRequest$Arn": "<p>An Amazon Resource Name (ARN) for a specific Amazon Web Services resource, such as a server, user, or role.</p>",
"UntagResourceRequest$Arn": "<p>The value of the resource that will have the tag removed. An Amazon Resource Name (ARN) is an identifier for a specific Amazon Web Services resource, such as a server, user, or role.</p>"
}
},
"As2ConnectorConfig": {
"base": "<p>Contains the details for an AS2 connector object. The connector object is used for AS2 outbound processes, to connect the Transfer Family customer with the trading partner.</p>",
"refs": {
"CreateConnectorRequest$As2Config": "<p>A structure that contains the parameters for an AS2 connector object.</p>",
"DescribedConnector$As2Config": "<p>A structure that contains the parameters for an AS2 connector object.</p>",
"UpdateConnectorRequest$As2Config": "<p>A structure that contains the parameters for an AS2 connector object.</p>"
}
},
"As2ConnectorSecretId": {
"base": null,
"refs": {
"As2ConnectorConfig$BasicAuthSecretId": "<p>Provides Basic authentication support to the AS2 Connectors API. To use Basic authentication, you must provide the name or Amazon Resource Name (ARN) of a secret in Secrets Manager.</p> <p>The default value for this parameter is <code>null</code>, which indicates that Basic authentication is not enabled for the connector.</p> <p>If the connector should use Basic authentication, the secret needs to be in the following format:</p> <p> <code>{ \"Username\": \"user-name\", \"Password\": \"user-password\" }</code> </p> <p>Replace <code>user-name</code> and <code>user-password</code> with the credentials for the actual user that is being authenticated.</p> <p>Note the following:</p> <ul> <li> <p>You are storing these credentials in Secrets Manager, <i>not passing them directly</i> into this API.</p> </li> <li> <p>If you are using the API, SDKs, or CloudFormation to configure your connector, then you must create the secret before you can enable Basic authentication. However, if you are using the Amazon Web Services management console, you can have the system create the secret for you.</p> </li> </ul> <p>If you have previously enabled Basic authentication for a connector, you can disable it by using the <code>UpdateConnector</code> API call. For example, if you are using the CLI, you can run the following command to remove Basic authentication:</p> <p> <code>update-connector --connector-id my-connector-id --as2-config 'BasicAuthSecretId=\"\"'</code> </p>"
}
},
"As2Id": {
"base": null,
"refs": {
"CreateProfileRequest$As2Id": "<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>",
"DescribedProfile$As2Id": "<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>",
"ListedProfile$As2Id": "<p>The <code>As2Id</code> is the <i>AS2-name</i>, as defined in <a href=\"https://datatracker.ietf.org/doc/html/rfc4130\">RFC 4130</a>. For inbound transfers, this is the <code>AS2-From</code> header for the AS2 messages sent from the partner. For outbound connectors, this is the <code>AS2-To</code> header for the AS2 messages sent to the partner using the <code>StartFileTransfer</code> API operation. This ID cannot include spaces.</p>"
}
},
"As2Transport": {
"base": null,
"refs": {
"As2Transports$member": null
}
},
"As2Transports": {
"base": null,
"refs": {
"ProtocolDetails$As2Transports": "<p>Indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p>"
}
},
"CallbackToken": {
"base": null,
"refs": {
"SendWorkflowStepStateRequest$Token": "<p>Used to distinguish between multiple callbacks for multiple Lambda steps within the same execution.</p>"
}
},
"CertDate": {
"base": null,
"refs": {
"DescribedCertificate$ActiveDate": "<p>An optional date that specifies when the certificate becomes active.</p>",
"DescribedCertificate$InactiveDate": "<p>An optional date that specifies when the certificate becomes inactive.</p>",
"DescribedCertificate$NotBeforeDate": "<p>The earliest date that the certificate is valid.</p>",
"DescribedCertificate$NotAfterDate": "<p>The final date that the certificate is valid.</p>",
"ImportCertificateRequest$ActiveDate": "<p>An optional date that specifies when the certificate becomes active.</p>",
"ImportCertificateRequest$InactiveDate": "<p>An optional date that specifies when the certificate becomes inactive.</p>",
"ListedCertificate$ActiveDate": "<p>An optional date that specifies when the certificate becomes active.</p>",
"ListedCertificate$InactiveDate": "<p>An optional date that specifies when the certificate becomes inactive.</p>",
"UpdateCertificateRequest$ActiveDate": "<p>An optional date that specifies when the certificate becomes active.</p>",
"UpdateCertificateRequest$InactiveDate": "<p>An optional date that specifies when the certificate becomes inactive.</p>"
}
},
"CertSerial": {
"base": null,
"refs": {
"DescribedCertificate$Serial": "<p>The serial number for the certificate.</p>"
}
},
"Certificate": {
"base": null,
"refs": {
"CreateServerRequest$Certificate": "<p>The Amazon Resource Name (ARN) of the Certificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p> <p>To request a new public certificate, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html\">Request a public certificate</a> in the <i>Certificate Manager User Guide</i>.</p> <p>To import an existing certificate into ACM, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html\">Importing certificates into ACM</a> in the <i>Certificate Manager User Guide</i>.</p> <p>To request a private certificate to use FTPS through private IP addresses, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-private.html\">Request a private certificate</a> in the <i>Certificate Manager User Guide</i>.</p> <p>Certificates with the following cryptographic algorithms and key sizes are supported:</p> <ul> <li> <p>2048-bit RSA (RSA_2048)</p> </li> <li> <p>4096-bit RSA (RSA_4096)</p> </li> <li> <p>Elliptic Prime Curve 256 bit (EC_prime256v1)</p> </li> <li> <p>Elliptic Prime Curve 384 bit (EC_secp384r1)</p> </li> <li> <p>Elliptic Prime Curve 521 bit (EC_secp521r1)</p> </li> </ul> <note> <p>The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.</p> </note>",
"DescribedServer$Certificate": "<p>Specifies the ARN of the Amazon Web Services Certificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p>",
"UpdateServerRequest$Certificate": "<p>The Amazon Resource Name (ARN) of the Amazon Web Services Certificate Manager (ACM) certificate. Required when <code>Protocols</code> is set to <code>FTPS</code>.</p> <p>To request a new public certificate, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html\">Request a public certificate</a> in the <i>Amazon Web Services Certificate Manager User Guide</i>.</p> <p>To import an existing certificate into ACM, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html\">Importing certificates into ACM</a> in the <i>Amazon Web Services Certificate Manager User Guide</i>.</p> <p>To request a private certificate to use FTPS through private IP addresses, see <a href=\"https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-private.html\">Request a private certificate</a> in the <i>Amazon Web Services Certificate Manager User Guide</i>.</p> <p>Certificates with the following cryptographic algorithms and key sizes are supported:</p> <ul> <li> <p>2048-bit RSA (RSA_2048)</p> </li> <li> <p>4096-bit RSA (RSA_4096)</p> </li> <li> <p>Elliptic Prime Curve 256 bit (EC_prime256v1)</p> </li> <li> <p>Elliptic Prime Curve 384 bit (EC_secp384r1)</p> </li> <li> <p>Elliptic Prime Curve 521 bit (EC_secp521r1)</p> </li> </ul> <note> <p>The certificate must be a valid SSL/TLS X.509 version 3 certificate with FQDN or IP address specified and information about the issuer.</p> </note>"
}
},
"CertificateBodyType": {
"base": null,
"refs": {
"DescribedCertificate$Certificate": "<p>The file name for the certificate.</p>",
"ImportCertificateRequest$Certificate": "<ul> <li> <p>For the CLI, provide a file path for a certificate in URI format. For example, <code>--certificate file://encryption-cert.pem</code>. Alternatively, you can provide the raw content.</p> </li> <li> <p>For the SDK, specify the raw content of a certificate file. For example, <code>--certificate \"`cat encryption-cert.pem`\"</code>.</p> </li> </ul>"
}
},
"CertificateChainType": {
"base": null,
"refs": {
"DescribedCertificate$CertificateChain": "<p>The list of certificates that make up the chain for the certificate.</p>",
"ImportCertificateRequest$CertificateChain": "<p>An optional list of certificates that make up the chain for the certificate that's being imported.</p>"
}
},
"CertificateId": {
"base": null,
"refs": {
"CertificateIds$member": null,
"DeleteCertificateRequest$CertificateId": "<p>The identifier of the certificate object that you are deleting.</p>",
"DescribeCertificateRequest$CertificateId": "<p>The identifier for the imported certificate. You use this identifier for working with profiles and partner profiles.</p>",
"DescribedCertificate$CertificateId": "<p>The identifier for the imported certificate. You use this identifier for working with profiles and partner profiles.</p>",
"ImportCertificateResponse$CertificateId": "<p>The identifier for the imported certificate. You use this identifier for working with profiles and partner profiles.</p>",
"ListedCertificate$CertificateId": "<p>The identifier for the imported certificate. You use this identifier for working with profiles and partner profiles.</p>",
"UpdateCertificateRequest$CertificateId": "<p>The identifier of the certificate object that you are updating.</p>",
"UpdateCertificateResponse$CertificateId": "<p>Returns the identifier of the certificate object that you are updating.</p>"
}
},
"CertificateIds": {
"base": null,
"refs": {
"CreateProfileRequest$CertificateIds": "<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>",
"DescribedProfile$CertificateIds": "<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>",
"UpdateProfileRequest$CertificateIds": "<p>An array of identifiers for the imported certificates. You use this identifier for working with profiles and partner profiles.</p>"
}
},
"CertificateStatusType": {
"base": null,
"refs": {
"DescribedCertificate$Status": "<p>The certificate status can be <code>ACTIVE</code>, <code>PENDING_ROTATION</code>, or <code>INACTIVE</code>. <code>PENDING_ROTATION</code> means that this certificate will replace the current certificate when it expires.</p>",
"ListedCertificate$Status": "<p>The certificate status can be <code>ACTIVE</code>, <code>PENDING_ROTATION</code>, or <code>INACTIVE</code>. <code>PENDING_ROTATION</code> means that this certificate will replace the current certificate when it expires.</p>"
}
},
"CertificateType": {
"base": null,
"refs": {
"DescribedCertificate$Type": "<p>If a private key has been specified for the certificate, its type is <code>CERTIFICATE_WITH_PRIVATE_KEY</code>. If there is no private key, the type is <code>CERTIFICATE</code>.</p>",
"ListedCertificate$Type": "<p>The type for the certificate. If a private key has been specified for the certificate, its type is <code>CERTIFICATE_WITH_PRIVATE_KEY</code>. If there is no private key, the type is <code>CERTIFICATE</code>.</p>"
}
},
"CertificateUsageType": {
"base": null,
"refs": {
"DescribedCertificate$Usage": "<p>Specifies how this certificate is used. It can be used in the following ways:</p> <ul> <li> <p> <code>SIGNING</code>: For signing AS2 messages</p> </li> <li> <p> <code>ENCRYPTION</code>: For encrypting AS2 messages</p> </li> <li> <p> <code>TLS</code>: For securing AS2 communications sent over HTTPS</p> </li> </ul>",
"ImportCertificateRequest$Usage": "<p>Specifies how this certificate is used. It can be used in the following ways:</p> <ul> <li> <p> <code>SIGNING</code>: For signing AS2 messages</p> </li> <li> <p> <code>ENCRYPTION</code>: For encrypting AS2 messages</p> </li> <li> <p> <code>TLS</code>: For securing AS2 communications sent over HTTPS</p> </li> </ul>",
"ListedCertificate$Usage": "<p>Specifies how this certificate is used. It can be used in the following ways:</p> <ul> <li> <p> <code>SIGNING</code>: For signing AS2 messages</p> </li> <li> <p> <code>ENCRYPTION</code>: For encrypting AS2 messages</p> </li> <li> <p> <code>TLS</code>: For securing AS2 communications sent over HTTPS</p> </li> </ul>"
}
},
"CompressionEnum": {
"base": null,
"refs": {
"As2ConnectorConfig$Compression": "<p>Specifies whether the AS2 file is compressed.</p>"
}
},
"ConflictException": {
"base": "<p>This exception is thrown when the <code>UpdateServer</code> is called for a file transfer protocol-enabled server that has VPC as the endpoint type and the server's <code>VpcEndpointID</code> is not in the available state.</p>",
"refs": {
}
},
"ConnectorId": {
"base": null,
"refs": {
"CreateConnectorResponse$ConnectorId": "<p>The unique identifier for the connector, returned after the API call succeeds.</p>",
"DeleteConnectorRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"DescribeConnectorRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"DescribedConnector$ConnectorId": "<p>The unique identifier for the connector.</p>",
"ListedConnector$ConnectorId": "<p>The unique identifier for the connector.</p>",
"StartDirectoryListingRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"StartFileTransferRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"TestConnectionRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"TestConnectionResponse$ConnectorId": "<p>Returns the identifier of the connector object that you are testing.</p>",
"UpdateConnectorRequest$ConnectorId": "<p>The unique identifier for the connector.</p>",
"UpdateConnectorResponse$ConnectorId": "<p>Returns the identifier of the connector object that you are updating.</p>"
}
},
"ConnectorSecurityPolicyName": {
"base": null,
"refs": {
"CreateConnectorRequest$SecurityPolicyName": "<p>Specifies the name of the security policy for the connector.</p>",
"DescribedConnector$SecurityPolicyName": "<p>The text name of the security policy for the specified connector.</p>",
"UpdateConnectorRequest$SecurityPolicyName": "<p>Specifies the name of the security policy for the connector.</p>"
}
},
"CopyStepDetails": {
"base": "<p>Each step type has its own <code>StepDetails</code> structure.</p>",
"refs": {
"WorkflowStep$CopyStepDetails": "<p>Details for a step that performs a file copy.</p> <p> Consists of the following values: </p> <ul> <li> <p>A description</p> </li> <li> <p>An Amazon S3 location for the destination of the file copy.</p> </li> <li> <p>A flag that indicates whether to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p> </li> </ul>"
}
},
"CreateAccessRequest": {
"base": null,
"refs": {
}
},
"CreateAccessResponse": {
"base": null,
"refs": {
}
},
"CreateAgreementRequest": {
"base": null,
"refs": {
}
},
"CreateAgreementResponse": {
"base": null,
"refs": {
}
},
"CreateConnectorRequest": {
"base": null,
"refs": {
}
},
"CreateConnectorResponse": {
"base": null,
"refs": {
}
},
"CreateProfileRequest": {
"base": null,
"refs": {
}
},
"CreateProfileResponse": {
"base": null,
"refs": {
}
},
"CreateServerRequest": {
"base": null,
"refs": {
}
},
"CreateServerResponse": {
"base": null,
"refs": {
}
},
"CreateUserRequest": {
"base": null,
"refs": {
}
},
"CreateUserResponse": {
"base": null,
"refs": {
}
},
"CreateWorkflowRequest": {
"base": null,
"refs": {
}
},
"CreateWorkflowResponse": {
"base": null,
"refs": {
}
},
"CustomStepDetails": {
"base": "<p>Each step type has its own <code>StepDetails</code> structure.</p>",
"refs": {
"WorkflowStep$CustomStepDetails": "<p>Details for a step that invokes a Lambda function.</p> <p>Consists of the Lambda function's name, target, and timeout (in seconds).</p>"
}
},
"CustomStepStatus": {
"base": null,
"refs": {
"SendWorkflowStepStateRequest$Status": "<p>Indicates whether the specified step succeeded or failed.</p>"
}
},
"CustomStepTarget": {
"base": null,
"refs": {
"CustomStepDetails$Target": "<p>The ARN for the Lambda function that is being called.</p>"
}
},
"CustomStepTimeoutSeconds": {
"base": null,
"refs": {
"CustomStepDetails$TimeoutSeconds": "<p>Timeout, in seconds, for the step.</p>"
}
},
"DateImported": {
"base": null,
"refs": {
"DescribedHostKey$DateImported": "<p>The date on which the host key was added to the server.</p>",
"ListedHostKey$DateImported": "<p>The date on which the host key was added to the server.</p>",
"SshPublicKey$DateImported": "<p>Specifies the date that the public key was added to the Transfer Family user.</p>"
}
},
"DecryptStepDetails": {
"base": "<p>Each step type has its own <code>StepDetails</code> structure.</p>",
"refs": {
"WorkflowStep$DecryptStepDetails": "<p>Details for a step that decrypts an encrypted file.</p> <p>Consists of the following values:</p> <ul> <li> <p>A descriptive name</p> </li> <li> <p>An Amazon S3 or Amazon Elastic File System (Amazon EFS) location for the source file to decrypt.</p> </li> <li> <p>An S3 or Amazon EFS location for the destination of the file decryption.</p> </li> <li> <p>A flag that indicates whether to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p> </li> <li> <p>The type of encryption that's used. Currently, only PGP encryption is supported.</p> </li> </ul>"
}
},
"DeleteAccessRequest": {
"base": null,
"refs": {
}
},
"DeleteAgreementRequest": {
"base": null,
"refs": {
}
},
"DeleteCertificateRequest": {
"base": null,
"refs": {
}
},
"DeleteConnectorRequest": {
"base": null,
"refs": {
}
},
"DeleteHostKeyRequest": {
"base": null,
"refs": {
}
},
"DeleteProfileRequest": {
"base": null,
"refs": {
}
},
"DeleteServerRequest": {
"base": null,
"refs": {
}
},
"DeleteSshPublicKeyRequest": {
"base": null,
"refs": {
}
},
"DeleteStepDetails": {
"base": "<p>The name of the step, used to identify the delete step.</p>",
"refs": {
"WorkflowStep$DeleteStepDetails": "<p>Details for a step that deletes the file.</p>"
}
},
"DeleteUserRequest": {
"base": null,
"refs": {
}
},
"DeleteWorkflowRequest": {
"base": null,
"refs": {
}
},
"DescribeAccessRequest": {
"base": null,
"refs": {
}
},
"DescribeAccessResponse": {
"base": null,
"refs": {
}
},
"DescribeAgreementRequest": {
"base": null,
"refs": {
}
},
"DescribeAgreementResponse": {
"base": null,
"refs": {
}
},
"DescribeCertificateRequest": {
"base": null,
"refs": {
}
},
"DescribeCertificateResponse": {
"base": null,
"refs": {
}
},
"DescribeConnectorRequest": {
"base": null,
"refs": {
}
},
"DescribeConnectorResponse": {
"base": null,
"refs": {
}
},
"DescribeExecutionRequest": {
"base": null,
"refs": {
}
},
"DescribeExecutionResponse": {
"base": null,
"refs": {
}
},
"DescribeHostKeyRequest": {
"base": null,
"refs": {
}
},
"DescribeHostKeyResponse": {
"base": null,
"refs": {
}
},
"DescribeProfileRequest": {
"base": null,
"refs": {
}
},
"DescribeProfileResponse": {
"base": null,
"refs": {
}
},
"DescribeSecurityPolicyRequest": {
"base": null,
"refs": {
}
},
"DescribeSecurityPolicyResponse": {
"base": null,
"refs": {
}
},
"DescribeServerRequest": {
"base": null,
"refs": {
}
},
"DescribeServerResponse": {
"base": null,
"refs": {
}
},
"DescribeUserRequest": {
"base": null,
"refs": {
}
},
"DescribeUserResponse": {
"base": null,
"refs": {
}
},
"DescribeWorkflowRequest": {
"base": null,
"refs": {
}
},
"DescribeWorkflowResponse": {
"base": null,
"refs": {
}
},
"DescribedAccess": {
"base": "<p>Describes the properties of the access that was specified.</p>",
"refs": {
"DescribeAccessResponse$Access": "<p>The external identifier of the server that the access is attached to.</p>"
}
},
"DescribedAgreement": {
"base": "<p>Describes the properties of an agreement.</p>",
"refs": {
"DescribeAgreementResponse$Agreement": "<p>The details for the specified agreement, returned as a <code>DescribedAgreement</code> object.</p>"
}
},
"DescribedCertificate": {
"base": "<p>Describes the properties of a certificate.</p>",
"refs": {
"DescribeCertificateResponse$Certificate": "<p>The details for the specified certificate, returned as an object.</p>"
}
},
"DescribedConnector": {
"base": "<p>Describes the parameters for the connector, as identified by the <code>ConnectorId</code>.</p>",
"refs": {
"DescribeConnectorResponse$Connector": "<p>The structure that contains the details of the connector.</p>"
}
},
"DescribedExecution": {
"base": "<p>The details for an execution object.</p>",
"refs": {
"DescribeExecutionResponse$Execution": "<p>The structure that contains the details of the workflow's execution.</p>"
}
},
"DescribedHostKey": {
"base": "<p>The details for a server host key.</p>",
"refs": {
"DescribeHostKeyResponse$HostKey": "<p>Returns the details for the specified host key.</p>"
}
},
"DescribedProfile": {
"base": "<p>The details for a local or partner AS2 profile. </p>",
"refs": {
"DescribeProfileResponse$Profile": "<p>The details of the specified profile, returned as an object.</p>"
}
},
"DescribedSecurityPolicy": {
"base": "<p>Describes the properties of a security policy that you specify. For more information about security policies, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies.html\">Working with security policies for servers</a> or <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/security-policies-connectors.html\">Working with security policies for SFTP connectors</a>.</p>",
"refs": {
"DescribeSecurityPolicyResponse$SecurityPolicy": "<p>An array containing the properties of the security policy.</p>"
}
},
"DescribedServer": {
"base": "<p>Describes the properties of a file transfer protocol-enabled server that was specified.</p>",
"refs": {
"DescribeServerResponse$Server": "<p>An array containing the properties of a server with the <code>ServerID</code> you specified.</p>"
}
},
"DescribedUser": {
"base": "<p>Describes the properties of a user that was specified.</p>",
"refs": {
"DescribeUserResponse$User": "<p>An array containing the properties of the Transfer Family user for the <code>ServerID</code> value that you specified.</p>"
}
},
"DescribedWorkflow": {
"base": "<p>Describes the properties of the specified workflow.</p>",
"refs": {
"DescribeWorkflowResponse$Workflow": "<p>The structure that contains the details of the workflow.</p>"
}
},
"Description": {
"base": null,
"refs": {
"CreateAgreementRequest$Description": "<p>A name or short description to identify the agreement. </p>",
"DescribedAgreement$Description": "<p>The name or short description that's used to identify the agreement.</p>",
"DescribedCertificate$Description": "<p>The name or description that's used to identify the certificate.</p>",
"ImportCertificateRequest$Description": "<p>A short description that helps identify the certificate. </p>",
"ListedAgreement$Description": "<p>The current description for the agreement. You can change it by calling the <code>UpdateAgreement</code> operation and providing a new description. </p>",
"ListedCertificate$Description": "<p>The name or short description that's used to identify the certificate.</p>",
"UpdateAgreementRequest$Description": "<p>To replace the existing description, provide a short description for the agreement. </p>",
"UpdateCertificateRequest$Description": "<p>A short description to help identify the certificate.</p>"
}
},
"DirectoryId": {
"base": null,
"refs": {
"IdentityProviderDetails$DirectoryId": "<p>The identifier of the Directory Service directory that you want to use as your identity provider.</p>"
}
},
"DirectoryListingOptimization": {
"base": "<p>Indicates whether optimization to directory listing on S3 servers is used. Disabled by default for compatibility.</p>",
"refs": {
"S3StorageOptions$DirectoryListingOptimization": "<p>Specifies whether or not performance for your Amazon S3 directories is optimized. This is disabled by default.</p> <p>By default, home directory mappings have a <code>TYPE</code> of <code>DIRECTORY</code>. If you enable this option, you would then need to explicitly set the <code>HomeDirectoryMapEntry</code> <code>Type</code> to <code>FILE</code> if you want a mapping to have a file target.</p>"
}
},
"Domain": {
"base": null,
"refs": {
"CreateServerRequest$Domain": "<p>The domain of the storage system that is used for file transfers. There are two domains available: Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS). The default value is S3.</p> <note> <p>After the server is created, the domain cannot be changed.</p> </note>",
"DescribedServer$Domain": "<p>Specifies the domain of the storage system that is used for file transfers. There are two domains available: Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS). The default value is S3.</p>",
"ListedServer$Domain": "<p>Specifies the domain of the storage system that is used for file transfers. There are two domains available: Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS). The default value is S3.</p>"
}
},
"EfsFileLocation": {
"base": "<p>Specifies the details for the file location for the file that's being used in the workflow. Only applicable if you are using Amazon Elastic File System (Amazon EFS) for storage.</p>",
"refs": {
"FileLocation$EfsFileLocation": "<p>Specifies the Amazon EFS identifier and the path for the file being used.</p>",
"InputFileLocation$EfsFileLocation": "<p>Specifies the details for the Amazon Elastic File System (Amazon EFS) file that's being decrypted.</p>"
}
},
"EfsFileSystemId": {
"base": null,
"refs": {
"EfsFileLocation$FileSystemId": "<p>The identifier of the file system, assigned by Amazon EFS.</p>"
}
},
"EfsPath": {
"base": null,
"refs": {
"EfsFileLocation$Path": "<p>The pathname for the folder being used by a workflow.</p>"
}
},
"EncryptionAlg": {
"base": null,
"refs": {
"As2ConnectorConfig$EncryptionAlgorithm": "<p>The algorithm that is used to encrypt the file.</p> <p>Note the following:</p> <ul> <li> <p>Do not use the <code>DES_EDE3_CBC</code> algorithm unless you must support a legacy client that requires it, as it is a weak encryption algorithm.</p> </li> <li> <p>You can only specify <code>NONE</code> if the URL for your connector uses HTTPS. Using HTTPS ensures that no traffic is sent in clear text.</p> </li> </ul>"
}
},
"EncryptionType": {
"base": null,
"refs": {
"DecryptStepDetails$Type": "<p>The type of encryption used. Currently, this value must be <code>PGP</code>.</p>"
}
},
"EndpointDetails": {
"base": "<p>The virtual private cloud (VPC) endpoint settings that are configured for your file transfer protocol-enabled server. With a VPC endpoint, you can restrict access to your server and resources only within your VPC. To control incoming internet traffic, invoke the <code>UpdateServer</code> API and attach an Elastic IP address to your server's endpoint.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> </note>",
"refs": {
"CreateServerRequest$EndpointDetails": "<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>",
"DescribedServer$EndpointDetails": "<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>",
"UpdateServerRequest$EndpointDetails": "<p>The virtual private cloud (VPC) endpoint settings that are configured for your server. When you host your endpoint within your VPC, you can make your endpoint accessible only to resources within your VPC, or you can attach Elastic IP addresses and make your endpoint accessible to clients over the internet. Your VPC's default security groups are automatically assigned to your endpoint.</p>"
}
},
"EndpointType": {
"base": null,
"refs": {
"CreateServerRequest$EndpointType": "<p>The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> <p>It is recommended that you use <code>VPC</code> as the <code>EndpointType</code>. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with <code>EndpointType</code> set to <code>VPC_ENDPOINT</code>.</p> </note>",
"DescribedServer$EndpointType": "<p>Defines the type of endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.</p>",
"ListedServer$EndpointType": "<p>Specifies the type of VPC endpoint that your server is connected to. If your server is connected to a VPC endpoint, your server isn't accessible over the public internet.</p>",
"UpdateServerRequest$EndpointType": "<p>The type of endpoint that you want your server to use. You can choose to make your server's endpoint publicly accessible (PUBLIC) or host it inside your VPC. With an endpoint that is hosted in a VPC, you can restrict access to your server and resources only within your VPC or choose to make it internet facing by attaching Elastic IP addresses directly to it.</p> <note> <p> After May 19, 2021, you won't be able to create a server using <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account if your account hasn't already done so before May 19, 2021. If you have already created servers with <code>EndpointType=VPC_ENDPOINT</code> in your Amazon Web Services account on or before May 19, 2021, you will not be affected. After this date, use <code>EndpointType</code>=<code>VPC</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> <p>It is recommended that you use <code>VPC</code> as the <code>EndpointType</code>. With this endpoint type, you have the option to directly associate up to three Elastic IPv4 addresses (BYO IP included) with your server's endpoint and use VPC security groups to restrict traffic by the client's public IP address. This is not possible with <code>EndpointType</code> set to <code>VPC_ENDPOINT</code>.</p> </note>"
}
},
"ExecutionError": {
"base": "<p>Specifies the error message and type, for an error that occurs during the execution of the workflow.</p>",
"refs": {
"ExecutionStepResult$Error": "<p>Specifies the details for an error, if it occurred during execution of the specified workflow step.</p>"
}
},
"ExecutionErrorMessage": {
"base": null,
"refs": {
"ExecutionError$Message": "<p>Specifies the descriptive message that corresponds to the <code>ErrorType</code>.</p>"
}
},
"ExecutionErrorType": {
"base": null,
"refs": {
"ExecutionError$Type": "<p>Specifies the error type.</p> <ul> <li> <p> <code>ALREADY_EXISTS</code>: occurs for a copy step, if the overwrite option is not selected and a file with the same name already exists in the target location.</p> </li> <li> <p> <code>BAD_REQUEST</code>: a general bad request: for example, a step that attempts to tag an EFS file returns <code>BAD_REQUEST</code>, as only S3 files can be tagged.</p> </li> <li> <p> <code>CUSTOM_STEP_FAILED</code>: occurs when the custom step provided a callback that indicates failure.</p> </li> <li> <p> <code>INTERNAL_SERVER_ERROR</code>: a catch-all error that can occur for a variety of reasons.</p> </li> <li> <p> <code>NOT_FOUND</code>: occurs when a requested entity, for example a source file for a copy step, does not exist.</p> </li> <li> <p> <code>PERMISSION_DENIED</code>: occurs if your policy does not contain the correct permissions to complete one or more of the steps in the workflow.</p> </li> <li> <p> <code>TIMEOUT</code>: occurs when the execution times out.</p> <note> <p> You can set the <code>TimeoutSeconds</code> for a custom step, anywhere from 1 second to 1800 seconds (30 minutes). </p> </note> </li> <li> <p> <code>THROTTLED</code>: occurs if you exceed the new execution refill rate of one workflow per second.</p> </li> </ul>"
}
},
"ExecutionId": {
"base": null,
"refs": {
"DescribeExecutionRequest$ExecutionId": "<p>A unique identifier for the execution of a workflow.</p>",
"DescribedExecution$ExecutionId": "<p>A unique identifier for the execution of a workflow.</p>",
"ListedExecution$ExecutionId": "<p>A unique identifier for the execution of a workflow.</p>",
"SendWorkflowStepStateRequest$ExecutionId": "<p>A unique identifier for the execution of a workflow.</p>"
}
},
"ExecutionResults": {
"base": "<p>Specifies the steps in the workflow, as well as the steps to execute in case of any errors during workflow execution.</p>",
"refs": {
"DescribedExecution$Results": "<p>A structure that describes the execution results. This includes a list of the steps along with the details of each step, error type and message (if any), and the <code>OnExceptionSteps</code> structure.</p>"
}
},
"ExecutionStatus": {
"base": null,
"refs": {
"DescribedExecution$Status": "<p>The status of the execution. The status can be in progress, completed, exception encountered, or handling the exception.</p>",
"ListedExecution$Status": "<p>The status of the execution. The status can be in progress, completed, exception encountered, or handling the exception.</p>"
}
},
"ExecutionStepResult": {
"base": "<p>Specifies the following details for the step: error (if any), outputs (if any), and the step type.</p>",
"refs": {
"ExecutionStepResults$member": null
}
},
"ExecutionStepResults": {
"base": null,
"refs": {
"ExecutionResults$Steps": "<p>Specifies the details for the steps that are in the specified workflow.</p>",
"ExecutionResults$OnExceptionSteps": "<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p>"
}
},
"ExternalId": {
"base": null,
"refs": {
"CreateAccessRequest$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"CreateAccessResponse$ExternalId": "<p>The external identifier of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family.</p>",
"DeleteAccessRequest$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"DescribeAccessRequest$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"DescribedAccess$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"ListedAccess$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"UpdateAccessRequest$ExternalId": "<p>A unique identifier that is required to identify specific groups within your directory. The users of the group that you associate have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Transfer Family. If you know the group name, you can view the SID values by running the following command using Windows PowerShell.</p> <p> <code>Get-ADGroup -Filter {samAccountName -like \"<i>YourGroupName</i>*\"} -Properties * | Select SamAccountName,ObjectSid</code> </p> <p>In that command, replace <i>YourGroupName</i> with the name of your Active Directory group.</p> <p>The regular expression used to validate this parameter is a string of characters consisting of uppercase and lowercase alphanumeric characters with no spaces. You can also include underscores or any of the following characters: =,.@:/-</p>",
"UpdateAccessResponse$ExternalId": "<p>The external identifier of the group whose users have access to your Amazon S3 or Amazon EFS resources over the enabled protocols using Amazon Web ServicesTransfer Family.</p>"
}
},
"FileLocation": {
"base": "<p>Specifies the Amazon S3 or EFS file details to be used in the step.</p>",
"refs": {
"DescribedExecution$InitialFileLocation": "<p>A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.</p>",
"ListedExecution$InitialFileLocation": "<p>A structure that describes the Amazon S3 or EFS file location. This is the file location when the execution begins: if the file is being copied, this is the initial (as opposed to destination) file location.</p>"
}
},
"FilePath": {
"base": null,
"refs": {
"FilePaths$member": null,
"StartDirectoryListingRequest$RemoteDirectoryPath": "<p>Specifies the directory on the remote SFTP server for which you want to list its contents.</p>",
"StartDirectoryListingRequest$OutputDirectoryPath": "<p>Specifies the path (bucket and prefix) in Amazon S3 storage to store the results of the directory listing.</p>",
"StartFileTransferRequest$LocalDirectoryPath": "<p>For an inbound transfer, the <code>LocaDirectoryPath</code> specifies the destination for one or more files that are transferred from the partner's SFTP server.</p>",
"StartFileTransferRequest$RemoteDirectoryPath": "<p>For an outbound transfer, the <code>RemoteDirectoryPath</code> specifies the destination for one or more files that are transferred to the partner's SFTP server. If you don't specify a <code>RemoteDirectoryPath</code>, the destination for transferred files is the SFTP user's home directory.</p>"
}
},
"FilePaths": {
"base": null,
"refs": {
"StartFileTransferRequest$SendFilePaths": "<p>One or more source paths for the Amazon S3 storage. Each string represents a source file path for one outbound file transfer. For example, <code> <i>DOC-EXAMPLE-BUCKET</i>/<i>myfile.txt</i> </code>.</p> <note> <p>Replace <code> <i>DOC-EXAMPLE-BUCKET</i> </code> with one of your actual buckets.</p> </note>",
"StartFileTransferRequest$RetrieveFilePaths": "<p>One or more source paths for the partner's SFTP server. Each string represents a source file path for one inbound file transfer.</p>"
}
},
"Fips": {
"base": null,
"refs": {
"DescribedSecurityPolicy$Fips": "<p>Specifies whether this policy enables Federal Information Processing Standards (FIPS). This parameter applies to both server and connector security policies.</p>"
}
},
"Function": {
"base": null,
"refs": {
"IdentityProviderDetails$Function": "<p>The ARN for a Lambda function to use for the Identity provider.</p>"
}
},
"HomeDirectory": {
"base": null,
"refs": {
"CreateAccessRequest$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"CreateAgreementRequest$BaseDirectory": "<p>The landing directory (folder) for files transferred by using the AS2 protocol.</p> <p>A <code>BaseDirectory</code> example is <code>/DOC-EXAMPLE-BUCKET/home/mydirectory</code>.</p>",
"CreateUserRequest$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"DescribedAccess$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"DescribedAgreement$BaseDirectory": "<p>The landing directory (folder) for files that are transferred by using the AS2 protocol.</p>",
"DescribedUser$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"ListedAccess$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"ListedUser$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"UpdateAccessRequest$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>",
"UpdateAgreementRequest$BaseDirectory": "<p>To change the landing directory (folder) for files that are transferred, provide the bucket folder that you want to use; for example, <code>/<i>DOC-EXAMPLE-BUCKET</i>/<i>home</i>/<i>mydirectory</i> </code>.</p>",
"UpdateUserRequest$HomeDirectory": "<p>The landing directory (folder) for a user when they log in to the server using the client.</p> <p>A <code>HomeDirectory</code> example is <code>/bucket_name/home/mydirectory</code>.</p> <note> <p>The <code>HomeDirectory</code> parameter is only used if <code>HomeDirectoryType</code> is set to <code>PATH</code>.</p> </note>"
}
},
"HomeDirectoryMapEntry": {
"base": "<p>Represents an object that contains entries and targets for <code>HomeDirectoryMappings</code>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>",
"refs": {
"HomeDirectoryMappings$member": null
}
},
"HomeDirectoryMappings": {
"base": null,
"refs": {
"CreateAccessRequest$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>",
"CreateUserRequest$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the value the user should see for their home directory when they log in.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>",
"DescribedAccess$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>In most cases, you can use this value instead of the session policy to lock down the associated access to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p>",
"DescribedUser$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>In most cases, you can use this value instead of the session policy to lock your user down to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the HomeDirectory parameter value.</p>",
"UpdateAccessRequest$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to <code>/</code> and set <code>Target</code> to the <code>HomeDirectory</code> parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>",
"UpdateUserRequest$HomeDirectoryMappings": "<p>Logical directory mappings that specify what Amazon S3 or Amazon EFS paths and keys should be visible to your user and how you want to make them visible. You must specify the <code>Entry</code> and <code>Target</code> pair, where <code>Entry</code> shows how the path is made visible and <code>Target</code> is the actual Amazon S3 or Amazon EFS path. If you only specify a target, it is displayed as is. You also must ensure that your Identity and Access Management (IAM) role provides access to paths in <code>Target</code>. This value can be set only when <code>HomeDirectoryType</code> is set to <i>LOGICAL</i>.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example.</p> <p> <code>[ { \"Entry\": \"/directory1\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p> <p>In most cases, you can use this value instead of the session policy to lock down your user to the designated home directory (\"<code>chroot</code>\"). To do this, you can set <code>Entry</code> to '/' and set <code>Target</code> to the HomeDirectory parameter value.</p> <p>The following is an <code>Entry</code> and <code>Target</code> pair example for <code>chroot</code>.</p> <p> <code>[ { \"Entry\": \"/\", \"Target\": \"/bucket_name/home/mydirectory\" } ]</code> </p>"
}
},
"HomeDirectoryType": {
"base": null,
"refs": {
"CreateAccessRequest$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"CreateUserRequest$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"DescribedAccess$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"DescribedUser$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"ListedAccess$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"ListedUser$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"UpdateAccessRequest$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>",
"UpdateUserRequest$HomeDirectoryType": "<p>The type of landing directory (folder) that you want your users' home directory to be when they log in to the server. If you set it to <code>PATH</code>, the user will see the absolute Amazon S3 bucket or Amazon EFS path as is in their file transfer protocol clients. If you set it to <code>LOGICAL</code>, you need to provide mappings in the <code>HomeDirectoryMappings</code> for how you want to make Amazon S3 or Amazon EFS paths visible to your users.</p> <note> <p>If <code>HomeDirectoryType</code> is <code>LOGICAL</code>, you must provide mappings, using the <code>HomeDirectoryMappings</code> parameter. If, on the other hand, <code>HomeDirectoryType</code> is <code>PATH</code>, you provide an absolute path using the <code>HomeDirectory</code> parameter. You cannot have both <code>HomeDirectory</code> and <code>HomeDirectoryMappings</code> in your template.</p> </note>"
}
},
"HostKey": {
"base": null,
"refs": {
"CreateServerRequest$HostKey": "<p>The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.</p> <p>Use the following command to generate an RSA 2048 bit key with no passphrase:</p> <p> <code>ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Use a minimum value of 2048 for the <code>-b</code> option. You can create a stronger key by using 3072 or 4096.</p> <p>Use the following command to generate an ECDSA 256 bit key with no passphrase:</p> <p> <code>ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Valid values for the <code>-b</code> option for ECDSA are 256, 384, and 521.</p> <p>Use the following command to generate an ED25519 key with no passphrase:</p> <p> <code>ssh-keygen -t ed25519 -N \"\" -f my-new-server-key</code>.</p> <p>For all of these commands, you can replace <i>my-new-server-key</i> with a string of your choice.</p> <important> <p>If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.</p> </important> <p>For more information, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html#configuring-servers-change-host-key\">Manage host keys for your SFTP-enabled server</a> in the <i>Transfer Family User Guide</i>.</p>",
"ImportHostKeyRequest$HostKeyBody": "<p>The private key portion of an SSH key pair.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>",
"UpdateServerRequest$HostKey": "<p>The RSA, ECDSA, or ED25519 private key to use for your SFTP-enabled server. You can add multiple host keys, in case you want to rotate keys, or have a set of active keys that use different algorithms.</p> <p>Use the following command to generate an RSA 2048 bit key with no passphrase:</p> <p> <code>ssh-keygen -t rsa -b 2048 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Use a minimum value of 2048 for the <code>-b</code> option. You can create a stronger key by using 3072 or 4096.</p> <p>Use the following command to generate an ECDSA 256 bit key with no passphrase:</p> <p> <code>ssh-keygen -t ecdsa -b 256 -N \"\" -m PEM -f my-new-server-key</code>.</p> <p>Valid values for the <code>-b</code> option for ECDSA are 256, 384, and 521.</p> <p>Use the following command to generate an ED25519 key with no passphrase:</p> <p> <code>ssh-keygen -t ed25519 -N \"\" -f my-new-server-key</code>.</p> <p>For all of these commands, you can replace <i>my-new-server-key</i> with a string of your choice.</p> <important> <p>If you aren't planning to migrate existing users from an existing SFTP-enabled server to a new server, don't update the host key. Accidentally changing a server's host key can be disruptive.</p> </important> <p>For more information, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/edit-server-config.html#configuring-servers-change-host-key\">Manage host keys for your SFTP-enabled server</a> in the <i>Transfer Family User Guide</i>.</p>"
}
},
"HostKeyDescription": {
"base": null,
"refs": {
"DescribedHostKey$Description": "<p>The text description for this host key.</p>",
"ImportHostKeyRequest$Description": "<p>The text description that identifies this host key.</p>",
"ListedHostKey$Description": "<p>The current description for the host key. You can change it by calling the <code>UpdateHostKey</code> operation and providing a new description.</p>",
"UpdateHostKeyRequest$Description": "<p>An updated description for the host key.</p>"
}
},
"HostKeyFingerprint": {
"base": null,
"refs": {
"DescribedHostKey$HostKeyFingerprint": "<p>The public key fingerprint, which is a short sequence of bytes used to identify the longer public key.</p>",
"DescribedServer$HostKeyFingerprint": "<p>Specifies the Base64-encoded SHA256 fingerprint of the server's host key. This value is equivalent to the output of the <code>ssh-keygen -l -f my-new-server-key</code> command.</p>",
"ListedHostKey$Fingerprint": "<p>The public key fingerprint, which is a short sequence of bytes used to identify the longer public key.</p>"
}
},
"HostKeyId": {
"base": null,
"refs": {
"DeleteHostKeyRequest$HostKeyId": "<p>The identifier of the host key that you are deleting.</p>",
"DescribeHostKeyRequest$HostKeyId": "<p>The identifier of the host key that you want described.</p>",
"DescribedHostKey$HostKeyId": "<p>A unique identifier for the host key.</p>",
"ImportHostKeyResponse$HostKeyId": "<p>Returns the host key identifier for the imported key.</p>",
"ListedHostKey$HostKeyId": "<p>A unique identifier for the host key.</p>",
"UpdateHostKeyRequest$HostKeyId": "<p>The identifier of the host key that you are updating.</p>",
"UpdateHostKeyResponse$HostKeyId": "<p>Returns the host key identifier for the updated host key.</p>"
}
},
"HostKeyType": {
"base": null,
"refs": {
"DescribedHostKey$Type": "<p>The encryption algorithm that is used for the host key. The <code>Type</code> parameter is specified by using one of the following values:</p> <ul> <li> <p> <code>ssh-rsa</code> </p> </li> <li> <p> <code>ssh-ed25519</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp256</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp384</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp521</code> </p> </li> </ul>",
"ListedHostKey$Type": "<p>The encryption algorithm that is used for the host key. The <code>Type</code> parameter is specified by using one of the following values:</p> <ul> <li> <p> <code>ssh-rsa</code> </p> </li> <li> <p> <code>ssh-ed25519</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp256</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp384</code> </p> </li> <li> <p> <code>ecdsa-sha2-nistp521</code> </p> </li> </ul>"
}
},
"IdentityProviderDetails": {
"base": "<p>Returns information related to the type of user authentication that is in use for a file transfer protocol-enabled server's users. A server can have only one method of authentication.</p>",
"refs": {
"CreateServerRequest$IdentityProviderDetails": "<p>Required when <code>IdentityProviderType</code> is set to <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>. Accepts an array containing all of the information required to use a directory in <code>AWS_DIRECTORY_SERVICE</code> or invoke a customer-supplied authentication API, including the API Gateway URL. Not required when <code>IdentityProviderType</code> is set to <code>SERVICE_MANAGED</code>.</p>",
"DescribedServer$IdentityProviderDetails": "<p>Specifies information to call a customer-supplied authentication API. This field is not populated when the <code>IdentityProviderType</code> of a server is <code>AWS_DIRECTORY_SERVICE</code> or <code>SERVICE_MANAGED</code>.</p>",
"UpdateServerRequest$IdentityProviderDetails": "<p>An array containing all of the information required to call a customer's authentication API method.</p>"
}
},
"IdentityProviderType": {
"base": "<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter for the <code>IdentityProviderDetails</code> data type.</p>",
"refs": {
"CreateServerRequest$IdentityProviderType": "<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter for the <code>IdentityProviderDetails</code> data type.</p>",
"DescribedServer$IdentityProviderType": "<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter for the <code>IdentityProviderDetails</code> data type.</p>",
"ListedServer$IdentityProviderType": "<p>The mode of authentication for a server. The default value is <code>SERVICE_MANAGED</code>, which allows you to store and access user credentials within the Transfer Family service.</p> <p>Use <code>AWS_DIRECTORY_SERVICE</code> to provide access to Active Directory groups in Directory Service for Microsoft Active Directory or Microsoft Active Directory in your on-premises environment or in Amazon Web Services using AD Connector. This option also requires you to provide a Directory ID by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>API_GATEWAY</code> value to integrate with an identity provider of your choosing. The <code>API_GATEWAY</code> setting requires you to provide an Amazon API Gateway endpoint URL to call for authentication by using the <code>IdentityProviderDetails</code> parameter.</p> <p>Use the <code>AWS_LAMBDA</code> value to directly use a Lambda function as your identity provider. If you choose this value, you must specify the ARN for the Lambda function in the <code>Function</code> parameter for the <code>IdentityProviderDetails</code> data type.</p>"
}
},
"ImportCertificateRequest": {
"base": null,
"refs": {
}
},
"ImportCertificateResponse": {
"base": null,
"refs": {
}
},
"ImportHostKeyRequest": {
"base": null,
"refs": {
}
},
"ImportHostKeyResponse": {
"base": null,
"refs": {
}
},
"ImportSshPublicKeyRequest": {
"base": null,
"refs": {
}
},
"ImportSshPublicKeyResponse": {
"base": "<p>Identifies the user, the server they belong to, and the identifier of the SSH public key associated with that user. A user can have more than one key on each server that they are associated with.</p>",
"refs": {
}
},
"InputFileLocation": {
"base": "<p>Specifies the location for the file that's being processed.</p>",
"refs": {
"CopyStepDetails$DestinationFileLocation": "<p>Specifies the location for the file being copied. Use <code>${Transfer:UserName}</code> or <code>${Transfer:UploadDate}</code> in this field to parametrize the destination prefix by username or uploaded date.</p> <ul> <li> <p>Set the value of <code>DestinationFileLocation</code> to <code>${Transfer:UserName}</code> to copy uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.</p> </li> <li> <p>Set the value of <code>DestinationFileLocation</code> to <code>${Transfer:UploadDate}</code> to copy uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.</p> <note> <p>The system resolves <code>UploadDate</code> to a date format of <i>YYYY-MM-DD</i>, based on the date the file is uploaded in UTC.</p> </note> </li> </ul>",
"DecryptStepDetails$DestinationFileLocation": "<p>Specifies the location for the file being decrypted. Use <code>${Transfer:UserName}</code> or <code>${Transfer:UploadDate}</code> in this field to parametrize the destination prefix by username or uploaded date.</p> <ul> <li> <p>Set the value of <code>DestinationFileLocation</code> to <code>${Transfer:UserName}</code> to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the name of the Transfer Family user that uploaded the file.</p> </li> <li> <p>Set the value of <code>DestinationFileLocation</code> to <code>${Transfer:UploadDate}</code> to decrypt uploaded files to an Amazon S3 bucket that is prefixed with the date of the upload.</p> <note> <p>The system resolves <code>UploadDate</code> to a date format of <i>YYYY-MM-DD</i>, based on the date the file is uploaded in UTC.</p> </note> </li> </ul>"
}
},
"InternalServiceError": {
"base": "<p>This exception is thrown when an error occurs in the Transfer Family service.</p>",
"refs": {
}
},
"InvalidNextTokenException": {
"base": "<p>The <code>NextToken</code> parameter that was passed is invalid.</p>",
"refs": {
}
},
"InvalidRequestException": {
"base": "<p>This exception is thrown when the client submits a malformed request.</p>",
"refs": {
}
},
"ListAccessesRequest": {
"base": null,
"refs": {
}
},
"ListAccessesResponse": {
"base": null,
"refs": {
}
},
"ListAgreementsRequest": {
"base": null,
"refs": {
}
},
"ListAgreementsResponse": {
"base": null,
"refs": {
}
},
"ListCertificatesRequest": {
"base": null,
"refs": {
}
},
"ListCertificatesResponse": {
"base": null,
"refs": {
}
},
"ListConnectorsRequest": {
"base": null,
"refs": {
}
},
"ListConnectorsResponse": {
"base": null,
"refs": {
}
},
"ListExecutionsRequest": {
"base": null,
"refs": {
}
},
"ListExecutionsResponse": {
"base": null,
"refs": {
}
},
"ListHostKeysRequest": {
"base": null,
"refs": {
}
},
"ListHostKeysResponse": {
"base": null,
"refs": {
}
},
"ListProfilesRequest": {
"base": null,
"refs": {
}
},
"ListProfilesResponse": {
"base": null,
"refs": {
}
},
"ListSecurityPoliciesRequest": {
"base": null,
"refs": {
}
},
"ListSecurityPoliciesResponse": {
"base": null,
"refs": {
}
},
"ListServersRequest": {
"base": null,
"refs": {
}
},
"ListServersResponse": {
"base": null,
"refs": {
}
},
"ListTagsForResourceRequest": {
"base": null,
"refs": {
}
},
"ListTagsForResourceResponse": {
"base": null,
"refs": {
}
},
"ListUsersRequest": {
"base": null,
"refs": {
}
},
"ListUsersResponse": {
"base": null,
"refs": {
}
},
"ListWorkflowsRequest": {
"base": null,
"refs": {
}
},
"ListWorkflowsResponse": {
"base": null,
"refs": {
}
},
"ListedAccess": {
"base": "<p>Lists the properties for one or more specified associated accesses.</p>",
"refs": {
"ListedAccesses$member": null
}
},
"ListedAccesses": {
"base": null,
"refs": {
"ListAccessesResponse$Accesses": "<p>Returns the accesses and their properties for the <code>ServerId</code> value that you specify.</p>"
}
},
"ListedAgreement": {
"base": "<p>Describes the properties of an agreement.</p>",
"refs": {
"ListedAgreements$member": null
}
},
"ListedAgreements": {
"base": null,
"refs": {
"ListAgreementsResponse$Agreements": "<p>Returns an array, where each item contains the details of an agreement.</p>"
}
},
"ListedCertificate": {
"base": "<p>Describes the properties of a certificate.</p>",
"refs": {
"ListedCertificates$member": null
}
},
"ListedCertificates": {
"base": null,
"refs": {
"ListCertificatesResponse$Certificates": "<p>Returns an array of the certificates that are specified in the <code>ListCertificates</code> call.</p>"
}
},
"ListedConnector": {
"base": "<p>Returns details of the connector that is specified.</p>",
"refs": {
"ListedConnectors$member": null
}
},
"ListedConnectors": {
"base": null,
"refs": {
"ListConnectorsResponse$Connectors": "<p>Returns an array, where each item contains the details of a connector.</p>"
}
},
"ListedExecution": {
"base": "<p>Returns properties of the execution that is specified.</p>",
"refs": {
"ListedExecutions$member": null
}
},
"ListedExecutions": {
"base": null,
"refs": {
"ListExecutionsResponse$Executions": "<p>Returns the details for each execution, in a <code>ListedExecution</code> array.</p>"
}
},
"ListedHostKey": {
"base": "<p>Returns properties of the host key that's specified.</p>",
"refs": {
"ListedHostKeys$member": null
}
},
"ListedHostKeys": {
"base": null,
"refs": {
"ListHostKeysResponse$HostKeys": "<p>Returns an array, where each item contains the details of a host key.</p>"
}
},
"ListedProfile": {
"base": "<p>Returns the properties of the profile that was specified.</p>",
"refs": {
"ListedProfiles$member": null
}
},
"ListedProfiles": {
"base": null,
"refs": {
"ListProfilesResponse$Profiles": "<p>Returns an array, where each item contains the details of a profile.</p>"
}
},
"ListedServer": {
"base": "<p>Returns properties of a file transfer protocol-enabled server that was specified.</p>",
"refs": {
"ListedServers$member": null
}
},
"ListedServers": {
"base": null,
"refs": {
"ListServersResponse$Servers": "<p>An array of servers that were listed.</p>"
}
},
"ListedUser": {
"base": "<p>Returns properties of the user that you specify.</p>",
"refs": {
"ListedUsers$member": null
}
},
"ListedUsers": {
"base": null,
"refs": {
"ListUsersResponse$Users": "<p>Returns the Transfer Family users and their properties for the <code>ServerId</code> value that you specify.</p>"
}
},
"ListedWorkflow": {
"base": "<p>Contains the identifier, text description, and Amazon Resource Name (ARN) for the workflow.</p>",
"refs": {
"ListedWorkflows$member": null
}
},
"ListedWorkflows": {
"base": null,
"refs": {
"ListWorkflowsResponse$Workflows": "<p>Returns the <code>Arn</code>, <code>WorkflowId</code>, and <code>Description</code> for each workflow.</p>"
}
},
"ListingId": {
"base": null,
"refs": {
"StartDirectoryListingResponse$ListingId": "<p>Returns a unique identifier for the directory listing call.</p>"
}
},
"LogGroupName": {
"base": null,
"refs": {
"LoggingConfiguration$LogGroupName": "<p>The name of the CloudWatch logging group for the Transfer Family server to which this workflow belongs.</p>"
}
},
"LoggingConfiguration": {
"base": "<p>Consists of the logging role and the log group name.</p>",
"refs": {
"DescribedExecution$LoggingConfiguration": "<p>The IAM logging role associated with the execution.</p>"
}
},
"MapEntry": {
"base": null,
"refs": {
"HomeDirectoryMapEntry$Entry": "<p>Represents an entry for <code>HomeDirectoryMappings</code>.</p>"
}
},
"MapTarget": {
"base": null,
"refs": {
"HomeDirectoryMapEntry$Target": "<p>Represents the map target that is used in a <code>HomeDirectoryMapEntry</code>.</p>"
}
},
"MapType": {
"base": null,
"refs": {
"HomeDirectoryMapEntry$Type": "<p>Specifies the type of mapping. Set the type to <code>FILE</code> if you want the mapping to point to a file, or to <code>DIRECTORY</code> if you want the mapping to point to a directory.</p> <note> <p>By default, home directory mappings have a <code>Type</code> of <code>DIRECTORY</code> when you create a Transfer Family server. You need to explicitly set <code>Type</code> to <code>FILE</code> if you want a mapping to have a file target.</p> </note>"
}
},
"MaxItems": {
"base": null,
"refs": {
"StartDirectoryListingRequest$MaxItems": "<p>An optional parameter where you can specify the maximum number of file/directory names to retrieve. The default value is 1,000.</p>"
}
},
"MaxResults": {
"base": null,
"refs": {
"ListAccessesRequest$MaxResults": "<p>Specifies the maximum number of access SIDs to return.</p>",
"ListAgreementsRequest$MaxResults": "<p>The maximum number of agreements to return.</p>",
"ListCertificatesRequest$MaxResults": "<p>The maximum number of certificates to return.</p>",
"ListConnectorsRequest$MaxResults": "<p>The maximum number of connectors to return.</p>",
"ListExecutionsRequest$MaxResults": "<p>Specifies the maximum number of executions to return.</p>",
"ListHostKeysRequest$MaxResults": "<p>The maximum number of host keys to return.</p>",
"ListProfilesRequest$MaxResults": "<p>The maximum number of profiles to return.</p>",
"ListSecurityPoliciesRequest$MaxResults": "<p>Specifies the number of security policies to return as a response to the <code>ListSecurityPolicies</code> query.</p>",
"ListServersRequest$MaxResults": "<p>Specifies the number of servers to return as a response to the <code>ListServers</code> query.</p>",
"ListTagsForResourceRequest$MaxResults": "<p>Specifies the number of tags to return as a response to the <code>ListTagsForResource</code> request.</p>",
"ListUsersRequest$MaxResults": "<p>Specifies the number of users to return as a response to the <code>ListUsers</code> request.</p>",
"ListWorkflowsRequest$MaxResults": "<p>Specifies the maximum number of workflows to return.</p>"
}
},
"MdnResponse": {
"base": null,
"refs": {
"As2ConnectorConfig$MdnResponse": "<p>Used for outbound requests (from a Transfer Family server to a partner AS2 server) to determine whether the partner response for transfers is synchronous or asynchronous. Specify either of the following values:</p> <ul> <li> <p> <code>SYNC</code>: The system expects a synchronous MDN response, confirming that the file was transferred successfully (or not).</p> </li> <li> <p> <code>NONE</code>: Specifies that no MDN response is required.</p> </li> </ul>"
}
},
"MdnSigningAlg": {
"base": null,
"refs": {
"As2ConnectorConfig$MdnSigningAlgorithm": "<p>The signing algorithm for the MDN response.</p> <note> <p>If set to DEFAULT (or not set at all), the value for <code>SigningAlgorithm</code> is used.</p> </note>"
}
},
"Message": {
"base": null,
"refs": {
"ConflictException$Message": null,
"InternalServiceError$Message": null,
"InvalidNextTokenException$Message": null,
"InvalidRequestException$Message": null,
"ResourceExistsException$Message": null,
"ResourceNotFoundException$Message": null,
"TestConnectionResponse$StatusMessage": "<p>Returns <code>Connection succeeded</code> if the test is successful. Or, returns a descriptive error message if the test fails. The following list provides troubleshooting details, depending on the error message that you receive.</p> <ul> <li> <p>Verify that your secret name aligns with the one in Transfer Role permissions.</p> </li> <li> <p>Verify the server URL in the connector configuration, and verify that the login credentials work successfully outside of the connector.</p> </li> <li> <p>Verify that the secret exists and is formatted correctly.</p> </li> <li> <p>Verify that the trusted host key in the connector configuration matches the <code>ssh-keyscan</code> output.</p> </li> </ul>",
"TestIdentityProviderResponse$Message": "<p>A message that indicates whether the test was successful or not.</p> <note> <p>If an empty string is returned, the most likely cause is that the authentication failed due to an incorrect username or password.</p> </note>"
}
},
"MessageSubject": {
"base": null,
"refs": {
"As2ConnectorConfig$MessageSubject": "<p>Used as the <code>Subject</code> HTTP header attribute in AS2 messages that are being sent with the connector.</p>"
}
},
"NextToken": {
"base": null,
"refs": {
"ListAccessesRequest$NextToken": "<p>When you can get additional results from the <code>ListAccesses</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional accesses.</p>",
"ListAccessesResponse$NextToken": "<p>When you can get additional results from the <code>ListAccesses</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional accesses.</p>",
"ListAgreementsRequest$NextToken": "<p>When you can get additional results from the <code>ListAgreements</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional agreements.</p>",
"ListAgreementsResponse$NextToken": "<p>Returns a token that you can use to call <code>ListAgreements</code> again and receive additional results, if there are any.</p>",
"ListCertificatesRequest$NextToken": "<p>When you can get additional results from the <code>ListCertificates</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional certificates.</p>",
"ListCertificatesResponse$NextToken": "<p>Returns the next token, which you can use to list the next certificate.</p>",
"ListConnectorsRequest$NextToken": "<p>When you can get additional results from the <code>ListConnectors</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional connectors.</p>",
"ListConnectorsResponse$NextToken": "<p>Returns a token that you can use to call <code>ListConnectors</code> again and receive additional results, if there are any.</p>",
"ListExecutionsRequest$NextToken": "<p> <code>ListExecutions</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional executions.</p> <p> This is useful for pagination, for instance. If you have 100 executions for a workflow, you might only want to list the first 10. If so, call the API by specifying the <code>max-results</code> parameter: </p> <p> <code>aws transfer list-executions --max-results 10</code> </p> <p> This returns details for the first 10 executions, as well as the pointer (<code>NextToken</code>) to the eleventh execution. You can now call the API again, supplying the <code>NextToken</code> value you received: </p> <p> <code>aws transfer list-executions --max-results 10 --next-token $somePointerReturnedFromPreviousListResult</code> </p> <p> This call returns the next 10 executions, the 11th through the 20th. You can then repeat the call until the details for all 100 executions have been returned. </p>",
"ListExecutionsResponse$NextToken": "<p> <code>ListExecutions</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional executions.</p>",
"ListHostKeysRequest$NextToken": "<p>When there are additional results that were not returned, a <code>NextToken</code> parameter is returned. You can use that value for a subsequent call to <code>ListHostKeys</code> to continue listing results.</p>",
"ListHostKeysResponse$NextToken": "<p>Returns a token that you can use to call <code>ListHostKeys</code> again and receive additional results, if there are any.</p>",
"ListProfilesRequest$NextToken": "<p>When there are additional results that were not returned, a <code>NextToken</code> parameter is returned. You can use that value for a subsequent call to <code>ListProfiles</code> to continue listing results.</p>",
"ListProfilesResponse$NextToken": "<p>Returns a token that you can use to call <code>ListProfiles</code> again and receive additional results, if there are any.</p>",
"ListSecurityPoliciesRequest$NextToken": "<p>When additional results are obtained from the <code>ListSecurityPolicies</code> command, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional security policies.</p>",
"ListSecurityPoliciesResponse$NextToken": "<p>When you can get additional results from the <code>ListSecurityPolicies</code> operation, a <code>NextToken</code> parameter is returned in the output. In a following command, you can pass in the <code>NextToken</code> parameter to continue listing security policies.</p>",
"ListServersRequest$NextToken": "<p>When additional results are obtained from the <code>ListServers</code> command, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional servers.</p>",
"ListServersResponse$NextToken": "<p>When you can get additional results from the <code>ListServers</code> operation, a <code>NextToken</code> parameter is returned in the output. In a following command, you can pass in the <code>NextToken</code> parameter to continue listing additional servers.</p>",
"ListTagsForResourceRequest$NextToken": "<p>When you request additional results from the <code>ListTagsForResource</code> operation, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional tags.</p>",
"ListTagsForResourceResponse$NextToken": "<p>When you can get additional results from the <code>ListTagsForResource</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional tags.</p>",
"ListUsersRequest$NextToken": "<p>If there are additional results from the <code>ListUsers</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> to a subsequent <code>ListUsers</code> command, to continue listing additional users.</p>",
"ListUsersResponse$NextToken": "<p>When you can get additional results from the <code>ListUsers</code> call, a <code>NextToken</code> parameter is returned in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional users.</p>",
"ListWorkflowsRequest$NextToken": "<p> <code>ListWorkflows</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional workflows.</p>",
"ListWorkflowsResponse$NextToken": "<p> <code>ListWorkflows</code> returns the <code>NextToken</code> parameter in the output. You can then pass the <code>NextToken</code> parameter in a subsequent command to continue listing additional workflows.</p>"
}
},
"NullableRole": {
"base": null,
"refs": {
"CreateServerRequest$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.</p>",
"DescribedServer$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.</p>",
"UpdateServerRequest$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFS events. When set, you can view user activity in your CloudWatch logs.</p>"
}
},
"OnPartialUploadWorkflowDetails": {
"base": null,
"refs": {
"WorkflowDetails$OnPartialUpload": "<p>A trigger that starts a workflow if a file is only partially uploaded. You can attach a workflow to a server; the workflow then executes whenever there is a partial upload.</p> <p>A <i>partial upload</i> occurs when a file is open when the session disconnects.</p>"
}
},
"OnUploadWorkflowDetails": {
"base": null,
"refs": {
"WorkflowDetails$OnUpload": "<p>A trigger that starts a workflow: the workflow begins to execute after a file is uploaded.</p> <p>To remove an associated workflow from a server, you can provide an empty <code>OnUpload</code> object, as in the following example.</p> <p> <code>aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'</code> </p>"
}
},
"OutputFileName": {
"base": null,
"refs": {
"StartDirectoryListingResponse$OutputFileName": "<p>Returns the file name where the results are stored. This is a combination of the connector ID and the listing ID: <code>&lt;connector-id&gt;-&lt;listing-id&gt;.json</code>.</p>"
}
},
"OverwriteExisting": {
"base": null,
"refs": {
"CopyStepDetails$OverwriteExisting": "<p>A flag that indicates whether to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p> <p>If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:</p> <ul> <li> <p>If <code>OverwriteExisting</code> is <code>TRUE</code>, the existing file is replaced with the file being processed.</p> </li> <li> <p>If <code>OverwriteExisting</code> is <code>FALSE</code>, nothing happens, and the workflow processing stops.</p> </li> </ul>",
"DecryptStepDetails$OverwriteExisting": "<p>A flag that indicates whether to overwrite an existing file of the same name. The default is <code>FALSE</code>.</p> <p>If the workflow is processing a file that has the same name as an existing file, the behavior is as follows:</p> <ul> <li> <p>If <code>OverwriteExisting</code> is <code>TRUE</code>, the existing file is replaced with the file being processed.</p> </li> <li> <p>If <code>OverwriteExisting</code> is <code>FALSE</code>, nothing happens, and the workflow processing stops.</p> </li> </ul>"
}
},
"PassiveIp": {
"base": null,
"refs": {
"ProtocolDetails$PassiveIp": "<p> Indicates passive mode, for FTP and FTPS protocols. Enter a single IPv4 address, such as the public IP address of a firewall, router, or load balancer. For example: </p> <p> <code>aws transfer update-server --protocol-details PassiveIp=0.0.0.0</code> </p> <p>Replace <code>0.0.0.0</code> in the example above with the actual IP address you want to use.</p> <note> <p> If you change the <code>PassiveIp</code> value, you must stop and then restart your Transfer Family server for the change to take effect. For details on using passive mode (PASV) in a NAT environment, see <a href=\"http://aws.amazon.com/blogs/storage/configuring-your-ftps-server-behind-a-firewall-or-nat-with-aws-transfer-family/\">Configuring your FTPS server behind a firewall or NAT with Transfer Family</a>. </p> </note> <p> <i>Special values</i> </p> <p> <code>AUTO</code> and <code>0.0.0.0</code> are special values for the <code>PassiveIp</code> parameter. The value <code>PassiveIp=AUTO</code> is assigned by default to FTP and FTPS type servers. In this case, the server automatically responds with one of the endpoint IPs within the PASV response. <code>PassiveIp=0.0.0.0</code> has a more specialized use case. For example, in a High Availability (HA) Network Load Balancer (NLB) environment with three subnets, you can specify only a single IP address using the <code>PassiveIp</code> parameter. This reduces the effectiveness of having High Availability. In this case, you can specify <code>PassiveIp=0.0.0.0</code> instead, which tells the client to use the same IP address as the control connection and to utilize all Availability Zones for its connections. Note, however, that not all FTP clients support the <code>PassiveIp=0.0.0.0</code> response. FileZilla and WinSCP do support it. If you are using other clients, check whether your client supports the <code>PassiveIp=0.0.0.0</code> response.</p>"
}
},
"Policy": {
"base": null,
"refs": {
"CreateAccessRequest$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web Services Security Token Service API Reference</i>.</p> </note>",
"CreateUserRequest$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web Services Security Token Service API Reference</i>.</p> </note>",
"DescribedAccess$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p>",
"DescribedUser$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p>",
"UpdateAccessRequest$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy.html\">Example session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web ServicesSecurity Token Service API Reference</i>.</p> </note>",
"UpdateUserRequest$Policy": "<p>A session policy for your user so that you can use the same Identity and Access Management (IAM) role across multiple users. This policy scopes down a user's access to portions of their Amazon S3 bucket. Variables that you can use inside this policy include <code>${Transfer:UserName}</code>, <code>${Transfer:HomeDirectory}</code>, and <code>${Transfer:HomeBucket}</code>.</p> <note> <p>This policy applies only when the domain of <code>ServerId</code> is Amazon S3. Amazon EFS does not use session policies.</p> <p>For session policies, Transfer Family stores the policy as a JSON blob, instead of the Amazon Resource Name (ARN) of the policy. You save the policy as a JSON blob and pass it in the <code>Policy</code> argument.</p> <p>For an example of a session policy, see <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/session-policy\">Creating a session policy</a>.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html\">AssumeRole</a> in the <i>Amazon Web Services Security Token Service API Reference</i>.</p> </note>"
}
},
"PosixId": {
"base": null,
"refs": {
"PosixProfile$Uid": "<p>The POSIX user ID used for all EFS operations by this user.</p>",
"PosixProfile$Gid": "<p>The POSIX group ID used for all EFS operations by this user.</p>",
"SecondaryGids$member": null
}
},
"PosixProfile": {
"base": "<p>The full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>",
"refs": {
"CreateAccessRequest$PosixProfile": null,
"CreateUserRequest$PosixProfile": "<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon EFS file systems. The POSIX permissions that are set on files and directories in Amazon EFS determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>",
"DescribedAccess$PosixProfile": null,
"DescribedExecution$PosixProfile": null,
"DescribedUser$PosixProfile": "<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon Elastic File System (Amazon EFS) file systems. The POSIX permissions that are set on files and directories in your file system determine the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>",
"UpdateAccessRequest$PosixProfile": null,
"UpdateUserRequest$PosixProfile": "<p>Specifies the full POSIX identity, including user ID (<code>Uid</code>), group ID (<code>Gid</code>), and any secondary groups IDs (<code>SecondaryGids</code>), that controls your users' access to your Amazon Elastic File Systems (Amazon EFS). The POSIX permissions that are set on files and directories in your file system determines the level of access your users get when transferring files into and out of your Amazon EFS file systems.</p>"
}
},
"PostAuthenticationLoginBanner": {
"base": null,
"refs": {
"CreateServerRequest$PostAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>",
"DescribedServer$PostAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>",
"UpdateServerRequest$PostAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed after the user authenticates.</p> <note> <p>The SFTP protocol does not support post-authentication display banners.</p> </note>"
}
},
"PreAuthenticationLoginBanner": {
"base": null,
"refs": {
"CreateServerRequest$PreAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>",
"DescribedServer$PreAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>",
"UpdateServerRequest$PreAuthenticationLoginBanner": "<p>Specifies a string to display when users connect to a server. This string is displayed before the user authenticates. For example, the following banner displays details about using the system:</p> <p> <code>This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel.</code> </p>"
}
},
"PrivateKeyType": {
"base": null,
"refs": {
"ImportCertificateRequest$PrivateKey": "<ul> <li> <p>For the CLI, provide a file path for a private key in URI format.For example, <code>--private-key file://encryption-key.pem</code>. Alternatively, you can provide the raw content of the private key file.</p> </li> <li> <p>For the SDK, specify the raw content of a private key file. For example, <code>--private-key \"`cat encryption-key.pem`\"</code> </p> </li> </ul>"
}
},
"ProfileId": {
"base": null,
"refs": {
"As2ConnectorConfig$LocalProfileId": "<p>A unique identifier for the AS2 local profile.</p>",
"As2ConnectorConfig$PartnerProfileId": "<p>A unique identifier for the partner profile for the connector.</p>",
"CreateAgreementRequest$LocalProfileId": "<p>A unique identifier for the AS2 local profile.</p>",
"CreateAgreementRequest$PartnerProfileId": "<p>A unique identifier for the partner profile used in the agreement.</p>",
"CreateProfileResponse$ProfileId": "<p>The unique identifier for the AS2 profile, returned after the API call succeeds.</p>",
"DeleteProfileRequest$ProfileId": "<p>The identifier of the profile that you are deleting.</p>",
"DescribeProfileRequest$ProfileId": "<p>The identifier of the profile that you want described.</p>",
"DescribedAgreement$LocalProfileId": "<p>A unique identifier for the AS2 local profile.</p>",
"DescribedAgreement$PartnerProfileId": "<p>A unique identifier for the partner profile used in the agreement.</p>",
"DescribedProfile$ProfileId": "<p>A unique identifier for the local or partner AS2 profile.</p>",
"ListedAgreement$LocalProfileId": "<p>A unique identifier for the AS2 local profile.</p>",
"ListedAgreement$PartnerProfileId": "<p>A unique identifier for the partner profile.</p>",
"ListedProfile$ProfileId": "<p>A unique identifier for the local or partner AS2 profile.</p>",
"UpdateAgreementRequest$LocalProfileId": "<p>A unique identifier for the AS2 local profile.</p> <p>To change the local profile identifier, provide a new value here.</p>",
"UpdateAgreementRequest$PartnerProfileId": "<p>A unique identifier for the partner profile. To change the partner profile identifier, provide a new value here.</p>",
"UpdateProfileRequest$ProfileId": "<p>The identifier of the profile object that you are updating.</p>",
"UpdateProfileResponse$ProfileId": "<p>Returns the identifier for the profile that's being updated.</p>"
}
},
"ProfileType": {
"base": null,
"refs": {
"CreateProfileRequest$ProfileType": "<p>Determines the type of profile to create:</p> <ul> <li> <p>Specify <code>LOCAL</code> to create a local profile. A local profile represents the AS2-enabled Transfer Family server organization or party.</p> </li> <li> <p>Specify <code>PARTNER</code> to create a partner profile. A partner profile represents a remote organization, external to Transfer Family.</p> </li> </ul>",
"DescribedProfile$ProfileType": "<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>",
"ListProfilesRequest$ProfileType": "<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>",
"ListedProfile$ProfileType": "<p>Indicates whether to list only <code>LOCAL</code> type profiles or only <code>PARTNER</code> type profiles. If not supplied in the request, the command lists all types of profiles.</p>"
}
},
"Protocol": {
"base": null,
"refs": {
"Protocols$member": null,
"TestIdentityProviderRequest$ServerProtocol": "<p>The type of file transfer protocol to be tested.</p> <p>The available protocols are:</p> <ul> <li> <p>Secure Shell (SSH) File Transfer Protocol (SFTP)</p> </li> <li> <p>File Transfer Protocol Secure (FTPS)</p> </li> <li> <p>File Transfer Protocol (FTP)</p> </li> <li> <p>Applicability Statement 2 (AS2)</p> </li> </ul>"
}
},
"ProtocolDetails": {
"base": "<p> The protocol settings that are configured for your server. </p>",
"refs": {
"CreateServerRequest$ProtocolDetails": "<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>",
"DescribedServer$ProtocolDetails": "<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>",
"UpdateServerRequest$ProtocolDetails": "<p>The protocol settings that are configured for your server.</p> <ul> <li> <p> To indicate passive mode (for FTP and FTPS protocols), use the <code>PassiveIp</code> parameter. Enter a single dotted-quad IPv4 address, such as the external IP address of a firewall, router, or load balancer. </p> </li> <li> <p>To ignore the error that is generated when the client attempts to use the <code>SETSTAT</code> command on a file that you are uploading to an Amazon S3 bucket, use the <code>SetStatOption</code> parameter. To have the Transfer Family server ignore the <code>SETSTAT</code> command and upload files without needing to make any changes to your SFTP client, set the value to <code>ENABLE_NO_OP</code>. If you set the <code>SetStatOption</code> parameter to <code>ENABLE_NO_OP</code>, Transfer Family generates a log entry to Amazon CloudWatch Logs, so that you can determine when the client is making a <code>SETSTAT</code> call.</p> </li> <li> <p>To determine whether your Transfer Family server resumes recent, negotiated sessions through a unique session ID, use the <code>TlsSessionResumptionMode</code> parameter.</p> </li> <li> <p> <code>As2Transports</code> indicates the transport method for the AS2 messages. Currently, only HTTP is supported.</p> </li> </ul>"
}
},
"Protocols": {
"base": null,
"refs": {
"CreateServerRequest$Protocols": "<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be either <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set any of the supported identity types: <code>SERVICE_MANAGED</code>, <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>",
"DescribedServer$Protocols": "<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be either <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set any of the supported identity types: <code>SERVICE_MANAGED</code>, <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>",
"UpdateServerRequest$Protocols": "<p>Specifies the file transfer protocol or protocols over which your file transfer protocol client can connect to your server's endpoint. The available protocols are:</p> <ul> <li> <p> <code>SFTP</code> (Secure Shell (SSH) File Transfer Protocol): File transfer over SSH</p> </li> <li> <p> <code>FTPS</code> (File Transfer Protocol Secure): File transfer with TLS encryption</p> </li> <li> <p> <code>FTP</code> (File Transfer Protocol): Unencrypted file transfer</p> </li> <li> <p> <code>AS2</code> (Applicability Statement 2): used for transporting structured business-to-business data</p> </li> </ul> <note> <ul> <li> <p>If you select <code>FTPS</code>, you must choose a certificate stored in Certificate Manager (ACM) which is used to identify your server when clients connect to it over FTPS.</p> </li> <li> <p>If <code>Protocol</code> includes either <code>FTP</code> or <code>FTPS</code>, then the <code>EndpointType</code> must be <code>VPC</code> and the <code>IdentityProviderType</code> must be either <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>FTP</code>, then <code>AddressAllocationIds</code> cannot be associated.</p> </li> <li> <p>If <code>Protocol</code> is set only to <code>SFTP</code>, the <code>EndpointType</code> can be set to <code>PUBLIC</code> and the <code>IdentityProviderType</code> can be set any of the supported identity types: <code>SERVICE_MANAGED</code>, <code>AWS_DIRECTORY_SERVICE</code>, <code>AWS_LAMBDA</code>, or <code>API_GATEWAY</code>.</p> </li> <li> <p>If <code>Protocol</code> includes <code>AS2</code>, then the <code>EndpointType</code> must be <code>VPC</code>, and domain must be Amazon S3.</p> </li> </ul> </note>"
}
},
"Resource": {
"base": null,
"refs": {
"ResourceExistsException$Resource": null,
"ResourceNotFoundException$Resource": null
}
},
"ResourceExistsException": {
"base": "<p>The requested resource does not exist, or exists in a region other than the one specified for the command.</p>",
"refs": {
}
},
"ResourceNotFoundException": {
"base": "<p>This exception is thrown when a resource is not found by the Amazon Web ServicesTransfer Family service.</p>",
"refs": {
}
},
"ResourceType": {
"base": null,
"refs": {
"ResourceExistsException$ResourceType": null,
"ResourceNotFoundException$ResourceType": null
}
},
"Response": {
"base": null,
"refs": {
"TestIdentityProviderResponse$Response": "<p>The response that is returned from your API Gateway or your Lambda function.</p>"
}
},
"RetryAfterSeconds": {
"base": null,
"refs": {
"ThrottlingException$RetryAfterSeconds": null
}
},
"Role": {
"base": null,
"refs": {
"CreateAccessRequest$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"CreateAgreementRequest$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"CreateConnectorRequest$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"CreateConnectorRequest$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>",
"CreateUserRequest$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"DescribedAccess$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"DescribedAgreement$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"DescribedConnector$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"DescribedConnector$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>",
"DescribedExecution$ExecutionRole": "<p>The IAM role associated with the execution.</p>",
"DescribedUser$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"IdentityProviderDetails$InvocationRole": "<p>This parameter is only applicable if your <code>IdentityProviderType</code> is <code>API_GATEWAY</code>. Provides the type of <code>InvocationRole</code> used to authenticate the user account.</p>",
"ListedAccess$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"ListedServer$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>",
"ListedUser$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p> <note> <p>The IAM role that controls your users' access to your Amazon S3 bucket for servers with <code>Domain=S3</code>, or your EFS file system for servers with <code>Domain=EFS</code>. </p> <p>The policies attached to this role determine the level of access you want to provide your users when transferring files into and out of your S3 buckets or EFS file systems.</p> </note>",
"LoggingConfiguration$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a server to turn on Amazon CloudWatch logging for Amazon S3 or Amazon EFSevents. When set, you can view user activity in your CloudWatch logs.</p>",
"UpdateAccessRequest$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"UpdateAgreementRequest$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"UpdateConnectorRequest$AccessRole": "<p>Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the Identity and Access Management role to use.</p> <p> <b>For AS2 connectors</b> </p> <p>With AS2, you can send files by calling <code>StartFileTransfer</code> and specifying the file paths in the request parameter, <code>SendFilePaths</code>. We use the file’s parent directory (for example, for <code>--send-file-paths /bucket/dir/file.txt</code>, parent directory is <code>/bucket/dir/</code>) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. So, the <code>AccessRole</code> needs to provide read and write access to the parent directory of the file location used in the <code>StartFileTransfer</code> request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with <code>StartFileTransfer</code>.</p> <p>If you are using Basic authentication for your AS2 connector, the access role requires the <code>secretsmanager:GetSecretValue</code> permission for the secret. If the secret is encrypted using a customer-managed key instead of the Amazon Web Services managed key in Secrets Manager, then the role also needs the <code>kms:Decrypt</code> permission for that key.</p> <p> <b>For SFTP connectors</b> </p> <p>Make sure that the access role provides read and write access to the parent directory of the file location that's used in the <code>StartFileTransfer</code> request. Additionally, make sure that the role provides <code>secretsmanager:GetSecretValue</code> permission to Secrets Manager.</p>",
"UpdateConnectorRequest$LoggingRole": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. When set, you can view connector activity in your CloudWatch logs.</p>",
"UpdateUserRequest$Role": "<p>The Amazon Resource Name (ARN) of the Identity and Access Management (IAM) role that controls your users' access to your Amazon S3 bucket or Amazon EFS file system. The policies attached to this role determine the level of access that you want to provide your users when transferring files into and out of your Amazon S3 bucket or Amazon EFS file system. The IAM role should also contain a trust relationship that allows the server to access your resources when servicing your users' transfer requests.</p>",
"WorkflowDetail$ExecutionRole": "<p>Includes the necessary permissions for S3, EFS, and Lambda operations that Transfer can assume, so that all workflow steps can operate on the required resources</p>"
}
},
"S3Bucket": {
"base": null,
"refs": {
"S3FileLocation$Bucket": "<p>Specifies the S3 bucket that contains the file being used.</p>",
"S3InputFileLocation$Bucket": "<p>Specifies the S3 bucket for the customer input file.</p>"
}
},
"S3Etag": {
"base": null,
"refs": {
"S3FileLocation$Etag": "<p>The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.</p>"
}
},
"S3FileLocation": {
"base": "<p>Specifies the details for the file location for the file that's being used in the workflow. Only applicable if you are using S3 storage.</p>",
"refs": {
"FileLocation$S3FileLocation": "<p>Specifies the S3 details for the file being used, such as bucket, ETag, and so forth.</p>"
}
},
"S3InputFileLocation": {
"base": "<p>Specifies the customer input Amazon S3 file location. If it is used inside <code>copyStepDetails.DestinationFileLocation</code>, it should be the S3 copy destination.</p> <p> You need to provide the bucket and key. The key can represent either a path or a file. This is determined by whether or not you end the key value with the forward slash (/) character. If the final character is \"/\", then your file is copied to the folder, and its name does not change. If, rather, the final character is alphanumeric, your uploaded file is renamed to the path value. In this case, if a file with that name already exists, it is overwritten. </p> <p>For example, if your path is <code>shared-files/bob/</code>, your uploaded files are copied to the <code>shared-files/bob/</code>, folder. If your path is <code>shared-files/today</code>, each uploaded file is copied to the <code>shared-files</code> folder and named <code>today</code>: each upload overwrites the previous version of the <i>bob</i> file.</p>",
"refs": {
"InputFileLocation$S3FileLocation": "<p>Specifies the details for the Amazon S3 file that's being copied or decrypted.</p>"
}
},
"S3Key": {
"base": null,
"refs": {
"S3FileLocation$Key": "<p>The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.</p>",
"S3InputFileLocation$Key": "<p>The name assigned to the file when it was created in Amazon S3. You use the object key to retrieve the object.</p>"
}
},
"S3StorageOptions": {
"base": "<p>The Amazon S3 storage options that are configured for your server.</p>",
"refs": {
"CreateServerRequest$S3StorageOptions": "<p>Specifies whether or not performance for your Amazon S3 directories is optimized. This is disabled by default.</p> <p>By default, home directory mappings have a <code>TYPE</code> of <code>DIRECTORY</code>. If you enable this option, you would then need to explicitly set the <code>HomeDirectoryMapEntry</code> <code>Type</code> to <code>FILE</code> if you want a mapping to have a file target.</p>",
"DescribedServer$S3StorageOptions": "<p>Specifies whether or not performance for your Amazon S3 directories is optimized. This is disabled by default.</p> <p>By default, home directory mappings have a <code>TYPE</code> of <code>DIRECTORY</code>. If you enable this option, you would then need to explicitly set the <code>HomeDirectoryMapEntry</code> <code>Type</code> to <code>FILE</code> if you want a mapping to have a file target.</p>",
"UpdateServerRequest$S3StorageOptions": "<p>Specifies whether or not performance for your Amazon S3 directories is optimized. This is disabled by default.</p> <p>By default, home directory mappings have a <code>TYPE</code> of <code>DIRECTORY</code>. If you enable this option, you would then need to explicitly set the <code>HomeDirectoryMapEntry</code> <code>Type</code> to <code>FILE</code> if you want a mapping to have a file target.</p>"
}
},
"S3Tag": {
"base": "<p>Specifies the key-value pair that are assigned to a file during the execution of a Tagging step.</p>",
"refs": {
"S3Tags$member": null
}
},
"S3TagKey": {
"base": null,
"refs": {
"S3Tag$Key": "<p>The name assigned to the tag that you create.</p>"
}
},
"S3TagValue": {
"base": null,
"refs": {
"S3Tag$Value": "<p>The value that corresponds to the key.</p>"
}
},
"S3Tags": {
"base": null,
"refs": {
"TagStepDetails$Tags": "<p>Array that contains from 1 to 10 key/value pairs.</p>"
}
},
"S3VersionId": {
"base": null,
"refs": {
"S3FileLocation$VersionId": "<p>Specifies the file version.</p>"
}
},
"SecondaryGids": {
"base": null,
"refs": {
"PosixProfile$SecondaryGids": "<p>The secondary POSIX group IDs used for all EFS operations by this user.</p>"
}
},
"SecretId": {
"base": null,
"refs": {
"SftpConnectorConfig$UserSecretId": "<p>The identifier for the secret (in Amazon Web Services Secrets Manager) that contains the SFTP user's private key, password, or both. The identifier must be the Amazon Resource Name (ARN) of the secret.</p>"
}
},
"SecurityGroupId": {
"base": null,
"refs": {
"SecurityGroupIds$member": null
}
},
"SecurityGroupIds": {
"base": null,
"refs": {
"EndpointDetails$SecurityGroupIds": "<p>A list of security groups IDs that are available to attach to your server's endpoint.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> <p>You can edit the <code>SecurityGroupIds</code> property in the <a href=\"https://docs.aws.amazon.com/transfer/latest/userguide/API_UpdateServer.html\">UpdateServer</a> API only if you are changing the <code>EndpointType</code> from <code>PUBLIC</code> or <code>VPC_ENDPOINT</code> to <code>VPC</code>. To change security groups associated with your server's VPC endpoint after creation, use the Amazon EC2 <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVpcEndpoint.html\">ModifyVpcEndpoint</a> API.</p> </note>"
}
},
"SecurityPolicyName": {
"base": null,
"refs": {
"CreateServerRequest$SecurityPolicyName": "<p>Specifies the name of the security policy for the server.</p>",
"DescribeSecurityPolicyRequest$SecurityPolicyName": "<p>Specify the text name of the security policy for which you want the details.</p>",
"DescribedSecurityPolicy$SecurityPolicyName": "<p>The text name of the specified security policy.</p>",
"DescribedServer$SecurityPolicyName": "<p>Specifies the name of the security policy for the server.</p>",
"SecurityPolicyNames$member": null,
"UpdateServerRequest$SecurityPolicyName": "<p>Specifies the name of the security policy for the server.</p>"
}
},
"SecurityPolicyNames": {
"base": null,
"refs": {
"ListSecurityPoliciesResponse$SecurityPolicyNames": "<p>An array of security policies that were listed.</p>"
}
},
"SecurityPolicyOption": {
"base": null,
"refs": {
"SecurityPolicyOptions$member": null
}
},
"SecurityPolicyOptions": {
"base": null,
"refs": {
"DescribedSecurityPolicy$SshCiphers": "<p>Lists the enabled Secure Shell (SSH) cipher encryption algorithms in the security policy that is attached to the server or connector. This parameter applies to both server and connector security policies.</p>",
"DescribedSecurityPolicy$SshKexs": "<p>Lists the enabled SSH key exchange (KEX) encryption algorithms in the security policy that is attached to the server or connector. This parameter applies to both server and connector security policies.</p>",
"DescribedSecurityPolicy$SshMacs": "<p>Lists the enabled SSH message authentication code (MAC) encryption algorithms in the security policy that is attached to the server or connector. This parameter applies to both server and connector security policies.</p>",
"DescribedSecurityPolicy$TlsCiphers": "<p>Lists the enabled Transport Layer Security (TLS) cipher encryption algorithms in the security policy that is attached to the server.</p> <note> <p>This parameter only applies to security policies for servers.</p> </note>",
"DescribedSecurityPolicy$SshHostKeyAlgorithms": "<p>Lists the host key algorithms for the security policy.</p> <note> <p>This parameter only applies to security policies for connectors.</p> </note>"
}
},
"SecurityPolicyProtocol": {
"base": null,
"refs": {
"SecurityPolicyProtocols$member": null
}
},
"SecurityPolicyProtocols": {
"base": null,
"refs": {
"DescribedSecurityPolicy$Protocols": "<p>Lists the file transfer protocols that the security policy applies to.</p>"
}
},
"SecurityPolicyResourceType": {
"base": null,
"refs": {
"DescribedSecurityPolicy$Type": "<p>The resource type to which the security policy applies, either server or connector.</p>"
}
},
"SendWorkflowStepStateRequest": {
"base": null,
"refs": {
}
},
"SendWorkflowStepStateResponse": {
"base": null,
"refs": {
}
},
"ServerId": {
"base": null,
"refs": {
"CreateAccessRequest$ServerId": "<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>",
"CreateAccessResponse$ServerId": "<p>The identifier of the server that the user is attached to.</p>",
"CreateAgreementRequest$ServerId": "<p>A system-assigned unique identifier for a server instance. This is the specific server that the agreement uses.</p>",
"CreateServerResponse$ServerId": "<p>The service-assigned identifier of the server that is created.</p>",
"CreateUserRequest$ServerId": "<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>",
"CreateUserResponse$ServerId": "<p>The identifier of the server that the user is attached to.</p>",
"DeleteAccessRequest$ServerId": "<p>A system-assigned unique identifier for a server that has this user assigned.</p>",
"DeleteAgreementRequest$ServerId": "<p>The server identifier associated with the agreement that you are deleting.</p>",
"DeleteHostKeyRequest$ServerId": "<p>The identifier of the server that contains the host key that you are deleting.</p>",
"DeleteServerRequest$ServerId": "<p>A unique system-assigned identifier for a server instance.</p>",
"DeleteSshPublicKeyRequest$ServerId": "<p>A system-assigned unique identifier for a file transfer protocol-enabled server instance that has the user assigned to it.</p>",
"DeleteUserRequest$ServerId": "<p>A system-assigned unique identifier for a server instance that has the user assigned to it.</p>",
"DescribeAccessRequest$ServerId": "<p>A system-assigned unique identifier for a server that has this access assigned.</p>",
"DescribeAccessResponse$ServerId": "<p>A system-assigned unique identifier for a server that has this access assigned.</p>",
"DescribeAgreementRequest$ServerId": "<p>The server identifier that's associated with the agreement.</p>",
"DescribeHostKeyRequest$ServerId": "<p>The identifier of the server that contains the host key that you want described.</p>",
"DescribeServerRequest$ServerId": "<p>A system-assigned unique identifier for a server.</p>",
"DescribeUserRequest$ServerId": "<p>A system-assigned unique identifier for a server that has this user assigned.</p>",
"DescribeUserResponse$ServerId": "<p>A system-assigned unique identifier for a server that has this user assigned.</p>",
"DescribedAgreement$ServerId": "<p>A system-assigned unique identifier for a server instance. This identifier indicates the specific server that the agreement uses.</p>",
"DescribedServer$ServerId": "<p>Specifies the unique system-assigned identifier for a server that you instantiate.</p>",
"ImportHostKeyRequest$ServerId": "<p>The identifier of the server that contains the host key that you are importing.</p>",
"ImportHostKeyResponse$ServerId": "<p>Returns the server identifier that contains the imported key.</p>",
"ImportSshPublicKeyRequest$ServerId": "<p>A system-assigned unique identifier for a server.</p>",
"ImportSshPublicKeyResponse$ServerId": "<p>A system-assigned unique identifier for a server.</p>",
"ListAccessesRequest$ServerId": "<p>A system-assigned unique identifier for a server that has users assigned to it.</p>",
"ListAccessesResponse$ServerId": "<p>A system-assigned unique identifier for a server that has users assigned to it.</p>",
"ListAgreementsRequest$ServerId": "<p>The identifier of the server for which you want a list of agreements.</p>",
"ListHostKeysRequest$ServerId": "<p>The identifier of the server that contains the host keys that you want to view.</p>",
"ListHostKeysResponse$ServerId": "<p>Returns the server identifier that contains the listed host keys.</p>",
"ListUsersRequest$ServerId": "<p>A system-assigned unique identifier for a server that has users assigned to it.</p>",
"ListUsersResponse$ServerId": "<p>A system-assigned unique identifier for a server that the users are assigned to.</p>",
"ListedAgreement$ServerId": "<p>The unique identifier for the agreement.</p>",
"ListedServer$ServerId": "<p>Specifies the unique system assigned identifier for the servers that were listed.</p>",
"StartServerRequest$ServerId": "<p>A system-assigned unique identifier for a server that you start.</p>",
"StopServerRequest$ServerId": "<p>A system-assigned unique identifier for a server that you stopped.</p>",
"TestIdentityProviderRequest$ServerId": "<p>A system-assigned identifier for a specific server. That server's user authentication method is tested with a user name and password.</p>",
"UpdateAccessRequest$ServerId": "<p>A system-assigned unique identifier for a server instance. This is the specific server that you added your user to.</p>",
"UpdateAccessResponse$ServerId": "<p>The identifier of the server that the user is attached to.</p>",
"UpdateAgreementRequest$ServerId": "<p>A system-assigned unique identifier for a server instance. This is the specific server that the agreement uses.</p>",
"UpdateHostKeyRequest$ServerId": "<p>The identifier of the server that contains the host key that you are updating.</p>",
"UpdateHostKeyResponse$ServerId": "<p>Returns the server identifier for the server that contains the updated host key.</p>",
"UpdateServerRequest$ServerId": "<p>A system-assigned unique identifier for a server instance that the Transfer Family user is assigned to.</p>",
"UpdateServerResponse$ServerId": "<p>A system-assigned unique identifier for a server that the Transfer Family user is assigned to.</p>",
"UpdateUserRequest$ServerId": "<p>A system-assigned unique identifier for a Transfer Family server instance that the user is assigned to.</p>",
"UpdateUserResponse$ServerId": "<p>A system-assigned unique identifier for a Transfer Family server instance that the account is assigned to.</p>",
"UserDetails$ServerId": "<p>The system-assigned unique identifier for a Transfer server instance. </p>"
}
},
"ServiceErrorMessage": {
"base": null,
"refs": {
"AccessDeniedException$Message": null,
"ServiceUnavailableException$Message": null
}
},
"ServiceManagedEgressIpAddress": {
"base": null,
"refs": {
"ServiceManagedEgressIpAddresses$member": null
}
},
"ServiceManagedEgressIpAddresses": {
"base": "<p>The list of egress IP addresses of this server. These IP addresses are only relevant for servers that use the AS2 protocol. They are used for sending asynchronous MDNs. These IP addresses are assigned automatically when you create an AS2 server. Additionally, if you update an existing server and add the AS2 protocol, static IP addresses are assigned as well.</p>",
"refs": {
"DescribedConnector$ServiceManagedEgressIpAddresses": "<p>The list of egress IP addresses of this connector. These IP addresses are assigned automatically when you create the connector.</p>",
"DescribedServer$As2ServiceManagedEgressIpAddresses": "<p>The list of egress IP addresses of this server. These IP addresses are only relevant for servers that use the AS2 protocol. They are used for sending asynchronous MDNs.</p> <p>These IP addresses are assigned automatically when you create an AS2 server. Additionally, if you update an existing server and add the AS2 protocol, static IP addresses are assigned as well.</p>"
}
},
"ServiceMetadata": {
"base": "<p>A container object for the session details that are associated with a workflow.</p>",
"refs": {
"DescribedExecution$ServiceMetadata": "<p>A container object for the session details that are associated with a workflow.</p>",
"ListedExecution$ServiceMetadata": "<p>A container object for the session details that are associated with a workflow.</p>"
}
},
"ServiceUnavailableException": {
"base": "<p>The request has failed because the Amazon Web ServicesTransfer Family service is not available.</p>",
"refs": {
}
},
"SessionId": {
"base": null,
"refs": {
"UserDetails$SessionId": "<p>The system-assigned unique identifier for a session that corresponds to the workflow.</p>"
}
},
"SetStatOption": {
"base": null,
"refs": {
"ProtocolDetails$SetStatOption": "<p>Use the <code>SetStatOption</code> to ignore the error that is generated when the client attempts to use <code>SETSTAT</code> on a file you are uploading to an S3 bucket.</p> <p>Some SFTP file transfer clients can attempt to change the attributes of remote files, including timestamp and permissions, using commands, such as <code>SETSTAT</code> when uploading the file. However, these commands are not compatible with object storage systems, such as Amazon S3. Due to this incompatibility, file uploads from these clients can result in errors even when the file is otherwise successfully uploaded.</p> <p>Set the value to <code>ENABLE_NO_OP</code> to have the Transfer Family server ignore the <code>SETSTAT</code> command, and upload files without needing to make any changes to your SFTP client. While the <code>SetStatOption</code> <code>ENABLE_NO_OP</code> setting ignores the error, it does generate a log entry in Amazon CloudWatch Logs, so you can determine when the client is making a <code>SETSTAT</code> call.</p> <note> <p>If you want to preserve the original timestamp for your file, and modify other file attributes using <code>SETSTAT</code>, you can use Amazon EFS as backend storage with Transfer Family.</p> </note>"
}
},
"SftpAuthenticationMethods": {
"base": null,
"refs": {
"IdentityProviderDetails$SftpAuthenticationMethods": "<p>For SFTP-enabled servers, and for custom identity providers <i>only</i>, you can specify whether to authenticate using a password, SSH key pair, or both.</p> <ul> <li> <p> <code>PASSWORD</code> - users must provide their password to connect.</p> </li> <li> <p> <code>PUBLIC_KEY</code> - users must provide their private key to connect.</p> </li> <li> <p> <code>PUBLIC_KEY_OR_PASSWORD</code> - users can authenticate with either their password or their key. This is the default value.</p> </li> <li> <p> <code>PUBLIC_KEY_AND_PASSWORD</code> - users must provide both their private key and their password to connect. The server checks the key first, and then if the key is valid, the system prompts for a password. If the private key provided does not match the public key that is stored, authentication fails.</p> </li> </ul>"
}
},
"SftpConnectorConfig": {
"base": "<p>Contains the details for an SFTP connector object. The connector object is used for transferring files to and from a partner's SFTP server.</p> <note> <p>Because the <code>SftpConnectorConfig</code> data type is used for both creating and updating SFTP connectors, its parameters, <code>TrustedHostKeys</code> and <code>UserSecretId</code> are marked as not required. This is a bit misleading, as they are not required when you are updating an existing SFTP connector, but <i>are required</i> when you are creating a new SFTP connector.</p> </note>",
"refs": {
"CreateConnectorRequest$SftpConfig": "<p>A structure that contains the parameters for an SFTP connector object.</p>",
"DescribedConnector$SftpConfig": "<p>A structure that contains the parameters for an SFTP connector object.</p>",
"UpdateConnectorRequest$SftpConfig": "<p>A structure that contains the parameters for an SFTP connector object.</p>"
}
},
"SftpConnectorTrustedHostKey": {
"base": null,
"refs": {
"SftpConnectorTrustedHostKeyList$member": null
}
},
"SftpConnectorTrustedHostKeyList": {
"base": null,
"refs": {
"SftpConnectorConfig$TrustedHostKeys": "<p>The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. You can use the <code>ssh-keyscan</code> command against the SFTP server to retrieve the necessary key.</p> <p>The three standard SSH public key format elements are <code><key type></code>, <code><body base64></code>, and an optional <code><comment></code>, with spaces between each element. Specify only the <code><key type></code> and <code><body base64></code>: do not enter the <code><comment></code> portion of the key.</p> <p>For the trusted host key, Transfer Family accepts RSA and ECDSA keys.</p> <ul> <li> <p>For RSA keys, the <code><key type></code> string is <code>ssh-rsa</code>.</p> </li> <li> <p>For ECDSA keys, the <code><key type></code> string is either <code>ecdsa-sha2-nistp256</code>, <code>ecdsa-sha2-nistp384</code>, or <code>ecdsa-sha2-nistp521</code>, depending on the size of the key you generated.</p> </li> </ul> <p>Run this command to retrieve the SFTP server host key, where your SFTP server name is <code>ftp.host.com</code>.</p> <p> <code>ssh-keyscan ftp.host.com</code> </p> <p>This prints the public host key to standard output.</p> <p> <code>ftp.host.com ssh-rsa AAAAB3Nza...<long-string-for-public-key</code> </p> <p>Copy and paste this string into the <code>TrustedHostKeys</code> field for the <code>create-connector</code> command or into the <b>Trusted host keys</b> field in the console.</p>"
}
},
"SigningAlg": {
"base": null,
"refs": {
"As2ConnectorConfig$SigningAlgorithm": "<p>The algorithm that is used to sign the AS2 messages sent with the connector.</p>"
}
},
"SourceFileLocation": {
"base": null,
"refs": {
"CopyStepDetails$SourceFileLocation": "<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>To use the previous file as the input, enter <code>${previous.file}</code>. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>To use the originally uploaded file location as input for this step, enter <code>${original.file}</code>.</p> </li> </ul>",
"CustomStepDetails$SourceFileLocation": "<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>To use the previous file as the input, enter <code>${previous.file}</code>. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>To use the originally uploaded file location as input for this step, enter <code>${original.file}</code>.</p> </li> </ul>",
"DecryptStepDetails$SourceFileLocation": "<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>To use the previous file as the input, enter <code>${previous.file}</code>. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>To use the originally uploaded file location as input for this step, enter <code>${original.file}</code>.</p> </li> </ul>",
"DeleteStepDetails$SourceFileLocation": "<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>To use the previous file as the input, enter <code>${previous.file}</code>. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>To use the originally uploaded file location as input for this step, enter <code>${original.file}</code>.</p> </li> </ul>",
"TagStepDetails$SourceFileLocation": "<p>Specifies which file to use as input to the workflow step: either the output from the previous step, or the originally uploaded file for the workflow.</p> <ul> <li> <p>To use the previous file as the input, enter <code>${previous.file}</code>. In this case, this workflow step uses the output file from the previous workflow step as input. This is the default value.</p> </li> <li> <p>To use the originally uploaded file location as input for this step, enter <code>${original.file}</code>.</p> </li> </ul>"
}
},
"SourceIp": {
"base": null,
"refs": {
"TestIdentityProviderRequest$SourceIp": "<p>The source IP address of the account to be tested.</p>"
}
},
"SshPublicKey": {
"base": "<p>Provides information about the public Secure Shell (SSH) key that is associated with a Transfer Family user for the specific file transfer protocol-enabled server (as identified by <code>ServerId</code>). The information returned includes the date the key was imported, the public key contents, and the public key ID. A user can store more than one SSH public key associated with their user name on a specific server.</p>",
"refs": {
"SshPublicKeys$member": null
}
},
"SshPublicKeyBody": {
"base": null,
"refs": {
        "CreateUserRequest$SshPublicKeyBody": "<p>The public portion of the Secure Shell (SSH) key used to authenticate the user to the server.</p> <p>The three standard SSH public key format elements are <code>&lt;key type&gt;</code>, <code>&lt;body base64&gt;</code>, and an optional <code>&lt;comment&gt;</code>, with spaces between each element.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p> <ul> <li> <p>For RSA keys, the key type is <code>ssh-rsa</code>.</p> </li> <li> <p>For ED25519 keys, the key type is <code>ssh-ed25519</code>.</p> </li> <li> <p>For ECDSA keys, the key type is either <code>ecdsa-sha2-nistp256</code>, <code>ecdsa-sha2-nistp384</code>, or <code>ecdsa-sha2-nistp521</code>, depending on the size of the key you generated.</p> </li> </ul>",
"ImportSshPublicKeyRequest$SshPublicKeyBody": "<p>The public key portion of an SSH key pair.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>",
"SshPublicKey$SshPublicKeyBody": "<p>Specifies the content of the SSH public key as specified by the <code>PublicKeyId</code>.</p> <p>Transfer Family accepts RSA, ECDSA, and ED25519 keys.</p>"
}
},
"SshPublicKeyCount": {
"base": null,
"refs": {
"ListedUser$SshPublicKeyCount": "<p>Specifies the number of SSH public keys stored for the user you specified.</p>"
}
},
"SshPublicKeyId": {
"base": null,
"refs": {
"DeleteSshPublicKeyRequest$SshPublicKeyId": "<p>A unique identifier used to reference your user's specific SSH key.</p>",
        "ImportSshPublicKeyResponse$SshPublicKeyId": "<p>The name given by the system to the public key that was imported.</p>",
        "SshPublicKey$SshPublicKeyId": "<p>Specifies the <code>SshPublicKeyId</code> parameter, which contains the identifier of the public key.</p>"
}
},
"SshPublicKeys": {
"base": null,
"refs": {
"DescribedUser$SshPublicKeys": "<p>Specifies the public key portion of the Secure Shell (SSH) keys stored for the described user.</p>"
}
},
"StartDirectoryListingRequest": {
"base": null,
"refs": {
}
},
"StartDirectoryListingResponse": {
"base": null,
"refs": {
}
},
"StartFileTransferRequest": {
"base": null,
"refs": {
}
},
"StartFileTransferResponse": {
"base": null,
"refs": {
}
},
"StartServerRequest": {
"base": null,
"refs": {
}
},
"State": {
      "base": "<p>Describes the condition of a file transfer protocol-enabled server with respect to its ability to perform file operations. There are six possible states: <code>OFFLINE</code>, <code>ONLINE</code>, <code>STARTING</code>, <code>STOPPING</code>, <code>START_FAILED</code>, and <code>STOP_FAILED</code>.</p> <p> <code>OFFLINE</code> indicates that the server exists, but that it is not available for file operations. <code>ONLINE</code> indicates that the server is available to perform file operations. <code>STARTING</code> indicates that the server was instantiated, but is not yet available to perform file operations. Under normal conditions, it can take a couple of minutes for the server to be completely operational. Both <code>START_FAILED</code> and <code>STOP_FAILED</code> are error conditions.</p>",
"refs": {
"DescribedServer$State": "<p>The condition of the server that was described. A value of <code>ONLINE</code> indicates that the server can accept jobs and transfer files. A <code>State</code> value of <code>OFFLINE</code> means that the server cannot perform file transfer operations.</p> <p>The states of <code>STARTING</code> and <code>STOPPING</code> indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>START_FAILED</code> or <code>STOP_FAILED</code> can indicate an error condition.</p>",
"ListedServer$State": "<p>The condition of the server that was described. A value of <code>ONLINE</code> indicates that the server can accept jobs and transfer files. A <code>State</code> value of <code>OFFLINE</code> means that the server cannot perform file transfer operations.</p> <p>The states of <code>STARTING</code> and <code>STOPPING</code> indicate that the server is in an intermediate state, either not fully able to respond, or not fully offline. The values of <code>START_FAILED</code> or <code>STOP_FAILED</code> can indicate an error condition.</p>"
}
},
"Status": {
"base": null,
"refs": {
        "TestConnectionResponse$Status": "<p>Returns <code>OK</code> for a successful test, or <code>ERROR</code> if the test fails.</p>"
}
},
"StatusCode": {
"base": null,
"refs": {
"TestIdentityProviderResponse$StatusCode": "<p>The HTTP status code that is the response from your API Gateway or your Lambda function.</p>"
}
},
"StepResultOutputsJson": {
"base": null,
"refs": {
"ExecutionStepResult$Outputs": "<p>The values for the key/value pair applied as a tag to the file. Only applicable if the step type is <code>TAG</code>.</p>"
}
},
"StopServerRequest": {
"base": null,
"refs": {
}
},
"StructuredLogDestinations": {
"base": null,
"refs": {
"CreateServerRequest$StructuredLogDestinations": "<p>Specifies the log groups to which your server logs are sent.</p> <p>To specify a log group, you must provide the ARN for an existing log group. In this case, the format of the log group is as follows:</p> <p> <code>arn:aws:logs:region-name:amazon-account-id:log-group:log-group-name:*</code> </p> <p>For example, <code>arn:aws:logs:us-east-1:111122223333:log-group:mytestgroup:*</code> </p> <p>If you have previously specified a log group for a server, you can clear it, and in effect turn off structured logging, by providing an empty value for this parameter in an <code>update-server</code> call. For example:</p> <p> <code>update-server --server-id s-1234567890abcdef0 --structured-log-destinations</code> </p>",
"DescribedServer$StructuredLogDestinations": "<p>Specifies the log groups to which your server logs are sent.</p> <p>To specify a log group, you must provide the ARN for an existing log group. In this case, the format of the log group is as follows:</p> <p> <code>arn:aws:logs:region-name:amazon-account-id:log-group:log-group-name:*</code> </p> <p>For example, <code>arn:aws:logs:us-east-1:111122223333:log-group:mytestgroup:*</code> </p> <p>If you have previously specified a log group for a server, you can clear it, and in effect turn off structured logging, by providing an empty value for this parameter in an <code>update-server</code> call. For example:</p> <p> <code>update-server --server-id s-1234567890abcdef0 --structured-log-destinations</code> </p>",
"UpdateServerRequest$StructuredLogDestinations": "<p>Specifies the log groups to which your server logs are sent.</p> <p>To specify a log group, you must provide the ARN for an existing log group. In this case, the format of the log group is as follows:</p> <p> <code>arn:aws:logs:region-name:amazon-account-id:log-group:log-group-name:*</code> </p> <p>For example, <code>arn:aws:logs:us-east-1:111122223333:log-group:mytestgroup:*</code> </p> <p>If you have previously specified a log group for a server, you can clear it, and in effect turn off structured logging, by providing an empty value for this parameter in an <code>update-server</code> call. For example:</p> <p> <code>update-server --server-id s-1234567890abcdef0 --structured-log-destinations</code> </p>"
}
},
"SubnetId": {
"base": null,
"refs": {
"SubnetIds$member": null
}
},
"SubnetIds": {
"base": null,
"refs": {
"EndpointDetails$SubnetIds": "<p>A list of subnet IDs that are required to host your server endpoint in your VPC.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> </note>"
}
},
"Tag": {
"base": "<p>Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called <code>Group</code> and assign the values <code>Research</code> and <code>Accounting</code> to that group.</p>",
"refs": {
"Tags$member": null
}
},
"TagKey": {
"base": null,
"refs": {
"Tag$Key": "<p>The name assigned to the tag that you create.</p>",
"TagKeys$member": null
}
},
"TagKeys": {
"base": null,
"refs": {
        "UntagResourceRequest$TagKeys": "<p>TagKeys are the keys of the key-value pairs assigned to ARNs that can be used to group and search for resources by type. This metadata can be attached to resources for any purpose.</p>"
}
},
"TagResourceRequest": {
"base": null,
"refs": {
}
},
"TagStepDetails": {
"base": "<p>Each step type has its own <code>StepDetails</code> structure.</p> <p>The key/value pairs used to tag a file during the execution of a workflow step.</p>",
"refs": {
"WorkflowStep$TagStepDetails": "<p>Details for a step that creates one or more tags.</p> <p>You specify one or more tags. Each tag contains a key-value pair.</p>"
}
},
"TagValue": {
"base": null,
"refs": {
        "Tag$Value": "<p>Contains one or more values that you assigned to the key name that you created.</p>"
}
},
"Tags": {
"base": null,
"refs": {
"CreateAgreementRequest$Tags": "<p>Key-value pairs that can be used to group and search for agreements.</p>",
"CreateConnectorRequest$Tags": "<p>Key-value pairs that can be used to group and search for connectors. Tags are metadata attached to connectors for any purpose.</p>",
"CreateProfileRequest$Tags": "<p>Key-value pairs that can be used to group and search for AS2 profiles.</p>",
"CreateServerRequest$Tags": "<p>Key-value pairs that can be used to group and search for servers.</p>",
"CreateUserRequest$Tags": "<p>Key-value pairs that can be used to group and search for users. Tags are metadata attached to users for any purpose.</p>",
"CreateWorkflowRequest$Tags": "<p>Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.</p>",
"DescribedAgreement$Tags": "<p>Key-value pairs that can be used to group and search for agreements.</p>",
"DescribedCertificate$Tags": "<p>Key-value pairs that can be used to group and search for certificates.</p>",
"DescribedConnector$Tags": "<p>Key-value pairs that can be used to group and search for connectors.</p>",
"DescribedHostKey$Tags": "<p>Key-value pairs that can be used to group and search for host keys.</p>",
"DescribedProfile$Tags": "<p>Key-value pairs that can be used to group and search for profiles.</p>",
        "DescribedServer$Tags": "<p>Specifies the key-value pairs that you can use to search for and group servers. These tags are assigned to the server that was described.</p>",
        "DescribedUser$Tags": "<p>Specifies the key-value pairs for the user requested. Tags can be used to search for and group users for a variety of purposes.</p>",
"DescribedWorkflow$Tags": "<p>Key-value pairs that can be used to group and search for workflows. Tags are metadata attached to workflows for any purpose.</p>",
"ImportCertificateRequest$Tags": "<p>Key-value pairs that can be used to group and search for certificates.</p>",
"ImportHostKeyRequest$Tags": "<p>Key-value pairs that can be used to group and search for host keys.</p>",
"ListTagsForResourceResponse$Tags": "<p>Key-value pairs that are assigned to a resource, usually for the purpose of grouping and searching for items. Tags are metadata that you define.</p>",
"TagResourceRequest$Tags": "<p>Key-value pairs assigned to ARNs that you can use to group and search for resources by type. You can attach this metadata to resources (servers, users, workflows, and so on) for any purpose.</p>"
}
},
"TestConnectionRequest": {
"base": null,
"refs": {
}
},
"TestConnectionResponse": {
"base": null,
"refs": {
}
},
"TestIdentityProviderRequest": {
"base": null,
"refs": {
}
},
"TestIdentityProviderResponse": {
"base": null,
"refs": {
}
},
"ThrottlingException": {
"base": "<p>The request was denied due to request throttling.</p>",
"refs": {
}
},
"TlsSessionResumptionMode": {
"base": null,
"refs": {
"ProtocolDetails$TlsSessionResumptionMode": "<p>A property used with Transfer Family servers that use the FTPS protocol. TLS Session Resumption provides a mechanism to resume or share a negotiated secret key between the control and data connection for an FTPS session. <code>TlsSessionResumptionMode</code> determines whether or not the server resumes recent, negotiated sessions through a unique session ID. This property is available during <code>CreateServer</code> and <code>UpdateServer</code> calls. If a <code>TlsSessionResumptionMode</code> value is not specified during <code>CreateServer</code>, it is set to <code>ENFORCED</code> by default.</p> <ul> <li> <p> <code>DISABLED</code>: the server does not process TLS session resumption client requests and creates a new TLS session for each request. </p> </li> <li> <p> <code>ENABLED</code>: the server processes and accepts clients that are performing TLS session resumption. The server doesn't reject client data connections that do not perform the TLS session resumption client processing.</p> </li> <li> <p> <code>ENFORCED</code>: the server processes and accepts clients that are performing TLS session resumption. The server rejects client data connections that do not perform the TLS session resumption client processing. Before you set the value to <code>ENFORCED</code>, test your clients.</p> <note> <p>Not all FTPS clients perform TLS session resumption. So, if you choose to enforce TLS session resumption, you prevent any connections from FTPS clients that don't perform the protocol negotiation. To determine whether or not you can use the <code>ENFORCED</code> value, you need to test your clients.</p> </note> </li> </ul>"
}
},
"TransferId": {
"base": null,
"refs": {
"StartFileTransferResponse$TransferId": "<p>Returns the unique identifier for the file transfer.</p>"
}
},
"UntagResourceRequest": {
"base": null,
"refs": {
}
},
"UpdateAccessRequest": {
"base": null,
"refs": {
}
},
"UpdateAccessResponse": {
"base": null,
"refs": {
}
},
"UpdateAgreementRequest": {
"base": null,
"refs": {
}
},
"UpdateAgreementResponse": {
"base": null,
"refs": {
}
},
"UpdateCertificateRequest": {
"base": null,
"refs": {
}
},
"UpdateCertificateResponse": {
"base": null,
"refs": {
}
},
"UpdateConnectorRequest": {
"base": null,
"refs": {
}
},
"UpdateConnectorResponse": {
"base": null,
"refs": {
}
},
"UpdateHostKeyRequest": {
"base": null,
"refs": {
}
},
"UpdateHostKeyResponse": {
"base": null,
"refs": {
}
},
"UpdateProfileRequest": {
"base": null,
"refs": {
}
},
"UpdateProfileResponse": {
"base": null,
"refs": {
}
},
"UpdateServerRequest": {
"base": null,
"refs": {
}
},
"UpdateServerResponse": {
"base": null,
"refs": {
}
},
"UpdateUserRequest": {
"base": null,
"refs": {
}
},
"UpdateUserResponse": {
"base": "<p> <code>UpdateUserResponse</code> returns the user name and identifier for the request to update a user's properties.</p>",
"refs": {
}
},
"Url": {
"base": null,
"refs": {
"CreateConnectorRequest$Url": "<p>The URL of the partner's AS2 or SFTP endpoint.</p>",
"DescribedConnector$Url": "<p>The URL of the partner's AS2 or SFTP endpoint.</p>",
"IdentityProviderDetails$Url": "<p>Provides the location of the service endpoint used to authenticate users.</p>",
"ListedConnector$Url": "<p>The URL of the partner's AS2 or SFTP endpoint.</p>",
"TestIdentityProviderResponse$Url": "<p>The endpoint of the service used to authenticate a user.</p>",
"UpdateConnectorRequest$Url": "<p>The URL of the partner's AS2 or SFTP endpoint.</p>"
}
},
"UserCount": {
"base": null,
"refs": {
"DescribedServer$UserCount": "<p>Specifies the number of users that are assigned to a server you specified with the <code>ServerId</code>.</p>",
"ListedServer$UserCount": "<p>Specifies the number of users that are assigned to a server you specified with the <code>ServerId</code>.</p>"
}
},
"UserDetails": {
"base": "<p>Specifies the user name, server ID, and session ID for a workflow.</p>",
"refs": {
        "ServiceMetadata$UserDetails": "<p>The Server ID (<code>ServerId</code>), Session ID (<code>SessionId</code>), and user (<code>UserName</code>) make up the <code>UserDetails</code>.</p>"
}
},
"UserName": {
"base": null,
"refs": {
"CreateUserRequest$UserName": "<p>A unique string that identifies a user and is associated with a <code>ServerId</code>. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.</p>",
"CreateUserResponse$UserName": "<p>A unique string that identifies a Transfer Family user.</p>",
"DeleteSshPublicKeyRequest$UserName": "<p>A unique string that identifies a user whose public key is being deleted.</p>",
"DeleteUserRequest$UserName": "<p>A unique string that identifies a user that is being deleted from a server.</p>",
"DescribeUserRequest$UserName": "<p>The name of the user assigned to one or more servers. User names are part of the sign-in credentials to use the Transfer Family service and perform file transfer tasks.</p>",
"DescribedUser$UserName": "<p>Specifies the name of the user that was requested to be described. User names are used for authentication purposes. This is the string that will be used by your user when they log in to your server.</p>",
"ImportSshPublicKeyRequest$UserName": "<p>The name of the Transfer Family user that is assigned to one or more servers.</p>",
        "ImportSshPublicKeyResponse$UserName": "<p>A user name assigned to the <code>ServerId</code> value that you specified.</p>",
"ListedUser$UserName": "<p>Specifies the name of the user whose ARN was specified. User names are used for authentication purposes.</p>",
"TestIdentityProviderRequest$UserName": "<p>The name of the account to be tested.</p>",
"UpdateUserRequest$UserName": "<p>A unique string that identifies a user and is associated with a server as specified by the <code>ServerId</code>. This user name must be a minimum of 3 and a maximum of 100 characters long. The following are valid characters: a-z, A-Z, 0-9, underscore '_', hyphen '-', period '.', and at sign '@'. The user name can't start with a hyphen, period, or at sign.</p>",
"UpdateUserResponse$UserName": "<p>The unique identifier for a user that is assigned to a server instance that was specified in the request.</p>",
"UserDetails$UserName": "<p>A unique string that identifies a Transfer Family user associated with a server.</p>"
}
},
"UserPassword": {
"base": null,
"refs": {
"TestIdentityProviderRequest$UserPassword": "<p>The password of the account to be tested.</p>"
}
},
"VpcEndpointId": {
"base": null,
"refs": {
"EndpointDetails$VpcEndpointId": "<p>The identifier of the VPC endpoint.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC_ENDPOINT</code>.</p> <p>For more information, see https://docs.aws.amazon.com/transfer/latest/userguide/create-server-in-vpc.html#deprecate-vpc-endpoint.</p> </note>"
}
},
"VpcId": {
"base": null,
"refs": {
"EndpointDetails$VpcId": "<p>The VPC identifier of the VPC in which a server's endpoint will be hosted.</p> <note> <p>This property can only be set when <code>EndpointType</code> is set to <code>VPC</code>.</p> </note>"
}
},
"WorkflowDescription": {
"base": null,
"refs": {
"CreateWorkflowRequest$Description": "<p>A textual description for the workflow.</p>",
"DescribedWorkflow$Description": "<p>Specifies the text description for the workflow.</p>",
"ListedWorkflow$Description": "<p>Specifies the text description for the workflow.</p>"
}
},
"WorkflowDetail": {
"base": "<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when the server session disconnects while the file is still being uploaded.</p>",
"refs": {
"OnPartialUploadWorkflowDetails$member": null,
"OnUploadWorkflowDetails$member": null
}
},
"WorkflowDetails": {
"base": "<p>Container for the <code>WorkflowDetail</code> data type. It is used by actions that trigger a workflow to begin execution.</p>",
"refs": {
"CreateServerRequest$WorkflowDetails": "<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when the server session disconnects while the file is still being uploaded.</p>",
"DescribedServer$WorkflowDetails": "<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when the server session disconnects while the file is still being uploaded.</p>",
"UpdateServerRequest$WorkflowDetails": "<p>Specifies the workflow ID for the workflow to assign and the execution role that's used for executing the workflow.</p> <p>In addition to a workflow to execute when a file is uploaded completely, <code>WorkflowDetails</code> can also contain a workflow ID (and execution role) for a workflow to execute on partial upload. A partial upload occurs when the server session disconnects while the file is still being uploaded.</p> <p>To remove an associated workflow from a server, you can provide an empty <code>OnUpload</code> object, as in the following example.</p> <p> <code>aws transfer update-server --server-id s-01234567890abcdef --workflow-details '{\"OnUpload\":[]}'</code> </p>"
}
},
"WorkflowId": {
"base": null,
"refs": {
"CreateWorkflowResponse$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"DeleteWorkflowRequest$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"DescribeExecutionRequest$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"DescribeExecutionResponse$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"DescribeWorkflowRequest$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"DescribedWorkflow$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"ListExecutionsRequest$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"ListExecutionsResponse$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"ListedWorkflow$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"SendWorkflowStepStateRequest$WorkflowId": "<p>A unique identifier for the workflow.</p>",
"WorkflowDetail$WorkflowId": "<p>A unique identifier for the workflow.</p>"
}
},
"WorkflowStep": {
"base": "<p>The basic building block of a workflow.</p>",
"refs": {
"WorkflowSteps$member": null
}
},
"WorkflowStepName": {
"base": null,
"refs": {
"CopyStepDetails$Name": "<p>The name of the step, used as an identifier.</p>",
"CustomStepDetails$Name": "<p>The name of the step, used as an identifier.</p>",
"DecryptStepDetails$Name": "<p>The name of the step, used as an identifier.</p>",
"DeleteStepDetails$Name": "<p>The name of the step, used as an identifier.</p>",
"TagStepDetails$Name": "<p>The name of the step, used as an identifier.</p>"
}
},
"WorkflowStepType": {
"base": null,
"refs": {
        "ExecutionStepResult$StepType": "<p>One of the available step types.</p> <ul> <li> <p> <b> <code>COPY</code> </b> - Copy the file to another location.</p> </li> <li> <p> <b> <code>CUSTOM</code> </b> - Perform a custom step with a Lambda function target.</p> </li> <li> <p> <b> <code>DECRYPT</code> </b> - Decrypt a file that was encrypted before it was uploaded.</p> </li> <li> <p> <b> <code>DELETE</code> </b> - Delete the file.</p> </li> <li> <p> <b> <code>TAG</code> </b> - Add a tag to the file.</p> </li> </ul>",
        "WorkflowStep$Type": "<p> Currently, the following step types are supported. </p> <ul> <li> <p> <b> <code>COPY</code> </b> - Copy the file to another location.</p> </li> <li> <p> <b> <code>CUSTOM</code> </b> - Perform a custom step with a Lambda function target.</p> </li> <li> <p> <b> <code>DECRYPT</code> </b> - Decrypt a file that was encrypted before it was uploaded.</p> </li> <li> <p> <b> <code>DELETE</code> </b> - Delete the file.</p> </li> <li> <p> <b> <code>TAG</code> </b> - Add a tag to the file.</p> </li> </ul>"
}
},
"WorkflowSteps": {
"base": null,
"refs": {
        "CreateWorkflowRequest$Steps": "<p>Specifies the details for the steps that are in the specified workflow.</p> <p> The <code>TYPE</code> specifies which of the following actions is being taken for this step. </p> <ul> <li> <p> <b> <code>COPY</code> </b> - Copy the file to another location.</p> </li> <li> <p> <b> <code>CUSTOM</code> </b> - Perform a custom step with a Lambda function target.</p> </li> <li> <p> <b> <code>DECRYPT</code> </b> - Decrypt a file that was encrypted before it was uploaded.</p> </li> <li> <p> <b> <code>DELETE</code> </b> - Delete the file.</p> </li> <li> <p> <b> <code>TAG</code> </b> - Add a tag to the file.</p> </li> </ul> <note> <p> Currently, copying and tagging are supported only on S3. </p> </note> <p> For file location, you specify either the Amazon S3 bucket and key, or the Amazon EFS file system ID and path. </p>",
        "CreateWorkflowRequest$OnExceptionSteps": "<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p> <note> <p>For custom steps, the Lambda function needs to send <code>FAILURE</code> to the callback API to kick off the exception steps. Additionally, if the Lambda does not send <code>SUCCESS</code> before it times out, the exception steps are executed.</p> </note>",
"DescribedWorkflow$Steps": "<p>Specifies the details for the steps that are in the specified workflow.</p>",
"DescribedWorkflow$OnExceptionSteps": "<p>Specifies the steps (actions) to take if errors are encountered during execution of the workflow.</p>"
}
}
}
}