docs: Update wording in API docs
Anthony Oliveri committed Feb 12, 2019
1 parent 8d8d5f2 commit 2c64dbb
Showing 8 changed files with 52 additions and 46 deletions.
25 changes: 12 additions & 13 deletions Source/CompareComplyV1/CompareComply.swift
@@ -124,7 +124,7 @@ public class CompareComply {
  /**
  Convert file to HTML.

- Uploads an input file. The response includes an HTML version of the document.
+ Convert an uploaded file to HTML.

  - parameter file: The file to convert.
  - parameter modelID: The analysis model to be used by the service. For the `/v1/element_classification` and
@@ -191,7 +191,7 @@ public class CompareComply {
  /**
  Classify the elements of a document.

- Uploads a file. The response includes an analysis of the document's structural and semantic elements.
+ Analyze an uploaded file's structural and semantic elements.

  - parameter file: The file to classify.
  - parameter modelID: The analysis model to be used by the service. For the `/v1/element_classification` and
@@ -258,7 +258,7 @@ public class CompareComply {
  /**
  Extract a document's tables.

- Uploads a file. The response includes an analysis of the document's tables.
+ Extract and analyze an uploaded file's tables.

  - parameter file: The file on which to run table extraction.
  - parameter modelID: The analysis model to be used by the service. For the `/v1/element_classification` and
@@ -325,8 +325,7 @@ public class CompareComply {
  /**
  Compare two documents.

- Uploads two input files. The response includes JSON comparing the two documents. Uploaded files must be in the same
- file format.
+ Compare two uploaded input files. Uploaded files must be in the same file format.

  - parameter file1: The first file to compare.
  - parameter file2: The second file to compare.
@@ -632,7 +631,7 @@ public class CompareComply {
  /**
  List a specified feedback entry.

- - parameter feedbackID: An string that specifies the feedback entry to be included in the output.
+ - parameter feedbackID: A string that specifies the feedback entry to be included in the output.
  - parameter modelID: The analysis model to be used by the service. For the `/v1/element_classification` and
  `/v1/comparison` methods, the default is `contracts`. For the `/v1/tables` method, the default is `tables`. These
  defaults apply to the standalone methods as well as to the methods' use in batch-processing requests.
@@ -685,7 +684,7 @@ public class CompareComply {
  /**
  Deletes a specified feedback entry.

- - parameter feedbackID: An string that specifies the feedback entry to be deleted from the document.
+ - parameter feedbackID: A string that specifies the feedback entry to be deleted from the document.
  - parameter modelID: The analysis model to be used by the service. For the `/v1/element_classification` and
  `/v1/comparison` methods, the default is `contracts`. For the `/v1/tables` method, the default is `tables`. These
  defaults apply to the standalone methods as well as to the methods' use in batch-processing requests.
@@ -842,9 +841,9 @@ public class CompareComply {
  }

  /**
- Gets the list of submitted batch-processing jobs.
+ List submitted batch-processing jobs.

- Gets the list of batch-processing jobs submitted by users.
+ List the batch-processing jobs submitted by users.

  - parameter headers: A dictionary of request headers to be sent with this request.
  - parameter completionHandler: A function executed when the request completes with a successful result or error
@@ -882,9 +881,9 @@ public class CompareComply {
  }

  /**
- Gets information about a specific batch-processing request.
+ Get information about a specific batch-processing request.

- Gets information about a batch-processing request with a specified ID.
+ Get information about a batch-processing request with a specified ID.

  - parameter batchID: The ID of the batch-processing request whose information you want to retrieve.
  - parameter headers: A dictionary of request headers to be sent with this request.
@@ -929,9 +928,9 @@ public class CompareComply {
  }

  /**
- Updates a pending or active batch-processing request.
+ Update a pending or active batch-processing request.

- Updates a pending or active batch-processing request. You can rescan the input bucket to check for new documents or
+ Update a pending or active batch-processing request. You can rescan the input bucket to check for new documents or
  cancel a request.

  - parameter batchID: The ID of the batch-processing request you want to update.
4 changes: 2 additions & 2 deletions Source/CompareComplyV1/Models/Attribute.swift
@@ -22,7 +22,7 @@ import Foundation
  public struct Attribute: Codable, Equatable {

  /**
- The type of attribute. Possible values are `Currency`, `DateTime`, `Location`, `Organization`, and `Person`.
+ The type of attribute.
  */
  public enum TypeEnum: String {
  case currency = "Currency"
@@ -33,7 +33,7 @@ public struct Attribute: Codable, Equatable {
  }

  /**
- The type of attribute. Possible values are `Currency`, `DateTime`, `Location`, `Organization`, and `Person`.
+ The type of attribute.
  */
  public var type: String?

3 changes: 2 additions & 1 deletion Source/CompareComplyV1/Models/Document.swift
@@ -37,7 +37,8 @@ public struct Document: Codable, Equatable {
  public var hash: String?

  /**
- The label applied to the input document with the calling method's `file1_label` or `file2_label` value.
+ The label applied to the input document with the calling method's `file_1_label` or `file_2_label` value. This
+ field is specified only in the output of the **Comparing two documents** method.
  */
  public var label: String?

2 changes: 1 addition & 1 deletion Source/CompareComplyV1/Models/LeadingSentence.swift
@@ -33,7 +33,7 @@ public struct LeadingSentence: Codable, Equatable {
  public var location: Location?

  /**
- An array of `location` objects listing the locations of detected leading sentences.
+ An array of `location` objects that lists the locations of detected leading sentences.
  */
  public var elementLocations: [ElementLocations]?

2 changes: 1 addition & 1 deletion Source/CompareComplyV1/Models/SectionTitles.swift
@@ -41,7 +41,7 @@ public struct SectionTitles: Codable, Equatable {
  public var level: Int?

  /**
- An array of `location` objects listing the locations of detected leading sentences.
+ An array of `location` objects that lists the locations of detected section titles.
  */
  public var elementLocations: [ElementLocations]?

15 changes: 9 additions & 6 deletions Source/SpeechToTextV1/SpeechToText.swift
@@ -20,7 +20,7 @@ import RestKit

  /**
  The IBM® Speech to Text service provides APIs that use IBM's speech-recognition capabilities to produce transcripts
- of spoken audio. The service can transcribe speech from various languages and audio formats. It addition to basic
+ of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic
  transcription, the service can produce detailed information about many different aspects of the audio. For most
  languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in
  the UTF-8 character set.
@@ -2625,15 +2625,17 @@ public class SpeechToText {
  use. The service cannot accept subsequent training requests, or requests to add new audio resources, until the
  existing request completes.
  You can use the optional `custom_language_model_id` parameter to specify the GUID of a separately created custom
- language model that is to be used during training. Specify a custom language model if you have verbatim
+ language model that is to be used during training. Train with a custom language model if you have verbatim
  transcriptions of the audio files that you have added to the custom model or you have either corpora (text files)
- or a list of words that are relevant to the contents of the audio files. For more information, see the **Create a
- custom language model** method.
+ or a list of words that are relevant to the contents of the audio files. Both of the custom models must be based on
+ the same version of the same base model for training to succeed.
  Training can fail to start for the following reasons:
  * The service is currently handling another request for the custom model, such as another training request or a
  request to add audio resources to the model.
  * The custom model contains less than 10 minutes or more than 100 hours of audio data.
  * One or more of the custom model's audio resources is invalid.
+ * You passed an incompatible custom language model with the `custom_language_model_id` query parameter. Both custom
+ models must be based on the same version of the same base model.
  **See also:** [Train the custom acoustic
  model](https://cloud.ibm.com/docs/services/speech-to-text/acoustic-create.html#trainModel).

@@ -2642,7 +2644,8 @@ public class SpeechToText {
  - parameter customLanguageModelID: The customization ID (GUID) of a custom language model that is to be used
  during training of the custom acoustic model. Specify a custom language model that has been trained with verbatim
  transcriptions of the audio resources or that contains words that are relevant to the contents of the audio
- resources.
+ resources. The custom language model must be based on the same version of the same base model as the custom
+ acoustic model. The credentials specified with the request must own both custom models.
  - parameter headers: A dictionary of request headers to be sent with this request.
  - parameter completionHandler: A function executed when the request completes with a successful result or error
  */
@@ -2761,7 +2764,7 @@ public class SpeechToText {
  request. You must make the request with credentials for the instance of the service that owns the custom model.
  - parameter customLanguageModelID: If the custom acoustic model was trained with a custom language model, the
  customization ID (GUID) of that custom language model. The custom language model must be upgraded before the
- custom acoustic model can be upgraded.
+ custom acoustic model can be upgraded. The credentials specified with the request must own both custom models.
  - parameter headers: A dictionary of request headers to be sent with this request.
  - parameter completionHandler: A function executed when the request completes with a successful result or error
  */
9 changes: 4 additions & 5 deletions Source/VisualRecognitionV3/Models/Classifier.swift
@@ -42,8 +42,7 @@ public struct Classifier: Codable, Equatable {
  public var name: String

  /**
- Unique ID of the account who owns the classifier. Returned when verbose=`true`. Might not be returned by some
- requests.
+ Unique ID of the account who owns the classifier. Might not be returned by some requests.
  */
  public var owner: String?

@@ -73,14 +72,14 @@ public struct Classifier: Codable, Equatable {
  public var classes: [Class]?

  /**
- Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Returned when verbose=`true`.
- Might not be returned by some requests. Identical to `updated` and retained for backward compatibility.
+ Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Might not be returned by some
+ requests. Identical to `updated` and retained for backward compatibility.
  */
  public var retrained: Date?

  /**
  Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches
- either `retrained` or `created`. Returned when verbose=`true`. Might not be returned by some requests.
+ either `retrained` or `created`. Might not be returned by some requests.
  */
  public var updated: Date?

38 changes: 21 additions & 17 deletions Source/VisualRecognitionV3/VisualRecognition.swift
@@ -153,21 +153,25 @@ public class VisualRecognition {

  Classify images with built-in or custom classifiers.

- - parameter imagesFile: An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include
- no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they
- contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters.
+ - parameter imagesFile: An image file (.gif, .jpg, .png, .tif) or .zip file with images. Maximum image size is 10
+ MB. Include no more than 20 images and limit the .zip file to 100 MB. Encode the image and .zip file names in
+ UTF-8 if they contain non-ASCII characters. The service assumes UTF-8 encoding if it encounters non-ASCII
+ characters.
  You can also include an image with the **url** parameter.
  - parameter acceptLanguage: The desired language of parts of the response. See the response for details.
- - parameter url: The URL of an image to analyze. Must be in .jpg, or .png format. The minimum recommended pixel
- density is 32X32 pixels per inch, and the maximum image size is 10 MB.
+ - parameter url: The URL of an image (.gif, .jpg, .png, .tif) to analyze. The minimum recommended pixel density
+ is 32X32 pixels, but the service tends to perform better with images that are at least 224 x 224 pixels. The
+ maximum image size is 10 MB.
  You can also include images with the **images_file** parameter.
  - parameter threshold: The minimum score a class must have to be displayed in the response. Set the threshold to
- `0.0` to ignore the classification score and return all values.
- - parameter owners: The categories of classifiers to apply. Use `IBM` to classify against the `default` general
- classifier, and use `me` to classify against your custom classifiers. To analyze the image against both
- classifier categories, set the value to both `IBM` and `me`.
- The built-in `default` classifier is used if both **classifier_ids** and **owners** parameters are empty.
- The **classifier_ids** parameter overrides **owners**, so make sure that **classifier_ids** is empty.
+ `0.0` to return all identified classes.
+ - parameter owners: The categories of classifiers to apply. The **classifier_ids** parameter overrides
+ **owners**, so make sure that **classifier_ids** is empty.
+ - Use `IBM` to classify against the `default` general classifier. You get the same result if both
+ **classifier_ids** and **owners** parameters are empty.
+ - Use `me` to classify against all your custom classifiers. However, for better performance use
+ **classifier_ids** to specify the specific custom classifiers to apply.
+ - Use both `IBM` and `me` to analyze the image against both classifier categories.
  - parameter classifierIDs: Which classifiers to apply. Overrides the **owners** parameter. You can specify both
  custom and built-in classifier IDs. The built-in `default` classifier is used if both **classifier_ids** and
  **owners** parameters are empty.
@@ -269,16 +273,17 @@ public class VisualRecognition {
  built-in model, so no training is necessary. The Detect faces method does not support general biometric facial
  recognition.
  Supported image formats include .gif, .jpg, .png, and .tif. The maximum image size is 10 MB. The minimum
- recommended pixel density is 32X32 pixels per inch.
+ recommended pixel density is 32X32 pixels, but the service tends to perform better with images that are at least
+ 224 x 224 pixels.

  - parameter imagesFile: An image file (gif, .jpg, .png, .tif.) or .zip file with images. Limit the .zip file to
  100 MB. You can include a maximum of 15 images in a request.
  Encode the image and .zip file names in UTF-8 if they contain non-ASCII characters. The service assumes UTF-8
  encoding if it encounters non-ASCII characters.
  You can also include an image with the **url** parameter.
  - parameter url: The URL of an image to analyze. Must be in .gif, .jpg, .png, or .tif format. The minimum
- recommended pixel density is 32X32 pixels per inch, and the maximum image size is 10 MB. Redirects are followed,
- so you can use a shortened URL.
+ recommended pixel density is 32X32 pixels, but the service tends to perform better with images that are at least
+ 224 x 224 pixels. The maximum image size is 10 MB. Redirects are followed, so you can use a shortened URL.
  You can also include images with the **images_file** parameter.
  - parameter acceptLanguage: The desired language of parts of the response. See the response for details.
  - parameter imagesFileContentType: The content type of imagesFile.
@@ -529,9 +534,8 @@ public class VisualRecognition {
  /**
  Update a classifier.

- Update a custom classifier by adding new positive or negative classes (examples) or by adding new images to
- existing classes. You must supply at least one set of positive or negative examples. For details, see [Updating
- custom
+ Update a custom classifier by adding new positive or negative classes or by adding new images to existing classes.
+ You must supply at least one set of positive or negative examples. For details, see [Updating custom
  classifiers](https://cloud.ibm.com/docs/services/visual-recognition/customizing.html#updating-custom-classifiers).
  Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class
  names). The service assumes UTF-8 encoding if it encounters non-ASCII characters.
