doc_id | url | title | document | md_document |
---|---|---|---|---|
2339DEEF952ECF06246F2A5DAED6925E00F52D64 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-model-health-metrics.html?context=cdpaas&locale=en | watsonx.governance model health monitor evaluation metrics | watsonx.governance model health monitor evaluation metrics
watsonx.governance enables model health monitor evaluations by default to help you understand your model behavior and performance. You can use model health metrics to determine how efficiently your model deployment processes your transactions.
Supported model health metrics
The following metric categories for model health evaluations are supported by watsonx.governance. Each category contains metrics that provide details about your model performance:
* Scoring requests
watsonx.governance calculates the number of scoring requests that your model deployment receives during model health evaluations. This metric category is supported for traditional machine learning models and foundation models.
* Records
watsonx.governance calculates the total, average, minimum, maximum, and median number of transaction records that are processed across scoring requests during model health evaluations. This metric category is supported for traditional machine learning models and foundation models.
* Token count
watsonx.governance calculates the number of tokens that are processed across scoring requests for your model deployment. This metric category is supported for foundation models only. watsonx.governance calculates the following metrics to measure token count during evaluations: - Input token count: Calculates the total, average, minimum, maximum, and median input token count across multiple scoring requests during evaluations - Output token count: Calculates the total, average, minimum, maximum, and median output token count across scoring requests during evaluations
* Throughput and latency
watsonx.governance calculates latency by tracking the time, in milliseconds (ms), that it takes to process scoring requests and transaction records. Throughput is calculated by tracking the number of scoring requests and transaction records that are processed per second. To calculate throughput and latency, watsonx.governance uses the response_time value from your scoring requests to track the time that your model deployment takes to process scoring requests. For Watson Machine Learning deployments, Watson OpenScale automatically detects the response_time value when you configure evaluations. For external and custom deployments, you must specify the response_time value when you send scoring requests to calculate throughput and latency, as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=<uuid>, request=openscale_input, response=openscale_output, response_time=<response_time>, user_id=<user_id>) ] ) watsonx.governance calculates the following metrics to measure throughput and latency during evaluations: - API latency: Time taken (in ms) to process a scoring request by your model deployment - API throughput: Number of scoring requests processed by your model deployment per second - Record latency: Time taken (in ms) to process a record by your model deployment - Record throughput: Number of records processed by your model deployment per second This metric category is supported for traditional machine learning models and foundation models.
* Users
watsonx.governance calculates the number of users that send scoring requests to your model deployments. This metric category is supported for traditional machine learning models and foundation models. To calculate the number of users, watsonx.governance uses the user_id from scoring requests to identify the users that send the scoring requests that your model receives. For external and custom deployments, you must specify the user_id value when you send scoring requests to calculate the number of users, as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=<uuid>, request=openscale_input, response=openscale_output, response_time=<response_time>, user_id=<user_id>) ] ) You supply the user_id value in each payload record. When you view a summary of the Users metric in watsonx.governance, you can use the real-time view to see the total number of users and the aggregated views to see the average number of users.
* Payload size
watsonx.governance calculates the total, average, minimum, maximum, and median payload size, in kilobytes (KB), of the transaction records that your model deployment processes across scoring requests. watsonx.governance does not support payload size metrics for image models. This metric category is supported for traditional machine learning models only.
| # watsonx\.governance model health monitor evaluation metrics #
watsonx\.governance enables model health monitor evaluations by default to help you understand your model behavior and performance\. You can use model health metrics to determine how efficiently your model deployment processes your transactions\.
## Supported model health metrics ##
The following metric categories for model health evaluations are supported by watsonx\.governance\. Each category contains metrics that provide details about your model performance:
<!-- <ul> -->
* Scoring requests
watsonx.governance calculates the number of scoring requests that your model deployment receives during model health evaluations. This metric category is supported for traditional machine learning models and foundation models.
<!-- </ul> -->
<!-- <ul> -->
* Records
watsonx.governance calculates the **total**, **average**, **minimum**, **maximum**, and **median** number of transaction records that are processed across scoring requests during model health evaluations. This metric category is supported for traditional machine learning models and foundation models.
<!-- </ul> -->
<!-- <ul> -->
* Token count
watsonx.governance calculates the number of tokens that are processed across scoring requests for your model deployment. This metric category is supported for foundation models only. watsonx.governance calculates the following metrics to measure token count during evaluations: - **Input token count**: Calculates the **total**, **average**, **minimum**, **maximum**, and **median** input token count across multiple scoring requests during evaluations - **Output token count**: Calculates the **total**, **average**, **minimum**, **maximum**, and **median** output token count across scoring requests during evaluations
<!-- </ul> -->
<!-- <ul> -->
* Throughput and latency
watsonx.governance calculates latency by tracking the time, in milliseconds (ms), that it takes to process scoring requests and transaction records. Throughput is calculated by tracking the number of scoring requests and transaction records that are processed per second. To calculate throughput and latency, watsonx.governance uses the `response_time` value from your scoring requests to track the time that your model deployment takes to process scoring requests. For Watson Machine Learning deployments, Watson OpenScale automatically detects the `response_time` value when you configure evaluations. For external and custom deployments, you must specify the `response_time` value when you send scoring requests to calculate throughput and latency, as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): `python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=<uuid>, request=openscale_input, response=openscale_output, response_time=<response_time>, user_id=<user_id>) ] )` watsonx.governance calculates the following metrics to measure throughput and latency during evaluations: - **API latency**: Time taken (in ms) to process a scoring request by your model deployment - **API throughput**: Number of scoring requests processed by your model deployment per second - **Record latency**: Time taken (in ms) to process a record by your model deployment - **Record throughput**: Number of records processed by your model deployment per second This metric category is supported for traditional machine learning models and foundation models.
<!-- </ul> -->
<!-- <ul> -->
* Users
watsonx.governance calculates the number of users that send scoring requests to your model deployments. This metric category is supported for traditional machine learning models and foundation models. To calculate the number of users, watsonx.governance uses the `user_id` from scoring requests to identify the users that send the scoring requests that your model receives. For external and custom deployments, you must specify the `user_id` value when you send scoring requests to calculate the number of users, as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): `python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=<uuid>, request=openscale_input, response=openscale_output, response_time=<response_time>, user_id=<user_id>) ] )` You supply the `user_id` value in each payload record. When you view a summary of the **Users** metric in watsonx.governance, you can use the real-time view to see the total number of users and the aggregated views to see the average number of users.
<!-- </ul> -->
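For reference, here is the payload-logging call from this section laid out as a runnable sketch. The client setup, data set ID, and the request and response dictionaries are placeholders to replace with values from your own deployment; only the `response_time` and `user_id` fields are the ones these metrics rely on.

```python
import uuid

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

# Placeholder credentials and IDs -- replace with values from your own instance.
client = APIClient(authenticator=IAMAuthenticator(apikey="YOUR_IBM_CLOUD_API_KEY"))
payload_data_set_id = "YOUR_PAYLOAD_DATA_SET_ID"

# Illustrative scoring request and response in the fields/values layout.
openscale_input = {"fields": ["prompt"], "values": [["Summarize the meeting notes."]]}
openscale_output = {"fields": ["generated_text"], "values": [["The team agreed on next steps."]]}

client.data_sets.store_records(
    data_set_id=payload_data_set_id,
    request_body=[
        PayloadRecord(
            scoring_id=str(uuid.uuid4()),
            request=openscale_input,
            response=openscale_output,
            response_time=460,   # milliseconds; used for throughput and latency metrics
            user_id="user-001",  # used for the Users metric
        )
    ],
)
```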
<!-- <ul> -->
* Payload size
watsonx.governance calculates the **total**, **average**, **minimum**, **maximum**, and **median** payload size, in kilobytes (KB), of the transaction records that your model deployment processes across scoring requests. watsonx.governance does not support payload size metrics for image models. This metric category is supported for traditional machine learning models only.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-accuracy.html?context=cdpaas&locale=en | Configuring quality evaluations in watsonx.governance | Configuring quality evaluations in watsonx.governance
watsonx.governance quality evaluations measure your foundation model's ability to provide correct outcomes.
When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of quality evaluation results for the text classification task type.
The summary displays scores and violations for metrics that are calculated with default settings.
To configure quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric. The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds. The metric scores must be higher than the threshold values to avoid violations. Higher metric values indicate better scores.
Supported quality metrics
When you enable quality evaluations in watsonx.governance, you can generate metrics that help you determine how well your foundation model predicts outcomes.
watsonx.governance supports the following quality metrics:
* Accuracy
- Description: The proportion of correct predictions - Default thresholds: Lower limit = 80% - Problem types: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Understanding accuracy: Accuracy can mean different things depending on the type of algorithm: - Multi-class classification: Accuracy measures the number of times any class was predicted correctly, normalized by the number of data points. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.html#multiclass-classification){: external} in the Apache Spark documentation.
* Weighted true positive rate
- Description: Weighted mean of class TPR with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The True positive rate is calculated by the following formula: TPR = number of true positives / (number of true positives + number of false negatives)
* Weighted false positive rate
- Description: Weighted mean of class FPR with weights equal to class probability. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.html#multiclass-classification){: external} in the Apache Spark documentation. - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted False Positive Rate is the application of the FPR with weighted data. FPR = number of false positives / (number of false positives + number of true negatives)
* Weighted recall
- Description: Weighted mean of recall with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Weighted recall (wR) is defined as the number of true positives (Tp) over the number of true positives plus the number of false negatives (Fn) used with weighted data. Recall = number of true positives / (number of true positives + number of false negatives)
* Weighted precision
- Description: Weighted mean of precision with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Precision (P) is defined as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp). Precision = number of true positives / (number of true positives + number of false positives)
* Weighted F1-Measure
- Description: Weighted mean of F1-measure with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted F1-Measure is the result of using weighted data. F1 = 2 * (precision * recall) / (precision + recall)
* Matthews correlation coefficient
- Description: Measures the quality of binary and multiclass classifications by accounting for true and false positives and negatives. Balanced measure that can be used even if the classes are different sizes. A correlation coefficient value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction, and -1 an inverse prediction. - Default thresholds: Lower limit = 80 - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix
* Label skew
- Description: Measures the asymmetry of label distributions. If skewness is 0, the dataset is perfectly balanced; if it is less than -1 or greater than 1, the distribution is highly skewed; anything in between is moderately skewed. - Default thresholds:
- Lower limit = -0.5 - Upper limit = 0.5 - Chart values: Last value in the timeframe
Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)
| # Configuring quality evaluations in watsonx\.governance #
watsonx\.governance quality evaluations measure your foundation model's ability to provide correct outcomes\.
When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of quality evaluation results for the text classification task type\.
The summary displays scores and violations for metrics that are calculated with default settings\.
To configure quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric\. The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds\. The metric scores must be higher than the threshold values to avoid violations\. Higher metric values indicate better scores\.
## Supported quality metrics ##
When you enable quality evaluations in watsonx\.governance, you can generate metrics that help you determine how well your foundation model predicts outcomes\.
watsonx\.governance supports the following quality metrics:
<!-- <ul> -->
* Accuracy
- **Description**: The proportion of correct predictions - **Default thresholds**: Lower limit = 80% - **Problem types**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Understanding accuracy**: Accuracy can mean different things depending on the type of algorithm: - **Multi-class classification**: Accuracy measures the number of times any class was predicted correctly, normalized by the number of data points. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.html#multiclass-classification)\{: external\} in the Apache Spark documentation.
<!-- </ul> -->
<!-- <ul> -->
* Weighted true positive rate
- **Description**: Weighted mean of class TPR with weights equal to class probability - **Default thresholds**: Lower limit = 80% - **Problem type**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Do the math**: The True positive rate is calculated by the following formula: `TPR = number of true positives / (number of true positives + number of false negatives)`
<!-- </ul> -->
<!-- <ul> -->
* Weighted false positive rate
- **Description**: Weighted mean of class FPR with weights equal to class probability. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.html#multiclass-classification)\{: external\} in the Apache Spark documentation. - **Default thresholds**: Lower limit = 80% - **Problem type**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Do the math**: The Weighted False Positive Rate is the application of the FPR with weighted data. `FPR = number of false positives / (number of false positives + number of true negatives)`
<!-- </ul> -->
<!-- <ul> -->
* Weighted recall
- **Description**: Weighted mean of recall with weights equal to class probability - **Default thresholds**: Lower limit = 80% - **Problem type**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Do the math**: Weighted recall (wR) is defined as the number of true positives (Tp) over the number of true positives plus the number of false negatives (Fn) used with weighted data. `Recall = number of true positives / (number of true positives + number of false negatives)`
<!-- </ul> -->
<!-- <ul> -->
* Weighted precision
- **Description**: Weighted mean of precision with weights equal to class probability - **Default thresholds**: Lower limit = 80% - **Problem type**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Do the math**: Precision (P) is defined as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp). `Precision = number of true positives / (number of true positives + number of false positives)`
<!-- </ul> -->
<!-- <ul> -->
* Weighted F1\-Measure
- **Description**: Weighted mean of F1-measure with weights equal to class probability - **Default thresholds**: Lower limit = 80% - **Problem type**: Multiclass classification - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix - **Do the math**: The Weighted F1-Measure is the result of using weighted data. `F1 = 2 * (precision * recall) / (precision + recall)`
<!-- </ul> -->
<!-- <ul> -->
* Matthews correlation coefficient
- **Description**: Measures the quality of binary and multiclass classifications by accounting for true and false positives and negatives. Balanced measure that can be used even if the classes are different sizes. A correlation coefficient value between -1 and \+1. A coefficient of \+1 represents a perfect prediction, 0 an average random prediction, and -1 an inverse prediction. - **Default thresholds**: Lower limit = 80 - **Chart values**: Last value in the timeframe - **Metrics details available**: Confusion matrix
<!-- </ul> -->
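The weighted metrics and the Matthews correlation coefficient described above follow standard definitions, so you can sanity-check reported values locally with scikit-learn. This is only an illustrative sketch with toy labels, not the watsonx.governance implementation; `average='weighted'` weights each class by its observed frequency, which corresponds to the "weights equal to class probability" wording.

```python
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    matthews_corrcoef,
    precision_recall_fscore_support,
)

# Toy multiclass labels and predictions (placeholders, not real evaluation data).
y_true = ["cat", "dog", "dog", "bird", "cat", "dog"]
y_pred = ["cat", "dog", "cat", "bird", "cat", "dog"]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
mcc = matthews_corrcoef(y_true, y_pred)

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
print(f"matthews_corrcoef={mcc:.2f}")
print(confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"]))
```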
<!-- <ul> -->
* Label skew
- **Description**: Measures the asymmetry of label distributions. If skewness is 0, the dataset is perfectly balanced; if it is less than -1 or greater than 1, the distribution is highly skewed; anything in between is moderately skewed. - **Default thresholds**:
- Lower limit = -0.5 - Upper limit = 0.5 - **Chart values**: Last value in the timeframe
<!-- </ul> -->
**Parent topic:**[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)
<!-- </article "role="article" "> -->
|
2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-gen-quality.html?context=cdpaas&locale=en | watsonx.governance generative AI quality evaluations | watsonx.governance generative AI quality evaluations
You can use watsonx.governance generative AI quality evaluations to measure how well your foundation model performs tasks.
When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of generative AI quality evaluation results for the following task types:
* Text summarization
* Content generation
* Entity extraction
* Question answering
The summary displays scores and violations for metrics that are calculated with default settings.
To configure generative AI quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric.
The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds. The metric scores must be higher than the lower threshold values to avoid violations. Higher metric values indicate better scores.
Supported generative AI quality metrics
The following generative AI quality metrics are supported by watsonx.governance:
* ROUGE
[ROUGE](https://github.com/huggingface/evaluate/tree/main/metrics/rouge) is a set of metrics that assess how well a generated summary or translation compares to one or more reference summaries or translations. The generative AI quality evaluation calculates the rouge1, rouge2, and rougeLSum metrics. - Task types: - Text summarization - Content generation - Question answering - Entity extraction - Parameters: - Use stemmer: If true, uses the Porter stemmer to strip word suffixes. Defaults to false. - Thresholds: - Lower bound: 0.8 - Upper bound: 1.0
* SARI
[SARI](https://github.com/huggingface/evaluate/tree/main/metrics/sari) compares the predicted simplified sentences against the reference and the source sentences and explicitly measures the goodness of words that are added, deleted, and kept by the system. - Task types: - Text summarization - Thresholds: - Lower bound: 0 - Upper bound: 100
* METEOR
[METEOR](https://github.com/huggingface/evaluate/tree/main/metrics/meteor) is calculated with the harmonic mean of precision and recall to capture how well-ordered the matched words in machine translations are in relation to human-produced reference translations. - Task types: - Text summarization - Content generation - Parameters: - Alpha: Controls relative weights of precision and recall
- Beta: Controls shape of penalty as a function of fragmentation. - Gamma: The relative weight assigned to fragmentation penalty.
- Thresholds: - Lower bound: 0 - Upper bound: 1
* Text quality
Text quality evaluates the output of a model against [SuperGLUE](https://github.com/huggingface/evaluate/tree/af3c30561d840b83e54fc5f7150ea58046d6af69/metrics/super_glue) datasets by measuring the [F1 score](https://github.com/huggingface/evaluate/tree/main/metrics/f1), [precision](https://github.com/huggingface/evaluate/tree/main/metrics/precision), and [recall](https://github.com/huggingface/evaluate/tree/main/metrics/recall) against the model predictions and its ground truth data. It is calculated by normalizing the input strings and checking the number of similar tokens between the predictions and references. - Task types: - Text summarization - Content generation - Thresholds: - Lower bound: 0.8 - Upper bound: 1
* BLEU
[BLEU](https://github.com/huggingface/evaluate/blob/main/metrics/bleu/README.md) evaluates the quality of machine-translated text when translated from one natural language to another by comparing individual translated segments to a set of reference translations. - Task types: - Text summarization - Content generation - Question answering - Parameters: - Max order: Maximum n-gram order to use when computing the BLEU score - Smooth: Whether or not to apply Lin et al. 2004 smoothing - Thresholds: - Lower bound: 0.8 - Upper bound: 1
* Sentence similarity
[Sentence similarity](https://huggingface.co/tasks/sentence-similarity#:~:text=Sentence%20Similarity%20is%20the%20task,similar%20they%20are%20between%20them) determines how similar two texts are by converting input texts into vectors that capture semantic information and calculating their similarity. It measures Jaccard similarity and Cosine similarity. - Task types: Text summarization - Thresholds: - Lower limit: 0.8 - Upper limit: 1
* PII
[PII](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html#rule-based-pii) measures if the provided content contains any personally identifiable information in the input and output data by using the Watson Natural Language Processing Entity extraction model. - Task types: - Text summarization - Content generation - Question answering - Thresholds: - Upper limit: 0
* HAP
HAP (hate, abuse, and profanity) measures whether there is any toxic content in the input data provided to the model, and also any toxic content in the model-generated output. - Task types: - Text summarization - Content generation - Question answering - Thresholds: - Upper limit: 0
* Readability
The readability score determines the readability, complexity, and grade level of the model's output. - Task types: - Text summarization - Content generation - Thresholds: - Lower limit: 60
* Exact match
[Exact match](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match) returns the rate at which the input predicted strings exactly match their references. - Task types: - Question answering - Entity extraction - Parameters: - Regexes to ignore: Regex expressions of characters to ignore when calculating the exact matches. - Ignore case: If True, turns everything to lowercase so that capitalization differences are ignored. - Ignore punctuation: If True, removes punctuation before comparing strings. - Ignore numbers: If True, removes all digits before comparing strings. - Thresholds: - Lower limit: 0.8 - Upper limit: 1
* Multi-label/class metrics
Multi-label/class metrics measure model performance for multi-label/multi-class predictions. - Metrics: - Micro F1 score - Macro F1 score - Micro precision - Macro precision - Micro recall - Macro recall - Task types: Entity extraction - Thresholds: - Lower limit: 0.8 - Upper limit: 1
Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)
| # watsonx\.governance generative AI quality evaluations #
You can use watsonx\.governance generative AI quality evaluations to measure how well your foundation model performs tasks\.
When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of generative AI quality evaluation results for the following task types:
<!-- <ul> -->
* Text summarization
* Content generation
* Entity extraction
* Question answering
<!-- </ul> -->
The summary displays scores and violations for metrics that are calculated with default settings\.
To configure generative AI quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric\.
The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds\. The metric scores must be higher than the lower threshold values to avoid violations\. Higher metric values indicate better scores\.
## Supported generative AI quality metrics ##
The following generative AI quality metrics are supported by watsonx\.governance:
<!-- <ul> -->
* ROUGE
[ROUGE](https://github.com/huggingface/evaluate/tree/main/metrics/rouge) is a set of metrics that assess how well a generated summary or translation compares to one or more reference summaries or translations. The generative AI quality evaluation calculates the rouge1, rouge2, and rougeLSum metrics. - **Task types**: - Text summarization - Content generation - Question answering - Entity extraction - **Parameters**: - Use stemmer: If true, uses the Porter stemmer to strip word suffixes. Defaults to false. - **Thresholds**: - Lower bound: 0.8 - Upper bound: 1.0
<!-- </ul> -->
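As a reference point, the Hugging Face evaluate implementation that the ROUGE description links to can be run locally; the strings below are placeholders, and `use_stemmer` corresponds to the Use stemmer parameter listed above. This illustrates the metric itself, not the watsonx.governance evaluation pipeline.

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

predictions = ["The model summarizes the meeting notes."]
references = ["The model produces a summary of the meeting notes."]

# use_stemmer mirrors the "Use stemmer" parameter described above (defaults to False).
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # contains rouge1, rouge2, rougeL, and rougeLsum
```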
<!-- <ul> -->
* SARI
[SARI](https://github.com/huggingface/evaluate/tree/main/metrics/sari) compares the predicted simplified sentences against the reference and the source sentences and explicitly measures the goodness of words that are added, deleted, and kept by the system. - **Task types**: - Text summarization - **Thresholds**: - Lower bound: 0 - Upper bound: 100
<!-- </ul> -->
<!-- <ul> -->
* METEOR
[METEOR](https://github.com/huggingface/evaluate/tree/main/metrics/meteor) is calculated with the harmonic mean of precision and recall to capture how well-ordered the matched words in machine translations are in relation to human-produced reference translations. - **Task types**: - Text summarization - Content generation - **Parameters**: - Alpha: Controls relative weights of precision and recall
- Beta: Controls shape of penalty as a function of fragmentation. - Gamma: The relative weight assigned to fragmentation penalty.
- **Thresholds**: - Lower bound: 0 - Upper bound: 1
<!-- </ul> -->
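Similarly, here is a minimal sketch of the linked METEOR metric from Hugging Face evaluate; `alpha`, `beta`, and `gamma` map to the parameters listed above (the values shown are the library defaults), and the example strings are placeholders.

```python
import evaluate  # pip install evaluate; the METEOR metric also uses nltk

meteor = evaluate.load("meteor")

predictions = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# alpha, beta, and gamma correspond to the parameters listed above.
score = meteor.compute(
    predictions=predictions, references=references, alpha=0.9, beta=3, gamma=0.5
)
print(score["meteor"])
```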
<!-- <ul> -->
* Text quality
Text quality evaluates the output of a model against [SuperGLUE](https://github.com/huggingface/evaluate/tree/af3c30561d840b83e54fc5f7150ea58046d6af69/metrics/super_glue) datasets by measuring the [F1 score](https://github.com/huggingface/evaluate/tree/main/metrics/f1), [precision](https://github.com/huggingface/evaluate/tree/main/metrics/precision), and [recall](https://github.com/huggingface/evaluate/tree/main/metrics/recall) against the model predictions and its ground truth data. It is calculated by normalizing the input strings and checking the number of similar tokens between the predictions and references. - **Task types**: - Text summarization - Content generation - **Thresholds**: - Lower bound: 0.8 - Upper bound: 1
<!-- </ul> -->
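As a rough illustration of the token-overlap scoring described above, the sketch below computes a SQuAD-style token F1 for one prediction and reference. The normalization steps here are assumptions made for the example, not the exact SuperGLUE scoring code used by watsonx.governance.

```python
import re
import string
from collections import Counter


def normalize(text: str) -> list[str]:
    # Lowercase, strip punctuation, and collapse whitespace before tokenizing.
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip().split()


def token_f1(prediction: str, reference: str) -> float:
    pred_tokens, ref_tokens = normalize(prediction), normalize(reference)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


print(token_f1("The cat sat on the mat.", "A cat was sitting on the mat"))
```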
<!-- <ul> -->
* BLEU
[BLEU](https://github.com/huggingface/evaluate/blob/main/metrics/bleu/README.md) evaluates the quality of machine-translated text when translated from one natural language to another by comparing individual translated segments to a set of reference translations. - **Task types**: - Text summarization - Content generation - Question answering - **Parameters**: - Max order: Maximum n-gram order to use when computing the BLEU score - Smooth: Whether or not to apply Lin et al. 2004 smoothing - **Thresholds**: - Lower bound: 0.8 - Upper bound: 1
<!-- </ul> -->
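A minimal sketch of the linked BLEU metric from Hugging Face evaluate, with placeholder strings; `max_order` and `smooth` map to the parameters listed above.

```python
import evaluate  # pip install evaluate

bleu = evaluate.load("bleu")

predictions = ["the cat is on the mat"]
references = [["the cat is on the mat", "there is a cat on the mat"]]

# max_order and smooth correspond to the parameters listed above.
score = bleu.compute(predictions=predictions, references=references, max_order=4, smooth=False)
print(score["bleu"])
```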
<!-- <ul> -->
* Sentence similarity
[Sentence similarity](https://huggingface.co/tasks/sentence-similarity#:~:text=Sentence%20Similarity%20is%20the%20task,similar%20they%20are%20between%20them) determines how similar two texts are by converting input texts into vectors that capture semantic information and calculating their similarity. It measures Jaccard similarity and Cosine similarity. - **Task types**: Text summarization - **Thresholds**: - Lower limit: 0.8 - Upper limit: 1
<!-- </ul> -->
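The following sketch shows how cosine and Jaccard similarity can be computed for two texts. The `all-MiniLM-L6-v2` encoder is an arbitrary choice for illustration; the documentation does not state which embedding model watsonx.governance uses internally.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

# Illustrative encoder choice -- an assumption, not the product's internal model.
model = SentenceTransformer("all-MiniLM-L6-v2")

generated = "The report covers quarterly revenue growth."
reference = "Quarterly revenue growth is summarized in the report."

# Cosine similarity between the two sentence embeddings.
embeddings = model.encode([generated, reference])
cosine = util.cos_sim(embeddings[0], embeddings[1]).item()

# Jaccard similarity on the token sets, as a simple lexical counterpart.
a, b = set(generated.lower().split()), set(reference.lower().split())
jaccard = len(a & b) / len(a | b)

print(f"cosine={cosine:.2f} jaccard={jaccard:.2f}")
```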
<!-- <ul> -->
* PII
[PII](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html#rule-based-pii) measures if the provided content contains any personally identifiable information in the input and output data by using the Watson Natural Language Processing Entity extraction model. - **Task types**: - Text summarization - Content generation - Question answering - **Thresholds**: - Upper limit: 0
<!-- </ul> -->
<!-- <ul> -->
* HAP
HAP (hate, abuse, and profanity) measures whether there is any toxic content in the input data provided to the model, and also any toxic content in the model-generated output. - **Task types**: - Text summarization - Content generation - Question answering - **Thresholds**: - Upper limit: 0
<!-- </ul> -->
<!-- <ul> -->
* Readability
The readability score determines the readability, complexity, and grade level of the model's output. - **Task types**: - Text summarization - Content generation - **Thresholds**: - Lower limit: 60
<!-- </ul> -->
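The readability formula is not spelled out here. As an illustration only, the Flesch reading ease score (a common 0-100 readability measure where roughly 60 or higher reads as plain English) and a grade-level estimate can be computed with the textstat package; this is not necessarily the formula watsonx.governance applies.

```python
import textstat  # pip install textstat

generated_text = (
    "The quarterly report shows steady revenue growth. "
    "Costs fell slightly, and the team expects a similar trend next quarter."
)

# Flesch reading ease: higher is easier to read.
print("Flesch reading ease:", textstat.flesch_reading_ease(generated_text))
# An approximate U.S. school grade level for the same text.
print("Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(generated_text))
```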
<!-- <ul> -->
* Exact match
[Exact match](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match) returns the rate at which the input predicted strings exactly match their references. - **Task types**: - Question answering - Entity extraction - **Parameters**: - Regexes to ignore: Regex expressions of characters to ignore when calculating the exact matches. - Ignore case: If True, turns everything to lowercase so that capitalization differences are ignored. - Ignore punctuation: If True, removes punctuation before comparing strings. - Ignore numbers: If True, removes all digits before comparing strings. - **Thresholds**: - Lower limit: 0.8 - Upper limit: 1
<!-- </ul> -->
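A minimal sketch of the linked exact match metric from Hugging Face evaluate; the arguments mirror the options listed above, and the prediction and reference strings are placeholders.

```python
import evaluate  # pip install evaluate

exact_match = evaluate.load("exact_match")

predictions = ["Paris", "42 degrees"]
references = ["paris", "42 degrees."]

# The arguments below mirror the parameters listed above.
score = exact_match.compute(
    predictions=predictions,
    references=references,
    ignore_case=True,
    ignore_punctuation=True,
    ignore_numbers=False,
    regexes_to_ignore=None,
)
print(score["exact_match"])  # fraction of predictions that match their references exactly
```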
<!-- <ul> -->
* Multi\-label/class metrics
Multi-label/class metrics measure model performance for multi-label/multi-class predictions. - **Metrics**: - Micro F1 score - Macro F1 score - Micro precision - Macro precision - Micro recall - Macro recall - **Task types**: Entity extraction - **Thresholds**: - Lower limit: 0.8 - Upper limit: 1
<!-- </ul> -->
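For reference, micro- and macro-averaged precision, recall, and F1 follow standard definitions and can be reproduced with scikit-learn. This is a toy illustration with made-up labels, not the watsonx.governance implementation.

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy entity-extraction style labels (placeholders, not real evaluation data).
y_true = ["PERSON", "ORG", "ORG", "DATE", "PERSON"]
y_pred = ["PERSON", "ORG", "DATE", "DATE", "ORG"]

for average in ("micro", "macro"):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=average, zero_division=0
    )
    print(f"{average}: precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```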
**Parent topic:**[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)
<!-- </article "role="article" "> -->
|
B924D359F00DB1671F86ACA7A3EE226206DFBED1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html?context=cdpaas&locale=en | Configuring model evaluations in watsonx.governance | Configuring model evaluations in watsonx.governance
Configure watsonx.governance evaluations to generate insights about your model performance.
You can configure the following types of evaluations in watsonx.governance:
* [Quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-accuracy.html)
Evaluates how well your model predicts correct outcomes that match labeled test data.
* [Drift v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html)
Evaluates changes in your model output, the accuracy of your predictions, and the distribution of your input data.
* [Generative AI quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-gen-quality.html)
Measures how well your foundation model performs tasks.
watsonx.governance also enables [model health evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-model-health-metrics.html) by default to help you determine how efficiently your model deployment processes transactions.
Parent topic:[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html)
| # Configuring model evaluations in watsonx\.governance #
Configure watsonx\.governance evaluations to generate insights about your model performance\.
You can configure the following types of evaluations in watsonx\.governance:
<!-- <ul> -->
* [Quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-accuracy.html)
Evaluates how well your model predicts correct outcomes that match labeled test data.
* [Drift v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html)
Evaluates changes in your model output, the accuracy of your predictions, and the distribution of your input data.
* [Generative AI quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-gen-quality.html)
Measures how well your foundation model performs tasks.
<!-- </ul> -->
watsonx\.governance also enables [model health evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-model-health-metrics.html) by default to help you determine how efficiently your model deployment processes transactions\.
**Parent topic:**[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html)
<!-- </article "role="article" "> -->
|
DE9CE5D0599D0D181890911721738BA3DEE01E34 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-payload-logging.html?context=cdpaas&locale=en | Payload logging in watsonx.governance | Payload logging in watsonx.governance
You can enable payload logging in watsonx.governance to configure model evaluations.
To [manage payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html) for configuring drift v2, generative AI quality, and model health evaluations, watsonx.governance must log your payload data in the payload logging table.
Generative AI quality evaluations use payload data to generate results for the following task types when you evaluate prompt templates:
* Text summarization
* Content generation
* Question answering
Drift v2 and model health evaluations use payload data to generate results for the following task types when you evaluate prompt templates:
* Text classification
* Text summarization
* Content generation
* Entity extraction
* Question answering
You can log your payload data with the payload logging endpoint or by uploading a CSV file. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html).
Parent topic:[Managing payload data in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html)
| # Payload logging in watsonx\.governance #
You can enable payload logging in watsonx\.governance to configure model evaluations\.
To [manage payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html) for configuring drift v2, generative AI quality, and model health evaluations, watsonx\.governance must log your payload data in the payload logging table\.
Generative AI quality evaluations use payload data to generate results for the following task types when you evaluate prompt templates:
<!-- <ul> -->
* Text summarization
* Content generation
* Question answering
<!-- </ul> -->
Drift v2 and model health evaluations use payload data to generate results for the following task types when you evaluate prompt templates:
<!-- <ul> -->
* Text classification
* Text summarization
* Content generation
* Entity extraction
* Question answering
<!-- </ul> -->
You can log your payload data with the payload logging endpoint or by uploading a CSV file\. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html)\.
**Parent topic:**[Managing payload data in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html)
<!-- </article "role="article" "> -->
|
E54340E1EF02D2436758A56105B3182481FF1783 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html?context=cdpaas&locale=en | watsonx.governance offering plan options | watsonx.governance offering plan options
The watsonx.governance service enables responsible, transparent, and explainable AI.
The available plans depend on the region where you are provisioning the service from the IBM Cloud catalog.
* In the Dallas region, provision a [watsonx.governance plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html?context=cdpaas&locale=en#wos-plan-options-xgov-plans).
* In the Frankfurt region, provision a [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options2.html) plan.
watsonx.governance plans (Dallas only)
Watsonx.governance offers a free Lite plan and a paid Essentials plan.
With watsonx.governance you can:
* Evaluate machine learning models for dimensions such as fairness, quality, or drift.
* Define AI use cases in a collaborative, open way to define a business problem and track the solution.
* Capture the details for machine learning models, in each stage of their lifecycle, and store the data in factsheets within an associated AI use case.
* Maintain collections of AI use cases in inventories, where you can manage access.
For Large Language Models in watsonx.ai, you can also:
* Evaluate prompt templates across multiple dimensions such as quality, personally identifiable information (PII) in prompt input and output, and abuse or profanity in prompt input and output.
* Monitor metrics for Large Language Model performance.
* Automatically capture metadata in a Factsheet from development to deployment, for each stage in the lifecycle.
watsonx.governance Lite plan features
Lite plan features include:
* Maximum of 200 resource units
* 1 resource unit per predictive model evaluation
* 1 resource unit per foundational model evaluation
* 1 resource unit per global explanation, with a maximum of 500 local explanations
* 1 resource unit per 500 local explanations
* Maximum of 1,000 records per evaluation
* Limit of 3 rows per use case
* Limit of 3 use cases
* Limit of 1 inventory
watsonx.governance Essential plan features
Essential plan features include:
* Maximum of 500 inventories
* 1 resource unit per predictive model evaluation
* 1 resource unit per foundational model evaluation
* 1 resource unit per global explanation, with a maximum of 500 local explanations
* 1 resource unit per 500 local explanations
* Maximum of 50,000 records per evaluation
Next steps
[Provisioning and launching the watsonx.governance service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html)
Parent topic:[watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html)
| # watsonx\.governance offering plan options #
The watsonx\.governance service enables responsible, transparent, and explainable AI\.
The available plans depend on the region where you are provisioning the service from the IBM Cloud catalog\.
<!-- <ul> -->
* In the Dallas region, provision a [watsonx\.governance plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html?context=cdpaas&locale=en#wos-plan-options-xgov-plans)\.
* In the Frankfurt region, provision a [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options2.html) plan\.
<!-- </ul> -->
## watsonx\.governance plans (Dallas only) ##
Watsonx\.governance offers a free Lite plan and a paid Essentials plan\.
With watsonx\.governance you can:
<!-- <ul> -->
* Evaluate machine learning models for dimensions such as fairness, quality, or drift\.
* Define AI use cases in a collaborative, open way to define a business problem and track the solution\.
* Capture the details for machine learning models, in each stage of their lifecycle, and store the data in factsheets within an associated AI use case\.
* Maintain collections of AI use cases in inventories, where you can manage access\.
<!-- </ul> -->
For Large Language Models in watsonx\.ai, you can also:
<!-- <ul> -->
* Evaluate prompt templates across multiple dimensions such as quality, personally identifiable information (PII) in prompt input and output, and abuse or profanity in prompt input and output\.
* Monitor metrics for Large Language Model performance\.
* Automatically capture metadata in a Factsheet from development to deployment, for each stage in the lifecycle\.
<!-- </ul> -->
### watsonx\.governance Lite plan features ###
Lite plan features include:
<!-- <ul> -->
* Maximum of 200 resource units
* 1 resource unit per predictive model evaluation
* 1 resource unit per foundational model evaluation
* 1 resource unit per global explanation, with a maximum of 500 local explanations
* 1 resource unit per 500 local explanations
* Maximum of 1,000 records per evaluation
* Limit of 3 rows per use case
* Limit of 3 use cases
* Limit of 1 inventory
<!-- </ul> -->
### watsonx\.governance Essential plan features ###
Essential plan features include:
<!-- <ul> -->
* Maximum of 500 inventories
* 1 resource unit per predictive model evaluation
* 1 resource unit per foundational model evaluation
* 1 resource unit per global explanation, with a maximum of 500 local explanations
* 1 resource unit per 500 local explanations
* Maximum of 50,000 records per evaluation
<!-- </ul> -->
## Next steps ##
[Provisioning and launching the watsonx\.governance service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html)
**Parent topic:**[watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html)
<!-- </article "role="article" "> -->
|
E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options2.html?context=cdpaas&locale=en | Watson OpenScale offering plan options | Watson OpenScale offering plan options
The Watson OpenScale service enables responsible, transparent, and explainable AI.
With Watson OpenScale you can:
* Evaluate machine learning models for dimensions such as fairness, quality, or drift.
* Explore transactions to gain insights about your model.
Watson OpenScale legacy offering plans
Important:The legacy offering plan for Watson OpenScale is available only in the Frankfurt region. In the Dallas region, the [watsonx.governance plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) are available instead.
Watson OpenScale Standard v2 plan
Watson OpenScale offers a Standard v2 plan that charges users on a per-model basis.
There are no restrictions or limitations on payload data, feedback rows, or explanations under the Standard v2 instance.
Regional limitations
Watson OpenScale is not available in some regions. See [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html) for more details.
Note:The regional availability for every service can also be found in the [IBM watsonx catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=cpdaas).
Quota limits
To avoid performance issues and manage resources efficiently, Watson OpenScale sets the following quota limits:
Asset Limit
DataMart 100 per instance
Service providers 100 per instance
Integrated systems 100 per instance
Subscriptions 100 per service provider
Monitor instances 100 per subscription
Every asset in Watson OpenScale has a hard limit of 10,000 instances of the asset per service instance.
PostgreSQL databases for Watson OpenScale
You can use a PostgreSQL database for your Watson OpenScale instance. PostgreSQL is a powerful, open source object-relational database that is highly customizable and compliant with many security standards.
If your model processes personally identifiable information (PII), use a PostgreSQL database for your model. PostgreSQL is compliant with:
* GDPR
* HIPAA
* PCI-DSS
* SOC 1 Type 2
* SOC 2 Type 2
* ISO 27001
* ISO 27017
* ISO 27018
* ISO 27701
Next steps
[Managing the Watson OpenScale service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html)
Parent topic:[watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html)
| # Watson OpenScale offering plan options #
The Watson OpenScale service enables responsible, transparent, and explainable AI\.
With Watson OpenScale you can:
<!-- <ul> -->
* Evaluate machine learning models for dimensions such as fairness, quality, or drift\.
* Explore transactions to gain insights about your model\.
<!-- </ul> -->
## Watson OpenScale legacy offering plans ##
Important:The legacy offering plan for Watson OpenScale is available only in the Frankfurt region\. In the Dallas region, the [watsonx\.governance plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) are available instead\.
### Watson OpenScale Standard v2 plan ###
Watson OpenScale offers a Standard v2 plan that charges users on a per\-model basis\.
There are no restrictions or limitations on payload data, feedback rows, or explanations under the Standard v2 instance\.
### Regional limitations ###
Watson OpenScale is not available in some regions\. See [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html) for more details\.
Note:The regional availability for every service can also be found in the [IBM watsonx catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=cpdaas)\.
### Quota limits ###
To avoid performance issues and manage resources efficiently, Watson OpenScale sets the following quota limits:
<!-- <table> -->
| Asset | Limit |
| ------------------ | ------------------------ |
| DataMart | 100 per instance |
| Service providers | 100 per instance |
| Integrated systems | 100 per instance |
| Subscriptions | 100 per service provider |
| Monitor instances | 100 per subscription |
<!-- </table ""> -->
Every asset in Watson OpenScale has a hard limit of 10,000 instances of the asset per service instance\.
### PostgreSQL databases for Watson OpenScale ###
You can use a PostgreSQL database for your Watson OpenScale instance\. PostgreSQL is a powerful, open source object\-relational database that is highly customizable and compliant with many security standards\.
If your model processes personally identifiable information (PII), use a PostgreSQL database for your model\. PostgreSQL is compliant with:
<!-- <ul> -->
* GDPR
* HIPAA
* PCI\-DSS
* SOC 1 Type 2
* SOC 2 Type 2
* ISO 27001
* ISO 27017
* ISO 27018
* ISO 27701
<!-- </ul> -->
## Next steps ##
[Managing the Watson OpenScale service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html)
**Parent topic:**[watsonx\.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html)
<!-- </article "role="article" "> -->
|
6199BBB097894542EA31C726D8EF4A3357EED1E2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html?context=cdpaas&locale=en | Provisioning and launching watsonx.governance | Provisioning and launching watsonx.governance
You can provision and launch your watsonx.governance service instance to start monitoring your model assets.
Prerequisite : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html).
Required permissions : To provision and launch a watsonx.governance service instance, you must have Administrator or Editor platform access roles in the IBM Cloud account for IBM watsonx. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html#iamroles).
Launching a watsonx.governance service instance
Before you launch watsonx.governance, you must [create a service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) from your watsonx account.
To launch watsonx.governance from IBM watsonx:
1. From the navigation menu, choose Administration > Services > Service instances.
2. Click your watsonx.governance service instance.
3. From the Service Details page, click Launch watsonx.governance.
Managing watsonx.governance
You can manage your watsonx.governance service instance by upgrading it or deleting it.
You can upgrade watsonx.governance from a free Lite plan to a paid plan by using the IBM Cloud dashboard:
Note:Upgrade to a paid plan if you are getting error messages, such as 403 Errors.AIQFM0011: 'Lite plan has exceeded the 50,000 rows limitation for Debias or Deployment creation failed. Error: 402.
1. From the watsonx.governance dashboard, click your profile.
2. Click View upgrade options.
3. Select the Essential plan and click Upgrade.
You can also delete the watsonx.governance service instance and related data. After 30 days of inactivity, the data mart is automatically deleted for a Lite plan.
When the data mart is deleted, the deletion includes the service configuration settings and tables:
* All configuration tables are deleted including the following configuration tables and files:
* Bindings
* Subscriptions
* Settings
* All the tables that are created for model evaluation are deleted, including, but not limited to, the following tables:
* Payload
* Feedback
* Manual labeling
* Monitors
* Performance
* Explanation
* Annotation tables
Lite plan services are deleted after 30 days of inactivity. Even if you don't delete your instance from IBM Cloud, your data mart is deleted after 30 days of inactivity.
As a user of the Essential plan, your data mart is not automatically deleted. You can delete your watsonx.governance service instance from IBM Cloud and use the command-line interface to delete the data mart.
Parent topic:[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html)
| # Provisioning and launching watsonx\.governance #
You can provision and launch your watsonx\.governance service instance to start monitoring your model assets\.
**Prerequisite** : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)\.
**Required permissions** : To provision and launch a watsonx\.governance service instance, you must have *Administrator* or *Editor* platform access roles in the IBM Cloud account for IBM watsonx\. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account\. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html#iamroles)\.
## Launching a watsonx\.governance service instance ##
Before you launch watsonx\.governance, you must [create a service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) from your watsonx account\.
To launch watsonx\.governance from IBM watsonx:
<!-- <ol> -->
1. From the navigation menu, choose **Administration > Services > Service instances**\.
2. Click your *watsonx\.governance* service instance\.
3. From the *Service Details* page, click **Launch watsonx\.governance**\.
<!-- </ol> -->
## Managing watsonx\.governance ##
You can manage your Watson OpenScale service instance by upgrading it or deleting it\.
You can upgrade watsonx\.governance from a free Lite plan to a paid plan by using the IBM Cloud dashboard:
Note: Upgrade to a paid plan if you are getting error messages, such as `403 Errors.AIQFM0011: 'Lite plan has exceeded the 50,000 rows limitation for Debias` or `Deployment creation failed. Error: 402`\.
<!-- <ol> -->
1. From the watsonx\.governance dashboard, click your profile\.
2. Click **View upgrade options**\.
3. Select the **Essential** plan and click **Upgrade**\.
<!-- </ol> -->
You can also delete the watsonx\.governance service instance and related data\. After 30 days of inactivity, the data mart is automatically deleted for a Lite plan\.
When the data mart is deleted, it includes the service configuration settings and tables:
<!-- <ul> -->
* All configuration tables are deleted including the following configuration tables and files:
<!-- <ul> -->
* Bindings
* Subscriptions
* Settings
<!-- </ul> -->
* All the tables that are created for model evaluation are deleted, including, but not limited to, the following tables:
<!-- <ul> -->
* Payload
* Feedback
* Manual labeling
* Monitors
* Performance
* Explanation
* Annotation tables
<!-- </ul> -->
<!-- </ul> -->
Lite plan services are deleted after 30 days of inactivity\. Even if you don't delete your instance from IBM Cloud, your data mart is deleted after 30 days of inactivity\.
As a user of the Essential plan, your data mart is not automatically deleted\. You can delete your watsonx\.governance service instance from IBM Cloud and use the command\-line interface to delete the data mart\.
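If you prefer to remove the data mart programmatically rather than through the console, a minimal sketch with the Watson OpenScale Python SDK might look like the following. The `data_marts.delete` call and its parameter name are assumptions about the SDK, and all values shown are placeholders; verify the exact method signature against the Python SDK reference before you rely on it.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson_openscale import APIClient

# Placeholder credentials: replace with your own IBM Cloud API key.
authenticator = IAMAuthenticator(apikey="<your_ibm_cloud_api_key>")
client = APIClient(authenticator=authenticator)

# Assumed call: deleting the data mart removes its configuration settings
# and evaluation tables. Method name and parameter are assumptions about
# the SDK; check the SDK reference for the exact signature.
client.data_marts.delete(data_mart_id="<data_mart_id>")
```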
**Parent topic:**[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html)
<!-- </article "role="article" "> -->
|
60CC59B176B08462143EA591DAC074060AD988C7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html?context=cdpaas&locale=en | Sending model transactions in watsonx.governance | Sending model transactions in watsonx.governance
You must send model transactions from your deployment to watsonx.governance to enable model evaluations.
To continually generate accurate results for your model evaluations, watsonx.governance must continue to receive new data from your deployment. watsonx.governance provides different methods that you can use to send transactions for model evaluations.
Importing data
When you review evaluation results in watsonx.governance, you can import data by selecting Evaluate now in the Actions menu to import payload and feedback data for your model evaluations.

For pre-production models, you must upload a CSV file that contains examples of input and output data. To run evaluations with imported data, you must map prompt variables to the associated columns in your CSV file and select Upload and evaluate as shown in the following example:

For production models, you can select Upload payload data or Upload feedback data in the Import test data window to upload a CSV file as shown in the following example:

The CSV file must contain labeled columns that match the columns in your payload and feedback schemas. When your upload completes successfully, you can select Evaluate now to run your evaluations with your imported data.
Using endpoints
For production models, Watson OpenScale supports endpoints that you can use to provide data in formats that enable evaluations. You can use the payload logging endpoint to send scoring requests for drift evaluations and use the feedback logging endpoint to provide feedback data for quality evaluations. For more information about the data formats, see [Managing data for model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html#it-dbo-active).
Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)
| # Sending model transactions in watsonx\.governance #
You must send model transactions from your deployment to watsonx\.governance to enable model evaluations\.
To continually generate accurate results for your model evaluations, watsonx\.governance must continue to receive new data from your deployment\. watsonx\.governance provides different methods that you can use to send transactions for model evaluations\.
## Importing data ##
When you review evaluation results in watsonx\.governance, you can import data by selecting **Evaluate now** in the **Actions** menu to import payload and feedback data for your model evaluations\.

For pre\-production models, you must upload a CSV file that contains examples of input and output data\. To run evaluations with imported data, you must map prompt variables to the associated columns in your CSV file and select **Upload and evaluate** as shown in the following example:

For production models, you can select **Upload payload data** or **Upload feedback data** in the **Import test data** window to upload a CSV file as shown in the following example:

The CSV file must contain labeled columns that match the columns in your payload and feedback schemas\. When your upload completes successfully, you can select **Evaluate now** to run your evaluations with your imported data\.
## Using endpoints ##
For production models, Watson OpenScale supports endpoints that you can use to provide data in formats that enable evaluations\. You can use the payload logging endpoint to send scoring requests for drift evaluations and use the feedback logging endpoint to provide feedback data for quality evaluations\. For more information about the data formats, see [Managing data for model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html#it-dbo-active)\.
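For example, a minimal sketch of sending feedback records with the Watson OpenScale Python SDK might look like the following. It assumes that `client` is an already authenticated `APIClient`, that `feedback_data_set_id` identifies your feedback data set, and that the record fields match the column names in your feedback schema; the field names shown are placeholders only.

```python
# client: an authenticated ibm_watson_openscale.APIClient instance
# feedback_data_set_id: the ID of the data set with type "feedback"
client.data_sets.store_records(
    data_set_id=feedback_data_set_id,
    request_body=[
        {
            # Placeholder field names: use the columns from your feedback schema.
            "input_text": "example model input",
            "generated_text": "model output to evaluate",
            "reference_text": "expected (labeled) output"
        }
    ]
)
```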
**Parent topic:**[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)
<!-- </article "role="article" "> -->
|
422554C1DCEBABC93CB859B4A896908DA48A540D | https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en | Setting up watsonx.governance | Setting up watsonx.governance
You can set up watsonx.governance to monitor model assets in your IBM watsonx projects or deployment spaces. To set up watsonx.governance, you can manage users and roles for your organization to control access to your projects or deployment spaces.
To set up watsonx.governance, complete the following tasks:
* [Creating access policies](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-access-policies)
* [Managing users and roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-users-wx)
Creating access policies
You can complete the following steps to invite users to an IBM Cloud account that has a watsonx.governance instance installed and assign service access.
Required roles : Users must have the Reader, Writer, or higher IBM Cloud IAM Platform roles for service access. Users that are assigned the Writer role or higher can access information across projects and deployment spaces in watsonx.governance.
1. From the IBM Cloud homepage, click Manage > Access (IAM).
2. From the IAM dashboard, click Users and select Invite user.
3. Complete the following fields:
* How do you want to assign access? : Access policy.
* Which service do you want to assign access to? : watsonx.governance and click Next.
* How do you want to scope the access? : Select the scope of access for users and click Next.
* If you select Specific resources, select an attribute type and specify a value for each condition that you add.
* If you select Service instance in the Attribute type list, specify your instance in the Value field.
4. If you have multiple instances, you must find the data mart ID to specify the instance that you want to assign users access to. You can use one of the following methods to find the data mart ID:
* On the Insights dashboard, click a model deployment tile and go to Actions > View model information to find the data mart ID.
* On the Insights dashboard, click the navigation menu on a model deployment tile and select Configure monitors. Then, go to the Endpoints tab and find the data mart ID in the Integration details section of the Model information tab.
5. Select the Reader role in the Service access list.
6. Assign access to users.
* If you are assigning access to new users, click Add, and then click Invite in the Access summary pane.
* If you are assigning access to existing users, click Add, and then click Assign in the Access summary pane.
watsonx.governance users and roles
You can assign roles to watsonx.governance users to collaborate on model evaluations in [projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html#add-collaborators) and [deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html#adding-collaborators).
The following table lists permissions for roles that you can assign for access to evaluations. The Operator and Viewer roles are equivalent.
Table 1. Operations by role
The first row of the table describes separate roles that you can choose from when creating a user. Each column provides a checkmark in the role category for the capability associated with that role.
Operations Admin role Editor role Viewer/Operator role
Evaluation ✔ ✔
View evaluation result ✔ ✔ ✔
Configure monitoring condition ✔ ✔
View monitoring condition ✔ ✔ ✔
Upload training data CSV file in model risk management ✔ ✔
Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
| # Setting up watsonx\.governance #
You can set up watsonx\.governance to monitor model assets in your IBM watsonx projects or deployment spaces\. To set up watsonx\.governance, you can manage users and roles for your organization to control access to your projects or deployment spaces\.
To set up watsonx\.governance, complete the following tasks:
<!-- <ul> -->
* [Creating access policies](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-access-policies)
* [Managing users and roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=en#wos-users-wx)
<!-- </ul> -->
## Creating access policies ##
You can complete the following steps to invite users to an IBM Cloud account that has a watsonx\.governance instance installed and assign service access\.
**Required roles** : Users must have the **Reader**, **Writer**, or higher IBM Cloud IAM Platform roles for service access\. Users that are assigned the **Writer** role or higher can access information across projects and deployment spaces in watsonx\.governance\.
<!-- <ol> -->
1. From the IBM Cloud homepage, click **Manage > Access (IAM)**\.
2. From the IAM dashboard, click **Users** and select **Invite user**\.
3. Complete the following fields:
<!-- <ul> -->
* *How do you want to assign access?* : `Access policy`.
* *Which service do you want to assign access to?* : `watsonx.governance` and click **Next**.
* *How do you want to scope the access?* : Select the scope of access for users and click **Next**.
<!-- <ul> -->
* If you select **Specific resources**, select an attribute type and specify a value for each condition that you add.
* If you select **Service instance** in the *Attribute type* list, specify your instance in the *Value* field.
<!-- </ul> -->
<!-- </ul> -->
4. If you have multiple instances, you must find the data mart ID to specify the instance that you want to assign users access to\. You can use one of the following methods to find the data mart ID:
<!-- <ul> -->
* On the **Insights** dashboard, click a model deployment tile and go to **Actions > View model information** to find the data mart ID.
* On the **Insights** dashboard, click the navigation menu on a model deployment tile and select **Configure monitors**. Then, go to the **Endpoints** tab and find the data mart ID in the **Integration details** section of the **Model information** tab.
<!-- </ul> -->
5. Select the **Reader** role in the **Service access** list\.
6. Assign access to users\.
<!-- <ul> -->
* If you are assigning access to new users, click **Add**, and then click **Invite** in the *Access summary* pane.
* If you are assigning access to existing users, click **Add**, and then click **Assign** in the *Access summary* pane.
<!-- </ul> -->
<!-- </ol> -->
## watsonx\.governance users and roles ##
You can assign roles to watsonx\.governance users to collaborate on model evaluations in [projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html#add-collaborators) and [deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html#adding-collaborators)\.
The following table lists permissions for roles that you can assign for access to evaluations\. The **Operator** and **Viewer** roles are equivalent\.
<!-- <table "class="comparison-table" "> -->
Table 1\. Operations by role
The first row of the table describes separate roles that you can choose from when creating a user\. Each column provides a checkmark in the role category for the capability associated with that role\.
| Operations | Admin role | Editor role | Viewer/Operator role |
|:------------------------------------------------------ |:----------:|:-----------:|:--------------------:|
| Evaluation | ✔ | ✔ | |
| View evaluation result | ✔ | ✔ | ✔ |
| Configure monitoring condition | ✔ | ✔ | |
| View monitoring condition | ✔ | ✔ | ✔ |
| Upload training data CSV file in model risk management | ✔ | ✔ | |
<!-- </table "class="comparison-table" "> -->
**Parent topic:**[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
<!-- </article "role="article" "> -->
|
225192BB81696D14887CC55070A6DFA14B3315F7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/asset_browser.html?context=cdpaas&locale=en | Adding data to Data Refinery | Adding data to Data Refinery
After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) and you [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) or you [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to the project, you can then add data to Data Refinery and start prepping that data for analysis.
You can add data to Data Refinery in one of several ways:
* Select Prepare data from the overflow menu () of a data asset in the All assets list for the project
* Preview a data asset in the project and then click Prepare data
* Navigate to Data Refinery first and then add data to it
Navigate to Data Refinery
1. Access Data Refinery from within a project. Click the Assets tab.
2. Click New asset > Prepare and visualize data.
3. Select the data that you want to work with from Data assets or from Connections.
From Data assets:
* Select a data file (the selection includes data files that were already shaped with Data Refinery)
* Select a connected data asset
From Connections:
* Select a connection and file
* Select a connection, folder, and file
* Select a connection, schema, and table or view
Data Refinery supports these file types: Avro, CSV, delimited text files, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the "sas7bdat" extension (read only), TSV (read only)
Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html).
Data connections marked with a key icon () are locked. If you are authorized to access the data source, you are asked to enter your personal credentials the first time you select it. This one-time step permanently unlocks the connection for you. After you have unlocked the connection, the key icon is no longer displayed. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
4. Click Add to load the data into Data Refinery.
Next steps
* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html)
* [Use visualizations to gain insights into your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html)
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Adding data to Data Refinery #
After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) and you [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) or you [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to the project, you can then add data to Data Refinery and start prepping that data for analysis\.
You can add data to Data Refinery in one of several ways:
<!-- <ul> -->
* Select **Prepare data** from the overflow menu () of a data asset in the **All assets** list for the project
* Preview a data asset in the project and then click **Prepare data**
* Navigate to Data Refinery first and then add data to it
<!-- </ul> -->
## Navigate to Data Refinery ##
<!-- <ol> -->
1. Access Data Refinery from within a project\. Click the **Assets** tab\.
2. Click **New asset > Prepare and visualize data**\.
3. Select the data that you want to work with from **Data assets** or from **Connections**\.
From **Data assets**:
<!-- <ul> -->
* Select a data file (the selection includes data files that were already shaped with Data Refinery)
* Select a connected data asset
<!-- </ul> -->
From **Connections**:
<!-- <ul> -->
* Select a connection and file
* Select a connection, folder, and file
* Select a connection, schema, and table or view
<!-- </ul> -->
Data Refinery supports these file types: Avro, CSV, delimited text files, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the "sas7bdat" extension (read only), TSV (read only)
Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html).
Data connections marked with a key icon () are locked. If you are authorized to access the data source, you are asked to enter your personal credentials the first time you select it. This one-time step permanently unlocks the connection for you. After you have unlocked the connection, the key icon is no longer displayed. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
4. Click **Add** to load the data into Data Refinery\.
<!-- </ol> -->
## Next steps ##
<!-- <ul> -->
* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html)
* [Use visualizations to gain insights into your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html)
<!-- </ul> -->
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en | Interactive code templates in Data Refinery | Interactive code templates in Data Refinery
Data Refinery provides interactive templates for you to code operations, functions, and logical operators. Access the templates from the command-line text box at the top of the page. The templates include interactive assistance to help you with the syntax options.
Important: Only the operations and functions in the user interface are supported. If you insert other operations or functions from an open source library, the Data Refinery flow might fail. See the command-line help and be sure to use the list of operations or functions from the templates. Use the examples in the templates to further customize the syntax as needed.
* [Operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#operations)
* [Functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#functions)
* [Logical operators](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#logical_operators)
Operations
arrange
arrange(`<column>)
Sort rows, in ascending order, by the specified columns.
arrange(desc(`<column>))
Sort rows, in descending order, by the specified column.
arrange(`<column>, <column>)
Sort rows, in ascending order, by each specified, successive column, keeping the order from the prior sort intact.
count
count()
Total the data by group.
count(`<column>)
Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values).
count(`<column>, wt=<column>)
Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values) in the specified weight column.
count(`<column>, wt=<func>(<column>))
Group the data by the specified column and return the result of the function applied to the specified weight column.
count(`<column>, wt=<func>(<column>), sort = <logical>)
Group the data by the specified column and return the result of the function applied to the specified weight column, sorted or not.
distinct
distinct()
Keep distinct, unique rows based on all columns or on specified columns.
filter
filter(`<column> <logicalOperator> provide_value)
Keep rows that meet the specified condition and filter out all other rows.
For the Boolean column type, provide_value should be uppercase TRUE or FALSE.
filter(`<column>== <logical>)
Keep rows that meet the specified filter conditions based on logical value TRUE or FALSE.
filter(<func>(<column>) <logicalOperator> provide_value)
Keep rows that meet the specified condition and filter out all other rows. The condition can apply a function to a column on the left side of the operator.
filter(`<column> <logicalOperator><func(column)>)
Keep rows that meet the specified condition and filter out all other rows. The condition can apply a function to a column on the right side of the operator.
filter(<logicalfunc(column)>)
Keep rows that meet the specified condition and filter out all other rows. The condition can apply a logical function to a column.
filter(`<column> <logicalOperator> provide_value <andor> <column> <logicalOperator> provide_value )
Keep rows that meet the specified conditions and filter out all other rows.
group_by
group_by(`<column>)
Group the data based on the specified column.
group_by(desc(`<column>))
Group the data, in descending order, based on the specified column.
mutate
mutate(provide_new_column = `<column>)
Add a new column and keep existing columns.
mutate(provide_new_column = <func(column)>)
Add a new column by using the specified expression, which applies a function to a column. Keep existing columns.
mutate(provide_new_column = case_when(`<column> <operator> provide_value_or_column_to_compare ~ provide_value_or_column_to_replace, <column> <operator> provide_value_or_column_to_compare ~ provide_value_or_column_to_replace, TRUE ~ provide_default_value_or_column))
Add a new column by using the specified conditional expression.
mutate(provide_new_column = `<column> <operator> <column>)
Add a new column by using the specified expression, which performs a calculation with existing columns. Keep existing columns.
mutate(provide_new_column = coalesce(`<column>, <column>))
Add a new column by using the specified expression, which replaces missing values in the new column with values from another, specified column. As an alternative to specifying another column, you can specify a value, a function on a column, or a function on a value. Keep existing columns.
mutate(provide_new_column = if_else(`<column> <logicalOperator> provide_value, provide_value_for_true, provide_value_for_false))
Add a new column by using the specified conditional expression. Keep existing columns.
mutate(provide_new_column = `<column>, provide_new_column = <column>)
Add multiple new columns and keep existing columns.
mutate(provide_new_column = n())
Count the values in the groups. Ensure grouping is done already using group_by. Keep existing columns.
mutate_all
mutate_all(funs(<func>))
Apply the specified function to all of the columns and overwrite the existing values in those columns. Specify whether to remove missing values.
mutate_all(funs(. <operator> provide_value))
Apply the specified operator to all of the columns and overwrite the existing values in those columns.
mutate_all(funs("provide_value" = . <operator> provide_value))
Apply the specified operator to all of the columns and create new columns to hold the results. Give the new columns names that end with the specified value.
mutate_at
mutate_at(vars(`<column>), funs(<func>))
Apply functions to the specified columns.
mutate_if
mutate_if(<predicateFunc>, <func>)
Apply functions to the columns that meet the specified condition.
mutate_if(<predicateFunc>, funs( . <operator> provide_value))
Apply the specified operator to the columns that meet the specified condition.
mutate_if(<predicateFunc>, funs(<func>))
Apply functions to the columns that meet the specified condition. Specify whether to remove missing values.
rename
rename(provide_new_column = `<column>)
Rename the specified column.
sample_frac
sample_frac(provide_number_between_0_and_1, weight=`<column>,replace=<logical>)
Generate a random sample based on a percentage of the data. weight is optional and is the ratio of probability the row will be chosen. Provide a numeric column. replace is optional and its default is FALSE.
sample_n
sample_n(provide_number_of_rows,weight=`<column>,replace=<logical>)
Generate a random sample of data based on a number of rows. weight is optional and is the ratio of probability the row will be chosen. Provide a numeric column. replace is optional and its default is FALSE.
select
select(`<column>)
Keep the specified column.
select(-`<column>)
Remove the specified column.
select(starts_with("provide_text_value"))
Keep columns with names that start with the specified value.
select(ends_with("provide_text_value"))
Keep columns with names that end with the specified value.
select(contains("provide_text_value"))
Keep columns with names that contain the specified value.
select(matches ("provide_text_value"))
Keep columns with names that match the specified value. The specified value can be text or a regular expression.
select(`<column>:<column>)
Keep the columns in the specified range. Specify the range as from one column to another column.
select(`<column>, everything())
Keep all of the columns, but make the specified column the first column.
select(`<column>, <column>)
Keep the specified columns.
select_if
select_if(<predicateFunc>) Keep columns that meet the specified condition. Supported functions include:
* contains
* ends_with
* matches
* num_range
* starts_with
summarize
summarize(provide_new_column = <func>(<column>))
Apply aggregate functions to the specified columns to reduce multiple column values to a single value. Be sure to group the column data first by using the group_by operation.
summarize_all
summarize_all(<func>)
Apply an aggregate function to all of the columns to reduce multiple column values to a single value. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation.
summarize_all(funs(<func>))
Apply multiple aggregate functions to all of the columns to reduce multiple column values to a single value. Create new columns to hold the results. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation.
summarize_if
summarize_if(<predicate_conditions>,...)
Apply aggregate functions to columns that meet the specified conditions to reduce multiple column values to a single value. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation. Supported functions include:
* count
* max
* mean
* min
* standard deviation
* sum
tally
tally()
Counts the number of rows (for string columns) or totals the data (for numeric values) by group. Be sure to group the column data first by using the group_by operation.
tally(wt=`<column>)
Counts the number of rows (for string columns) or totals the data (for numeric columns) by group for the weighted column.
tally( wt=<func>(<column>), sort = <logical>)
Applies a function to the specified weighted column and returns the result, by group, sorted or not.
top_n
top_n(provide_value)
Select the top or bottom N rows (by value) in each group. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows.
top_n(provide_value, `<column>)
Select the top or bottom N rows (by value) in each group, based on the specified column. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows.
If duplicate rows affect the count, use the Remove duplicates GUI operation prior to using the top_n() operation.
transmute
transmute(<new_or_existing_column> = <column>)
Add a new column or overwrite an existing one by using the specified expression. Keep only columns that are mentioned in the expression.
transmute(<new_or_existing_column> = <func(column)>)
Add a new column or overwrite an existing one by applying a function to the specified column. Keep only columns that are mentioned in the expression.
transmute(<new_or_existing_column> = <column> <operator> <column>)
Add a new column or overwrite an existing one by applying an operator to the specified column. Keep only columns that are mentioned in the expression.
transmute(<new_or_existing_column> = <column>, <new_or_existing_column> = <column>)
Add multiple new columns. Keep only columns that are mentioned in the expression.
transmute(<new_or_existing_column> = if_else( provide_value, provide_value_for_true, provide_value_for_false))
Add a new column or overwrite an existing one by using the specified conditional expressions. Keep only columns that are mentioned in the expressions.
ungroup
ungroup()
Ungroup the data.
Functions
Aggregate
* mean
* min
* n
* sd
* sum
Logical
* is.na
Numerical
* abs
* coalesce
* cut
* exp
* floor
Text
* c
* coalesce
* paste
* tolower
* toupper
Type
* as.character
* as.double
* as.integer
* as.logical
Logical operators
* <
* <=
* >=
* >
* between
* !=
* ==
* %in%
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Interactive code templates in Data Refinery #
Data Refinery provides interactive templates for you to code operations, functions, and logical operators\. Access the templates from the command\-line text box at the top of the page\. The templates include interactive assistance to help you with the syntax options\.
Important: Only the operations and functions in the user interface are supported\. If you insert other operations or functions from an open source library, the Data Refinery flow might fail\. See the command\-line help and be sure to use the list of operations or functions from the templates\. Use the examples in the templates to further customize the syntax as needed\.
<!-- <ul> -->
* [Operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#operations)
* [Functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#functions)
* [Logical operators](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=en#logical_operators)
<!-- </ul> -->
## Operations ##
### arrange ###
arrange(\``<column>`\`)
Sort rows, in ascending order, by the specified columns\.
arrange(desc(\``<column>`\`))
Sort rows, in descending order, by the specified column\.
arrange(\``<column>`\`, \``<column>`\`)
Sort rows, in ascending order, by each specified, successive column, keeping the order from the prior sort intact\.
### count ###
count()
Total the data by group\.
count(\``<column>`\`)
Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values)\.
count(\``<column>`\`, wt=\``<column>`\`)
Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values) in the specified weight column\.
count(\``<column>`\`, wt=`<func>`(\``<column>`\`))
Group the data by the specified column and return the result of the function applied to the specified weight column\.
count(\``<column>`\`, wt=`<func>`(\``<column>`\`), sort = `<logical>`)
Group the data by the specified column and return the result of the function applied to the specified weight column, sorted or not\.
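For example, with hypothetical columns `CITY` (string) and `SALES` (numeric), concrete count commands might look like this:

```
count(`CITY`)                                   # rows per distinct CITY value
count(`CITY`, wt=`SALES`)                       # total SALES for each CITY
count(`CITY`, wt=mean(`SALES`), sort = TRUE)    # mean SALES per CITY, sorted
```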
### distinct ###
distinct()
Keep distinct, unique rows based on all columns or on specified columns\.
### filter ###
filter(\``<column>`\` `<logicalOperator>` provide\_value)
Keep rows that meet the specified condition and filter out all other rows\.
For the Boolean column type, provide\_value should be uppercase TRUE or FALSE\.
filter(\``<column>`\`== `<logical>`)
Keep rows that meet the specified filter conditions based on logical value TRUE or FALSE\.
filter(`<func>`(\``<column>`\`) `<logicalOperator>` provide\_value)
Keep rows that meet the specified condition and filter out all other rows\. The condition can apply a function to a column on the left side of the operator\.
filter(\``<column>`\` `<logicalOperator>``<func(column)>`)
Keep rows that meet the specified condition and filter out all other rows\. The condition can apply a function to a column on the right side of the operator\.
filter(`<logicalfunc(column)>`)
Keep rows that meet the specified condition and filter out all other rows\. The condition can apply a logical function to a column\.
filter(\``<column>`\` `<logicalOperator>` provide\_value `<andor>` \``<column>`\` `<logicalOperator>` provide\_value )
Keep rows that meet the specified conditions and filter out all other rows\.
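As an illustration, the filter templates might be filled in as follows; `AGE`, `STATE`, and `ACTIVE` are hypothetical column names from your own data set:

```
filter(`AGE` >= 18)                           # keep rows where AGE is at least 18
filter(`ACTIVE` == TRUE)                      # Boolean column: use uppercase TRUE or FALSE
filter(`STATE` == "NY" | `STATE` == "NJ")     # keep rows that match either condition
filter(is.na(`AGE`))                          # keep rows where AGE is missing
```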
### group\_by ###
group\_by(\``<column>`\`)
Group the data based on the specified column\.
group\_by(desc(\``<column>`\`))
Group the data, in descending order, based on the specified column\.
### mutate ###
mutate(provide\_new\_column = \``<column>`\`)
Add a new column and keep existing columns\.
mutate(provide\_new\_column = `<func(column)>`)
Add a new column by using the specified expression, which applies a function to a column\. Keep existing columns\.
mutate(provide\_new\_column = case\_when(\``<column>`\` `<operator>` provide\_value\_or\_column\_to\_compare ~ provide\_value\_or\_column\_to\_replace, \``<column>`\` `<operator>` provide\_value\_or\_column\_to\_compare ~ provide\_value\_or\_column\_to\_replace, TRUE ~ provide\_default\_value\_or\_column))
Add a new column by using the specified conditional expression\.
mutate(provide\_new\_column = \``<column>`\` `<operator>` \``<column>`\`)
Add a new column by using the specified expression, which performs a calculation with existing columns\. Keep existing columns\.
mutate(provide\_new\_column = coalesce(\``<column>`\`, \``<column>`\`))
Add a new column by using the specified expression, which replaces missing values in the new column with values from another, specified column\. As an alternative to specifying another column, you can specify a value, a function on a column, or a function on a value\. Keep existing columns\.
mutate(provide\_new\_column = if\_else(\``<column>`\` `<logicalOperator>` provide\_value, provide\_value\_for\_true, provide\_value\_for\_false))
Add a new column by using the specified conditional expression\. Keep existing columns\.
mutate(provide\_new\_column = \``<column>`\`, provide\_new\_column = \``<column>`\`)
Add multiple new columns and keep existing columns\.
mutate(provide\_new\_column = n())
Count the values in the groups\. Ensure grouping is done already using group\_by\. Keep existing columns\.
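For example, with hypothetical columns `PRICE`, `QUANTITY`, and `LIST_PRICE`, mutate commands might look like this:

```
mutate(TOTAL = `PRICE` * `QUANTITY`)                                 # calculated column
mutate(PRICE_FILLED = coalesce(`PRICE`, `LIST_PRICE`))               # fill missing PRICE from LIST_PRICE
mutate(ORDER_SIZE = if_else(`QUANTITY` > 100, "bulk", "standard"))   # conditional column
```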
### mutate\_all ###
mutate\_all(funs(`<func>`))
Apply the specified function to all of the columns and overwrite the existing values in those columns\. Specify whether to remove missing values\.
mutate\_all(funs(\. `<operator>` provide\_value))
Apply the specified operator to all of the columns and overwrite the existing values in those columns\.
mutate\_all(funs("provide\_value" = \. `<operator>` provide\_value))
Apply the specified operator to all of the columns and create new columns to hold the results\. Give the new columns names that end with the specified value\.
### mutate\_at ###
mutate\_at(vars(\``<column>`\`), funs(`<func>`))
Apply functions to the specified columns\.
### mutate\_if ###
mutate\_if(`<predicateFunc>`, `<func>`)
Apply functions to the columns that meet the specified condition\.
mutate\_if(`<predicateFunc>`, funs( \. `<operator>` provide\_value))
Apply the specified operator to the columns that meet the specified condition\.
mutate\_if(`<predicateFunc>`, funs(`<func>`))
Apply functions to the columns that meet the specified condition\. Specify whether to remove missing values\.
### rename ###
rename(provide\_new\_column = \``<column>`\`)
Rename the specified column\.
### sample\_frac ###
sample\_frac(provide\_number\_between\_0\_and\_1, weight=\``<column>`\`,replace=`<logical>`)
Generate a random sample based on a percentage of the data\. weight is optional and is the ratio of probability the row will be chosen\. Provide a numeric column\. replace is optional and its default is FALSE\.
### sample\_n ###
sample\_n(provide\_number\_of\_rows,weight=\``<column>`\`,replace=`<logical>`)
Generate a random sample of data based on a number of rows\. weight is optional and is the ratio of probability the row will be chosen\. Provide a numeric column\. replace is optional and its default is FALSE\.
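For example, assuming your data set has a numeric `WEIGHT` column:

```
sample_frac(0.1)                                  # random sample of about 10% of the rows
sample_n(500, weight=`WEIGHT`, replace=FALSE)     # 500 rows, weighted by the WEIGHT column
```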
### select ###
select(\``<column>`\`)
Keep the specified column\.
select(\-\``<column>`\`)
Remove the specified column\.
select(starts\_with("provide\_text\_value"))
Keep columns with names that start with the specified value\.
select(ends\_with("provide\_text\_value"))
Keep columns with names that end with the specified value\.
select(contains("provide\_text\_value"))
Keep columns with names that contain the specified value\.
select(matches ("provide\_text\_value"))
Keep columns with names that match the specified value\. The specified value can be text or a regular expression\.
select(\``<column>`\`:\``<column>`\`)
Keep the columns in the specified range\. Specify the range as from one column to another column\.
select(\``<column>`\`, everything())
Keep all of the columns, but make the specified column the first column\.
select(\``<column>`\`, \``<column>`\`)
Keep the specified columns\.
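For example, with hypothetical column names:

```
select(`CUSTOMER_ID`, `ORDER_DATE`)      # keep only these two columns
select(-`INTERNAL_NOTES`)                # remove one column, keep the rest
select(starts_with("ADDR_"))             # keep columns whose names start with ADDR_
select(`CUSTOMER_ID`, everything())      # move CUSTOMER_ID to the first position
```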
### select\_if ###
select\_if(`<predicateFunc>`) Keep columns that meet the specified condition\. Supported functions include:
<!-- <ul> -->
* contains
* ends\_with
* matches
* num\_range
* starts\_with
<!-- </ul> -->
### summarize ###
summarize(provide\_new\_column = `<func>`(\``<column>`\`))
Apply aggregate functions to the specified columns to reduce multiple column values to a single value\. Be sure to group the column data first by using the group\_by operation\.
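For example, entered as two successive steps with hypothetical columns `CITY` and `SALES` (the grouping step must come first):

```
group_by(`CITY`)                           # step 1: group the rows by CITY
summarize(TOTAL_SALES = sum(`SALES`))      # step 2: one TOTAL_SALES value per CITY
```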
### summarize\_all ###
summarize\_all(`<func>`)
Apply an aggregate function to all of the columns to reduce multiple column values to a single value\. Specify whether to remove missing values\. Be sure to group the column data first by using the group\_by operation\.
summarize\_all(funs(`<func>`))
Apply multiple aggregate functions to all of the columns to reduce multiple column values to a single value\. Create new columns to hold the results\. Specify whether to remove missing values\. Be sure to group the column data first by using the group\_by operation\.
### summarize\_if ###
summarize\_if(`<predicate_conditions>`,\.\.\.)
Apply aggregate functions to columns that meet the specified conditions to reduce multiple column values to a single value\. Specify whether to remove missing values\. Be sure to group the column data first by using the group\_by operation\. Supported functions include:
<!-- <ul> -->
* count
* max
* mean
* min
* standard deviation
* sum
<!-- </ul> -->
### tally ###
tally()
Counts the number of rows (for string columns) or totals the data (for numeric values) by group\. Be sure to group the column data first by using the group\_by operation\.
tally(wt=\``<column>`\`)
Counts the number of rows (for string columns) or totals the data (for numeric columns) by group for the weighted column\.
tally( wt=`<func>`(\``<column>`\`), sort = `<logical>`)
Applies a function to the specified weighted column and returns the result, by group, sorted or not\.
### top\_n ###
top\_n(provide\_value)
Select the top or bottom N rows (by value) in each group\. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows\.
top\_n(provide\_value, \``<column>`\`)
Select the top or bottom N rows (by value) in each group, based on the specified column\. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows\.
If duplicate rows affect the count, use the **Remove duplicates** GUI operation prior to using the top\_n() operation\.
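For example, with hypothetical columns `CITY` and `SALES`:

```
group_by(`CITY`)           # optional: rank within each CITY group
top_n(5, `SALES`)          # keep the 5 rows with the highest SALES in each group
top_n(-5, `SALES`)         # alternative: keep the 5 rows with the lowest SALES
```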
### transmute ###
transmute(`<new_or_existing_column>` = \``<column>`\`)
Add a new column or overwrite an existing one by using the specified expression\. Keep only columns that are mentioned in the expression\.
transmute(`<new_or_existing_column>` = `<func(column)>`)
Add a new column or overwrite an existing one by applying a function to the specified column\. Keep only columns that are mentioned in the expression\.
transmute(`<new_or_existing_column>` = \``<column>`\` `<operator>` \``<column>`\`)
Add a new column or overwrite an existing one by applying an operator to the specified column\. Keep only columns that are mentioned in the expression\.
transmute(`<new_or_existing_column>` = \``<column>`\`, `<new_or_existing_column>` = \``<column>`\`)
Add multiple new columns\. Keep only columns that are mentioned in the expression\.
transmute(`<new_or_existing_column>` = if\_else( provide\_value, provide\_value\_for\_true, provide\_value\_for\_false))
Add a new column or overwrite an existing one by using the specified conditional expressions\. Keep only columns that are mentioned in the expressions\.
### ungroup ###
ungroup()
Ungroup the data\.
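Putting several operations together, a short Data Refinery flow for a hypothetical data set with `STATUS`, `CITY`, and `SALES` columns might be built from steps like the following, entered one command at a time:

```
filter(`STATUS` == "closed")               # step 1: keep only closed records
group_by(`CITY`)                           # step 2: group the remaining rows by CITY
summarize(TOTAL_SALES = sum(`SALES`))      # step 3: total SALES for each CITY
arrange(desc(`TOTAL_SALES`))               # step 4: sort cities from highest to lowest total
```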
## Functions ##
### Aggregate ###
<!-- <ul> -->
* mean
* min
* n
* sd
* sum
<!-- </ul> -->
### Logical ###
<!-- <ul> -->
* is\.na
<!-- </ul> -->
### Numerical ###
<!-- <ul> -->
* abs
* coalesce
* cut
* exp
* floor
<!-- </ul> -->
### Text ###
<!-- <ul> -->
* c
* coalesce
* paste
* tolower
* toupper
<!-- </ul> -->
### Type ###
<!-- <ul> -->
* as\.character
* as\.double
* as\.integer
* as\.logical
<!-- </ul> -->
## Logical operators ##
<!-- <ul> -->
* <
* <=
* >=
* >
* between
* \!=
* ==
* %in%
<!-- </ul> -->
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
0999F59BB8E2E2AB7722D57CDBC051A0984ABE45 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en | Managing Data Refinery flows | Managing Data Refinery flows
A Data Refinery flow is an ordered set of steps to cleanse, shape, and enhance data. As you [refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html#refine) by [applying operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html) to a data set, you dynamically build a customized Data Refinery flow that you can modify in real time and save for future use.
These are actions that you can do while you refine your data:
Working with the Data Refinery flow
* [Save a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#save)
* [Run or schedule a job for Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#jobs)
* [Rename a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#rename)
Steps
* [Undo or redo a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#undo)
* [Edit, duplicate, insert, or delete a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-duplicate)
* [View the Data Refinery flow steps in a "snapshot view"](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#snapshot)
* [Export the Data Refinery flow data to a CSV file](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#export)
Working with the data sets
* [Change the source of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#change)
* [Edit the sample size](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#sample)
* [Edit the source properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-source)
* [Change the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#output)
* [Edit the target properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-target)
* [Change the name of the Data Refinery flow target](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#change-name)
Actions on the project page
* [Reopen a Data Refinery flow to continue working](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#reopen)
* [Duplicate a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#clone)
* [Delete a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#remove)
* [Promote a Data Refinery flow to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#promote)
Working with the Data Refinery flow
Save a Data Refinery flow
Save a Data Refinery flow by clicking the Save Data Refinery flow icon  in the Data Refinery toolbar. Data Refinery flows are saved to the project that you're working in. Save a Data Refinery flow so that you can continue refining a data set later.
The default output of the Data Refinery flow is saved as a data asset source-file-name_shaped.csv. For example, if the source file is mydata.csv, the default name and output for the Data Refinery flow is mydata_csv_shaped. You can edit the name and add an extension by [changing the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#output).
Run or schedule a job for a Data Refinery flow
Data Refinery supports large data sets, which can be time-consuming and unwieldy to refine. So that you can work quickly and efficiently, Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. When you run a job for the Data Refinery flow, the entire data set is processed. When you run the job, you select the runtime and you can add a one-time or repeating schedule.
In Data Refinery, from the Data Refinery toolbar click the Jobs icon , and then select Save and create a job or Save and view jobs.
After you save a Data Refinery flow, you can also create a job for it from the Project page. Go to the Assets tab, select the Data Refinery flow, choose New job from the overflow menu ().
You must have the Admin or Editor role to view the job details or to edit or run the job. With the Viewer role for the project, you can view only the job details.
For more information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html).
Rename a Data Refinery flow
On the Data Refinery toolbar, open the Info pane . Or open the Flow settings  and go to the General tab.
Steps
Undo or redo a step
Click the undo () icon or the redo () icon on the toolbar.
Edit, duplicate, insert, or delete a step
In the Steps pane, click the overflow menu () on the step for the operation that you want to change. Select the action (Edit, Duplicate, Insert step before, Insert step after, or Delete).
* If you select Edit, Data Refinery goes into edit mode and either displays the operation to be edited on the command line or in the Operation pane. Apply the edited operation.
* If you select Duplicate, the duplicated step is inserted after the selected step.
Note: The Duplicate action is not available for the Join or Union operations.
Data Refinery updates the Data Refinery flow to reflect the changes and reruns all the operations.
View the Data Refinery flow steps in a "snapshot view"
To see what your data looked like at any point in time, click a previous step to put Data Refinery into snapshot view. For example, if you click Data source, you see what your data looked like before you started refining it. Click any operation step to see what your data looked like after that operation was applied. To leave snapshot view, click Viewing step x of y or click the same step that you selected to get into snapshot view.
Export the Data Refinery flow data to a CSV file
Click Export () on the toolbar to export the data at the current step in your Data Refinery flow to a CSV file without saving or running a Data Refinery flow job. Use this option, for example, if you want quick output of a Data Refinery flow that is in progress. When you export the data, a CSV file is created and downloaded to your computer's Downloads folder (or the user-specified download location) at the current step in the Data Refinery flow. If you are in [snapshot view](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#snapshot), the output of the CSV file is at the step that you clicked. If you are viewing a sample (subset) of the data, only the sample data will be in the output.
Working with the data sets
Change the source of a Data Refinery flow
Change the source of a Data Refinery flow. Run the same Data Refinery flow but with a different source data set. There are two ways that you can change the source:
* In the Steps pane: Click the overflow menu () next to Data source, select Edit, and then choose a different source data set.

* In the Flow settings: You can use this method if you want to change more than one data source in the same place. For example, for a Join or a Union operation. On the toolbar, open the Flow settings . Go to the Source data sets tab and click the overflow menu () next to the data source. Select Replace data source, and then choose a different source data set.
For best results, the new data set should have a schema that is compatible to the original data set (for example, column names, number of columns, and data types). If the new data set has a different schema, operations that won't work with the schema will show errors. You can edit or delete the operations, or change the source to one that has a more compatible schema.
Edit the sample size
When you run the job for the Data Refinery flow, the operations are performed on the full data set. However, when you apply the operations interactively in Data Refinery, depending on the size of the data set, you view only a sample of the data.
Increase the sample size to see results that will be closer to the results of the Data Refinery flow job, but be aware that it might take longer to view the results in Data Refinery. The maximum is a top-row count of 10,000 rows or 1 MB, whichever comes first. Decrease the sample size to view faster results. Depending on the size of the data and the number and complexity of the operations, you might want to experiment with the sample size to see what works best for the data set.
On the toolbar, open the Flow settings . Go to the Source data sets tab and click the overflow menu () next to the data source, and select Edit sample.
Edit the source properties
The available properties depend on the data source. Different properties are available for data assets and for data from different kinds of connections. Change the file format only if the inferred file format is incorrect. If you change the file format, the source is read with the new format, but the source file remains unchanged. Changing the format source properties might be an iterative process. Inspect your data after you apply an option.
On the toolbar, open the Flow settings . Go to the Source data sets tab and click the overflow menu () next to the data source, and select Edit format.
Important: Use caution if you edit the source properties. Incorrect selections might produce unexpected results when the data is read or impair the Data Refinery flow job. Inspect the results of the Data Refinery flow carefully.
Change the target of a Data Refinery flow
By default, the target of the Data Refinery is saved as a data asset in the project that you're working in.
To change the target location, open Flow settings  from the toolbar. Go to the Target data set tab, click Select target, and select a different target location.
Edit the target properties
The available properties depend on the data source. Different properties are available for data assets and for data from different kinds of connections.
To change the target data set's properties, open the Flow settings  from the toolbar. Go to the Target data set tab, and click Edit properties.
Change the name of the Data Refinery flow target
The name of the target data set is included in the fields that you can change when you edit the target properties.
By default, the target of the Data Refinery is saved as a data asset source-file-name_shaped.csv in the project. For example, if the source is mydata.csv, the default name and output for the Data Refinery flow is the data asset mydata_csv_shaped.
Different properties and naming conventions apply to a target data set from a connection. For example, if the data set is in Cloud Object Storage, the data set is identified in the Bucket and File name fields. If the data set is in a Db2 database, the data set is identified in the Schema name and Table name fields.
Important: Use caution if you edit the target properties. Incorrect selections might produce unexpected results or impair the Data Refinery flow job. Inspect the results of the Data Refinery flow carefully.
Actions on the project page
Reopen a Data Refinery flow to continue working
To reopen a Data Refinery flow and continue refining your data, go to the project’s Assets tab. Under Asset types, expand Flows, click Data Refinery flow. Click the Data Refinery flow name.
Duplicate a Data Refinery flow
To create a copy of a Data Refinery flow, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow, and then select Duplicate from the overflow menu (). The Data Refinery flow is added to the Data Refinery flows list as "original-name copy 1".
Delete a Data Refinery flow
To delete a Data Refinery flow, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow, and then select Delete from the overflow menu ().
Promote a Data Refinery flow to a space
Deployment spaces are used to manage a set of related assets in a separate environment from your projects. You use a space to prepare data for a deployment job for Watson Machine Learning. You can promote Data Refinery flows from multiple projects to a single space. Complete the steps in the Data Refinery flow before you promote it because the Data Refinery flow is not editable in a space.
To promote a Data Refinery flow to a space, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow. Click the overflow menu () for the Data Refinery flow, and then select Promote. The source file for the Data Refinery flow and any other dependent data will be promoted as well.
To create or run a job for the Data Refinery flow in a space, go to the space’s Assets tab, scroll down to the Data Refinery flow, and select New job () from the overflow menu (). If you've already created the job, go to the Jobs tab to edit the job or view the job run details. The shaped output of the Data Refinery flow job will be available on the space’s Assets tab. You must have the Admin or Editor role to view the job details or to edit or run the job. With the Viewer role for the project, you can only view the job details. You can use the shaped output as input data for a job in Watson Machine Learning.
Restriction: When you promote a Data Refinery flow from a project to a space and the target of the Data Refinery flow is a connected data asset, you must manually promote the connected data asset. This action ensures that the connected data asset's data is updated when you run the Data Refinery flow job in the space. Otherwise, a successful run of the Data Refinery flow job will create a new data asset in the space.
For information about spaces, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Managing Data Refinery flows #
A Data Refinery flow is an ordered set of steps to cleanse, shape, and enhance data\. As you [refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html#refine) by [applying operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html) to a data set, you dynamically build a customized Data Refinery flow that you can modify in real time and save for future use\.
These are actions that you can do while you refine your data:
**Working with the Data Refinery flow**
<!-- <ul> -->
* [Save a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#save)
* [Run or schedule a job for Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#jobs)
* [Rename a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#rename)
<!-- </ul> -->
**Steps**
<!-- <ul> -->
* [Undo or redo a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#undo)
* [Edit, duplicate, insert, or delete a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-duplicate)
* [View the Data Refinery flow steps in a "snapshot view"](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#snapshot)
* [Export the Data Refinery flow data to a CSV file](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#export)
<!-- </ul> -->
**Working with the data sets**
<!-- <ul> -->
* [Change the source of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#change)
* [Edit the sample size](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#sample)
* [Edit the source properties ](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-source)
* [Change the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#output)
* [Edit the target properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#edit-target)
* [Change the name of the Data Refinery flow target](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#change-name)
<!-- </ul> -->
**Actions on the project page**
<!-- <ul> -->
* [Reopen a Data Refinery flow to continue working](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#reopen)
* [Duplicate a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#clone)
* [Delete a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#remove)
* [Promote a Data Refinery flow to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#promote)
<!-- </ul> -->
## Working with the Data Refinery flow ##
### Save a Data Refinery flow ###
Save a Data Refinery flow by clicking the Save Data Refinery flow icon  in the Data Refinery toolbar\. Data Refinery flows are saved to the project that you're working in\. Save a Data Refinery flow so that you can continue refining a data set later\.
The default output of the Data Refinery flow is saved as a data asset *source\-file\-name*\_shaped\.csv\. For example, if the source file is `mydata.csv`, the default name and output for the Data Refinery flow is `mydata_csv_shaped`\. You can edit the name and add an extension by [changing the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#output)\.
### Run or schedule a job for a Data Refinery flow ###
Data Refinery supports large data sets, which can be time\-consuming and unwieldy to refine\. So that you can work quickly and efficiently, Data Refinery operates on a sample subset of rows in the data set\. The sample size is 1 MB or 10,000 rows, whichever comes first\. When you run a job for the Data Refinery flow, the entire data set is processed\. When you run the job, you select the runtime and you can add a one\-time or repeating schedule\.
In Data Refinery, from the Data Refinery toolbar click the Jobs icon , and then select **Save and create a job** or **Save and view jobs**\.
After you save a Data Refinery flow, you can also create a job for it from the Project page\. Go to the **Assets** tab, select the Data Refinery flow, choose **New job** from the overflow menu ()\.
You must have the **Admin** or **Editor** role to view the job details or to edit or run the job\. With the **Viewer** role for the project, you can view only the job details\.
For more information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html)\.
### Rename a Data Refinery flow ###
On the Data Refinery toolbar, open the Info pane \. Or open the Flow settings  and go to the **General** tab\.
## Steps ##
### Undo or redo a step ###
Click the undo () icon or the redo () icon on the toolbar\.
### Edit, duplicate, insert, or delete a step ###
In the Steps pane, click the overflow menu () on the step for the operation that you want to change\. Select the action (**Edit**, **Duplicate**, **Insert step before**, **Insert step after**, or **Delete**)\.
<!-- <ul> -->
* If you select **Edit**, Data Refinery goes into edit mode and either displays the operation to be edited on the command line or in the Operation pane\. Apply the edited operation\.
<!-- </ul> -->
<!-- <ul> -->
* If you select **Duplicate**, the duplicated step is inserted after the selected step\.
<!-- </ul> -->
Note: The **Duplicate** action is not available for the *Join* or *Union* operations\.
Data Refinery updates the Data Refinery flow to reflect the changes and reruns all the operations\.
### View the Data Refinery flow steps in a "snapshot view" ###
To see what your data looked like at any point in time, click a previous step to put Data Refinery into snapshot view\. For example, if you click **Data source**, you see what your data looked like before you started refining it\. Click any operation step to see what your data looked like after that operation was applied\. To leave snapshot view, click **Viewing step x of y** or click the same step that you selected to get into snapshot view\.
### Export the Data Refinery flow data to a CSV file ###
Click Export () on the toolbar to export the data at the current step in your Data Refinery flow to a CSV file without saving or running a Data Refinery flow job\. Use this option, for example, if you want quick output of a Data Refinery flow that is in progress\. When you export the data, a CSV file is created and downloaded to your computer's **Downloads** folder (or the user\-specified download location) at the current step in the Data Refinery flow\. If you are in [snapshot view](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=en#snapshot), the output of the CSV file is at the step that you clicked\. If you are viewing a sample (subset) of the data, only the sample data will be in the output\.
## Working with the data sets ##
### Change the source of a Data Refinery flow ###
Change the source of a Data Refinery flow\. Run the same Data Refinery flow but with a different source data set\. There are two ways that you can change the source:
<!-- <ul> -->
* In the **Steps** pane: Click the overflow menu () next to **Data source**, select **Edit**, and then choose a different source data set\.

* In the Flow settings: You can use this method if you want to change more than one data source in the same place\. For example, for a Join or a Union operation\. On the toolbar, open the Flow settings \. Go to the **Source data sets** tab and click the overflow menu () next to the data source\. Select **Replace data source**, and then choose a different source data set\.
<!-- </ul> -->
For best results, the new data set should have a schema that is compatible with the original data set (for example, column names, number of columns, and data types)\. If the new data set has a different schema, operations that won't work with the schema will show errors\. You can edit or delete the operations, or change the source to one that has a more compatible schema\.
### Edit the sample size ###
When you run the job for the Data Refinery flow, the operations are performed on the full data set\. However, when you apply the operations interactively in Data Refinery, depending on the size of the data set, you view only a sample of the data\.
Increase the sample size to see results that will be closer to the results of the Data Refinery flow job, but be aware that it might take longer to view the results in Data Refinery\. The maximum is a top\-row count of 10,000 rows or 1 MB, whichever comes first\. Decrease the sample size to view faster results\. Depending on the size of the data and the number and complexity of the operations, you might want to experiment with the sample size to see what works best for the data set\.
On the toolbar, open the Flow settings \. Go to the **Source data sets** tab and click the overflow menu () next to the data source, and select **Edit sample**\.
### Edit the source properties ###
The available properties depend on the data source\. Different properties are available for data assets and for data from different kinds of connections\. Change the file format only if the inferred file format is incorrect\. If you change the file format, the source is read with the new format, but the source file remains unchanged\. Changing the format source properties might be an iterative process\. Inspect your data after you apply an option\.
On the toolbar, open the Flow settings \. Go to the **Source data sets** tab and click the overflow menu () next to the data source, and select **Edit format**\.
Important: Use caution if you edit the source properties\. Incorrect selections might produce unexpected results when the data is read or impair the Data Refinery flow job\. Inspect the results of the Data Refinery flow carefully\.
### Change the target of a Data Refinery flow ###
By default, the target of the Data Refinery flow is saved as a data asset in the project that you're working in\.
To change the *target location*, open Flow settings  from the toolbar\. Go to the **Target data set** tab, click **Select target**, and select a different target location\.
### Edit the target properties ###
The available properties depend on the data source\. Different properties are available for data assets and for data from different kinds of connections\.
To change the target data set's properties, open the Flow settings  from the toolbar\. Go to the **Target data set** tab, and click **Edit properties**\.
#### Change the name of the Data Refinery flow target ####
The name of the target data set is included in the fields that you can change when you edit the target properties\.
By default, the target of the Data Refinery flow is saved as a data asset *source\-file\-name*\_shaped\.csv in the project\. For example, if the source is `mydata.csv`, the default name and output for the Data Refinery flow is the data asset `mydata_csv_shaped`\.
Different properties and naming conventions apply to a target data set from a connection\. For example, if the data set is in Cloud Object Storage, the data set is identified in the **Bucket** and **File name** fields\. If the data set is in a Db2 database, the data set is identified in the **Schema name** and **Table name** fields\.
Important: Use caution if you edit the target properties\. Incorrect selections might produce unexpected results or impair the Data Refinery flow job\. Inspect the results of the Data Refinery flow carefully\.
## Actions on the project page ##
### Reopen a Data Refinery flow to continue working ###
To reopen a Data Refinery flow and continue refining your data, go to the project’s **Assets** tab\. Under **Asset types**, expand **Flows**, click **Data Refinery flow**\. Click the Data Refinery flow name\.
### Duplicate a Data Refinery flow ###
To create a copy of a Data Refinery flow, go to the project's **Assets** tab, expand **Flows**, click **Data Refinery flow**\. Select the Data Refinery flow, and then select **Duplicate** from the overflow menu ()\. The Data Refinery flow is added to the Data Refinery flows list as "*original\-name* copy 1"\.
### Delete a Data Refinery flow ###
To delete a Data Refinery flow, go to the project's **Assets** tab, expand **Flows**, click **Data Refinery flow**\. Select the Data Refinery flow, and then select **Delete** from the overflow menu ()\.
### Promote a Data Refinery flow to a space ###
Deployment spaces are used to manage a set of related assets in a separate environment from your projects\. You use a space to prepare data for a deployment job for Watson Machine Learning\. You can promote Data Refinery flows from multiple projects to a single space\. Complete the steps in the Data Refinery flow before you promote it because the Data Refinery flow is not editable in a space\.
To promote a Data Refinery flow to a space, go to the project's **Assets** tab, expand **Flows**, click **Data Refinery flow**\. Select the Data Refinery flow\. Click the overflow menu () for the Data Refinery flow, and then select **Promote**\. The source file for the Data Refinery flow and any other dependent data will be promoted as well\.
To create or run a job for the Data Refinery flow in a space, go to the space’s **Assets** tab, scroll down to the Data Refinery flow, and select **New job** () from the overflow menu ()\. If you've already created the job, go to the **Jobs** tab to edit the job or view the job run details\. The shaped output of the Data Refinery flow job will be available on the space’s **Assets** tab\. You must have the **Admin** or **Editor** role to view the job details or to edit or run the job\. With the **Viewer** role for the project, you can only view the job details\. You can use the shaped output as input data for a job in Watson Machine Learning\.
Restriction: When you promote a Data Refinery flow from a project to a space and the target of the Data Refinery flow is a *connected data asset*, you must manually promote the connected data asset\. This action ensures that the connected data asset's data is updated when you run the Data Refinery flow job in the space\. Otherwise, a successful run of the Data Refinery flow job will create a new data asset in the space\.
For information about spaces, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\.
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
9C03418999E6B01345837D9DD0F8E0410ED5CB7D | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html?context=cdpaas&locale=en | CLEANSE | CLEANSE
Convert column type
When you open a file in Data Refinery, the Convert column type operation is automatically applied as the first step if it detects any nonstring data types in the data. Data types are automatically converted to inferred data types. To change the automatic conversion for a selected column, click the overflow menu () for the step and select Edit. As with any other operation, you can undo the step. The Convert column type operation is reapplied every time that you open the file in Data Refinery. Automatic conversion is applied as needed for file-based data sources only. (It does not apply to a data source from a database connection.)
To confirm what data type each column's data was converted to, click Edit from the overflow menu () to view the data types. The information includes the format for date or timestamp data.
If the data is converted to an Integer or to a Decimal data type, you can specify the decimal symbol and the thousands grouping symbol for all applicable columns. Strings that are converted to the Decimal data type use a dot for the decimal symbol and a comma for the thousands grouping symbol. Alternatively, you can select comma for the decimal symbol and dot or a custom symbol for the thousands grouping symbol. The decimal symbol and the thousands grouping symbol cannot be the same.
The source data is read from left to right until a terminator or an unrecognized character is encountered. For example, if you are converting string data 12,834 to Decimal and you do not specify what to do with the comma (,), the data will be truncated to 12. Similarly, if the source data has multiple dots (.), and you select dot for the decimal symbol, the first dot is used as the decimal separator and the digits following the second dot are truncated. A source string of 1.834.230,000 is converted to a value of 1.834.
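To make the left-to-right parsing rule concrete, the following is a minimal illustrative sketch in Python. It is not Data Refinery's implementation; the to_decimal function and its arguments are hypothetical.

```python
# Minimal illustrative sketch of the left-to-right parsing rule described above.
# Not Data Refinery's implementation; the function and its arguments are hypothetical.
def to_decimal(text, decimal_symbol=".", grouping_symbol=None):
    digits = []
    seen_decimal = False
    for ch in text:
        if ch.isdigit():
            digits.append(ch)
        elif ch == grouping_symbol and not seen_decimal:
            continue  # grouping symbols are dropped
        elif ch == decimal_symbol and not seen_decimal:
            digits.append(".")
            seen_decimal = True
        else:
            break  # terminator or unrecognized character: truncate here
    return float("".join(digits))

print(to_decimal("12,834"))                             # 12.0 (comma is unrecognized)
print(to_decimal("12,834", grouping_symbol=","))        # 12834.0
print(to_decimal("1.834.230,000", decimal_symbol="."))  # 1.834 (second dot truncates)
```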
The Convert column type operation automatically converts these date and timestamp formats:
* Date: ymd, ydm
* Timestamp: ymdHMS, ymdHM, ydmHMS, ydmHM
Date and Timestamp strings must use four digits for the year.
You can manually apply the Convert column type operation to change the data type of a column at any point in the Data Refinery flow. You can create a new column to hold the result of this operation or you can overwrite the existing column.
Tip: A column's data type determines the operations that you can use. Changing the data type can affect which operations are relevant for that column.
Video transcript
1. The Convert column type operation automatically converted the first column from String to Integer. Let's change the data types of the other three columns.
2. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step.
3. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step.
4. Select Decimal.
5. The column uses the comma delimiter so select Comma (,) for the decimal symbol.
6. Select the next column, DATETIME. Select Timestamp and a format.
7. Click Apply.
8. The columns are now Integer, Decimal, Date, and Timestamp data types. The Convert column type step in the Steps panel is updated.
Convert column value to missing
Convert values in the selected column to missing values if they match values in the specified column or they match a specified value.
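As a rough analogy outside Data Refinery, the following hypothetical pandas sketch shows the same idea. The DESC column and the CANCELLED ORDER value come from the transcript below; the row data is made up.

```python
import numpy as np
import pandas as pd

# Hypothetical sample data; DESC and CANCELLED ORDER mirror the video transcript.
df = pd.DataFrame({"DESC": ["CANCELLED ORDER", "SHIPPED", "CANCELLED ORDER", "BACKORDER"]})

# Convert matching values to missing values (NaN here plays the role of SQL NULL).
df["DESC"] = df["DESC"].replace("CANCELLED ORDER", np.nan)
print(df)
```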
Video transcript
1. The Convert column value to missing operation converts the values in a selected column to missing values if they match the values in a specified column or if they match a specified value.
2. A missing value is equivalent to an SQL NULL, which is a field with no value. It is different from a zero value or a value that contains spaces.
3. You can use the Convert column value to missing operation when you think that the data would be better represented as missing values. For example, when you want to use missing values in a Replace missing values operation or in a Filter operation.
4. Let's use the Convert column value to missing operation to change values to missing based on a matched value.
5. Notice that the DESC column has many rows with the value CANCELLED ORDER. Let's convert the CANCELLED ORDER strings to missing values.
6. The Convert column value to missing operation is under the CLEANSE category.
7. Type the string to replace with missing values.
8. The values that were formerly CANCELLED ORDER are now missing values.
Extract date or time value
Extract a selected portion of a date or time value from a column with a date or timestamp data type.
Video transcript
1. The Extract date or time value operation extracts a selected portion of a date or time value from a column that is a date or timestamp data type.
2. The DATE column is a String data type. First, let's use the Convert column type operation to convert it to the Date data type.
3. Select the Convert column type operation from the DATE column's menu. Select Date.
4. Select a Date format.
5. The DATE column is now a date data type.
6. The ISO Date format is used when the String data type was converted to the Date data type. For example, the string 01/08/2018 was converted to the date 2018-01-08.
7. Now we can extract the year portion of the date into a new column.
8. The Extract date or time value operation is under the CLEANSE category.
9. Select Year for the portion of the date to extract, and type YEAR for the new column name.
10. The year portion of the DATE column is in the new column, YEAR.
11. The Steps panel displays the Extract date or time value operation.
Filter
Filter rows by the selected columns. Keep rows with the selected column values; filter out all other rows.
For these string Filter operators, do not enclose the value in quotation marks. If the value contains quotation marks, escape them with a slash character. For example: \"text\":
* Contains
* Does not contain
* Starts with
* Does not start with
* Ends with
* Does not end with
Following are the operators for numeric, string, Boolean (logical), and date and timestamp columns, and the column types that each operator applies to. An illustrative code sketch of equivalent filter logic, outside Data Refinery, follows the list:
Contains: String
Does not contain: String
Does not end with: String
Does not start with: String
Ends with: String
Is between two numbers: Numeric
Is empty: String, Boolean, Date and timestamp
Is equal to: Numeric, String, Date and timestamp
Is false: Boolean
Is greater than: Numeric, Date and timestamp
Is greater than or equal to: Numeric, Date and timestamp
Is in: Numeric, String
Is less than: Numeric, Date and timestamp
Is less than or equal to: Numeric, Date and timestamp
Is not empty: String, Boolean, Date and timestamp
Is not equal to: Numeric, String, Date and timestamp
Is not in: Numeric, String
Is not null: String
Is null: Numeric, String
Is true: Boolean
Starts with: String
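The following hypothetical pandas sketch illustrates combining a "Starts with" condition and an "Is in" condition, similar to the example in the video transcript below. The column names and row values are assumptions.

```python
import pandas as pd

# Hypothetical data; the Emp ID / State example loosely mirrors the video transcript below.
df = pd.DataFrame({
    "Emp ID": ["801", "802", "701", "803"],
    "State":  ["AR",  "TX",  "AR",  "NY"],
})

# Keep rows where Emp ID starts with 8 and State is in the selected values; filter out the rest.
mask = df["Emp ID"].str.startswith("8") & df["State"].isin(["AR", "TX"])
print(df[mask])
```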
Video transcript
1. Use the Filter operation to filter rows by the selected columns. You can apply multiple conditions in one Filter operation.
2. Use a regular expression to filter out all the rows except those where the string in the Emp ID column starts with 8.
3. Filter the rows by two state abbreviations.
4. Click Apply. Only the rows where Emp ID starts with 8 and State is AR or TX are in the table.
5. The rows are now filtered by AR and PA. The Filter step in the Steps panel is updated.
Remove column
Remove the selected column.
Video transcript
1. Use the Remove column operation to quickly remove a column from a data asset.
2. The quickest way to remove a column is from the column's menu.
3. The name of the removed column is in the Steps panel.
4. Remove another column.
5. The name of the removed column is in the Steps panel.
Remove duplicates
Remove rows with duplicate column values.
Video transcript
1. The Remove duplicates operation removes rows that have duplicate column values.
2. The data set has 43 rows. Many of the rows in the APPLYCODE column have duplicate values. We want to reduce the data set to the rows where each value in the APPLYCODE column occurs only once.
3. Select the Remove duplicates operation from the APPLYCODE column's menu.
4. The Remove duplicates operation kept the first occurrence of each value, starting from the top row, and removed the remaining rows with duplicate values. The data set is now 4 rows.
Remove empty rows
Remove rows that have a blank or missing value for the selected column.
Video transcript
1. The Remove empty rows operation removes rows that have a blank or missing value for the selected column.
2. A missing value is equivalent to an SQL NULL, which is a field with no value. It is different from a zero value or a value that contains spaces.
3. The data set has 43 rows. Many of the rows in the TRACK column have missing values. We want to reduce the data set to the rows that have a value in the TRACK column.
4. Select the Remove empty rows operation from the TRACK column's menu.
5. The Remove empty rows operation removed each row that had a blank or missing value in the TRACK column. The data set is now 21 rows.
Replace missing values
Replace missing values in the column with a specified value or with the value from a specified column in the same row.
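Outside Data Refinery, the equivalent logic looks roughly like the following pandas sketch. The STATE and REGION columns are hypothetical, and the Incomplete value comes from the transcript below.

```python
import pandas as pd

# Hypothetical data; the Incomplete replacement value mirrors the video transcript.
df = pd.DataFrame({"STATE": ["TX", None, "AR", None],
                   "REGION": ["South", "West", "South", "East"]})

# Replace missing values with a specified value ...
df["STATE_TEXT"] = df["STATE"].fillna("Incomplete")
# ... or with the value from a specified column in the same row.
df["STATE_FROM_REGION"] = df["STATE"].fillna(df["REGION"])
print(df)
```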
Video transcript
1. The Replace missing values operation replaces missing values in a column with a specified value or with the value from a specified column in the same row.
2. The STATE column has many rows with empty values. We want to replace those empty values with a string.
3. The Replace missing values operation is under the CLEANSE category.
4. For the State column, replace the missing values with the string Incomplete.
5. The missing values now have the value Incomplete.
6. The Steps panel displays the Replace missing values operation.
Replace substring
Replace the specified substring with the specified text.
Video transcript
1. The Replace substring operation replaces a substring with text that you specify.
2. The DECLINE column has many rows that include the string BANC. We want to replace this string with BANK.
3. The Replace substring operation is under the CLEANSE category.
4. Type the string to replace and the replacement string.
5. All occurrences of the string BANC have been replaced with BANK.
6. The Steps panel displays the Replace substring operation.
Substitute
Obscure sensitive information from view by substituting a random string of characters for the actual data in the selected column.
Video transcript
1. The Substitute operation obscures sensitive information by substituting a random string of characters for the data in the selected column.
2. The quickest way to substitute the data in a column is to select Substitute from the column's menu.
3. The Substitute operation shows in the Steps panel.
4. Substitute values in another column.
5. The second Substitute operation shows in the Steps panel.
Text
You can apply text operations only to string columns. You can create a new column to hold the result of an operation or you can overwrite the existing column.
Text > Collapse spaces
Collapse multiple, consecutive spaces in the text to a single space.
Text > Concatenate string
Link together any string to the text. You can prepend the string to the text, append the string to the text, or both.
Text > Lowercase
Convert the text to lowercase.
Text > Number of characters
Return the number of characters in the text.
Text > Pad characters
Pad the text with the specified string. Specify whether to pad the text on the left, right, or both the left and right.
Text > Substring
Create substrings from the text that start at the specified position and have the specified length.
Text > Title case
Convert the text to title case.
Text > Trim quotes
Remove single or double quotation marks from the text.
Text > Trim spaces
Remove leading, trailing, and extra spaces from the text.
Text > Uppercase
Convert the text to uppercase.
Video transcript
1. You can apply a Text operation to string columns. Create a new column for the result or overwrite the existing column.
2. First, concatenate a string to the values in the WORD column.
3. Available Text operations.
4. Concatenate the string to the right side, append with a space, and type up.
5. The values in the WORD column are appended with a space and the word up.
6. The Text operation displays in the Steps panel.
7. Next, pad the values in the ANIMAL column with a string.
8. Pad the values in the ANIMAL column with ampersand (&) symbols to the right for a minimum of 7 characters.
9. The values in the ANIMAL column are padded with the & symbol so that each string is at least seven characters.
10. Notice that the opossum, pangolin, platypus, and hedgehog values do not have a padding character because those strings were already seven or more characters long.
11. Next, use Substring to remove the t character from the ID column.
12. Select Position 2 to start the new string at that position. Select Length 4 for a four-character length string.
13. The initial t character in the ID column is removed in the NEW-ID column.
COMPUTE
Calculate
Perform a calculation with another column or with a specified value. An illustrative code sketch of the equivalent logic follows the list of operators. The operators are:
* Addition
* Division
* Exponentiation
* Is between two numbers
* Is equal to
* Is greater than
* Is greater than or equal to
* Is less than
* Is less than or equal to
* Is not equal to
* Modulus
* Multiplication
* Subtraction
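The sketch below is a rough pandas illustration of a calculation with another column and of a comparison operator written to a new column. The column names and values are hypothetical, and this is not Data Refinery's implementation.

```python
import pandas as pd

# Hypothetical data; the Addition and "Is between two numbers" steps echo the transcript below.
df = pd.DataFrame({"id": [1, 2, 3], "offset": [10, 20, 30]})

# Calculation with another column (Addition), overwriting the selected column.
df["id"] = df["id"] + df["offset"]

# Comparison operator ("Is between two numbers"), written to a new column.
df["in_range"] = df["id"].between(11, 25)
print(df)
```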
Video transcript
1. The Calculate operation performs a calculation, such as addition or subtraction, with another column or with a specified value.
2. Select the column to begin.
3. Available calculations
4. Now select the second column for the Addition calculation.
5. And apply the change.
6. The id column is updated, and the Steps panel shows the completed operation.
7. You can also access the operations from the column's menu.
8. This time, select Is between two numbers. Specify the range, and create a new column for the results.
9. The new column displays in the table and the new calculate operation displays in the Steps panel.
10. This time, select Is equal to to compare two columns, and create a new column for the results.
11. The new column displays in the table and the new calculate operation displays in the Steps panel.
Math
You can apply math operations only to numeric columns. You can create a new column to hold the result of an operation or you can overwrite the existing column.
Math > Absolute value
Get the absolute value of a number.
Example: The absolute value of both 4 and -4 is 4.
Math > Arc cosine
Get the arc cosine of an angle.
Math > Ceiling
Get the nearest integer of greater value, also known as the ceiling of the number.
Examples: The ceiling of 2.31 is 3. The ceiling of -2.31 is -2.
Math > Exponent
Get a number raised to the power of the column value.
Math > Floor
Get the nearest integer of lesser value, also known as the floor of the number.
Example: The floor of 2.31 is 2. The floor of -2.31 is -3.
Math > Round
Get the whole number nearest to the column value. If the column value is a whole number, return it.
Math > Square root
Get the square root of the column value.
Video transcript
1. Apply a Math operation to the values in a column. Create a new column for the results or overwrite the existing column.
2. Available Math operations
3. Apply Absolute value to the column's values.
4. Create new column for results.
5. The new column is added to the table, and the Math operation displays in the Steps panel.
6. You can also access the operation from the column's menu.
7. Apply Round to the ANGLE column's values.
8. Create a new column for results.
9. The new column is added to the table, and the new Math operation displays in the Steps panel.
ORGANIZE
Aggregate
Apply summary calculations to the values of one or more columns. Each aggregation creates a new column. Optionally, select Group by columns to group the new column by another column that defines a characteristic of the group, for example, a department or an ID. You can group by multiple columns. You can combine multiple aggregations in a single operation.
The available aggregate operations depend on the data type. An illustrative code sketch of the equivalent logic follows these lists.
Numeric data:
* Count unique values
* Minimum
* Maximum
* Sum
* Standard deviation
* Mean
String data:
* Combine row values
* Count unique values
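As a rough illustration outside Data Refinery, the following pandas sketch shows an ungrouped aggregation and a grouped aggregation. The UniqueCarrier column comes from the transcript below; the ArrDelay column name and the row values are assumptions.

```python
import pandas as pd

# Hypothetical flight data; UniqueCarrier mirrors the transcript, ArrDelay is an assumed name.
df = pd.DataFrame({"UniqueCarrier": ["AA", "AA", "DL", "UA"],
                   "ArrDelay": [5, 15, -3, 22]})

# Aggregation without grouping: count the unique carriers in the whole data set.
airlines = df["UniqueCarrier"].nunique()

# Aggregation with "Group by": mean arrival delay per carrier.
mean_by_carrier = df.groupby("UniqueCarrier", as_index=False)["ArrDelay"].mean()
print(airlines)
print(mean_by_carrier)
```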
Video transcript
1. The Aggregate operation applies summary calculations to the values of one or more columns. Each aggregation creates a new column.
2. Available aggregations depend on whether the data is numeric or string data.
3. The available operators depend on the column's data type. Available operators for numeric data.
4. With the UniqueCarrier text column selected, you can see the available operators for string data.
5. We will count how many unique values are in the UniqueCarrier column. This aggregation will show how many airlines are in the data set.
6. We have 22 airlines in the new Airlines column. The other columns are deleted.
7. The Aggregate operation displays in the Steps panel.
8. Let's start over to show an aggregation on numeric data.
9. Show the average (mean value) of the arrival delays.
10. The average value of all the arrival delays is in the new MeanArrDelay column. The other columns are deleted.
11. You can also group the aggregated column by another column that defines a characteristic of the group.
12. Let's edit the Aggregate step by adding a Group by selection so we can see the average of arrival delays by airline.
13. Group the results by the UniqueCarrier column.
14. The average arrival delays are now grouped by airline.
15. The Steps panel displays the Aggregate operation.
Concatenate
Concatenate the values of two or more columns.
Video transcript
1. The Concatenate operation concatenates the values of two or more columns.
2. The Concatenate operation is under the ORGANIZE category.
3. Select the columns to concatenate.
4. Select a separator to use between the concatenated values.
5. Type a name for the column for the concatenated values.
6. The new column can display as the right-most column in the data set, or next to the original column.
7. Keep the original columns, and apply the changes.
8. The new DATE column shows the concatenated values from the other three columns with a semicolon separator.
9. The Concatenate operation displays in the Steps panel.
10. The DATE column is a String data type. Let's use the Convert column type operation to convert it to the Date data type.
11. Select the Convert column type operation from the DATE column's menu. Select Date.
12. Select a date format and create a new column for the result.
13. Place the new column next to the original column, and apply the changes.
14. The new column displays with the converted date format.
15. The Convert column type operation displays in the Steps panel.
16. The ISO Date format is used when the String data type was converted to the Date data type. For example, the string 2004;2;3 was converted to the date 2004-02-03.
Conditional replace
Replace the values in a column based on conditions.
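A hypothetical pandas and NumPy sketch of the same idea follows. The CODE-to-STATUS mapping mirrors the transcript below, and the data is made up.

```python
import numpy as np
import pandas as pd

# Hypothetical data; the CODE-to-STATUS conditions mirror the video transcript.
df = pd.DataFrame({"CODE": ["C", "I", "X"]})

# Conditions are checked in order; values that match no condition get the default (empty string).
conditions = [df["CODE"] == "C", df["CODE"] == "I"]
replacements = ["COMPLETE", "INCOMPLETE"]
df["STATUS"] = np.select(conditions, replacements, default="")
print(df)
```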
Video transcript
1. Use the Conditional replace operation to replace the values in a column based on conditions.
2. First, let's specify conditions to replace data in the CODE string column and create a new column for the results.
3. Available condition operators for string data.
4. Add the first condition - CONDITION 1: CODE Is equal to value C replace with COMPLETE.
5. Add a second condition - CONDITION 2: CODE Is equal to value I replace with INCOMPLETE.
6. Specify what to do with any values that do not meet the conditions. Here we will enter two double quotation marks to indicate an empty string.
7. Create a new column for the results.
8. The new column, STATUS, shows the conditional replacements from the CODE column.
9. The Conditional replace operation shows in the Steps panel.
10. Next, let's specify conditions to replace data in the INPUT integer column and create a new column for the results.
11. Available condition operators for numeric data.
12. Add the first condition - CONDITION 1: INPUT Is less than or equal to value 3 replace with value LOW.
13. Add a second condition - CONDITION 2: INPUT Is in values 4,5,6 replace with value MED.
14. Add a third condition - CONDITION 3: INPUT Is greater than or equal to value 7 replace with value HIGH.
15. Specify what to do with any values that do not meet the conditions.
16. Create a new column for the results.
17. The new column, RATING, shows the conditional replacements from the INPUT column.
18. The Conditional replace operation shows in the Steps panel.
Join
Combine data from two data sets based on a comparison of the values in specified key columns. Specify the type of join to perform, select the columns (join keys) in both data sets that you want to compare, and select the columns that you want in the resulting data set.
The join key columns in both data sets must have compatible data types. If the Join operation is the first step that you add, check whether the Convert column type operation automatically converted the data type of the join key columns in the first data set when you opened the file in Data Refinery. Also, depending on where the Join operation is in the Data Refinery flow, you can use the Convert column type operation to ensure that the join key columns' data types match. Click a previous step in the Steps panel to see the snapshot view of the step. An illustrative code sketch of the equivalent join logic follows the table of join types.
The join types include:
Join type Description
Left join Returns all rows in the original data set and returns only matching rows in the joining data set. Returns one row in the original data set for each matching row in the joining data set.
Right join Returns all rows in the joining data set and returns only matching rows in the original data set. Returns one row in the joining data set for each matching row in the original data set.
Inner join Returns only the rows in each data set that match rows in the other data set. Returns one row in the original data set for each matching row in the joining data set.
Full join Returns all rows in both data sets. Blends rows in the original data set with matching rows in the joining data set.
Semi join Returns only the rows in the original data set that match rows in the joining data set. Returns one row in the original data set for all matching rows in the joining data set.
Anti join Returns only the rows in the original data set that do not match rows in the joining data set.
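The following hypothetical pandas sketch approximates a few of the join types on the SALESREP_ID key from the transcript below. The row values are made up, and this is not Data Refinery's implementation.

```python
import pandas as pd

# Hypothetical rows; the SALESREP_ID join key comes from the video transcript.
customers = pd.DataFrame({"CUSTOMER": ["Ace", "Brio"], "SALESREP_ID": [101, 102]})
sales     = pd.DataFrame({"SALESREP_ID": [101, 103], "REP_NAME": ["Kim", "Lee"]})

# Inner join: only the rows whose key matches in both data sets.
inner = customers.merge(sales, on="SALESREP_ID", how="inner")

# Left join: all rows from the original data set, plus matching columns from the joining data set.
left = customers.merge(sales, on="SALESREP_ID", how="left")

# Anti join: rows in the original data set that have no match in the joining data set.
anti = customers[~customers["SALESREP_ID"].isin(sales["SALESREP_ID"])]
print(inner, left, anti, sep="\n\n")
```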
Video transcript
1. The customers.csv data set contains information about your company's customers, and the sales.csv data set contains information about your company's sales representatives.
2. The data sets share the SALESREP_ID column.
3. The customers.csv data set is open in Data Refinery.
4. The Join operation can combine the data from these two data sets based on a comparison of the values in the SALESREP_ID column.
5. You want to do an inner join to return only the rows in each data set that match in the other data set.
6. You can add a custom suffix to append to columns that exist in both data sets to see the source data set for that column.
7. Select the sales.csv data set to join with the customers.csv data set.
8. For the join key, begin typing the column name to see a filtered list. The SALESREP_ID column links the two data sets.
9. Next, select the columns to include. Duplicate columns will display the suffix appended.
10. Now apply the changes.
11. The Join operation displays in the Steps panel.
12. Now, the data set is enriched with the columns from the customers.csv and sales.csv data sets.
Rename column
Rename the selected column.
Video transcript
1. Use the Rename column operation to quickly rename a column.
2. The fastest way to rename a column is to edit the column's name in the table.
3. Edit the name and press Enter on your keyboard.
4. The Rename column step shows the old name and the new name.
5. Now rename another column.
6. The Steps panel shows the BANKS column was renamed to DOGS.
7. Now rename the last column.
8. The Steps panel shows the RATIOS column was renamed to BIRDS.
Sample
Generate a subset of your data by using one of the following methods. Sampling steps that you add from UI operations are applied only when a job for the Data Refinery flow is run. An illustrative code sketch of the equivalent logic follows this list.
* Random sample: Each data record of the subset has an equal probability of being chosen.
* Stratified sample: Divide the data into one or more subgroups called strata. Then generate one random sample that contains data from each subgroup.
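As a rough analogy, the following hypothetical pandas sketch shows a random sample and a simple per-group (Auto method) stratified sample. The Dest and Distance columns echo the transcript below, the row values are made up, and the sketch does not reproduce Data Refinery's strata filtering.

```python
import pandas as pd

# Hypothetical flight data; Dest and Distance loosely mirror the stratified example below.
df = pd.DataFrame({"Dest": ["JFK", "JFK", "ORD", "ORD", "DFW", "DFW"],
                   "Distance": [300, 450, 740, 800, 1400, 1250]})

# Random sample: each row has an equal probability of being chosen.
random_sample = df.sample(n=3, random_state=1)

# Stratified sample (Auto method on one column): draw the same fraction from each subgroup.
stratified_sample = df.groupby("Dest").sample(frac=0.5, random_state=1)
print(random_sample, stratified_sample, sep="\n\n")
```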
Video transcript
1. The Sample operation generates a subset of your data.
2. Use the Sample operation when you have a large amount of data and you want to work on a representative sample for faster prototyping.
3. The Sample operation is in the ORGANIZE category.
4. Choose one of two methods to create a sample.
5. With a random sample, each row has an equal probability to be included in the sample data.
6. You can choose a random sample by number of rows or by percentage of data.
7. A stratified sample builds on a random sample. As with a random sample, you specify the amount of data in the sample (rows or percentage).
8. With a stratified sample, you divide the data into one or more subgroups called strata. Then you generate one random sample that contains customized data from each subgroup.
9. For Method, if you choose Auto, you select one column for the strata.
10. If you choose Manual, you specify one or more strata and for each strata you specify filter conditions that define the rows in each strata.
11. In this airline data example, we'll create two strata. One strata defines 50% of the output to have New York City destination airports and the second strata defines the remaining 50% to have a specified flight distance.
12. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in this first strata. The strata percentages must total 100%.
13. Available operators for string data.
14. 50% of the sample will have New York City area destination airports.
15. Click Save to save the first strata.
16. The first strata, identified as Strata0, has one condition. In this strata, 50% of the sample must meet the condition.
17. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in the second strata.
18. Available operators for numeric data.
19. 50% of the sample will be for flights with a distance greater than 500.
20. Click Save to save the second strata.
21. The second strata, identified as Strata1, has one condition. In this strata, 50% of the sample must meet the condition.
22. If you use multiple strata, the Sample operation internally applies a Filter operation with an OR condition on the strata. Depending on the data, the conditions, and the size of the sample, the results of using one strata with multiple conditions might differ from using multiple strata.
23. Unlike the other Data Refinery operations, the Sample operation changes the data set only after you create and run a job for the Data Refinery flow.
24. The Sample step shows in the Steps panel.
25. The data set is over 10,000 rows.
26. Save and create a job for the Data Refinery flow.
27. The new asset file is added to the project for the output of the Data Refinery flow.
28. View the output file.
29. There are 10 rows (50% of the sample) with New York City airports in the Dest column, but 17 rows in the Distance column with values greater than 500.
30. These results occur because the strata were applied with an OR condition and the data for the two conditions overlapped: rows that were filtered by the first strata (Dest contains New York City airports) also had Distance values greater than 500.
31. The output file in Data Refinery shows the reduced size.
Sort ascending
Sort all the rows in the table by the selected column in ascending order.
Sort descending
Sort all the rows in the table by the selected column in descending order.
Video transcript
1. Quickly sort all the rows in a data set by sorting the rows in a selected column.
2. The fastest way to sort columns is from the column's menu.
3. You can sort the rows in ascending or descending order.
4. Sort ascending.
5. The order of all the rows in the table is updated by the Sort operation of the first column.
6. The Sort operation shows in the Steps panel.
7. Sort descending.
8. The order of all the rows in the table is changed by the Sort operation of the second column.
9. The second Sort operation shows in the Steps panel.
10. Sort ascending.
11. The order of all the rows in the table is changed by the Sort operation of the third column.
12. The third Sort operation shows in the Steps panel.
Split column
Split the column by non-alphanumeric characters, position, pattern, or text.
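A hypothetical pandas sketch of two of these split methods follows. The YMD and FLIGHT columns and their values echo the transcript below; this is not Data Refinery's implementation.

```python
import pandas as pd

# Hypothetical data; the YMD and FLIGHT examples mirror the video transcript.
df = pd.DataFrame({"YMD": ["2004*2*3", "2005*11*7"], "FLIGHT": ["AA1234", "DL987"]})

# Split by a non-alphanumeric character (the default behavior described above).
df[["YEAR", "MONTH", "DAY"]] = df["YMD"].str.split(r"\W", expand=True, regex=True)

# Split by position: the first two characters are the airline code, the rest is the flight number.
df["AIRLINE"] = df["FLIGHT"].str[:2]
df["FLTNMBR"] = df["FLIGHT"].str[2:]
print(df)
```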
Video transcript
1. The Split column operation splits one column into two or more columns based on non-alphanumeric characters, text, pattern, or position.
2. To begin, let's split the YMD column into YEAR, MONTH, and DAY columns.
3. The Split column operation is in the ORGANIZE category.
4. First, select the YMD column to split.
5. The tabs offer four choices for ways to split the column.
6. DEFAULT uses any non-alphanumeric character that's in the column values to split the column.
7. In TEXT, you select a character or enter text to split the column.
8. In PATTERN, you enter a regular expression based on R syntax to determine where to split the column.
9. In POSITION, you specify at what position to split the column.
10. We want to split the YMD column by the asterisk (*), which is a non-alphanumeric character, so we'll select the DEFAULT tab.
11. Split the YMD column into three new columns - YEAR, MONTH, and DAY.
12. The three new columns, YEAR, MONTH, and DAY, are added to the data set.
13. The Split column operation shows in the Steps panel.
14. Next split the FLIGHT column into two columns - One for the airline code and one for the flight number. Because airline codes are two characters, we can split the column by position.
15. Click the POSITION tab, and then type 2 in the Positions box.
16. Split the FLIGHT column into two new columns - AIRLINE and FLTNMBR.
17. The two new columns, AIRLINE and FLTNMBR, are added to the data set.
18. The Split column operation shows in the Steps panel.
Union
Combine the rows from two data sets that share the same schema and filter out the duplicates. If you select Allow a different number of columns and allow duplicate values, the operation is a UNION ALL command.
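A rough pandas analogy of the two behaviors follows. The data and column names are hypothetical, and this is not Data Refinery's implementation.

```python
import pandas as pd

# Hypothetical data sets that share the same schema.
df1 = pd.DataFrame({"ITEM": ["pen", "ink"], "PRICE": ["2.00", "5.00"]})
df2 = pd.DataFrame({"ITEM": ["pen", "pad"], "PRICE": ["2.00", "3.50"]})

# Union: combine the rows and filter out the duplicates.
union = pd.concat([df1, df2], ignore_index=True).drop_duplicates()

# UNION ALL with a different number of columns: keep duplicates and fill the extra column.
df3 = pd.DataFrame({"ITEM": ["pen"], "PRICE": ["2.00"], "TYPE": ["ballpoint"]})
union_all = pd.concat([df1, df3], ignore_index=True)
print(union, union_all, sep="\n\n")
```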
Video transcript
1. The Union operation combines the rows from two data sets that share the same schema.
2. This data set has four columns and six rows. The data types from left to right are String, String, Decimal, String.
3. When the data set was loaded into Data Refinery, the AUTOMATIC Convert column type operation automatically converted the PRICE column to the Decimal data type.
4. The columns in the second data set must be compatible with the data types in this data set.
5. Select the data set to combine with the current data set.
6. When you preview the new data set, you see that it also has four columns. However, the PRICE column is a String data type.
7. Before you apply the Union operation, you need to delete the AUTOMATIC Convert column type step so that the PRICE column is the same data type as the PRICE column in the new data set (String).
8. The PRICE column is now string data.
9. Now repeat the union operation.
10. The new data set is added to the current data set. The data set is increased to 12 rows.
11. The Union operation shows in the Steps panel.
12. Now add a data set that has a different number of columns. The matching columns must still be compatible data types.
13. Select the data set to combine with the current data set.
14. When you preview the new data set, you see that it has one more column than the original data set. The fifth column is TYPE.
15. Select Allow a different number of columns and allow duplicate values.
16. Apply the Union operation.
17. The new data set is added to the current data set. The data set is increased to 18 rows.
18. The additional column, TYPE, is added to the data set.
19. The Union operation shows in the Steps panel.
Tip for the Union operation: If you receive an error about incompatible schemas, check if the automatic Convert column type operation changed the data types of the first data set. Delete the Convert column type step and try again.
NATURAL LANGUAGE
Remove stop words
Remove common words of the English language, such as “the” or “and.” Stop words usually have little semantic value for text analytics algorithms and models. Remove the stop words to reduce the data volume and to improve the quality of the data that you use to train machine learning models.
Optional: To confirm which words were removed, apply the Tokenize operation (by words) on the selected column, and then view the statistics for the words in the Profile tab. You can undo the Tokenize step later in the Data Refinery flow.
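Outside Data Refinery, the same idea looks roughly like the following pandas sketch. The stop word list here is a small subset of the words named in the transcript below, and the STRING column data is made up.

```python
import pandas as pd

# Hypothetical data; the stop words shown are a subset of the list in the video transcript.
stop_words = {"a", "an", "and", "the", "to", "of", "is"}
df = pd.DataFrame({"STRING": ["the cat and the dog", "a report of the results is ready"]})

# Drop stop words from each value in the selected string column.
df["STRING"] = df["STRING"].str.split().apply(
    lambda words: " ".join(w for w in words if w.lower() not in stop_words)
)
print(df)
```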
Video transcript
1. The Remove stop words operation removes common words of the English language from the data set. Stop words usually have little semantic value for text analytics algorithms and models. Remove the stop words to reduce the data volume and to improve the data quality.
2. The Remove stop words operation removes these words: a, an, and, are, as, at, be, but, by, for, from, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with.
3. The Remove stop words operation is under the NATURAL LANGUAGE category.
4. Select the STRING column.
5. Click Apply to remove the stop words.
6. The stop words are removed from the STRING column.
7. The Remove stop words operation shows in the Steps panel.
Tokenize
Break up English text into words, sentences, paragraphs, lines, characters, or by regular expression.
Video transcript
1. The Tokenize operation breaks up English text into words, sentences, paragraphs, lines, characters, or by regular expression.
2. The Tokenize operation is under the NATURAL LANGUAGE category.
3. Select the STRING column.
4. Available tokenize options.
5. Create a new column with the name WORDS.
6. The Tokenize operation has taken the words from the STRING column and created a new column, WORDS, with a row for each word.
7. The Tokenize operation shows in the Steps panel.
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| ## CLEANSE ##
**Convert column type**
When you open a file in Data Refinery, the **Convert column type** operation is automatically applied as the first step if it detects any nonstring data types in the data\. Data types are automatically converted to inferred data types\. To change the automatic conversion for a selected column, click the overflow menu () for the step and select **Edit**\. As with any other operation, you can undo the step\. The **Convert column type** operation is reapplied every time that you open the file in Data Refinery\. Automatic conversion is applied as needed for file\-based data sources only\. (It does not apply to a data source from a database connection\.)
To confirm what data type each column's data was converted to, click **Edit** from the overflow menu () to view the data types\. The information includes the format for date or timestamp data\.
If the data is converted to an Integer or to a Decimal data type, you can specify the decimal symbol and the thousands grouping symbol for all applicable columns\. Strings that are converted to the Decimal data type use a dot for the decimal symbol and a comma for the thousands grouping symbol\. Alternatively, you can select comma for the decimal symbol and dot or a custom symbol for the thousands grouping symbol\. The decimal symbol and the thousands grouping symbol cannot be the same\.
The source data is read from left to right until a terminator or an unrecognized character is encountered\. For example, if you are converting string data `12,834` to Decimal and you do not specify what to do with the comma (,), the data will be truncated to `12`\. Similarly, if the source data has multiple dots (\.), and you select dot for the decimal symbol, the first dot is used as the decimal separator and the digits following the second dot are truncated\. A source string of `1.834.230,000` is converted to a value of `1.834`\.
The **Convert column type** operation automatically converts these date and timestamp formats:
<!-- <ul> -->
* Date: `ymd`, `ydm`
* Timestamp: `ymdHMS`, `ymdHM`, `ydmHMS`, `ydmHM`
<!-- </ul> -->
Date and Timestamp strings must use four digits for the year\.
You can manually apply the **Convert column type** operation to change the data type of a column at any point in the Data Refinery flow\. You can create a new column to hold the result of this operation or you can overwrite the existing column\.
Tip: A column's data type determines the operations that you can use\. Changing the data type can affect which operations are relevant for that column\.
**Video transcript**
<!-- <ol> -->
1. The Convert column type operation automatically converted the first column from String to Integer\. Let's change the data types of the other three columns\.
2. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step\.
3. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step\.
4. Select Decimal\.
5. The column uses the comma delimiter so select Comma (,) for the decimal symbol\.
6. Select the next column, DATETIME\. Select Timestamp and a format\.
7. Click Apply\.
8. The columns are now Integer, Decimal, Date, and Timestamp data types\. The Convert column type step in the Steps panel is updated\.
<!-- </ol> -->
**Convert column value to missing**
Convert values in the selected column to missing values if they match values in the specified column or they match a specified value\.
**Video transcript**
<!-- <ol> -->
1. The Convert column value to missing operation converts the values in a selected column to missing values if they match the values in a specified column or if they match a specified value\.
2. A missing value is equivalent to an SQL NULL, which is a field with no value\. It is different from a zero value or a value that contains spaces\.
3. You can use the Convert column value to missing operation when you think that the data would be better represented as missing values\. For example, when you want to use missing values in a Replace missing values operation or in a Filter operation\.
4. Let's use the Convert column value to missing operation to change values to missing based on a matched value\.
5. Notice that the DESC column has many rows with the value CANCELLED ORDER\. Let's convert the CANCELLED ORDER strings to missing values\.
6. The Convert column value to missing operation is under the CLEANSE category\.
7. Type the string to replace with missing values\.
8. The values that were formerly CANCELLED ORDER are now missing values\.
<!-- </ol> -->
**Extract date or time value**
Extract a selected portion of a date or time value from a column with a date or timestamp data type\.
**Video transcript**
<!-- <ol> -->
1. The Extract date or time value operation extracts a selected portion of a date or time value from a column that is a date or timestamp data type\.
2. The DATE column is a String data type\. First, let's use the Convert column type operation to convert it to the Date data type\.
3. Select the Convert column type operation from the DATE column's menu\. Select Date\.
4. Select a Date format\.
5. The DATE column is now a date data type\.
6. The ISO Date format is used when the String data type was converted to the Date data type\. For example, the string 01/08/2018 was converted to the date 2018\-01\-08\.
7. Now we can extract the year portion of the date into a new column\.
8. The Extract date or time value operation is under the CLEANSE category\.
9. Select Year for the portion of the date to extract, and type YEAR for the new column name\.
10. The year portion of the DATE column is in the new column, YEAR\.
11. The Steps panel displays the Extract date or time value operation\.
<!-- </ol> -->
**Filter**
Filter rows by the selected columns\. Keep rows with the selected column values; filter out all other rows\.
For these string **Filter** operators, do not enclose the value in quotation marks\. If the value contains quotation marks, escape them with a backslash character\. For example: `\"text\"`:
<!-- <ul> -->
* Contains
* Does not contain
* Starts with
* Does not start with
* Ends with
* Does not end with
<!-- </ul> -->
The following are the operators for numeric, string, Boolean (logical), and date and timestamp columns:
<!-- <table> -->
| Operator | Numeric | String | Boolean | Date and timestamp |
| --------------------------- | ------- | ------ | ------- | ------------------ |
| Contains | | ✓ | | |
| Does not contain | | ✓ | | |
| Does not end with | | ✓ | | |
| Does not start with | | ✓ | | |
| Ends with | | ✓ | | |
| Is between two numbers | ✓ | | | |
| Is empty | | ✓ | ✓ | ✓ |
| Is equal to | ✓ | ✓ | | ✓ |
| Is false | | | ✓ | |
| Is greater than | ✓ | | | ✓ |
| Is greater than or equal to | ✓ | | | ✓ |
| Is in | ✓ | ✓ | | |
| Is less than | ✓ | | | ✓ |
| Is less than or equal to | ✓ | | | ✓ |
| Is not empty | | ✓ | ✓ | ✓ |
| Is not equal to | ✓ | ✓ | | ✓ |
| Is not in | ✓ | ✓ | | |
| Is not null | | ✓ | | |
| Is null | ✓ | ✓ | | |
| Is true | | | ✓ | |
| Starts with                 |         | ✓      |         |                    |
<!-- </table ""> -->
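For comparison, the filter in the transcript below could be written with dplyr roughly as follows\. The `Emp ID` and `State` column names are taken from that example, and the code is an illustration rather than Data Refinery output\.

```r
# Illustrative sketch: keep only the rows where Emp ID starts with 8 and
# State is AR or TX.
library(dplyr)

df <- data.frame(`Emp ID` = c("8001", "7002", "8003"),
                 State    = c("AR", "TX", "NY"),
                 check.names = FALSE)

df_filtered <- df %>%
  filter(grepl("^8", `Emp ID`),     # Emp ID starts with 8
         State %in% c("AR", "TX"))  # State is in the listed values
```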
**Video transcript**
<!-- <ol> -->
1. Use the Filter operation to filter rows by the selected columns\. You can apply multiple conditions in one Filter operation\.
2. Use a regular expression to filter out all the rows except those where the string in the Emp ID column starts with 8\.
3. Filter the rows by two state abbreviations\.
4. Click Apply\. Only the rows where Emp ID starts with 8 and State is AR or TX are in the table\.
5. The rows are now filtered by AR and TX\. The Filter step in the Steps panel is updated\.
<!-- </ol> -->
**Remove column**
Remove the selected column\.
**Video transcript**
<!-- <ol> -->
1. Use the Remove column operation to quickly remove a column from a data asset\.
2. The quickest way to remove a column is from the column's menu\.
3. The name of the removed column is in the Steps panel\.
4. Remove another column\.
5. The name of the removed column is in the Steps panel\.
<!-- </ol> -->
**Remove duplicates**
Remove rows with duplicate column values\.
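A rough dplyr equivalent is shown below; `distinct()` keeps the first occurrence of each value in the chosen column, which is comparable to the behavior described in the transcript\. The `APPLYCODE` column name is taken from that example\.

```r
# Illustrative sketch: keep one row per unique APPLYCODE value
# (the first occurrence from the top).
library(dplyr)

df <- data.frame(APPLYCODE = c("A", "B", "A", "C", "B"),
                 TRACK     = c(1, 2, 3, 4, 5))

df_unique <- df %>% distinct(APPLYCODE, .keep_all = TRUE)
```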
**Video transcript**
<!-- <ol> -->
1. The Remove duplicates operation removes rows that have duplicate column values\.
2. The data set has 43 rows\. Many of the rows in the APPLYCODE column have duplicate values\. We want to reduce the data set to the rows where each value in the APPLYCODE column occurs only once\.
3. Select the Remove duplicates operation from the APPLYCODE column's menu\.
4. The Remove duplicates operation kept the first occurrence of each value, starting from the top row, and removed the other occurrences\. The data set is now 4 rows\.
<!-- </ol> -->
**Remove empty rows**
Remove rows that have a blank or missing value for the selected column\.
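A minimal dplyr equivalent is sketched below, using the `TRACK` column from the transcript\. Treating blank strings and `NA` values as equally empty is an assumption made here for illustration\.

```r
# Illustrative sketch: drop rows where TRACK is missing or blank.
library(dplyr)

df <- data.frame(TRACK = c("A1", NA, "", "B2"), stringsAsFactors = FALSE)

df_clean <- df %>% filter(!is.na(TRACK), TRACK != "")
```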
**Video transcript**
<!-- <ol> -->
1. The Remove empty rows operation removes rows that have a blank or missing value for the selected column\.
2. A missing value is equivalent to an SQL NULL, which is a field with no value\. It is different from a zero value or a value that contains spaces\.
3. The data set has 43 rows\. Many of the rows in the TRACK column have missing values\. We want to reduce the data set to the rows that have a value in the TRACK column\.
4. Select the Remove empty rows operation from the TRACK column's menu\.
5. The Remove empty rows operation removed each row that had a blank or missing value in the TRACK column\. The data set is now 21 rows\.
<!-- </ol> -->
**Replace missing values**
Replace missing values in the column with a specified value or with the value from a specified column in the same row\.
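The same replacement can be sketched with tidyr, as shown below\. The `STATE` column and the `Incomplete` value come from the transcript; the code is illustrative only\.

```r
# Illustrative sketch: replace missing STATE values with the string "Incomplete".
library(tidyr)

df <- data.frame(STATE = c("TX", NA, "AR", NA), stringsAsFactors = FALSE)

df <- replace_na(df, list(STATE = "Incomplete"))
```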
**Video transcript**
<!-- <ol> -->
1. The Replace missing values operation replaces missing values in a column with a specified value or with the value from a specified column in the same row\.
2. The STATE column has many rows with empty values\. We want to replace those empty values with a string\.
3. The Replace missing values operation is under the CLEANSE category\.
4. For the State column, replace the missing values with the string Incomplete\.
5. The missing values now have the value Incomplete\.
6. The Steps panel displays the Replace missing values operation\.
<!-- </ol> -->
**Replace substring**
Replace the specified substring with the specified text\.
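A rough base R equivalent is shown below; the `DECLINE` column and the `BANC`/`BANK` strings are taken from the transcript\.

```r
# Illustrative sketch: replace every occurrence of "BANC" with "BANK".
df <- data.frame(DECLINE = c("FIRST BANC", "BANC OF WEST"),
                 stringsAsFactors = FALSE)

df$DECLINE <- gsub("BANC", "BANK", df$DECLINE, fixed = TRUE)
```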
**Video transcript**
<!-- <ol> -->
1. The Replace substring operation replaces a substring with text that you specify\.
2. The DECLINE column has many rows that include the string BANC\. We want to replace this string with BANK\.
3. The Replace substring operation is under the CLEANSE category\.
4. Type the string to replace and the replacement string\.
5. All occurrences of the string BANC have been replaced with BANK\.
6. The Steps panel displays the Replace substring operation\.
<!-- </ol> -->
**Substitute**
Obscure sensitive information from view by substituting a random string of characters for the actual data in the selected column\.
**Video transcript**
<!-- <ol> -->
1. The Substitute operation obscures sensitive information by substituting a random string of characters for the data in the selected column\.
2. The quickest way to substitute the data in a column is to select Substitute from the column's menu\.
3. The Substitute operation shows in the Steps panel\.
4. Substitute values in another column\.
5. The second Substitute operation shows in the Steps panel\.
<!-- </ol> -->
### Text ###
You can apply text operations only to string columns\. You can create a new column to hold the result of an operation or you can overwrite the existing column\.
**Text > Collapse spaces**
Collapse multiple, consecutive spaces in the text to a single space\.
**Text > Concatenate string**
Link together any string to the text\. You can prepend the string to the text, append the string to the text, or both\.
**Text > Lowercase**
Convert the text to lowercase\.
**Text > Number of characters**
Return the number of characters in the text\.
**Text > Pad characters**
Pad the text with the specified string\. Specify whether to pad the text on the left, right, or both the left and right\.
**Text > Substring**
Create substrings from the text that start at the specified position and have the specified length\.
**Text > Title case**
Convert the text to title case\.
**Text > Trim quotes**
Remove single or double quotation marks from the text\.
**Text > Trim spaces**
Remove leading, trailing, and extra spaces from the text\.
**Text > Uppercase**
Convert the text to uppercase\.
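Several of these text operations correspond to common base R string functions, as in the following sketch\. The `WORD` column name is an assumption, and the code is illustrative rather than what Data Refinery generates\.

```r
# Illustrative sketch: base R counterparts of a few Text operations.
df <- data.frame(WORD = c("  stand   by ", "GET"), stringsAsFactors = FALSE)

df$collapsed <- gsub(" +", " ", df$WORD)   # Collapse spaces
df$trimmed   <- trimws(df$WORD)            # Trim spaces (leading and trailing)
df$lower     <- tolower(df$WORD)           # Lowercase
df$upper     <- toupper(df$WORD)           # Uppercase
df$length    <- nchar(df$WORD)             # Number of characters
df$appended  <- paste(df$WORD, "up")       # Concatenate string (append with a space)
df$sub       <- substr(df$WORD, 2, 5)      # Substring from position 2, length 4
```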
**Video transcript**
<!-- <ol> -->
1. You can apply a Text operation to string columns\. Create a new column for the result or overwrite the existing column\.
2. First, concatenate a string to the values in the WORD column\.
3. Available Text operations\.
4. Concatenate the string to the right side, append with a space, and type up\.
5. The values in the WORD column are appended with a space and the word up\.
6. The Text operation displays in the Steps panel\.
7. Next, pad the values in the ANIMAL column with a string\.
8. Pad the values in the ANIMAL column with ampersand (&) symbols to the right for a minimum of 7 characters\.
9. The values in the ANIMAL column are padded with the & symbol so that each string is at least seven characters\.
10. Notice that the opossum, pangolin, platypus, and hedgehog values do not have a padding character because those strings were already seven or more characters long\.
11. Next, use Substring to remove the t character from the ID column\.
12. Select Position 2 to start the new string at that position\. Select Length 4 for a four\-character length string\.
13. The initial t character in the ID column is removed in the NEW\-ID column\.
<!-- </ol> -->
## COMPUTE ##
**Calculate**
Perform a calculation with another column or with a specified value\. The operators are:
<!-- <ul> -->
* Addition
* Division
* Exponentiation
* Is between two numbers
* Is equal to
* Is greater than
* Is greater than or equal to
* Is less than
* Is less than or equal to
* Is not equal to
* Modulus
* Multiplication
* Subtraction
<!-- </ul> -->
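The following dplyr sketch shows comparable calculations\. The `id` and `col2` column names and the range values are assumptions used only for illustration\.

```r
# Illustrative sketch: arithmetic and comparison calculations with mutate().
library(dplyr)

df <- data.frame(id = c(1, 5, 9), col2 = c(10, 20, 30))

df <- df %>%
  mutate(id_plus_col2 = id + col2,          # Addition with another column
         remainder    = id %% 2,            # Modulus with a specified value
         in_range     = id >= 2 & id <= 8,  # Is between two numbers
         same_value   = id == col2)         # Is equal to (compare two columns)
```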
**Video transcript**
<!-- <ol> -->
1. The Calculate operation performs a calculation, such as addition or subtraction, with another column or with a specified value\.
2. Select the column to begin\.
3. Available calculations\.
4. Now select the second column for the Addition calculation\.
5. And apply the change\.
6. The id column is updated, and the Steps panel shows the completed operation\.
7. You can also access the operations from the column's menu\.
8. This time, select Is between two numbers\. Specify the range, and create a new column for the results\.
9. The new column displays in the table and the new calculate operation displays in the Steps panel\.
10. This time, select Is equal to to compare two columns, and create a new column for the results\.
11. The new column displays in the table and the new calculate operation displays in the Steps panel\.
<!-- </ol> -->
### Math ###
You can apply math operations only to numeric columns\. You can create a new column to hold the result of an operation or you can overwrite the existing column\.
**Math > Absolute value**
Get the absolute value of a number\.
Example: The absolute value of both 4 and \-4 is 4\.
**Math > Arc cosine**
Get the arc cosine of an angle\.
**Math > Ceiling**
Get the nearest integer of greater value, also known as the ceiling of the number\.
Examples: The ceiling of 2\.31 is 3\. The ceiling of \-2\.31 is \-2\.
**Math > Exponent**
Get a number raised to the power of the column value\.
**Math > Floor**
Get the nearest integer of lesser value, also known as the floor of the number\.
Example: The floor of 2\.31 is 2\. The floor of \-2\.31 is \-3\.
**Math > Round**
Get the whole number nearest to the column value\. If the column value is a whole number, return it\.
**Math > Square root**
Get the square root of the column value\.
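For comparison, these Math operations map to base R functions, as in the sketch below\. The `ANGLE` column name is taken from the transcript and the code is illustrative only\.

```r
# Illustrative sketch: base R counterparts of the Math operations.
df <- data.frame(ANGLE = c(-2.31, 2.31, 4))

df$abs_value <- abs(df$ANGLE)        # Absolute value: -2.31 -> 2.31
df$arccos    <- acos(df$ANGLE / 10)  # Arc cosine (input scaled into [-1, 1])
df$ceiling   <- ceiling(df$ANGLE)    # Ceiling: 2.31 -> 3, -2.31 -> -2
df$floor     <- floor(df$ANGLE)      # Floor: 2.31 -> 2, -2.31 -> -3
df$rounded   <- round(df$ANGLE)      # Round to the nearest whole number
df$root      <- sqrt(abs(df$ANGLE))  # Square root (of the absolute value here)
df$exponent  <- 2 ^ df$ANGLE         # A number (2 here) raised to the power of the value
```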
**Video transcript**
<!-- <ol> -->
1. Apply a Math operation to the values in a column\. Create a new column for the results or overwrite the existing column\.
2. Available Math operations\.
3. Apply Absolute value to the column's values\.
4. Create new column for results\.
5. The new column is added to the table, and the Math operation displays in the Steps panel\.
6. You can also access the operation from the column's menu\.
7. Apply Round to the ANGLE column's values\.
8. Create a new column for results\.
9. The new column is added to the table, and the new Math operation displays in the Steps panel\.
<!-- </ol> -->
## ORGANIZE ##
**Aggregate**
Apply summary calculations to the values of one or more columns\. Each aggregation creates a new column\. Optionally, select **Group by columns** to group the new column by another column that defines a characteristic of the group, for example, a department or an ID\. You can group by multiple columns\. You can combine multiple aggregations in a single operation\.
The available aggregate operations depend on the data type\.
Numeric data:
<!-- <ul> -->
* Count unique values
* Minimum
* Maximum
* Sum
* Standard deviation
* Mean
<!-- </ul> -->
String data:
<!-- <ul> -->
* Combine row values
* Count unique values
<!-- </ul> -->
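A rough dplyr equivalent of the aggregations in the transcript below (a unique count of carriers, and the mean arrival delay grouped by carrier) might look like the following sketch; the column names come from that example\.

```r
# Illustrative sketch: aggregate with and without a Group by column.
library(dplyr)

flights <- data.frame(UniqueCarrier = c("AA", "AA", "UA"),
                      ArrDelay      = c(10, 30, -5))

airlines <- flights %>%
  summarise(Airlines = n_distinct(UniqueCarrier))         # Count unique values

by_carrier <- flights %>%
  group_by(UniqueCarrier) %>%
  summarise(MeanArrDelay = mean(ArrDelay, na.rm = TRUE))  # Mean, grouped by carrier
```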
**Video transcript**
<!-- <ol> -->
1. The Aggregate operation applies summary calculations to the values of one or more columns\. Each aggregation creates a new column\.
2. Available aggregations depend on whether the data is numeric or string data\.
3. The available operators depend on the column's data type\. Available operators for numeric data\.
4. With the UniqueCarrier text column selected, you can see the available operators for string data\.
5. We will count how many unique values are in the UniqueCarrier column\. This aggregation will show how many airlines are in the data set\.
6. We have 22 airlines in the new Airlines column\. The other columns are deleted\.
7. The Aggregate operation displays in the Steps panel\.
8. Let's start over to show an aggregation on numeric data\.
9. Show the average (mean value) of the arrival delays\.
10. The average value of all the arrival delays is in the new MeanArrDelay column\. The other columns are deleted\.
11. You can also group the aggregated column by another column that defines a characteristic of the group\.
12. Let's edit the Aggregate step by adding a Group by selection so we can see the average of arrival delays by airline\.
13. Group the results by the UniqueCarrier column\.
14. The average arrival delays are now grouped by airline\.
15. The Steps panel displays the Aggregate operation\.
<!-- </ol> -->
**Concatenate**
Concatenate the values of two or more columns\.
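An equivalent concatenation with tidyr is sketched below\. The `YEAR`, `MONTH`, `DAY`, and `DATE` column names and the semicolon separator come from the transcript; the code is illustrative only\.

```r
# Illustrative sketch: concatenate three columns into one with a ";" separator.
library(tidyr)

df <- data.frame(YEAR = 2004, MONTH = 2, DAY = 3)

df <- unite(df, col = "DATE", YEAR, MONTH, DAY, sep = ";", remove = FALSE)
# df$DATE is "2004;2;3"; remove = FALSE keeps the original columns.
```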
**Video transcript**
<!-- <ol> -->
1. The Concatenate operation concatenates the values of two or more columns\.
2. The Concatenate operation is under the ORGANIZE category\.
3. Select the columns to concatenate\.
4. Select a separator to use between the concatenated values\.
5. Type a name for the column for the concatenated values\.
6. The new column can display as the right\-most column in the data set, or next to the original column\.
7. Keep the original columns, and apply the changes\.
8. The new DATE column shows the concatenated values from the other three columns with a semicolon separator\.
9. The Concatenate operation displays in the Steps panel\.
10. The DATE column is a String data type\. Let's use the Convert column type operation to convert it to the Date data type\.
11. Select the Convert column type operation from the DATE column's menu\. Select Date\.
12. Select a date format and create a new column for the result\.
13. Place the new column next to the original column, and apply the changes\.
14. The new column displays with the converted date format\.
15. The Convert column type operation displays in the Steps panel\.
16. The ISO Date format is used when the String data type was converted to the Date data type\. For example, the string 2004;2;3 was converted to the date 2004\-02\-03\.
<!-- </ol> -->
**Conditional replace**
Replace the values in a column based on conditions\.
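A comparable transformation with `dplyr::case_when()` is sketched below\. The `CODE`/`STATUS` and `INPUT`/`RATING` columns and the thresholds come from the transcript and are used only for illustration\.

```r
# Illustrative sketch: replace values based on conditions.
library(dplyr)

df <- data.frame(CODE = c("C", "I", "X"), INPUT = c(2, 5, 9),
                 stringsAsFactors = FALSE)

df <- df %>%
  mutate(STATUS = case_when(CODE == "C" ~ "COMPLETE",
                            CODE == "I" ~ "INCOMPLETE",
                            TRUE        ~ ""),            # values that match no condition
         RATING = case_when(INPUT <= 3            ~ "LOW",
                            INPUT %in% c(4, 5, 6) ~ "MED",
                            INPUT >= 7            ~ "HIGH"))
```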
**Video transcript**
<!-- <ol> -->
1. Use the Conditional replace operation to replace the values in a column based on conditions\.
2. First, let's specify conditions to replace data in the CODE string column and create a new column for the results\.
3. Available condition operators for string data\.
4. Add the first condition \- CONDITION 1: CODE Is equal to value C replace with COMPLETE\.
5. Add a second condition \- CONDITION 2: CODE Is equal to value I replace with INCOMPLETE\.
6. Specify what to do with any values that do not meet the conditions\. Here we will enter two double quotation marks to indicate an empty string\.
7. Create a new column for the results\.
8. The new column, STATUS, shows the conditional replacements from the CODE column\.
9. The Conditional replace operation shows in the Steps panel\.
10. Next, let's specify conditions to replace data in the INPUT integer column and create a new column for the results\.
11. Available condition operators for numeric data\.
12. Add the first condition \- CONDITION 1: INPUT Is less than or equal to value 3 replace with value LOW\.
13. Add a second condition \- CONDITION 2: INPUT Is in values 4,5,6 replace with value MED\.
14. Add a third condition \- CONDITION 3: INPUT Is greater than or equal to value 7 replace with value HIGH\.
15. Specify what to do with any values that do not meet the conditions\.
16. Create a new column for the results\.
17. The new column, RATING, shows the conditional replacements from the INPUT column\.
18. The Conditional replace operation shows in the Steps panel\.
<!-- </ol> -->
**Join**
Combine data from two data sets based on a comparison of the values in specified key columns\. Specify the type of join to perform, select the columns (join keys) in both data sets that you want to compare, and select the columns that you want in the resulting data set\.
The join key columns in both data sets need to be compatible data types\. If the **Join** operation is the first step that you add, check whether the **Convert column type** operation automatically converted the data type of the join key columns in the first data set when you opened the file in Data Refinery\. Also, depending where the **Join** operation is in the Data Refinery flow, you can use the **Convert column type** operation to ensure that the join key columns' data types match\. Click a previous step in **Steps** panel to see the snapshot view of the step\.
The join types include:
<!-- <table> -->
| Join type | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Left join  | Returns all rows in the original data set and only the matching rows in the joining data set\. Returns one row in the original data set for each matching row in the joining data set\. |
| Right join | Returns all rows in the joining data set and only the matching rows in the original data set\. Returns one row in the joining data set for each matching row in the original data set\. |
| Inner join | Returns only the rows in each data set that match rows in the other data set\. Returns one row in the original data set for each matching row in the joining data set\. |
| Full join | Returns all rows in both data sets\. Blends rows in the original data set with matching rows in the joining data set\. |
| Semi join | Returns only the rows in the original data set that match rows in the joining data set\. Returns one row in the original data set for all matching rows in the joining data set\. |
| Anti join | Returns only the rows in the original data set that do not match rows in the joining data set\. |
<!-- </table ""> -->
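The join types correspond to the dplyr join verbs, as in the sketch below\. The customers and sales data and the `SALESREP_ID` key come from the transcript; the code is illustrative only\.

```r
# Illustrative sketch: join two data frames on the SALESREP_ID key column.
library(dplyr)

customers <- data.frame(SALESREP_ID = c(1, 2, 3), CUSTOMER = c("A", "B", "C"))
sales     <- data.frame(SALESREP_ID = c(2, 3, 4), REP      = c("X", "Y", "Z"))

inner <- inner_join(customers, sales, by = "SALESREP_ID")  # matching rows only
left  <- left_join(customers, sales, by = "SALESREP_ID")   # all original rows
full  <- full_join(customers, sales, by = "SALESREP_ID")   # all rows from both
anti  <- anti_join(customers, sales, by = "SALESREP_ID")   # original rows with no match
```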
**Video transcript**
<!-- <ol> -->
1. The customers\.csv data set contains information about your company's customers, and the sales\.csv data set contains information about your company's sales representatives\.
2. The data sets share the SALESREP\_ID column\.
3. The customers\.csv data set is open in Data Refinery\.
4. The Join operation can combine the data from these two data sets based on a comparison of the values in the SALESREP\_ID column\.
5. You want to do an inner join to return only the rows in each data set that match in the other data set\.
6. You can add a custom suffix to append to columns that exist in both data sets to see the source data set for that column\.
7. Select the sales\.csv data set to join with the customers\.csv data set\.
8. For the join key, begin typing the column name to see a filtered list\. The SALESREP\_ID column links the two data sets\.
9. Next, select the columns to include\. Duplicate columns will display the suffix appended\.
10. Now apply the changes\.
11. The Join operation displays in the Steps panel\.
12. Now, the data set is enriched with the columns from the customers\.csv and sales\.csv data sets\.
<!-- </ol> -->
**Rename column**
Rename the selected column\.
**Video transcript**
<!-- <ol> -->
1. Use the Rename column operation to quickly rename a column\.
2. The fastest way to rename a column is to edit the column's name in the table\.
3. Edit the name and press Enter on your keyboard\.
4. The Rename column step shows the old name and the new name\.
5. Now rename another column\.
6. The Steps panel shows the BANKS column was renamed to DOGS\.
7. Now rename the last column\.
8. The Steps panel shows the RATIOS column was renamed to BIRDS\.
<!-- </ol> -->
**Sample**
Generate a subset of your data by using one of the following methods\. A Sample step that you add from the UI takes effect only when you run a job for the Data Refinery flow\.
<!-- <ul> -->
* Random sample: Each data record of the subset has an equal probability of being chosen\.
* Stratified sample: Divide the data into one or more subgroups called *strata*\. Then generate one random sample that contains data from each subgroup\.
<!-- </ul> -->
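The two sampling methods can be sketched with dplyr as shown below\. The 50/50 split, the column names, and the strata conditions echo the transcript example and are illustrative assumptions; unlike the UI operation, this code runs immediately rather than when a job runs\.

```r
# Illustrative sketch: random and stratified samples of a data frame.
library(dplyr)

flights <- data.frame(Dest     = sample(c("JFK", "LGA", "ORD", "DFW"), 1000, replace = TRUE),
                      Distance = sample(100:2500, 1000, replace = TRUE))

random_sample <- slice_sample(flights, n = 20)   # random sample by number of rows

stratified <- bind_rows(                         # one random sample per stratum
  flights %>% filter(Dest %in% c("JFK", "LGA")) %>% slice_sample(n = 10),
  flights %>% filter(Distance > 500)            %>% slice_sample(n = 10))
```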
**Video transcript**
<!-- <ol> -->
1. The Sample operation generates a subset of your data\.
2. Use the Sample operation when you have a large amount of data and you want to work on a representative sample for faster prototyping\.
3. The Sample operation is in the ORGANIZE category\.
4. Choose one of two methods to create a sample\.
5. With a random sample, each row has an equal probability to be included in the sample data\.
6. You can choose a random sample by number of rows or by percentage of data\.
7. A stratified sample builds on a random sample\. As with a random sample, you specify the amount of data in the sample (rows or percentage)\.
8. With a stratified sample, you divide the data into one or more subgroups called strata\. Then you generate one random sample that contains customized data from each subgroup\.
9. For Method, if you choose Auto, you select one column for the strata\.
10. If you choose Manual, you specify one or more strata and for each strata you specify filter conditions that define the rows in each strata\.
11. In this airline data example, we'll create two strata\. One strata defines 50% of the output to have New York City destination airports and the second strata defines the remaining 50% to have a specified flight distance\.
12. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in this first strata\. The strata percentages must total 100%\.
13. Available operators for string data\.
14. 50% of the sample will have New York City area destination airports\.
15. Click Save to save the first strata\.
16. The first strata, identified as Strata0, has one condition\. In this strata, 50% of the sample must meet the condition\.
17. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in the second strata\.
18. Available operators for numeric data\.
19. 50% of the sample will be for flights with a distance greater than 500\.
20. Click Save to save the second strata\.
21. The second strata, identified as Strata1, has one condition\. In this strata, 50% of the sample must meet the condition\.
22. If you use multiple strata, the Sample operation internally applies a Filter operation with an OR condition on the strata\. Depending on the data, the conditions, and the size of the sample, the results of using one strata with multiple conditions might differ from using multiple strata\.
23. Unlike the other Data Refinery operations, the Sample operation changes the data set only after you create and run a job for the Data Refinery flow\.
24. The Sample step shows in the Steps panel\.
25. The data set is over 10000 rows\.
26. Save and create a job for the Data Refinery flow\.
27. The new asset file is added to the project for the output of the Data Refinery flow\.
28. View the output file\.
29. There are 10 rows (50% of the sample) with New York City airports in the Dest column, but 17 rows in the Distance column with values greater than 500\.
30. These results occur because the strata were applied with an OR condition and the data for the two conditions overlapped: some of the rows that were filtered by Dest containing New York City airports also had Distance values greater than 500\.
31. The output file in Data Refinery shows the reduced size\.
<!-- </ol> -->
**Sort ascending**
Sort all the rows in the table by the selected column in ascending order\.
**Sort descending**
Sort all the rows in the table by the selected column in descending order\.
**Video transcript**
<!-- <ol> -->
1. Quickly sort all the rows in a data set by sorting the rows in a selected column\.
2. The fastest way to sort columns is from the column's menu\.
3. You can sort the rows in ascending or descending order\.
4. Sort ascending\.
5. The order of all the rows in the table is updated by the Sort operation of the first column\.
6. The Sort operation shows in the Steps panel\.
7. Sort descending\.
8. The order of all the rows in the table is changed by the Sort operation of the second column\.
9. The second Sort operation shows in the Steps panel\.
10. Sort ascending\.
11. The order of all the rows in the table is changed by the Sort operation of the third column\.
12. The third Sort operation shows in the Steps panel\.
<!-- </ol> -->
**Split column**
Split the column by non\-alphanumeric characters, position, pattern, or text\.
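A rough equivalent with tidyr is shown below\. The `YMD` column, the asterisk separator, and the position\-2 split of the `FLIGHT` column come from the transcript; the code is illustrative only\.

```r
# Illustrative sketch: split one column by a character and another by position.
library(tidyr)

df <- data.frame(YMD = "2004*2*3", FLIGHT = "AA1234", stringsAsFactors = FALSE)

df <- separate(df, YMD, into = c("YEAR", "MONTH", "DAY"), sep = "\\*")
df <- separate(df, FLIGHT, into = c("AIRLINE", "FLTNMBR"), sep = 2)
```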
**Video transcript**
<!-- <ol> -->
1. The Split column operation splits one column into two or more columns based on non\-alphanumeric characters, text, pattern, or position\.
2. To begin, let's split the YMD column into YEAR, MONTH, and DAY columns\.
3. The Split column operation is in the ORGANIZE category\.
4. First, select the YMD column to split\.
5. The tabs offer four choices for ways to split the column\.
6. DEFAULT uses any non\-alphanumeric character that's in the column values to split the column\.
7. In TEXT, you select a character or enter text to split the column\.
8. In PATTERN, you enter a regular expression based on R syntax to determine where to split the column\.
9. In POSITION, you specify at what position to split the column\.
10. We want to split the YMD column by the asterisk (\*), which is a non\-alphanumeric character, so we'll select the DEFAULT tab\.
11. Split the YMD column into three new columns \- YEAR, MONTH, and DAY\.
12. The three new columns, YEAR, MONTH, and DAY, are added to the data set\.
13. The Split column operation shows in the Steps panel\.
14. Next split the FLIGHT column into two columns \- One for the airline code and one for the flight number\. Because airline codes are two characters, we can split the column by position\.
15. Click the POSITION tab, and then type 2 in the Positions box\.
16. Split the FLIGHT column into two new columns \- AIRLINE and FLTNMBR\.
17. The two new columns, AIRLINE and FLTNMBR, are added to the data set\.
18. The Split column operation shows in the Steps panel\.
<!-- </ol> -->
**Union**
Combine the rows from two data sets that share the same schema and filter out the duplicates\. If you select **Allow a different number of columns and allow duplicate values**, the operation is a `UNION ALL` command\.
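For comparison, the two behaviors map roughly to dplyr's `union()` and `bind_rows()`, as sketched below; the column names are illustrative assumptions\.

```r
# Illustrative sketch: combine the rows of two data frames with the same schema.
library(dplyr)

a <- data.frame(ITEM = c("pen", "ink"),  PRICE = c("1.50", "3.00"))
b <- data.frame(ITEM = c("pen", "clip"), PRICE = c("1.50", "0.25"))

combined     <- union(a, b)      # duplicates filtered out (UNION)
combined_all <- bind_rows(a, b)  # keeps duplicates and tolerates extra columns (like UNION ALL)
```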
**Video transcript**
<!-- <ol> -->
1. The Union operation combines the rows from two data sets that share the same schema\.
2. This data set has four columns and six rows\. The data types from left to right are String, String, Decimal, String\.
3. When the data set was loaded into Data Refinery, the AUTOMATIC Convert column type operation automatically converted the PRICE column to the Decimal data type\.
4. The data types of the columns in the second data set must be compatible with the data types in this data set\.
5. Select the data set to combine with the current data set\.
6. When you preview the new data set, you see that it also has four columns\. However, the PRICE column is a String data type\.
7. Before you apply the Union operation, you need to delete the AUTOMATIC Convert column type step so that the PRICE column is the same data type as the PRICE column in the new data set (String)\.
8. The PRICE column is now string data\.
9. Now repeat the union operation\.
10. The new data set is added to the current data set\. The data set is increased to 12 rows\.
11. The Union operation shows in the Steps panel\.
12. Now add a data set that has a different number of columns\. The matching columns must still be compatible data types\.
13. Select the data set to combine with the current data set\.
14. When you preview the new data set, you see that it has one more column than the original data set\. The fifth column is TYPE\.
15. Select Allow a different number of columns and allow duplicate values\.
16. Apply the Union operation\.
17. The new data set is added to the current data set\. The data set is increased to 18 rows\.
18. The additional column, TYPE, is added to the data set\.
19. The Union operation shows in the Steps panel\.
<!-- </ol> -->
Tip for the **Union** operation: If you receive an error about incompatible schemas, check if the automatic **Convert column type** operation changed the data types of the first data set\. Delete the **Convert column type** step and try again\.
## NATURAL LANGUAGE ##
**Remove stop words**
Remove common words of the English language, such as “the” or “and\.” Stop words usually have little semantic value for text analytics algorithms and models\. Remove the stop words to reduce the data volume and to improve the quality of the data that you use to train machine learning models\.
Optional: To confirm which words were removed, apply the **Tokenize** operation (by words) on the selected column, and then view the statistics for the words in the **Profile** tab\. You can undo the **Tokenize** step later in the Data Refinery flow\.
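A simple base R sketch of the same idea follows: it splits the text on whitespace, drops the listed stop words, and rejoins the remaining words\. The `STRING` column name comes from the transcript, and the code is illustrative rather than what Data Refinery runs\.

```r
# Illustrative sketch: remove common English stop words from a text column.
stop_words <- c("a", "an", "and", "are", "as", "at", "be", "but", "by", "for",
                "from", "if", "in", "into", "is", "it", "no", "not", "of", "on",
                "or", "such", "that", "the", "their", "then", "there", "these",
                "they", "this", "to", "was", "will", "with")

df <- data.frame(STRING = "the quick brown fox is in the yard",
                 stringsAsFactors = FALSE)

df$STRING <- vapply(strsplit(tolower(df$STRING), "\\s+"), function(words) {
  paste(words[!words %in% stop_words], collapse = " ")
}, character(1))
# "quick brown fox yard"
```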
**Video transcript**
<!-- <ol> -->
1. The Remove stop words operation removes common words of the English language from the data set\. Stop words usually have little semantic value for text analytics algorithms and models\. Remove the stop words to reduce the data volume and to improve the data quality\.
2. The Remove stop words operation removes these words: a, an, and, are, as, at, be, but, by, for, from, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with\.
3. The Remove stop words operation is under the NATURAL LANGUAGE category\.
4. Select the STRING column\.
5. Click Apply to remove the stop words\.
6. The stop words are removed from the STRING column\.
7. The Remove stop words operation shows in the Steps panel\.
<!-- </ol> -->
**Tokenize**
Break up English text into words, sentences, paragraphs, lines, characters, or by regular expression\.
**Video transcript**
<!-- <ol> -->
1. The Tokenize operation breaks up English text into words, sentences, paragraphs, lines, characters, or by regular expression\.
2. The Tokenize operation is under the NATURAL LANGUAGE category\.
3. Select the STRING column\.
4. Available tokenize options\.
5. Create a new column with the name WORDS\.
6. The Tokenize operation has taken the words from the STRING column and created a new column, WORDS, with a row for each word\.
7. The Tokenize operation shows in the Steps panel\.
<!-- </ol> -->
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html?context=cdpaas&locale=en | Validating your data in Data Refinery | Validating your data in Data Refinery
At any time after you've added data to Data Refinery, you can validate your data. Typically, you'll want to do this at multiple points in the refinement process.
To validate your data:
1. From Data Refinery, click the Profile tab.
2. Review the metrics for each column.
3. Take appropriate actions, as described in the following sections, depending on what you learn.
Frequency
Frequency is the number of times that a value, or a value in a specified range, occurs. Each frequency distribution (bar) shows the count of unique values in a column.
Review the frequency distribution to find anomalies in your data. If you want to cleanse your data of those anomalies, simply remove the values.
For Integer and Date/Time columns, you can customize the number of bins (groupings) that you want to see. In the default multi-column view, the maximum is 20. If you expand the frequency chart row, the maximum is 50.
Statistics
Statistics are a collection of quantitative data. The statistics for each column show the minimum, maximum, mean, and number of unique values in that column.
Depending on a column's data type, the statistics for each column will vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values while statistics for a column of data type string have minimum length, maximum length, and mean length values.
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Validating your data in Data Refinery #
At any time after you've added data to Data Refinery, you can validate your data\. Typically, you'll want to do this at multiple points in the refinement process\.
To validate your data:
<!-- <ol> -->
1. From Data Refinery, click the **Profile** tab\.
2. Review the metrics for each column\.
3. Take appropriate actions, as described in the following sections, depending on what you learn\.
<!-- </ol> -->
## Frequency ##
Frequency is the number of times that a value, or a value in a specified range, occurs\. Each frequency distribution (bar) shows the count of unique values in a column\.
Review the frequency distribution to find anomalies in your data\. If you want to cleanse your data of those anomalies, simply remove the values\.
For Integer and Date/Time columns, you can customize the number of bins (groupings) that you want to see\. In the default multi\-column view, the maximum is 20\. If you expand the frequency chart row, the maximum is 50\.
## Statistics ##
Statistics are a collection of quantitative data\. The statistics for each column show the minimum, maximum, mean, and number of unique values in that column\.
Depending on a column's data type, the statistics for each column will vary slightly\. For example, statistics for a column of data type integer have minimum, maximum, and mean values while statistics for a column of data type string have minimum length, maximum length, and mean length values\.
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
4B74E9409284F77897DB58B77271337A4493A410 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refinery-datasources.html?context=cdpaas&locale=en | Supported data sources for Data Refinery | Supported data sources for Data Refinery
Data Refinery supports the following data sources in connections.
IBM services
* [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html)(Supports source connections only)
* [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html)
* [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html)(Supports source connections only)
* [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
* [IBM Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html)
* [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html)(Supports source connections only)
* [IBM Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html)
* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [IBM Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html)
* [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html)
* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [IBM Planning Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html)(Supports source connections only)
* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html)(Supports source connections only)
Third-party services
* [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html)
* [Amazon RDS for Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html)
* [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html)
* [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html)
* [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
* [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html)
* [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html)
* [Apache HDFS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html)
* [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html)(Supports source connections only)
* [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html)
* [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html)(Supports source connections only)
* [Dremio](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html)
* [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html)
* [Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html)
* [FTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html)
* [Generic S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html)
* [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
* [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html)
* [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html)(Supports source connections only)
* [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html)
* [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html)
* [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html)
* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
* [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html)(Supports source connections only)
* [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html)(Supports source connections only)
* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
* [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html)(Supports source connections only)
* [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html)(Supports source connections only)
* [SingleStoreDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html)
* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
Parent topic: [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Supported data sources for Data Refinery #
Data Refinery supports the following data sources in connections\.
## IBM services ##
<!-- <ul> -->
* [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html)*(Supports source connections only)*
* [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html)
* [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html)*(Supports source connections only)*
* [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
* [IBM Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html)
* [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html)*(Supports source connections only)*
* [IBM Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html)
* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [IBM Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html)
* [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html)
* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [IBM Planning Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html)*(Supports source connections only)*
* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html)*(Supports source connections only)*
<!-- </ul> -->
## Third\-party services ##
<!-- <ul> -->
* [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html)
* [Amazon RDS for Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html)
* [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html)
* [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html)
* [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
* [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html)
* [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html)
* [Apache HDFS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html)
* [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html)*(Supports source connections only)*
* [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html)
* [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html)*(Supports source connections only)*
* [Dremio](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html)
* [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html)
* [Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html)
* [FTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html)
* [Generic S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html)
* [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
* [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html)
* [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html)*(Supports source connections only)*
* [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html)
* [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html)
* [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html)
* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
* [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html)*(Supports source connections only)*
* [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html)*(Supports source connections only)*
* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
* [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html)*(Supports source connections only)*
* [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html)*(Supports source connections only)*
* [SingleStoreDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html)
* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
<!-- </ul> -->
**Parent topic**: [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
653F494EE7F3D688FCAEB05AFF303354D718EAB5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en | Refining data | Refining data
To refine data, you take it from one location, cleanse and shape it, and then load the result into a different location. You can cleanse and shape tabular data with a graphical flow editor tool called Data Refinery.
When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns.
You create a Data Refinery flow as a set of ordered operations on data. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you insights into your data.
Data format {: #dr-format} : Avro, CSV, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the "sas7bdat" extension (read only), TSV (read only), or delimited text data asset : Tables in relational data sources
Data size : Any. Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html).
* [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enprereqs)
* [Source file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enlimitsource)
* [Target file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enlimittarget)
* [Data set previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enpreviews)
* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enrefine)
Prerequisites
Before you can refine data, you need [a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) that uses Cloud Object Storage. You can use the sandbox project or create a new project.
* Watch this video to see how to create a project
If you have data in cloud or on-premises data sources, you'll need to [add connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to those sources and you'll need to [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) from each connection. If you want to be able to save refined data to cloud or on-premises data sources, create connections for this purpose as well. Source connections can be used only to read data; target connections can be used only to load (save) data. When you create a target connection, be sure to use credentials that have Write permission or you won't be able to save your Data Refinery flow output to the target.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
* Watch this video to see how to create a connection and add connected data to a project
Source file limitations
CSV files
Be sure that CSV files are correctly formatted and conform to the following rules:
* Two consecutive commas in a row indicate an empty column.
* If a row ends with a comma, an additional column is created.
White-space characters are considered as part of the data
If your data includes columns that contain white space (blank) characters, Data Refinery considers those white-space characters as part of the data, even though you can't see them in the grid. Some database tools might pad character strings with white-space characters to make all the data in a column the same length and this change affects the results of Data Refinery operations that compare data.
Column names
Be sure that column names conform to the following rules:
* Duplicate column names are not allowed. Column names must be unique within the data set. Column names are not case-sensitive. A data set that includes a column name "Sales" and another column name "sales" will not work.
* The column names are not reserved words in the R programming language.
* The column names are not numbers. A workaround is to enclose the column names in double quotation marks ("").
Data sets with columns with the "Other" data type are not supported in Data Refinery flows
If your data set contains columns that have data types that are identified as "Other" in the Watson Studio preview, the columns will show as the String data type in Data Refinery. However, if you try to use the data in a Data Refinery flow, the job for the Data Refinery flow will fail. An example of a data type that shows as "Other" in the preview is the Db2 DECFLOAT data type.
Target file limitations
The following limitation applies if you save Data Refinery flow output (the target data set) to a file:
* You can't change the file format if the file is an existing data asset.
Data set previews
Data Refinery provides support for large data sets, which can be time-consuming and unwieldy to refine. To enable you to work quickly and efficiently, it operates on a subset of rows in the data set while you interactively refine the data. When you run a job for the Data Refinery flow, it operates on the entire data set.
Refine your data
The following video shows you how to refine data.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
* Transcript
Synchronize transcript with video
Time Transcript
00:00 This video shows you how to shape raw data using Data Refinery.
00:05 To get started refining data from a project, view the data asset and open it in Data Refinery.
00:14 The "Information" pane contains the name for the data flow and for the data flow output, once you've finished refining the data.
00:23 The "Data" tab shows you a sample set of the rows and columns in the data set.
00:29 To improve performance, you won't see all the rows in the shaper.
00:33 But rest assured that when you are done refining the data, the data flow will be run on the full data set.
00:41 The "Profile" tab shows you frequency and summary statistics for each of your columns.
00:49 The "Visualizations" tab provides data visualizations for the columns you are interested in.
00:57 Suggested charts have a blue dot next to their icons.
01:03 Use the different perspectives available in the charts to identify patterns, connections, and relationships within the data.
01:12 Now, let's do some data wrangling.
01:17 Start with a simple operation, like sorting on the specified column - in this case, the "Year" column.
01:27 Say you want to focus on delays just for a specific airline so you can filter the data to show only those rows where the unique carrier is "United Airlines".
01:47 It would be helpful to see the total delay.
01:50 You can do that by creating a new column to combine the arrival and departure delays.
01:56 Notice that the column type is inferred to be integer.
02:00 Select the departure delay column and use the "Calculate" operation.
02:09 In this case, you'll add the arrival delay column to the selected column and create a new column, called "TotalDelay".
02:23 You can position the new column at the end of the list of columns or next to the original column.
02:31 When you apply the operation, the new column displays next to the departure delay column.
02:38 If you make a mistake, or just decide to make a change, just access the "Steps" panel and delete that step.
02:46 This will undo that particular operation.
02:50 You can also use the redo and undo buttons.
02:56 Next, you'd like to focus on the "TotalDelay" column so you can use the "select" operation to move the column to the beginning.
03:09 This command arranges the "TotalDelay" column as the first in the list, and everything else comes after that.
03:21 Next, use the "group_by" operation to divide the data into groups by year, month, and day.
03:32 So, when you select the "TotalDelay" column, you'll see the "Year", "Month", "DayofMonth", and "TotalDelay" columns.
03:44 Lastly, you want to find the mean of the "TotalDelay" column.
03:48 When you expand the "Operations" menu, in the "Organize" section, you'll find the "Aggregate" operation, which includes the "Mean" function.
04:08 Now you have a new column, called "AverageDelay", that represents the average for the total delay.
04:17 Now to run the data flow and save and create the job.
04:24 Provide a name for the job and continue to the next screen.
04:28 The "Configure" step allows you to review what the input and output of your job run will be.
04:36 And select the environment used to run the job.
04:41 Scheduling a job is optional, but you can set a date and repeat the job, if you'd like.
04:51 And you can choose to receive notifications for this job.
04:56 Everything looks good, so create and run the job.
05:00 This could take several minutes, because remember that the data flow will be run on the full data set.
05:06 In the meantime, you can view the status.
05:12 When the run is complete, you can go back to the "Assets" tab in the project.
05:20 And open the Data Refinery flow to further refine the data.
05:28 For example, you could sort the "AverageDelay" column in descending order.
05:36 Now, edit the flow settings.
05:39 On the "General" panel, you can change the Data Refinery flow name.
05:46 On the "Source data sets" panel, you can edit the sample or format for the source data set or replace the data source.
05:56 And on the "Target data set" panel, you can specify an alternate location, such as an external data source.
06:06 You can also edit the properties for the target, such as the write mode, the file format, and change the data set asset name.
06:21 Now, run the data flow again; but this time, save and view the jobs.
06:28 Select the job that you want to view from the list and run the job.
06:41 When the run completes, go back to the project.
06:46 And on the "Assets" tab, you'll see all three files:
06:51 The original.
06:54 The first refined data set, showing the "AverageDelay" unsorted.
07:02 And the second data set, showing the "AverageDelay" column sorted in descending order.
07:11 And back on the "Assets" tab, there's the Data Refinery flow.
07:19 Find more videos in the Cloud Pak for Data as a Service documentation.
1. Access Data Refinery from within a project. Click New asset > Prepare and visualize data. Then select the data that you want to work with. Alternatively, from the Assets tab of a project, open a file ([supported formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=endr-format)) to preview it, and then click Prepare data.
2. Use steps to apply operations that cleanse, shape, and enrich your data. Browse [operation categories or search for a specific operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html), then let the UI guide you. You can [enter R code](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html) in the command line and let autocomplete assist you in getting the correct syntax. As you apply operations to a data set, Data Refinery keeps track of them and builds a Data Refinery flow. For each operation that you apply, Data Refinery adds a step.
Data tab

If your data contains non-string data types, the Convert column type GUI operation is automatically applied as the first step in the Data Refinery flow when you open a file in Data Refinery. Data types are automatically converted to inferred data types, such as Integer, Date, or Boolean. You can undo or edit this step.
3. Click the Profile tab to [validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html) throughout the data refinement process.
Profile tab

4. Click the Visualizations tab to [visualize the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) in charts. Uncover patterns, trends, and correlations within your data.
Visualizations tab

5. Refine the sample data set to suit your needs.
6. Click Save and create a job or Save and view jobs in the toolbar to run the Data Refinery flow on the entire data set. Select the runtime and add a one-time or repeating schedule. For information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html).
For the actions that you can do as you refine your data, see [Managing Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html).
Next step
[Analyze your data and build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
Learn more
* [Manage Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html)
* [Quick start: Refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
Parent topic: [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)
| # Refining data #
To refine data, you take it from one location, cleanse and shape it, and then load the result into a different location\. You can cleanse and shape tabular data with a graphical flow editor tool called Data Refinery\.
When you *cleanse data*, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated\. When you *shape data*, you customize it by filtering, sorting, combining or removing columns\.
You create a *Data Refinery flow* as a set of ordered operations on data\. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you insights into your data\.
**Data format** \{: \#dr\-format\} : Avro, CSV, JSON, Microsoft Excel (xls and xlsx formats\. First sheet only, except for connections and connected data assets\.), Parquet, SAS with the "sas7bdat" extension (read only), TSV (read only), or delimited text data asset : Tables in relational data sources
**Data size** : Any\. Data Refinery operates on a sample subset of rows in the data set\. The sample size is 1 MB or 10,000 rows, whichever comes first\. However, when you run a job for the Data Refinery flow, the entire data set is processed\. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html)\.
<!-- <ul> -->
* [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#prereqs)
* [Source file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#limitsource)
* [Target file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#limittarget)
* [Data set previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#previews)
* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#refine)
<!-- </ul> -->
## Prerequisites ##
Before you can refine data, you need [a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) that uses Cloud Object Storage\. You can use the sandbox project or create a new project\.
<!-- <ul> -->
* Watch this video to see how to create a project
<!-- </ul> -->
If you have data in cloud or on\-premises data sources, you'll need to [add connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to those sources and you'll need to [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) from each connection\. If you want to be able to save refined data to cloud or on\-premises data sources, create connections for this purpose as well\. Source connections can be used only to read data; target connections can be used only to load (save) data\. When you create a target connection, be sure to use credentials that have Write permission or you won't be able to save your Data Refinery flow output to the target\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
<!-- <ul> -->
* Watch this video to see how to create a connection and add connected data to a project
<!-- </ul> -->
## Source file limitations ##
### CSV files ###
Be sure that CSV files are correctly formatted and conform to the following rules:
<!-- <ul> -->
* Two consecutive commas in a row indicate an empty column\.
* If a row ends with a comma, an additional column is created\.
<!-- </ul> -->
### White\-space characters are considered as part of the data ###
If your data includes columns that contain white space (blank) characters, Data Refinery considers those white\-space characters as part of the data, even though you can't see them in the grid\. Some database tools might pad character strings with white\-space characters to make all the data in a column the same length and this change affects the results of Data Refinery operations that compare data\.
### Column names ###
Be sure that column names conform to the following rules:
<!-- <ul> -->
* Duplicate column names are not allowed\. Column names must be unique within the data set\. Column names are not case\-sensitive\. A data set that includes a column name "Sales" and another column name "sales" will not work\.
* The column names are not reserved words in the R programming language\.
* The column names are not numbers\. A workaround is to enclose the column names in double quotation marks ("")\.
<!-- </ul> -->
### Data sets with columns with the "Other" data type are not supported in Data Refinery flows ###
If your data set contains columns that have data types that are identified as "Other" in the Watson Studio preview, the columns will show as the String data type in Data Refinery\. However, if you try to use the data in a Data Refinery flow, the job for the Data Refinery flow will fail\. An example of a data type that shows as "Other" in the preview is the Db2 DECFLOAT data type\.
## Target file limitations ##
The following limitation applies if you save Data Refinery flow output (the target data set) to a file:
<!-- <ul> -->
* You can't change the file format if the file is an existing data asset\.
<!-- </ul> -->
## Data set previews ##
Data Refinery provides support for large data sets, which can be time\-consuming and unwieldy to refine\. To enable you to work quickly and efficiently, it operates on a subset of rows in the data set while you interactively refine the data\. When you run a job for the Data Refinery flow, it operates on the entire data set\.
## Refine your data ##
The following video shows you how to refine data\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
<!-- <ul> -->
* Transcript
<!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
| Time | Transcript |
| ----- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 00:00 | This video shows you how to shape raw data using Data Refinery. |
| 00:05 | To get started refining data from a project, view the data asset and open it in Data Refinery. |
| 00:14 | The "Information" pane contains the name for the data flow and for the data flow output, once you've finished refining the data. |
| 00:23 | The "Data" tab shows you a sample set of the rows and columns in the data set. |
| 00:29 | To improve performance, you won't see all the rows in the shaper. |
| 00:33 | But rest assured that when you are done refining the data, the data flow will be run on the full data set. |
| 00:41 | The "Profile" tab shows you frequency and summary statistics for each of your columns. |
| 00:49 | The "Visualizations" tab provides data visualizations for the columns you are interested in. |
| 00:57 | Suggested charts have a blue dot next to their icons. |
| 01:03 | Use the different perspectives available in the charts to identify patterns, connections, and relationships within the data. |
| 01:12 | Now, let's do some data wrangling. |
| 01:17 | Start with a simple operation, like sorting on the specified column - in this case, the "Year" column. |
| 01:27 | Say you want to focus on delays just for a specific airline so you can filter the data to show only those rows where the unique carrier is "United Airlines". |
| 01:47 | It would be helpful to see the total delay. |
| 01:50 | You can do that by creating a new column to combine the arrival and departure delays. |
| 01:56 | Notice that the column type is inferred to be integer. |
| 02:00 | Select the departure delay column and use the "Calculate" operation. |
| 02:09 | In this case, you'll add the arrival delay column to the selected column and create a new column, called "TotalDelay". |
| 02:23 | You can position the new column at the end of the list of columns or next to the original column. |
| 02:31 | When you apply the operation, the new column displays next to the departure delay column. |
| 02:38 | If you make a mistake, or just decide to make a change, just access the "Steps" panel and delete that step. |
| 02:46 | This will undo that particular operation. |
| 02:50 | You can also use the redo and undo buttons. |
| 02:56 | Next, you'd like to focus on the "TotalDelay" column so you can use the "select" operation to move the column to the beginning. |
| 03:09 | This command arranges the "TotalDelay" column as the first in the list, and everything else comes after that. |
| 03:21 | Next, use the "group\_by" operation to divide the data into groups by year, month, and day. |
| 03:32 | So, when you select the "TotalDelay" column, you'll see the "Year", "Month", "DayofMonth", and "TotalDelay" columns. |
| 03:44 | Lastly, you want to find the mean of the "TotalDelay" column. |
| 03:48 | When you expand the "Operations" menu, in the "Organize" section, you'll find the "Aggregate" operation, which includes the "Mean" function. |
| 04:08 | Now you have a new column, called "AverageDelay", that represents the average for the total delay. |
| 04:17 | Now to run the data flow and save and create the job. |
| 04:24 | Provide a name for the job and continue to the next screen. |
| 04:28 | The "Configure" step allows you to review what the input and output of your job run will be. |
| 04:36 | And select the environment used to run the job. |
| 04:41 | Scheduling a job is optional, but you can set a date and repeat the job, if you'd like. |
| 04:51 | And you can choose to receive notifications for this job. |
| 04:56 | Everything looks good, so create and run the job. |
| 05:00 | This could take several minutes, because remember that the data flow will be run on the full data set. |
| 05:06 | In the meantime, you can view the status. |
| 05:12 | When the run is complete, you can go back to the "Assets" tab in the project. |
| 05:20 | And open the Data Refinery flow to further refine the data. |
| 05:28 | For example, you could sort the "AverageDelay" column in descending order. |
| 05:36 | Now, edit the flow settings. |
| 05:39 | On the "General" panel, you can change the Data Refinery flow name. |
| 05:46 | On the "Source data sets" panel, you can edit the sample or format for the source data set or replace the data source. |
| 05:56 | And on the "Target data set" panel, you can specify an alternate location, such as an external data source. |
| 06:06 | You can also edit the properties for the target, such as the write mode, the file format, and change the data set asset name. |
| 06:21 | Now, run the data flow again; but this time, save and view the jobs. |
| 06:28 | Select the job that you want to view from the list and run the job. |
| 06:41 | When the run completes, go back to the project. |
| 06:46 | And on the "Assets" tab, you'll see all three files: |
| 06:51 | The original. |
| 06:54 | The first refined data set, showing the "AverageDelay" unsorted. |
| 07:02 | And the second data set, showing the "AverageDelay" column sorted in descending order. |
| 07:11 | And back on the "Assets" tab, there's the Data Refinery flow. |
| 07:19 | Find more videos in the Cloud Pak for Data as a Service documentation. |
<!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
<!-- </ul> -->
1\. Access Data Refinery from within a project\. Click **New asset > Prepare and visualize data**\. Then select the data that you want to work with\. Alternatively, from the **Assets** tab of a project, open a file ([supported formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=en#dr-format)) to preview it, and then click **Prepare data**\.
2\. Use steps to apply operations that cleanse, shape, and enrich your data\. Browse [operation categories or search for a specific operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html), then let the UI guide you\. You can [enter R code](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html) in the command line and let autocomplete assist you in getting the correct syntax\. As you apply operations to a data set, Data Refinery keeps track of them and builds a Data Refinery flow\. For each operation that you apply, Data Refinery adds a step\.
Data tab

If your data contains non\-string data types, the **Convert column type** GUI operation is automatically applied as the first step in the Data Refinery flow when you open a file in Data Refinery\. Data types are automatically converted to inferred data types, such as Integer, Date, or Boolean\. You can undo or edit this step\.
3\. Click the **Profile** tab to [validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html) throughout the data refinement process\.
Profile tab

4\. Click the **Visualizations** tab to [visualize the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) in charts\. Uncover patterns, trends, and correlations within your data\.
Visualizations tab

5\. Refine the sample data set to suit your needs\.
6\. Click **Save and create a job** or **Save and view jobs** in the toolbar to run the Data Refinery flow on the entire data set\. Select the runtime and add a one\-time or repeating schedule\. For information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html)\.
For the actions that you can do as you refine your data, see [Managing Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html)\.
## Next step ##
[Analyze your data and build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
## Learn more ##
<!-- <ul> -->
* [Manage Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html)
* [Quick start: Refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html)
<!-- </ul> -->
**Parent topic**: [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)
<!-- </article "role="article" "> -->
|
751ABCAB00F67C93C253EC74D686E2CFCC0062AD | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en | Troubleshooting Data Refinery | Troubleshooting Data Refinery
Use this information to resolve questions about using Data Refinery.
* [Cannot refine data from an Excel data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en#dr-excel)
* [Data Refinery flow job fails with a large data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en#bigdata-dr)
Cannot refine data from an Excel data asset
The Data Refinery flow might fail if it cannot read the data. Confirm the format of the Excel file. By default, the first line of the file is treated as the header. You can change this setting in the Flow settings . Go to the Source data sets tab and click the overflow menu () next to the data source, and select Edit format. You can also specify the first line property, which designates which row is the first row in the data set to be read. Changing these properties affects how the data is displayed in Data Refinery as well as the Data Refinery job run and flow output.
Data Refinery flow job fails with a large data asset
If your Data Refinery flow job fails with a large data asset, try these troubleshooting tips to fix the problem:
* Instead of using a project data asset as the target of the Data Refinery flow (default), use Cloud storage. For example, IBM Cloud Object Storage, Amazon S3, or Google Cloud Storage.
* Select a Spark & R environment for the Data Refinery flow job or create a new Spark & R environment template.
| # Troubleshooting Data Refinery #
Use this information to resolve questions about using Data Refinery\.
<!-- <ul> -->
* [Cannot refine data from an Excel data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en#dr-excel)
* [Data Refinery flow job fails with a large data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=en#bigdata-dr)
<!-- </ul> -->
## Cannot refine data from an Excel data asset ##
The Data Refinery flow might fail if it cannot read the data\. Confirm the format of the Excel file\. By default, the first line of the file is treated as the header\. You can change this setting in the Flow settings \. Go to the **Source data sets** tab and click the overflow menu () next to the data source, and select **Edit format**\. You can also specify the first line property, which designates which row is the first row in the data set to be read\. Changing these properties affects how the data is displayed in Data Refinery as well as the Data Refinery job run and flow output\.
## Data Refinery flow job fails with a large data asset ##
If your Data Refinery flow job fails with a large data asset, try these troubleshooting tips to fix the problem:
<!-- <ul> -->
* Instead of using a project data asset as the target of the Data Refinery flow (default), use Cloud storage\. For example, IBM Cloud Object Storage, Amazon S3, or Google Cloud Storage\.
* Select a **Spark & R** environment for the Data Refinery flow job or create a new **Spark & R** environment template\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
B8AA7399868C0AE8DD698C9048EBD50C3F17EF12 | https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html?context=cdpaas&locale=en | Visualizing your data in Data Refinery | Visualizing your data in Data Refinery
Visualizing information in graphical ways gives you insights into your data. You can add steps to your Data Refinery flow while you visualize your data and see the changes. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data as well as quickly understand large amounts of information.
You can also visualize your data with these same charts in an SPSS Modeler flow. Use the Charts node, which is available under the Graphs section on the node palette. Double-click the Charts node to open the properties pane. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node.

To visualize your data:
1. From Data Refinery, click the Visualizations tab.
2. Start with a chart or select columns:
* Click any of the available charts. Then, add columns in the DETAILS pane that opens on the left side of the page.
* Select the columns that you want to work with. Suggested charts are indicated with a dot next to the chart name. Click a chart to visualize your data.
Important: Available chart types are ordered from most relevant to least relevant, based on the selected columns. If there are no columns in the data set with a data type that is supported for a chart type, that chart will not be available. If a column's data type is not supported for a chart, that column is not available for selection for that chart. Dots next to the charts' names suggest the best charts for your data.
Charts
The following charts are included:
* 3D charts display data in a 3-D coordinate system by drawing each column as a cuboid to create a 3D effect.
* Bar charts are handy for displaying and comparing categories of data side by side. The bars can be in any order. You can also arrange them from high to low or from low to high.
* Box plot charts compare distributions between many groups or data sets. They display the variation in groups of data: the spread and skew of that data and the outliers.
* Bubble charts display each category in the groups as a bubble.
* Candlestick charts are a type of financial chart that displays price movements of a security, derivative, or currency.
* Circle packing charts display hierarchical data as a set of nested areas.
* Customized charts give you the ability to render charts based on JSON input.
* Dual Y-axes charts use two Y-axis variables to show relationships between data.
* Error bars indicate the error or uncertainty in a value. They give a general idea of how precise a value is or conversely, how far a value might be from the true value.
* Evaluation charts are combination charts that measure the quality of a binary classifier. You need three columns for input: actual (target) value, predicted value, and confidence (0 or 1). Move the slider in the Cutoff chart to dynamically update the other charts. The ROC and other charts are standard measurements of the classifier.
* Heat map charts display data as color to convey activity levels or density. Typically low values are displayed as cooler colors and high values are displayed as warmer colors.
* Histogram charts show the frequency distribution of data.
* Line charts show trends in data over time by calculating a summary statistic for one column for each value of another column and then drawing a line that connects the values.
* Map charts show geographic point data, so you can compare values and show categories across geographical regions.
* Math curve charts display a group of curves based on equations that you enter. You do not use a data set with this chart. Instead, you use it to compare the results with the data set in another chart, like the scatter plot chart.
* Multi-charts display up to four combinations of Bar, Line, Pie, and Scatter plot charts. You can show the same kind of chart more than once with different data. For example, two pie charts with data from different columns.
* Multi-series charts display data from multiple data sets or multiple columns as a series of points that are connected by straight lines or bars.
* Parallel coordinate charts display and compare rows of data (called profiles) to find similarities. Each row is a line and the value in each column of the row is represented by a point on that line.
* Pie charts show proportion. Each value in a series is displayed as a proportional slice of the pie. The pie represents the total sum of the values.
* Population pyramid charts show the frequency distribution of a variable across categories. They are typically used to show changes in demographic data.
* Quantile-quantile (Q-Q) plot charts compare the expected distribution values with the observed values by plotting their quantiles.
* Radar charts integrate three or more quantitative variables that are represented on axes (radii) into a single radial figure. Data is plotted on each axis and joined to adjacent axes by connecting lines. Radar charts are useful to show correlations and compare categorized data.
* Relationship charts show how columns of data relate to one another and what the strength of that relationship is by using varying types of lines.
* Scatter matrix charts map columns against each other and display their scatter plots and correlation. Use to compare multiple columns and how strong their correlation is with one another.
* Scatter plot charts show correlation (how much one variable is affected by another) by displaying and comparing the values in two columns.
* Sunburst charts are similar to layered pie charts, in which different proportions of different categories are shown at once on multiple levels.
* Theme river charts use a specialized flow graph that shows changes over time.
* Time plot charts illustrate data points at successive intervals of time.
* t-SNE charts help you visualize high-dimensional data sets. They're useful for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot.
* Tree charts display hierarchical data, categorically splitting into different branches. Use to sort different data sets under different categories. The Tree chart consists of a root node, line connections called branches that represent the relationships and connections between the members, and leaf nodes that do not have child nodes.
* Treemap charts display hierarchical data as a set of nested areas. Use to compare sizes between groups and single elements that are nested in the groups.
* Word cloud charts display how frequently words appear in text by making the size of each word proportional to its frequency.
Actions
You can take any of the following actions:
* Start over: Clears the visualization and the DETAILS pane, and returns you to the starting page for visualizations
* Specify whether to display the field value or the field label. This option applies only to SPSS Modeler when you define labels. For example, you might have a "Gender" field with the label female defined for the value 0 and the label male defined for the value 1. If no label is defined, the value is displayed.
* Download visualization:
* Download chart image: Download a PNG file that contains an image of the current chart.
* Download chart details: Download a JSON file that contains the details for the current chart.
* Set global preferences that apply to all charts
Chart actions
Available chart actions depend on the chart. Chart actions include:
* Zoom
* Restore: View the chart at normal scale
* Select data: Highlight data in the Data tab that you select in the chart
* Clear selection: Remove highlighting from the data in the Data tab
Learn more
[Data Visualization – How to Pick the Right Chart Type?](https://eazybi.com/blog/data_visualization_and_chart_types/)
Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
| # Visualizing your data in Data Refinery #
Visualizing information in graphical ways gives you insights into your data\. You can add steps to your Data Refinery flow while you visualize your data and see the changes\. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data as well as quickly understand large amounts of information\.
You can also visualize your data with these same charts in an SPSS Modeler flow\. Use the Charts node, which is available under the Graphs section on the node palette\. Double\-click the Charts node to open the properties pane\. Then click **Launch Chart Builder** to open the chart builder and create one or more chart definitions to associate with the node\.

To visualize your data:
<!-- <ol> -->
1. From Data Refinery, click the **Visualizations** tab\.
2. Start with a chart or select columns:
<!-- <ul> -->
* Click any of the available charts. Then, add columns in the **DETAILS** pane that opens on the left side of the page.
* Select the columns that you want to work with. Suggested charts are indicated with a dot next to the chart name. Click a chart to visualize your data.
<!-- </ul> -->
<!-- </ol> -->
Important: Available chart types are ordered from most relevant to least relevant, based on the selected columns\. If there are no columns in the data set with a data type that is supported for a chart type, that chart will not be available\. If a column's data type is not supported for a chart, that column is not available for selection for that chart\. Dots next to the charts' names suggest the best charts for your data\.
## Charts ##
The following charts are included:
<!-- <ul> -->
* 3D charts display data in a 3\-D coordinate system by drawing each column as a cuboid to create a 3D effect\.
* Bar charts are handy for displaying and comparing categories of data side by side\. The bars can be in any order\. You can also arrange them from high to low or from low to high\.
* Box plot charts compare distributions between many groups or data sets\. They display the variation in groups of data: the spread and skew of that data and the outliers\.
* Bubble charts display each category in the groups as a bubble\.
* Candlestick charts are a type of financial chart that displays price movements of a security, derivative, or currency\.
* Circle packing charts display hierarchical data as a set of nested areas\.
* Customized charts give you the ability to render charts based on JSON input\.
* Dual Y\-axes charts use two Y\-axis variables to show relationships between data\.
* Error bars indicate the error or uncertainty in a value\. They give a general idea of how precise a value is or conversely, how far a value might be from the true value\.
* Evaluation charts are combination charts that measure the quality of a binary classifier\. You need three columns for input: actual (target) value, predicted value, and confidence (0 or 1)\. Move the slider in the Cutoff chart to dynamically update the other charts\. The ROC and other charts are standard measurements of the classifier\.
* Heat map charts display data as color to convey activity levels or density\. Typically low values are displayed as cooler colors and high values are displayed as warmer colors\.
* Histogram charts show the frequency distribution of data\.
* Line charts show trends in data over time by calculating a summary statistic for one column for each value of another column and then drawing a line that connects the values\.
* Map charts show geographic point data, so you can compare values and show categories across geographical regions\.
* Math curve charts display a group of curves based on equations that you enter\. You do not use a data set with this chart\. Instead, you use it to compare the results with the data set in another chart, like the scatter plot chart\.
* Multi\-charts display up to four combinations of Bar, Line, Pie, and Scatter plot charts\. You can show the same kind of chart more than once with different data\. For example, two pie charts with data from different columns\.
* Multi\-series charts display data from multiple data sets or multiple columns as a series of points that are connected by straight lines or bars\.
* Parallel coordinate charts display and compare rows of data (called profiles) to find similarities\. Each row is a line and the value in each column of the row is represented by a point on that line\.
* Pie charts show proportion\. Each value in a series is displayed as a proportional slice of the pie\. The pie represents the total sum of the values\.
* Population pyramid charts show the frequency distribution of a variable across categories\. They are typically used to show changes in demographic data\.
* Quantile\-quantile (Q\-Q) plot charts compare the expected distribution values with the observed values by plotting their quantiles\.
* Radar charts integrate three or more quantitative variables that are represented on axes (radii) into a single radial figure\. Data is plotted on each axis and joined to adjacent axes by connecting lines\. Radar charts are useful to show correlations and compare categorized data\.
* Relationship charts show how columns of data relate to one another and what the strength of that relationship is by using varying types of lines\.
* Scatter matrix charts map columns against each other and display their scatter plots and correlation\. Use to compare multiple columns and how strong their correlation is with one another\.
* Scatter plot charts show correlation (how much one variable is affected by another) by displaying and comparing the values in two columns\.
* Sunburst charts are similar to layered pie charts, in which different proportions of different categories are shown at once on multiple levels\.
* Theme river charts use a specialized flow graph that shows changes over time\.
* Time plot charts illustrate data points at successive intervals of time\.
* t\-SNE charts help you visualize high\-dimensional data sets\. They're useful for embedding high\-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot\.
* Tree charts display hierarchical data, categorically splitting into different branches\. Use to sort different data sets under different categories\. The Tree chart consists of a root node, line connections called branches that represent the relationships and connections between the members, and leaf nodes that do not have child nodes\.
* Treemap charts display hierarchical data as a set of nested areas\. Use to compare sizes between groups and single elements that are nested in the groups\.
* Word cloud charts display how frequently words appear in text by making the size of each word proportional to its frequency\.
<!-- </ul> -->
## Actions ##
You can take any of the following actions:
<!-- <ul> -->
* Start over: Clears the visualization and the **DETAILS** pane, and returns you to the starting page for visualizations
* Specify whether to display the field value or the field label\. This option applies only to SPSS Modeler when you define labels\. For example, you might have a "Gender" field with the label female defined for the value 0 and the label male defined for the value 1\. If no label is defined, the value is displayed\.
* Download visualization:
<!-- <ul> -->
* Download chart image: Download a PNG file that contains an image of the current chart.
* Download chart details: Download a JSON file that contains the details for the current chart.
<!-- </ul> -->
* Set global preferences that apply to all charts
<!-- </ul> -->
## Chart actions ##
Available chart actions depend on the chart\. Chart actions include:
<!-- <ul> -->
* Zoom
* Restore: View the chart at normal scale
* Select data: Highlight data in the Data tab that you select in the chart
* Clear selection: Remove highlighting from the data in the Data tab
<!-- </ul> -->
## Learn more ##
[Data Visualization – How to Pick the Right Chart Type?](https://eazybi.com/blog/data_visualization_and_chart_types/)
**Parent topic:**[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
<!-- </article "role="article" "> -->
|
A4E9FAE09BE2F3C0191CBC14A56085B0773A2585 | https://dataplatform.cloud.ibm.com/docs/content/wsj/satellite/satellite-connect-s3-bucket.html?context=cdpaas&locale=en | Accessing data in AWS through access points from a notebook | Accessing data in AWS through access points from a notebook
In IBM watsonx you can access data stored in AWS S3 buckets through access points from a notebook.
Run the notebook in an environment in IBM watsonx. Create an internet-enabled access point to connect to the S3 bucket.
Connecting to AWS S3 data through an internet-enabled access point
You can access data in an AWS S3 bucket through an internet-enabled access point in any AWS region.
To access S3 data through an internet-enabled access point:
1. Create an access point for your S3 bucket. See [Creating access points](https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html).
Set the network origin to Internet.
2. After the access point is created, make a note of the Amazon resource name (ARN) for the access point. Example: ARN: arn:aws:s3:us-east-1:675068711478:accesspoint/cust-data-bucket-internet-ap. You will need to enter the ARN in your notebook.
Accessing AWS S3 data from your notebook
The following sample code snippet shows you how to access AWS data from your notebook by using an access point:
import boto3
import pandas as pd
# use an access key and a secret that has access to the bucket
access_key="..."
secret="..."
s3_client = boto3.client('s3', aws_access_key_id=access_key, aws_secret_access_key=secret)
# the Amazon resource name (ARN) of the access point
arn = "..."
# the file you want to retrieve
fileName="customers.csv"
response = s3_client.get_object(Bucket=arn, Key=fileName)
s3FileStream = response["Body"]
# for other file types, change the line below to use the appropriate read_() method from pandas
customerDF = pd.read_csv(s3FileStream)
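If you also need to save results back to the bucket, you can reuse the same client and access point ARN, provided your boto3 version supports access point ARNs and your credentials and the access point policy allow writes. The following lines are a sketch only; the target key name is an example, not part of the original sample:
# write the DataFrame back through the access point as a new CSV object
csv_bytes = customerDF.to_csv(index=False).encode("utf-8")
s3_client.put_object(Bucket=arn, Key="customers_refined.csv", Body=csv_bytes)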
Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
| # Accessing data in AWS through access points from a notebook #
In IBM watsonx you can access data stored in AWS S3 buckets through access points from a notebook\.
Run the notebook in an environment in IBM watsonx\. Create an internet\-enabled access point to connect to the S3 bucket\.
## Connecting to AWS S3 data through an internet\-enabled access point ##
You can access data in an AWS S3 bucket through an internet\-enabled access point in any AWS region\.
To access S3 data through an internet\-enabled access point:
<!-- <ol> -->
1. Create an access point for your S3 bucket\. See [Creating access points](https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html)\.
Set the network origin to `Internet`.
2. After the access point is created, make a note of the Amazon resource name (ARN) for the access point\. Example: `ARN: arn:aws:s3:us-east-1:675068711478:accesspoint/cust-data-bucket-internet-ap`\. You will need to enter the ARN in your notebook\.
<!-- </ol> -->
## Accessing AWS S3 data from your notebook ##
The following sample code snippet shows you how to access AWS data from your notebook by using an access point:
import boto3
import pandas as pd
# use an access key and a secret that has access to the bucket
access_key="..."
secret="..."
s3_client = boto3.client('s3', aws_access_key_id=access_key, aws_secret_access_key=secret)
#the Amazon resource name (ARN) of the access point
arn = "..."
# the file you want to retrieve
fileName="customers.csv"
response = s3_client.get_object(Bucket=arn, Key=fileName)
s3FileStream = response["Body"]
#for other file types, change the line below to use the appropriate read_() method from pandas
customerDF = pd.read_csv(s3FileStream)
**Parent topic:**[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
<!-- </article "role="article" "> -->
|
EEF0F3C3DC121F5C389E547BD20F2AA807074028 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-application.html?context=cdpaas&locale=en | Key management by application | Key management by application
This topic describes how to manage column encryption keys by application. It explains how to provide master keys and how to write and read encrypted data using these master keys.
Providing master keys
To provide master keys:
1. Pass the explicit master keys, in the following format:
parameter name: "encryption.key.list"
parameter value: "<master key ID>:<master key (base64)> , <master key ID>:<master key (base64)>.."
For example:
sc.hadoopConfiguration.set("encryption.key.list" , "k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ==")
The length of master keys before base64 encoding can be 16, 24, or 32 bytes (128, 192, or 256 bits). A small helper for generating a key in this format is sketched below.
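If you need a master key in this format for testing, the following Python helper is one way to generate it. The helper is illustrative only: the key ID k1 is an example, and the secrets and base64 modules come from the Python standard library rather than from the Parquet API.
import base64
import secrets
# generate a random 256-bit (32-byte) master key and base64-encode it
master_key = base64.b64encode(secrets.token_bytes(32)).decode("ascii")
# build a value in the format expected by the "encryption.key.list" property
key_list = "k1:" + master_key
print(key_list)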
Writing encrypted data
To write encrypted data:
1. Specify which columns to encrypt, and which master keys to use:
parameter name: "encryption.column.keys"
parameter value: "<master key ID>:<column>,<column>;<master key ID>:<column> .."
2. Specify the footer key:
parameter name: "encryption.footer.key"
parameter value: "<master key ID>"
For example:
dataFrame.write
.option("encryption.footer.key" , "k1")
.option("encryption.column.keys" , "k2:SSN,Address;k3:CreditCard")
.parquet("<path to encrypted files>")
Note:"<path to encrypted files>" must contain the string .encrypted in the URL, for example /path/to/my_table.parquet.encrypted. If either the "encryption.column.keys" parameter or the "encryption.footer.key" parameter is not set, an exception will be thrown.
Reading encrypted data
The required metadata is stored in the encrypted Parquet files.
To read the encrypted data:
1. Provide the encryption keys:
sc.hadoopConfiguration.set("encryption.key.list" , "k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ==")
2. Call the regular parquet read commands, such as:
val dataFrame = spark.read.parquet("<path to encrypted files>")
Note:"<path to encrypted files>" must contain the string .encrypted in the URL, for example /path/to/my_table.parquet.encrypted.
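The examples above are written in Scala. As a rough PySpark equivalent, the sketch below assumes that the same Hadoop properties apply and reaches the Hadoop configuration through the internal sparkContext._jsc accessor; dataFrame is assumed to be an existing DataFrame, and the path placeholder must still contain the string .encrypted:
# set the key list on the Hadoop configuration (internal _jsc accessor)
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("encryption.key.list", "k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw==")
# write: encrypt the SSN column with k2 and the footer with k1
(dataFrame.write
    .option("encryption.footer.key", "k1")
    .option("encryption.column.keys", "k2:SSN")
    .parquet("<path to encrypted files>"))
# read the encrypted files back; the key list above must already be set
dataFrame2 = spark.read.parquet("<path to encrypted files>")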
Parent topic:[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
| # Key management by application #
This topic describes how to manage column encryption keys by application\. It explains how to provide master keys and how to write and read encrypted data using these master keys\.
## Providing master keys ##
To provide master keys:
<!-- <ol> -->
1. Pass the explicit master keys, in the following format:
parameter name: "encryption.key.list"
parameter value: "<master key ID>:<master key (base64)> , <master key ID>:<master key (base64)>.."
For example:
sc.hadoopConfiguration.set("encryption.key.list" , "k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ==")
The length of master keys before base64 encoding can be 16, 24 or 32 bytes (128, 192 or 256 bits).
<!-- </ol> -->
## Writing encrypted data ##
To write encrypted data:
<!-- <ol> -->
1. Specify which columns to encrypt, and which master keys to use:
parameter name: "encryption.column.keys"
parameter value: "<master key ID>:<column>,<column>;<master key ID>:<column> .."
2. Specify the footer key:
parameter name: "encryption.footer.key"
parameter value: "<master key ID>"
For example:
dataFrame.write
.option("encryption.footer.key" , "k1")
.option("encryption.column.keys" , "k2:SSN,Address;k3:CreditCard")
.parquet("<path to encrypted files>")
Note:`"<path to encrypted files>"` must contain the string `.encrypted` in the URL, for example `/path/to/my_table.parquet.encrypted`. If either the `"encryption.column.keys"` parameter or the `"encryption.footer.key"` parameter is not set, an exception will be thrown.
<!-- </ol> -->
## Reading encrypted data ##
The required metadata is stored in the encrypted Parquet files\.
To read the encrypted data:
<!-- <ol> -->
1. Provide the encryption keys:
sc.hadoopConfiguration.set("encryption.key.list" , "k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ==")
2. Call the regular parquet read commands, such as:
val dataFrame = spark.read.parquet("<path to encrypted files>")
Note:`"<path to encrypted files>"` must contain the string `.encrypted` in the URL, for example `/path/to/my_table.parquet.encrypted`.
<!-- </ol> -->
**Parent topic:**[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
<!-- </article "role="article" "> -->
|
E778331BF398F2DB0F6477EF689D0DD6A2AAA81E | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-kms.html?context=cdpaas&locale=en | Key management by KMS | Key management by KMS
Parquet modular encryption can work with arbitrary Key Management Service (KMS) servers. A custom KMS client class, able to communicate with the chosen KMS server, has to be provided to the Analytics Engine powered by Apache Spark instance. This class needs to implement the KmsClient interface (part of the Parquet modular encryption API). Analytics Engine powered by Apache Spark includes the VaultClient KmsClient, which can be used out of the box if you use Hashicorp Vault as the KMS server for the master keys. If you use or plan to use a different KMS system, you can develop a custom KmsClient class (taking the VaultClient code as an example).
Custom KmsClient class
Parquet modular encryption provides a simple interface called org.apache.parquet.crypto.keytools.KmsClient with the following two main functions that you must implement:
// Wraps a key - encrypts it with the master key, encodes the result and
// potentially adds KMS-specific metadata.
public String wrapKey(byte[] keyBytes, String masterKeyIdentifier)
// Decrypts (unwraps) a key with the master key.
public byte[] unwrapKey(String wrappedKey, String masterKeyIdentifier)
In addition, the interface provides the following initialization function that passes KMS parameters and other configuration:
public void initialize(Configuration configuration, String kmsInstanceID, String kmsInstanceURL, String accessToken)
See [Example of KmsClient implementation](https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/test/java/org/apache/parquet/crypto/keytools/samples/VaultClient.java) to learn how to implement a KmsClient.
After you have developed the custom KmsClient class, add it to a jar supplied to Analytics Engine powered by Apache Spark, and pass its full name in the Spark Hadoop configuration, for example:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "full.name.of.YourKmsClient"
Key management by Hashicorp Vault
If you decide to use Hashicorp Vault as the KMS server, you can use the pre-packaged VaultClient:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
Creating master keys
Consult the Hashicorp Vault documentation for the specifics about actions on Vault. See:
* [Transit Secrets Engine](https://www.vaultproject.io/docs/secrets/transit)
* [Encryption as a Service: Transit Secrets Engine](https://learn.hashicorp.com/tutorials/vault/eaas-transit)
* Enable the Transit Engine either at the default path or providing a custom path.
* Create named encryption keys.
* Configure access policies with which a user or machine is allowed to access these named keys.
Writing encrypted data
1. Pass the following parameters:
* Set "parquet.encryption.kms.client.class" to "com.ibm.parquet.key.management.VaultClient":
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
* Optional: Set "parquet.encryption.kms.instance.id" to the custom path of your transit engine:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.id" , "north/transit1")
* Set "parquet.encryption.kms.instance.url" to the URL of your Vault instance:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.url" , "https://<hostname>:8200")
* Set "parquet.encryption.key.access.token" to a valid access token with the access policy attached, which provides access rights to the required keys in your Vault instance:
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
* If the token is located in a local file, load it:
val token = scala.io.Source.fromFile("<token file>").mkString
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , token)
2. Specify which columns need to be encrypted, and with which master keys. You must also specify the footer key. For example:
val k1 = "key1"
val k2 = "key2"
val k3 = "key3"
dataFrame.write
.option("parquet.encryption.footer.key" , k1)
.option("parquet.encryption.column.keys" , k2+":SSN,Address;"+k3+":CreditCard")
.parquet("<path to encrypted files>")
Note: If either the "parquet.encryption.column.keys" or the "parquet.encryption.footer.key" parameter is not set, an exception will be thrown.
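As with application-managed keys, you can set the same properties from PySpark. The sketch below is an assumption-based equivalent of the Scala example: it uses the internal sparkContext._jsc accessor to reach the Hadoop configuration, and dataFrame is assumed to be an existing DataFrame:
# point the Hadoop configuration at the Vault-backed KMS client
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("parquet.encryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
hconf.set("parquet.encryption.kms.instance.url", "https://<hostname>:8200")
hconf.set("parquet.encryption.key.access.token", "<token string>")
# write with Vault-managed master keys
(dataFrame.write
    .option("parquet.encryption.footer.key", "key1")
    .option("parquet.encryption.column.keys", "key2:SSN,Address;key3:CreditCard")
    .parquet("<path to encrypted files>"))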
Reading encrypted data
The required metadata, including the ID and URL of the Hashicorp Vault instance, is stored in the encrypted Parquet files.
To read the encrypted metadata:
1. Set KMS client to the Vault client implementation:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
2. Provide the access token with policy attached that grants access to the relevant keys:
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
3. Call the regular Parquet read commands, such as:
val dataFrame = spark.read.parquet("<path to encrypted files>")
Key rotation
If key rotation is required, an administrator with access rights to the KMS key rotation actions must rotate master keys in Hashicorp Vault using the procedure described in the Hashicorp Vault documentation. Thereafter the administrator can trigger Parquet key rotation by calling:
public static void KeyToolkit.rotateMasterKeys(String folderPath, Configuration hadoopConfig)
To enable Parquet key rotation, the following Hadoop configuration properties must be set:
* The parameters "parquet.encryption.key.access.token" and "parquet.encryption.kms.instance.url" must be set, and optionally "parquet.encryption.kms.instance.id"
* The parameter "parquet.encryption.key.material.store.internally" must be set to "false".
* The parameter "parquet.encryption.kms.client.class" must be set to "com.ibm.parquet.key.management.VaultClient"
For example:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.url" , "https://<hostname>:8200")sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
sc.hadoopConfiguration.set("parquet.encryption.kms.client.class","com.ibm.parquet.key.management.VaultClient")
sc.hadoopConfiguration.set("parquet.encryption.key.material.store.internally", "false")
KeyToolkit.rotateMasterKeys("<path to encrypted files>", sc.hadoopConfiguration)
Parent topic:[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
| # Key management by KMS #
Parquet modular encryption can work with arbitrary Key Management Service (KMS) servers\. A custom KMS client class, able to communicate with the chosen KMS server, has to be provided to the Analytics Engine powered by Apache Spark instance\. This class needs to implement the KmsClient interface (part of the Parquet modular encryption API)\. Analytics Engine powered by Apache Spark includes the VaultClient KmsClient, which can be used out of the box if you use Hashicorp Vault as the KMS server for the master keys\. If you use or plan to use a different KMS system, you can develop a custom KmsClient class (taking the VaultClient code as an example)\.
## Custom KmsClient class ##
Parquet modular encryption provides a simple interface called `org.apache.parquet.crypto.keytools.KmsClient` with the following two main functions that you must implement:
// Wraps a key - encrypts it with the master key, encodes the result and
// potentially adds KMS-specific metadata.
public String wrapKey(byte[] keyBytes, String masterKeyIdentifier)
// Decrypts (unwraps) a key with the master key.
public byte[] unwrapKey(String wrappedKey, String masterKeyIdentifier)
In addition, the interface provides the following initialization function that passes KMS parameters and other configuration:
public void initialize(Configuration configuration, String kmsInstanceID, String kmsInstanceURL, String accessToken)
See [Example of KmsClient implementation](https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/test/java/org/apache/parquet/crypto/keytools/samples/VaultClient.java) to learn how to implement a KmsClient\.
After you have developed the custom KmsClient class, add it to a jar supplied to Analytics Engine powered by Apache Spark, and pass its full name in the Spark Hadoop configuration, for example:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "full.name.of.YourKmsClient"
## Key management by Hashicorp Vault ##
If you decide to use Hashicorp Vault as the KMS server, you can use the pre\-packaged VaultClient:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
### Creating master keys ###
Consult the Hashicorp Vault documentation for the specifics about actions on Vault\. See:
<!-- <ul> -->
* [Transit Secrets Engine](https://www.vaultproject.io/docs/secrets/transit)
* [Encryption as a Service: Transit Secrets Engine](https://learn.hashicorp.com/tutorials/vault/eaas-transit)
* Enable the Transit Engine either at the default path or providing a custom path\.
* Create named encryption keys\.
* Configure access policies with which a user or machine is allowed to access these named keys\.
<!-- </ul> -->
### Writing encrypted data ###
<!-- <ol> -->
1. Pass the following parameters:
<!-- <ul> -->
* Set `"parquet.encryption.kms.client.class"` to `"com.ibm.parquet.key.management.VaultClient"`:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
* Optional: Set `"parquet.encryption.kms.instance.id"` to the custom path of your transit engine:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.id" , "north/transit1")
* Set `"parquet.encryption.kms.instance.url"` to the URL of your Vault instance:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.url" , "https://<hostname>:8200")
* Set `"parquet.encryption.key.access.token"` to a valid access token with the access policy attached, which provides access rights to the required keys in your Vault instance:
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
* If the token is located in a local file, load it:
val token = scala.io.Source.fromFile("<token file>").mkString
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , token)
<!-- </ul> -->
2. Specify which columns need to be encrypted, and with which master keys\. You must also specify the footer key\. For example:
val k1 = "key1"
val k2 = "key2"
val k3 = "key3"
dataFrame.write
.option("parquet.encryption.footer.key" , k1)
.option("parquet.encryption.column.keys" , k2+":SSN,Address;"+k3+":CreditCard")
.parquet("<path to encrypted files>")
Note: If either the `"parquet.encryption.column.keys"` or the `"parquet.encryption.footer.key"` parameter is not set, an exception will be thrown.
<!-- </ol> -->
## Reading encrypted data ##
The required metadata, including the ID and URL of the Hashicorp Vault instance, is stored in the encrypted Parquet files\.
To read the encrypted metadata:
<!-- <ol> -->
1. Set KMS client to the Vault client implementation:
sc.hadoopConfiguration.set("parquet.ecnryption.kms.client.class", "com.ibm.parquet.key.management.VaultClient")
2. Provide the access token with policy attached that grants access to the relevant keys:
sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
3. Call the regular Parquet read commands, such as:
val dataFrame = spark.read.parquet("<path to encrypted files>")
<!-- </ol> -->
## Key rotation ##
If key rotation is required, an administrator with access rights to the KMS key rotation actions must rotate master keys in Hashicorp Vault using the procedure described in the Hashicorp Vault documentation\. Thereafter the administrator can trigger Parquet key rotation by calling:
public static void KeyToolkit.rotateMasterKeys(String folderPath, Configuration hadoopConfig)
To enable Parquet key rotation, the following Hadoop configuration properties must be set:
<!-- <ul> -->
* The parameters `"parquet.encryption.key.access.token"` and `"parquet.encryption.kms.instance.url"` must be set, and optionally `"parquet.encryption.kms.instance.id"`\.
* The parameter `"parquet.encryption.key.material.store.internally"` must be set to `"false"`\.
* The parameter `"parquet.encryption.kms.client.class"` must be set to `"com.ibm.parquet.key.management.VaultClient"`
<!-- </ul> -->
For example:
sc.hadoopConfiguration.set("parquet.encryption.kms.instance.url" , "https://<hostname>:8200")sc.hadoopConfiguration.set("parquet.encryption.key.access.token" , "<token string>")
sc.hadoopConfiguration.set("parquet.encryption.kms.client.class","com.ibm.parquet.key.management.VaultClient")
sc.hadoopConfiguration.set("parquet.encryption.key.material.store.internally", "false")
KeyToolkit.rotateMasterKeys("<path to encrypted files>", sc.hadoopConfiguration)
**Parent topic:**[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
<!-- </article "role="article" "> -->
|
339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html?context=cdpaas&locale=en | Parquet modular encryption | Parquet modular encryption
If your data is stored in columnar format, you can use Parquet modular encryption to encrypt sensitive columns when writing Parquet files, and decrypt these columns when reading the encrypted files. Encrypting data at the column level enables you to decide which columns to encrypt and how to control column access.
Besides ensuring privacy, Parquet modular encryption also protects the integrity of stored data. Any tampering with file contents is detected and triggers a reader-side exception.
Key features include:
1. Parquet modular encryption and decryption is performed on the Spark cluster. Therefore, sensitive data and the encryption keys are not visible to the storage.
2. Standard Parquet features, such as encoding, compression, columnar projection and predicate push-down, continue to work as usual on files with Parquet modular encryption format.
3. You can choose one of two encryption algorithms that are defined in the Parquet specification. Both algorithms support column encryption, however:
* The default algorithm AES-GCM provides full protection against tampering with data and metadata parts in Parquet files.
* The alternative algorithm AES-GCM-CTR supports partial integrity protection of Parquet files. Only metadata parts are protected against tampering, not data parts. An advantage of this algorithm is that it has a lower throughput overhead compared to the AES-GCM algorithm.
4. You can choose which columns to encrypt. Other columns won't be encrypted, reducing the throughput overhead.
5. Different columns can be encrypted with different keys.
6. By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns. However, you can choose not to encrypt the file footers in order to enable legacy readers (such as other Spark distributions that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files.
7. Encryption keys can be managed in one of two ways:
* Directly by your application. See [Key management by application](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-application.html).
* By a key management system (KMS) that generates, stores and destroys encryption keys used by the Spark service. These keys never leave the KMS server, and therefore are invisible to other components, including the Spark service. See [Key management by KMS](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-kms.html).
Note: Only master encryption keys (MEKs) need to be managed by your application or by a KMS.
For each sensitive column, you must specify which master key to use for encryption. Also, a master key must be specified for the footer of each encrypted file (data frame). By default, the footer key will be used for footer encryption. However, if you choose a plain text footer mode, the footer won’t be encrypted, and the key will be used only for integrity verification of the footer.
The encryption parameters can be passed via the standard Spark Hadoop configuration, for example by setting configuration values in the Hadoop configuration of the application's SparkContext:
sc.hadoopConfiguration.set("<parameter name>" , "<parameter value>")
Alternatively, you can pass parameter values through write options:
<data frame name>.write
.option("<parameter name>" , "<parameter value>")
.parquet("<write path>")
Running with Parquet modular encryption
Parquet modular encryption is available only in Spark notebooks that are run in an IBM Analytics Engine service instance. Parquet modular encryption is not supported in notebooks that run in a Spark environment.
To enable Parquet modular encryption, set the following Spark classpath properties to point to the Parquet jar files that implement Parquet modular encryption, and to the key management jar file:
1. Navigate to Ambari > Spark > Config -> Custom spark2-default.
2. Add the following two parameters to point explicitly to the location of the JAR files. Make sure that you edit the paths to use the actual version of jar files on the cluster.
spark.driver.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet-kms-<latestversion>-jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-<latestversion>.jar:/home/common/lib/parquetEncryption/parquet-hadoop-<latestversion>.jar
spark.executor.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet-kms-<latestversion>-jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-<latestversion>.jar:/home/common/lib/parquetEncryption/parquet-hadoop-<latestversion>.jar
Mandatory parameters
The following parameters are required for writing encrypted data:
* List of columns to encrypt, with the master encryption keys:
parameter name: "encryption.column.keys"
parameter value: "<master key ID>:<column>,<column>;<master key ID>:<column>,.."
* The footer key:
parameter name: "encryption.footer.key"
parameter value: "<master key ID>"
For example:
dataFrame.write
.option("encryption.footer.key" , "k1")
.option("encryption.column.keys" , "k2:SSN,Address;k3:CreditCard")
.parquet("<path to encrypted files>")
Important: If neither the encryption.column.keys parameter nor the encryption.footer.key parameter is set, the file will not be encrypted. If only one of these parameters is set, an exception is thrown, because these parameters are mandatory for encrypted files.
Optional parameters
The following optional parameters can be used when writing encrypted data:
* The encryption algorithm AES-GCM-CTR
By default, Parquet modular encryption uses the AES-GCM algorithm that provides full protection against tampering with data and metadata in Parquet files. However, as Spark 2.3.0 runs on Java 8, which doesn’t support AES acceleration in CPU hardware (this was only added in Java 9), the overhead of data integrity verification can affect workload throughput in certain situations.
To compensate for this, you can switch off the data integrity verification support and write the encrypted files with the alternative algorithm AES-GCM-CTR, which verifies the integrity of the metadata parts only and not that of the data parts, and has a lower throughput overhead compared to the AES-GCM algorithm.
parameter name: "encryption.algorithm"
parameter value: "AES_GCM_CTR_V1"
* Plain text footer mode for legacy readers
By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns. However, you can decide not to encrypt the file footers in order to enable other Spark and Parquet readers (that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files. To switch off footer encryption, set the following parameter:
parameter name: "encryption.plaintext.footer"
parameter value: "true"
Important: The encryption.footer.key parameter must also be specified in the plain text footer mode. Although the footer is not encrypted, the key is used to sign the footer content, which means that new readers can verify its integrity. Legacy readers are not affected by the addition of the footer signature.
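The optional parameters can be combined with the mandatory parameters in a single write. The following is a minimal Python sketch, using placeholder key IDs and paths, that writes with the alternative algorithm and a plain text (unencrypted but signed) footer:
# Write with AES-GCM-CTR and a plain text footer; k1, k2, k3 and the path are placeholders
dataFrame.write \
    .option("encryption.footer.key", "k1") \
    .option("encryption.column.keys", "k2:SSN,Address;k3:CreditCard") \
    .option("encryption.algorithm", "AES_GCM_CTR_V1") \
    .option("encryption.plaintext.footer", "true") \
    .parquet("<path to encrypted files>")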
Usage examples
The following sample code snippets for Python show how to create data frames, write them to encrypted Parquet files, and read from encrypted Parquet files.
* Python: Writing encrypted data:
from pyspark.sql import Row
squaresDF = spark.createDataFrame(
sc.parallelize(range(1, 6))
.map(lambda i: Row(int_column=i, square_int_column=i ** 2)))
sc._jsc.hadoopConfiguration().set("encryption.key.list",
"key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
sc._jsc.hadoopConfiguration().set("encryption.column.keys",
"key1:square_int_column")
sc._jsc.hadoopConfiguration().set("encryption.footer.key", "key2")
encryptedParquetPath = "squares.parquet.encrypted"
squaresDF.write.parquet(encryptedParquetPath)
* Python: Reading encrypted data:
sc._jsc.hadoopConfiguration().set("encryption.key.list",
"key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
encryptedParquetPath = "squares.parquet.encrypted"
parquetFile = spark.read.parquet(encryptedParquetPath)
parquetFile.show()
The contents of the Python job file InMemoryKMS.py are as follows:
from pyspark.sql import SparkSession
from pyspark import SparkContext
from pyspark.sql import Row
if __name__ == "__main__":
spark = SparkSession \
.builder \
.appName("InMemoryKMS") \
.getOrCreate()
sc = spark.sparkContext
## KMS operation
print("Setup InMemoryKMS")
hconf = sc._jsc.hadoopConfiguration()
encryptedParquetFullName = "testparquet.encrypted"
print("Write Encrypted Parquet file")
hconf.set("encryption.key.list", "key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
btDF = spark.createDataFrame(sc.parallelize(range(1, 6)).map(lambda i: Row(ssn=i, value=i ** 2)))
btDF.write.mode("overwrite").option("encryption.column.keys", "key1:ssn").option("encryption.footer.key", "key2").parquet(encryptedParquetFullName)
print("Read Encrypted Parquet file")
encrDataDF = spark.read.parquet(encryptedParquetFullName)
encrDataDF.createOrReplaceTempView("bloodtests")
queryResult = spark.sql("SELECT ssn, value FROM bloodtests")
queryResult.show(10)
sc.stop()
spark.stop()
Internals of encryption key handling
When writing a Parquet file, a random data encryption key (DEK) is generated for each encrypted column and for the footer. These keys are used to encrypt the data and the metadata modules in the Parquet file.
The data encryption key is then encrypted with a key encryption key (KEK), also generated inside Spark/Parquet for each master key. The key encryption key is encrypted with a master encryption key (MEK) locally.
Encrypted data encryption keys and key encryption keys are stored in the Parquet file metadata, along with the master key identity. Each key encryption key has a unique identity (generated locally as a secure random 16-byte value), also stored in the file metadata.
When reading a Parquet file, the identifier of the master encryption key (MEK) and the encrypted key encryption key (KEK) with its identifier, and the encrypted data encryption key (DEK) are extracted from the file metadata.
The key encryption key is decrypted with the master encryption key locally. Then the data encryption key (DEK) is decrypted locally, using the key encryption key (KEK).
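The following standalone Python sketch illustrates the general envelope encryption pattern described above (a DEK wrapped by a KEK, which is in turn wrapped by a MEK). It uses the cryptography package for AES-GCM and is a conceptual illustration only, not the actual Parquet or KMS implementation:
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
def wrap(wrapping_key, key_to_wrap):
    # Encrypt one key with another; the random nonce is prepended to the ciphertext
    nonce = os.urandom(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, key_to_wrap, None)
def unwrap(wrapping_key, wrapped):
    nonce, ciphertext = wrapped[:12], wrapped[12:]
    return AESGCM(wrapping_key).decrypt(nonce, ciphertext, None)
mek = AESGCM.generate_key(bit_length=128)   # master key, held by the application or a KMS
kek = AESGCM.generate_key(bit_length=128)   # key encryption key, generated per master key
dek = AESGCM.generate_key(bit_length=128)   # data encryption key, generated per column or footer
# Only the wrapped keys (never the plain keys) would be stored in the file metadata
wrapped_kek = wrap(mek, kek)
wrapped_dek = wrap(kek, dek)
# Reading side: unwrap the KEK with the MEK, then unwrap the DEK with the KEK
assert unwrap(unwrap(mek, wrapped_kek), wrapped_dek) == dek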
Learn more
* [Parquet modular encryption](https://github.com/apache/parquet-format/blob/apache-parquet-format-2.7.0/Encryption.md)
| # Parquet modular encryption #
If your data is stored in columnar format, you can use Parquet modular encryption to encrypt sensitive columns when writing Parquet files, and decrypt these columns when reading the encrypted files\. Encrypting data at the column level enables you to decide which columns to encrypt and how to control column access\.
Besides ensuring privacy, Parquet modular encryption also protects the integrity of stored data\. Any tampering with file contents is detected and triggers a reader\-side exception\.
Key features include:
<!-- <ol> -->
1. Parquet modular encryption and decryption is performed on the Spark cluster\. Therefore, sensitive data and the encryption keys are not visible to the storage\.
2. Standard Parquet features, such as encoding, compression, columnar projection and predicate push\-down, continue to work as usual on files with Parquet modular encryption format\.
3. You can choose one of two encryption algorithms that are defined in the Parquet specification\. Both algorithms support column encryption, however:
<!-- <ul> -->
* The default algorithm `AES-GCM` provides full protection against tampering with data and metadata parts in Parquet files.
* The alternative algorithm `AES-GCM-CTR` supports partial integrity protection of Parquet files. Only metadata parts are protected against tampering, not data parts. An advantage of this algorithm is that it has a lower throughput overhead compared to the `AES-GCM` algorithm.
<!-- </ul> -->
4. You can choose which columns to encrypt\. Other columns won't be encrypted, reducing the throughput overhead\.
5. Different columns can be encrypted with different keys\.
6. By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns\. However, you can choose not to encrypt the file footers in order to enable legacy readers (such as other Spark distributions that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files\.
7. Encryption keys can be managed in one of two ways:
<!-- <ul> -->
* Directly by your application. See [Key management by application](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-application.html).
* By a key management system (KMS) that generates, stores and destroys encryption keys used by the Spark service. These keys never leave the KMS server, and therefore are invisible to other components, including the Spark service. See [Key management by KMS](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-kms.html).
Note: Only master encryption keys (MEKs) need to be managed by your application or by a KMS.
For each sensitive column, you must specify which master key to use for encryption. Also, a master key must be specified for the footer of each encrypted file (data frame). By default, the footer key will be used for footer encryption. However, if you choose a plain text footer mode, the footer won’t be encrypted, and the key will be used only for integrity verification of the footer.
The encryption parameters can be passed via the standard Spark Hadoop configuration, for example by setting configuration values in the Hadoop configuration of the application's SparkContext:
sc.hadoopConfiguration.set("<parameter name>" , "<parameter value>")
Alternatively, you can pass parameter values through write options:
<data frame name>.write
.option("<parameter name>" , "<parameter value>")
.parquet("<write path>")
<!-- </ul> -->
<!-- </ol> -->
## Running with Parquet modular encryption ##
Parquet modular encryption is available only in Spark notebooks that are run in an IBM Analytics Engine service instance\. Parquet modular encryption is not supported in notebooks that run in a Spark environment\.
To enable Parquet modular encryption, set the following Spark classpath properties to point to the Parquet jar files that implement Parquet modular encryption, and to the key management jar file:
<!-- <ol> -->
1. Navigate to **Ambari > Spark > Config \-> Custom spark2\-default**\.
2. Add the following two parameters to point explicitly to the location of the JAR files\. Make sure that you edit the paths to use the actual version of jar files on the cluster\.
spark.driver.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet-kms-<latestversion>-jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-<latestversion>.jar:/home/common/lib/parquetEncryption/parquet-hadoop-<latestversion>.jar
spark.executor.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet-kms-<latestversion>-jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-<latestversion>.jar:/home/common/lib/parquetEncryption/parquet-hadoop-<latestversion>.jar
<!-- </ol> -->
## Mandatory parameters ##
The following parameters are required for writing encrypted data:
<!-- <ul> -->
* List of columns to encrypt, with the master encryption keys:
parameter name: "encryption.column.keys"
parameter value: "<master key ID>:<column>,<column>;<master key ID>:<column>,.."
* The footer key:
parameter name: "encryption.footer.key"
parameter value: "<master key ID>"
For example:
dataFrame.write
.option("encryption.footer.key" , "k1")
.option("encryption.column.keys" , "k2:SSN,Address;k3:CreditCard")
.parquet("<path to encrypted files>")
Important: If neither the `encryption.column.keys` parameter nor the `encryption.footer.key` parameter is set, the file will not be encrypted. If only one of these parameters is set, an exception is thrown, because these parameters are mandatory for encrypted files.
<!-- </ul> -->
## Optional parameters ##
The following optional parameters can be used when writing encrypted data:
<!-- <ul> -->
* The encryption algorithm `AES-GCM-CTR`
By default, Parquet modular encryption uses the `AES-GCM` algorithm that provides full protection against tampering with data and metadata in Parquet files. However, as Spark 2.3.0 runs on Java 8, which doesn’t support AES acceleration in CPU hardware (this was only added in Java 9), the overhead of data integrity verification can affect workload throughput in certain situations.
To compensate for this, you can switch off the data integrity verification support and write the encrypted files with the alternative algorithm `AES-GCM-CTR`, which verifies the integrity of the metadata parts only and not that of the data parts, and has a lower throughput overhead compared to the `AES-GCM` algorithm.
parameter name: "encryption.algorithm"
parameter value: "AES_GCM_CTR_V1"
* Plain text footer mode for legacy readers
By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns. However, you can decide not to encrypt the file footers in order to enable other Spark and Parquet readers (that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files. To switch off footer encryption, set the following parameter:
parameter name: "encryption.plaintext.footer"
parameter value: "true"
Important: The `encryption.footer.key` parameter must also be specified in the plain text footer mode. Although the footer is not encrypted, the key is used to sign the footer content, which means that new readers can verify its integrity. Legacy readers are not affected by the addition of the footer signature.
<!-- </ul> -->
## Usage examples ##
The following sample code snippets for Python show how to create data frames, write them to encrypted Parquet files, and read from encrypted Parquet files\.
<!-- <ul> -->
* Python: Writing encrypted data:
from pyspark.sql import Row
squaresDF = spark.createDataFrame(
sc.parallelize(range(1, 6))
.map(lambda i: Row(int_column=i, square_int_column=i ** 2)))
sc._jsc.hadoopConfiguration().set("encryption.key.list",
"key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
sc._jsc.hadoopConfiguration().set("encryption.column.keys",
"key1:square_int_column")
sc._jsc.hadoopConfiguration().set("encryption.footer.key", "key2")
encryptedParquetPath = "squares.parquet.encrypted"
squaresDF.write.parquet(encryptedParquetPath)
* Python: Reading encrypted data:
sc._jsc.hadoopConfiguration().set("encryption.key.list",
"key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
encryptedParquetPath = "squares.parquet.encrypted"
parquetFile = spark.read.parquet(encryptedParquetPath)
parquetFile.show()
<!-- </ul> -->
The contents of the Python job file `InMemoryKMS.py` are as follows:
from pyspark.sql import SparkSession
from pyspark import SparkContext
from pyspark.sql import Row
if __name__ == "__main__":
spark = SparkSession \
.builder \
.appName("InMemoryKMS") \
.getOrCreate()
sc = spark.sparkContext
##KMS operation
print("Setup InMemoryKMS")
hconf = sc._jsc.hadoopConfiguration()
encryptedParquetFullName = "testparquet.encrypted"
print("Write Encrypted Parquet file")
hconf.set("encryption.key.list", "key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA==")
btDF = spark.createDataFrame(sc.parallelize(range(1, 6)).map(lambda i: Row(ssn=i, value=i ** 2)))
btDF.write.mode("overwrite").option("encryption.column.keys", "key1:ssn").option("encryption.footer.key", "key2").parquet(encryptedParquetFullName)
print("Read Encrypted Parquet file")
encrDataDF = spark.read.parquet(encryptedParquetFullName)
encrDataDF.createOrReplaceTempView("bloodtests")
queryResult = spark.sql("SELECT ssn, value FROM bloodtests")
queryResult.show(10)
sc.stop()
spark.stop()
## Internals of encryption key handling ##
When writing a Parquet file, a random data encryption key (DEK) is generated for each encrypted column and for the footer\. These keys are used to encrypt the data and the metadata modules in the Parquet file\.
The data encryption key is then encrypted with a key encryption key (KEK), also generated inside Spark/Parquet for each master key\. The key encryption key is encrypted with a master encryption key (MEK) locally\.
Encrypted data encryption keys and key encryption keys are stored in the Parquet file metadata, along with the master key identity\. Each key encryption key has a unique identity (generated locally as a secure random 16\-byte value), also stored in the file metadata\.
When reading a Parquet file, the identifier of the master encryption key (MEK) and the encrypted key encryption key (KEK) with its identifier, and the encrypted data encryption key (DEK) are extracted from the file metadata\.
The key encryption key is decrypted with the master encryption key locally\. Then the data encryption key (DEK) is decrypted locally, using the key encryption key (KEK)\.
## Learn more ##
<!-- <ul> -->
* [Parquet modular encryption](https://github.com/apache/parquet-format/blob/apache-parquet-format-2.7.0/Encryption.md)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
2D08EDD168FBEE078290F386F7EC3EB1998ADF02 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html?context=cdpaas&locale=en | Time reference system | Time reference system
Time reference system (TRS) is a local, regional or global system used to identify time.
A time reference system defines a specific projection for forward and reverse mapping between a timestamp and its numeric representation. A common example that most users are familiar with is UTC time, which maps a timestamp, for example 1 Jan 2019, 12 midnight (GMT), into a 64-bit integer value (1546300800000), which captures the number of milliseconds that have elapsed since 1 Jan 1970, 12 midnight (GMT). Generally speaking, the timestamp value is better suited for human readability, while the numeric representation is better suited for machine processing.
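As a quick illustration of this particular mapping, the following snippet uses only the Python standard library:
from datetime import datetime, timezone
ts = datetime(2019, 1, 1, tzinfo=timezone.utc)
millis = int(ts.timestamp() * 1000)
print(millis)                                                  # 1546300800000
print(datetime.fromtimestamp(millis / 1000, tz=timezone.utc))  # 2019-01-01 00:00:00+00:00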
In the time series library, a time series can be associated with a TRS. A TRS is composed of a:
* Time tick that captures time granularity, for example 1 minute
* Zoned date time that captures a start time, for example 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT). A timestamp is mapped into a numeric representation by computing the number of elapsed time ticks since the start time. A numeric representation is scaled by the granularity and shifted by the start time when it is mapped back to a timestamp.
Note that this forward + reverse projection might lead to time loss. For instance, if the true time granularity of a time series is in seconds, then forward and reverse mapping of the time stamps 09:00:01 and 09:00:02 (to be read as hh:mm:ss) to a granularity of one minute would result in the time stamps 09:00:00 and 09:00:00 respectively. In this example, a time series, whose granularity is in seconds, is being mapped to minutes and thus the reverse mapping loses information. However, if the mapped granularity is higher than the granularity of the input time series (more specifically, if the time series granularity is an integral multiple of the mapped granularity) then the forward + reverse projection is guaranteed to be lossless. For example, mapping a time series, whose granularity is in minutes, to seconds and reverse projecting it to minutes would result in lossless reconstruction of the timestamps.
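As a plain-arithmetic illustration of this point, independent of the time series library, the following sketch maps timestamps forward to a time tick and back again:
from datetime import datetime, timezone
start = datetime(2019, 1, 1, tzinfo=timezone.utc)
def forward(ts, tick_seconds):
    # number of whole time ticks elapsed since the start time
    return int((ts - start).total_seconds() // tick_seconds)
def reverse(n, tick_seconds):
    return datetime.fromtimestamp(start.timestamp() + n * tick_seconds, tz=timezone.utc)
# seconds-granularity data mapped to a one-minute tick: the seconds are lost
t1 = datetime(2019, 1, 1, 9, 0, 1, tzinfo=timezone.utc)
print(reverse(forward(t1, 60), 60))   # 2019-01-01 09:00:00+00:00
# minutes-granularity data mapped to a one-second tick: the round trip is lossless
t2 = datetime(2019, 1, 1, 9, 1, 0, tzinfo=timezone.utc)
print(reverse(forward(t2, 1), 1))     # 2019-01-01 09:01:00+00:00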
Setting TRS
When a time series is created, it is associated with a TRS (or None if no TRS is specified). If the TRS is None, then the numeric values cannot be mapped to timestamps. Note that TRS can only be set on a time series at construction time. The reason is that a time series by design is an immutable object. Immutability comes in handy when the library is used in multi-threaded environments or in distributed computing environments such as Apache Spark. While a TRS can be set only at construction time, it can be changed using the with_trs method as described in the next section. with_trs produces a new time series and thus has no impact on immutability.
Let us consider a simple time series created from an in-memory list:
values = [1.0, 2.0, 4.0]
x = tspy.time_series(values)
x
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
At construction time, the time series can be associated with a TRS. Associating a TRS with a time series allows its numeric timestamps to be interpreted according to the time tick and offset/timezone. The following example shows 1 minute and 1 Jan 2019, 12 midnight (GMT):
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(minutes=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
Here is another example where the numeric timestamps are reinterpreted with a time tick of one hour and offset/timezone as 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT).
tz_edt = datetime.timezone(datetime.timedelta(hours=-4))  # EDT is UTC-4
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=tz_edt)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(hours=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00-04:00 Value: 1.0
TimeStamp: 2019-01-01T00:01-04:00 Value: 2.0
TimeStamp: 2019-01-01T00:02-04:00 Value: 4.0
Note that the timestamps now indicate an offset of -4 hours from GMT (EDT timezone) and capture the time tick of one hour. Also note that setting a TRS does NOT change the numeric timestamps - it only specifies a way of interpreting numeric timestamps.
x_trs.print(human_readable=False)
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
Changing TRS
You can change the TRS associated with a time series using the with_trs function. Note that this function will throw an exception if the input time series is not associated with a TRS (if TRS is None). Using with_trs changes the numeric timestamps.
The following code sample shows a TRS set at construction time without using with_trs:
# 1546300800 is the epoch time in seconds for 1 Jan 2019, 12 midnight GMT
zdt1 = datetime.datetime(1970,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
y = tspy.observations.of(tspy.observation(1546300800, 1.0),tspy.observation(1546300860, 2.0), tspy.observation(1546300920,
4.0)).to_time_series(granularity=datetime.timedelta(seconds=1), start_time=zdt1)
y.print()
y.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# TRS has been set during construction time - no changes to numeric timestamps
TimeStamp: 1546300800 Value: 1.0
TimeStamp: 1546300860 Value: 2.0
TimeStamp: 1546300920 Value: 4.0
The following example shows how to apply with_trs to change granularity to one minute and retain the original time offset (1 Jan 1970, 12 midnight GMT):
y_minutely_1970 = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt1)
y_minutely_1970.print()
y_minutely_1970.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# numeric timestamps have changed to number of elapsed minutes since 1 Jan 1970, 12 midnight GMT
TimeStamp: 25771680 Value: 1.0
TimeStamp: 25771681 Value: 2.0
TimeStamp: 25771682 Value: 4.0
Now apply with_trs to change granularity to one minute and the offset to 1 Jan 2019, 12 midnight GMT:
zdt2 = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
y_minutely = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt2)
y_minutely.print()
y_minutely.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# numeric timestamps are now minutes elapsed since 1 Jan 2019, 12 midnight GMT
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
To better understand how it impacts post processing, let's examine the following. Note that materialize on numeric timestamps operates on the underlying numeric timestamps associated with the time series.
print(y.materialize(0,2))
print(y_minutely_1970.materialize(0,2))
print(y_minutely.materialize(0,2))
This returns:
# numeric timestamps in y are in the range 1546300800, 1546300920 and thus y.materialize(0,2) is empty
[]
# numeric timestamps in y_minutely_1970 are in the range 25771680, 25771682 and thus y_minutely_1970.materialize(0,2) is empty
[]
# numeric timestamps in y_minutely are in the range 0, 2
[(0,1.0),(1,2.0),(2,4.0)]
The method materialize can also be applied to datetime objects. This results in an exception if the underlying time series is not associated with a TRS (if TRS is None). Assuming the underlying time series has a TRS, the datetime objects are mapped to a numeric range using the TRS.
# Jan 1 2019, 12 midnight GMT
dt_beg = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
# Jan 1 2019, 12:02 AM GMT
dt_end = datetime.datetime(2019,1,1,0,2,0,0,tzinfo=datetime.timezone.utc)
print(y.materialize(dt_beg, dt_end))
print(y_minutely_1970.materialize(dt_beg, dt_end))
print(y_minutely.materialize(dt_beg, dt_end))
# materialize on y in UTC millis
[(1546300800,1.0),(1546300860,2.0), (1546300920,4.0)]
# materialize on y_minutely_1970 in UTC minutes
[(25771680,1.0),(25771681,2.0),(25771682,4.0)]
# materialize on y_minutely in minutes offset by 1 Jan 2019, 12 midnight
[(0,1.0),(1,2.0),(2,4.0)]
Duplicate timestamps
Changing the TRS can result in duplicate timestamps. The following example changes the granularity to one hour which results in duplicate timestamps. The time series library handles duplicate timestamps seamlessly and provides convenience combiners to reduce values associated with duplicate timestamps into a single value, for example by calculating an average of the values grouped by duplicate timestamps.
y_hourly = y_minutely.with_trs(granularity=datetime.timedelta(hours=1), start_time=zdt2)
print(y_minutely)
print(y_minutely.materialize(0,2))
print(y_hourly)
print(y_hourly.materialize(0,0))
This returns:
# y_minutely - minutely time series
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# y_minutely has numeric timestamps 0, 1 and 2
[(0,1.0),(1,2.0),(2,4.0)]
# y_hourly - hourly time series has duplicate timestamps
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:00Z Value: 2.0
TimeStamp: 2019-01-01T00:00Z Value: 4.0
# y_hourly has numeric timestamps of all 0
[(0,1.0),(0,2.0),(0,4.0)]
Duplicate timestamps can be optionally combined as follows:
y_hourly_averaged = y_hourly.transform(transformers.combine_duplicate_granularity(lambda x: sum(x)/len(x)))
print(y_hourly_averaged.materialize(0,0))
This returns:
# values corresponding to the duplicate numeric timestamp 0 have been combined using average
# average = (1+2+4)/3 = 2.33
[(0,2.33)]
Learn more
To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/).
Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
| # Time reference system #
Time reference system (TRS) is a local, regional or global system used to identify time\.
A time reference system defines a specific projection for forward and reverse mapping between a timestamp and its numeric representation\. A common example that most users are familiar with is UTC time, which maps a timestamp, for example 1 Jan 2019, 12 midnight (GMT), into a 64\-bit integer value (1546300800000), which captures the number of milliseconds that have elapsed since 1 Jan 1970, 12 midnight (GMT)\. Generally speaking, the timestamp value is better suited for human readability, while the numeric representation is better suited for machine processing\.
In the time series library, a time series can be associated with a TRS\. A TRS is composed of a:
<!-- <ul> -->
* Time tick that captures time granularity, for example 1 minute
* Zoned date time that captures a start time, for example `1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT)`\. A timestamp is mapped into a numeric representation by computing the number of elapsed time ticks since the start time\. A numeric representation is scaled by the granularity and shifted by the start time when it is mapped back to a timestamp\.
<!-- </ul> -->
Note that this forward \+ reverse projection might lead to time loss\. For instance, if the true time granularity of a time series is in seconds, then forward and reverse mapping of the time stamps `09:00:01` and `09:00:02` (to be read as `hh:mm:ss`) to a granularity of one minute would result in the time stamps `09:00:00` and `09:00:00` respectively\. In this example, a time series, whose granularity is in seconds, is being mapped to minutes and thus the reverse mapping loses information\. However, if the mapped granularity is higher than the granularity of the input time series (more specifically, if the time series granularity is an integral multiple of the mapped granularity) then the forward \+ reverse projection is guaranteed to be lossless\. For example, mapping a time series, whose granularity is in minutes, to seconds and reverse projecting it to minutes would result in lossless reconstruction of the timestamps\.
## Setting TRS ##
When a time series is created, it is associated with a TRS (or None if no TRS is specified)\. If the TRS is None, then the numeric values cannot be mapped to timestamps\. Note that TRS can only be set on a time series at construction time\. The reason is that a time series by design is an immutable object\. Immutability comes in handy when the library is used in multi\-threaded environments or in distributed computing environments such as Apache Spark\. While a TRS can be set only at construction time, it can be changed using the `with_trs` method as described in the next section\. `with_trs` produces a new time series and thus has no impact on immutability\.
Let us consider a simple time series created from an in\-memory list:
values = [1.0, 2.0, 4.0]
x = tspy.time_series(values)
x
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
At construction time, the time series can be associated with a TRS\. Associating a TRS with a time series allows its numeric timestamps to be interpreted according to the time tick and offset/timezone\. The following example shows `1 minute and 1 Jan 2019, 12 midnight (GMT)`:
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(minutes=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
Here is another example where the numeric timestamps are reinterpreted with a time tick of one hour and offset/timezone as `1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT)`\.
tz_edt = datetime.timezone(datetime.timedelta(hours=-4))  # EDT is UTC-4
zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=tz_edt)
x_trs = tspy.time_series(values, granularity=datetime.timedelta(hours=1), start_time=zdt)
x_trs
This returns:
TimeStamp: 2019-01-01T00:00-04:00 Value: 1.0
TimeStamp: 2019-01-01T00:01-04:00 Value: 2.0
TimeStamp: 2019-01-01T00:02-04:00 Value: 4.0
Note that the timestamps now indicate an offset of \-4 hours from GMT (EDT timezone) and capture the time tick of one hour\. Also note that setting a TRS does NOT change the numeric timestamps \- it only specifies a way of interpreting numeric timestamps\.
x_trs.print(human_readable=False)
This returns:
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
## Changing TRS ##
You can change the TRS associated with a time series using the `with_trs` function\. Note that this function will throw an exception if the input time series is not associated with a TRS (if TRS is None)\. Using `with_trs` changes the numeric timestamps\.
The following code sample shows a TRS set at construction time without using `with_trs`:
# 1546300800 is the epoch time in seconds for 1 Jan 2019, 12 midnight GMT
zdt1 = datetime.datetime(1970,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
y = tspy.observations.of(tspy.observation(1546300800, 1.0),tspy.observation(1546300860, 2.0), tspy.observation(1546300920,
4.0)).to_time_series(granularity=datetime.timedelta(seconds=1), start_time=zdt1)
y.print()
y.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# TRS has been set during construction time - no changes to numeric timestamps
TimeStamp: 1546300800 Value: 1.0
TimeStamp: 1546300860 Value: 2.0
TimeStamp: 1546300920 Value: 4.0
The following example shows how to apply `with_trs` to change `granularity` to one minute and retain the original time offset (1 Jan 1970, 12 midnight GMT):
y_minutely_1970 = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt1)
y_minutely_1970.print()
y_minutely_1970.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# numeric timestamps have changed to number of elapsed minutes since 1 Jan 1970, 12 midnight GMT
TimeStamp: 25771680 Value: 1.0
TimeStamp: 25771681 Value: 2.0
TimeStamp: 25771682 Value: 4.0
Now apply `with_trs` to change `granularity` to one minute and the offset to 1 Jan 2019, 12 midnight GMT:
zdt2 = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
y_minutely = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt2)
y_minutely.print()
y_minutely.print(human_readable=False)
This returns:
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# numeric timestamps are now minutes elapsed since 1 Jan 2019, 12 midnight GMT
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
To better understand how it impacts post processing, let's examine the following\. Note that `materialize` on numeric timestamps operates on the underlying numeric timestamps associated with the time series\.
print(y.materialize(0,2))
print(y_minutely_1970.materialize(0,2))
print(y_minutely.materialize(0,2))
This returns:
# numeric timestamps in y are in the range 1546300800, 1546300920 and thus y.materialize(0,2) is empty
[]
# numeric timestamps in y_minutely_1970 are in the range 25771680, 25771682 and thus y_minutely_1970.materialize(0,2) is empty
[]
# numeric timestamps in y_minutely are in the range 0, 2
[(0,1.0),(1,2.0),(2,4.0)]
The method `materialize` can also be applied to datetime objects\. This results in an exception if the underlying time series is not associated with a TRS (if TRS is None)\. Assuming the underlying time series has a TRS, the datetime objects are mapped to a numeric range using the TRS\.
# Jan 1 2019, 12 midnight GMT
dt_beg = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc)
# Jan 1 2019, 12:02 AM GMT
dt_end = datetime.datetime(2019,1,1,0,2,0,0,tzinfo=datetime.timezone.utc)
print(y.materialize(dt_beg, dt_end))
print(y_minutely_1970.materialize(dt_beg, dt_end))
print(y_minutely.materialize(dt_beg, dt_end))
# materialize on y in UTC millis
[(1546300800,1.0),(1546300860,2.0), (1546300920,4.0)]
# materialize on y_minutely_1970 in UTC minutes
[(25771680,1.0),(25771681,2.0),(25771682,4.0)]
# materialize on y_minutely in minutes offset by 1 Jan 2019, 12 midnight
[(0,1.0),(1,2.0),(2,4.0)]
## Duplicate timestamps ##
Changing the TRS can result in duplicate timestamps\. The following example changes the granularity to one hour which results in duplicate timestamps\. The time series library handles duplicate timestamps seamlessly and provides convenience combiners to reduce values associated with duplicate timestamps into a single value, for example by calculating an average of the values grouped by duplicate timestamps\.
y_hourly = y_minutely.with_trs(granularity=datetime.timedelta(hours=1), start_time=zdt2)
print(y_minutely)
print(y_minutely.materialize(0,2))
print(y_hourly)
print(y_hourly.materialize(0,0))
This returns:
# y_minutely - minutely time series
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:01Z Value: 2.0
TimeStamp: 2019-01-01T00:02Z Value: 4.0
# y_minutely has numeric timestamps 0, 1 and 2
[(0,1.0),(1,2.0),(2,4.0)]
# y_hourly - hourly time series has duplicate timestamps
TimeStamp: 2019-01-01T00:00Z Value: 1.0
TimeStamp: 2019-01-01T00:00Z Value: 2.0
TimeStamp: 2019-01-01T00:00Z Value: 4.0
# y_hourly has numeric timestamps of all 0
[(0,1.0),(0,2.0),(0,4.0)]
Duplicate timestamps can be optionally combined as follows:
y_hourly_averaged = y_hourly.transform(transformers.combine_duplicate_granularity(lambda x: sum(x)/len(x)))
print(y_hourly_averaged.materialize(0,0))
This returns:
# values corresponding to the duplicate numeric timestamp 0 have been combined using average
# average = (1+2+4)/3 = 2.33
[(0,2.33)]
## Learn more ##
To use the `tspy` Python SDK, see the [`tspy` Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)\.
**Parent topic:**[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </article "role="article" "> -->
|
0108F00736882AC35E3C56CD3CE0D91BCB5798A8 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html?context=cdpaas&locale=en | Time series functions | Time series functions
Time series functions are aggregate functions that operate on sequences of data values measured at points in time.
The following sections describe some of the time series functions available in different time series packages.
Transforms
Transforms are functions that are applied on a time series resulting in another time series. The time series library supports various types of transforms, including provided transforms (by using from tspy.functions import transformers) as well as user-defined transforms (a minimal user-defined transform sketch follows the provided samples below).
The following sample shows some provided transforms:
# Interpolation
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> periodicity = 2
>>> interp = interpolators.nearest(0.0)
>>> interp_ts = ts.resample(periodicity, interp)
>>> interp_ts.print()
TimeStamp: 0 Value: 1.0
TimeStamp: 2 Value: 3.0
TimeStamp: 4 Value: 5.0
# Fillna
>>> shift_ts = ts.shift(2)
print("shifted ts to add nulls")
print(shift_ts)
print("nfilled ts to make nulls 0s")
null_filled_ts = shift_ts.fillna(interpolators.fill(0.0))
print(null_filled_ts)
shifted ts to add nulls
TimeStamp: 0 Value: null
TimeStamp: 1 Value: null
TimeStamp: 2 Value: 1.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 3.0
TimeStamp: 5 Value: 4.0
filled ts to make nulls 0s
TimeStamp: 0 Value: 0.0
TimeStamp: 1 Value: 0.0
TimeStamp: 2 Value: 1.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 3.0
TimeStamp: 5 Value: 4.0
# Additive White Gaussian Noise (AWGN)
>>> noise_ts = ts.transform(transformers.awgn(mean=0.0,sd=.03))
>>> print(noise_ts)
TimeStamp: 0 Value: 0.9962378841388397
TimeStamp: 1 Value: 1.9681980879378596
TimeStamp: 2 Value: 3.0289374962174405
TimeStamp: 3 Value: 3.990728648807705
TimeStamp: 4 Value: 4.935338359740761
TimeStamp: 5 Value: 6.03395072999318
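A user-defined transform can be as simple as a Python function applied point-wise. The following minimal sketch assumes the map operation on time series described in the tspy documentation, applied to the ts defined in the samples above:
# User-defined transform applied point-wise to each value
>>> scaled_ts = ts.map(lambda value: value * 10.0)   # values become 10.0, 20.0, ..., 60.0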
Segmentation
Segmentation or windowing is the process of splitting a time series into multiple segments. The time series library supports various forms of segmentation and allows creating user-defined segments as well.
* Window based segmentation
This type of segmentation of a time series is based on user specified segment sizes. The segments can be record based or time based. There are options that allow for creating tumbling as well as sliding window based segments.
>>> import tspy
>>> ts_orig = tspy.builder()
.add(tspy.observation(1,1.0))
.add(tspy.observation(2,2.0))
.add(tspy.observation(6,6.0))
.result().to_time_series()
>>> ts_orig
timestamp: 1 Value: 1.0
timestamp: 2 Value: 2.0
timestamp: 6 Value: 6.0
>>> ts = ts_orig.segment_by_time(3,1)
>>> ts
timestamp: 1 Value: original bounds: (1,3) actual bounds: (1,2) observations: [(1,1.0),(2,2.0)]
timestamp: 2 Value: original bounds: (2,4) actual bounds: (2,2) observations: [(2,2.0)]
timestamp: 3 Value: this segment is empty
timestamp: 4 Value: original bounds: (4,6) actual bounds: (6,6) observations: [(6,6.0)]
* Anchor based segmentation
Anchor based segmentation is a very important type of segmentation that creates a segment by anchoring on a specific lambda, which can be a simple value. An example is looking at events that preceded a 500 error or examining values after observing an anomaly. Variants of anchor based segmentation include providing a range with multiple markers.
>>> import tspy
>>> ts_orig = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0])
>>> ts_orig
timestamp: 0 Value: 1.0
timestamp: 1 Value: 2.0
timestamp: 2 Value: 3.0
timestamp: 3 Value: 4.0
timestamp: 4 Value: 5.0
>>> ts = ts_orig.segment_by_anchor(lambda x: x % 2 == 0, 1, 2)
>>> ts
timestamp: 1 Value: original bounds: (0,3) actual bounds: (0,3) observations: [(0,1.0),(1,2.0),(2,3.0),(3,4.0)]
timestamp: 3 Value: original bounds: (2,5) actual bounds: (2,4) observations: [(2,3.0),(3,4.0),(4,5.0)]
* Segmenters
There are several specialized segmenters provided out of the box by importing the segmenters package (using from tspy.functions import segmenters). An example segmenter is one that uses regression to segment a time series:
>>> ts = tspy.time_series([1.0,2.0,3.0,4.0,5.0,2.0,1.0,-1.0,50.0,53.0,56.0])
>>> max_error = .5
>>> skip = 1
>>> reg_sts = ts.to_segments(segmenters.regression(max_error,skip,use_relative=True))
>>> reg_sts
timestamp: 0 Value: range: (0, 4) outliers: {}
timestamp: 5 Value: range: (5, 7) outliers: {}
timestamp: 8 Value: range: (8, 10) outliers: {}
Reducers
A reducer is a function that is applied to the values across a set of time series to produce a single value. The time series reducer functions are similar to the reducer concept used by Hadoop/Spark. This single value can be a collection, but more generally is a single object. An example of a reducer function is averaging the values in a time series.
Several reducer functions are supported, including:
* Distance reducers
Distance reducers are a class of reducers that compute the distance between two time series. The library supports numeric as well as categorical distance functions on sequences. These include time warping distance measurements such as Itakura Parallelogram, Sakoe-Chiba Band, DTW non-constrained and DTW non-time warped constraints. Distribution distances such as Hungarian distance and Earth-Movers distance are also available.
For categorical time series distance measurements, you can use Damerau Levenshtein and Jaro-Winkler distance measures.
>>> from tspy.functions import *
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> dtw_distance = ts.reduce(ts2,reducers.dtw(lambda obs1, obs2: abs(obs1.value - obs2.value)))
>>> print(dtw_distance)
1.8557981638880405
* Math reducers
Several convenient math reducers for numeric time series are provided. These include basic ones such as average, sum, standard deviation, and moments. Entropy, kurtosis, FFT and variants of it, various correlations, and histogram are also included. A convenient basic summarization reducer is the describe function that provides basic information about the time series.
>>> from tspy.functions import *
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> corr = ts.reduce(ts2, reducers.correlation())
>>> print(corr)
0.9938941942380525
>>> adf = ts.reduce(reducers.adf())
>>> print(adf)
pValue: -3.45
satisfies test: false
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> granger = ts.reduce(ts2, reducers.granger(1))
>>> print(granger) # f_stat, p_value, R2
-1.7123613937876463,-3.874412217575385,1.0
* Another basic reducer that is very useful for getting a first order understanding of the time series is the describe reducer. The following illustrates this reducer:
>>> desc = ts.describe()
>>> print(desc)
min inter-arrival-time: 1
max inter-arrival-time: 1
mean inter-arrival-time: 1.0
top: null
unique: 6
frequency: 1
first: TimeStamp: 0 Value: 1.0
last: TimeStamp: 5 Value: 6.0
count: 6
mean:3.5
std:1.707825127659933
min:1.0
max:6.0
25%:1.75
50%:3.5
75%:5.25
Temporal joins
The library includes functions for temporal joins, or joining time series based on their timestamps. The join functions are similar to those in a database, including left, right, outer, inner, left outer, right outer joins, and so on. The following sample code shows some of these join functions:
# Create a collection of observations (materialized TimeSeries)
observations_left = tspy.observations(tspy.observation(1, 0.0), tspy.observation(3, 1.0), tspy.observation(8, 3.0), tspy.observation(9, 2.5))
observations_right = tspy.observations(tspy.observation(2, 2.0), tspy.observation(3, 1.5), tspy.observation(7, 4.0), tspy.observation(9, 5.5), tspy.observation(10, 4.5))
# Build TimeSeries from Observations
ts_left = observations_left.to_time_series()
ts_right = observations_right.to_time_series()
# Perform full join
ts_full = ts_left.full_join(ts_right)
print(ts_full)
TimeStamp: 1 Value: [0.0, null]
TimeStamp: 2 Value: [null, 2.0]
TimeStamp: 3 Value: [1.0, 1.5]
TimeStamp: 7 Value: [null, 4.0]
TimeStamp: 8 Value: [3.0, null]
TimeStamp: 9 Value: [2.5, 5.5]
TimeStamp: 10 Value: [null, 4.5]
# Perform left align with interpolation
ts_left_aligned, ts_right_aligned = ts_left.left_align(ts_right, interpolators.nearest(0.0))
print("left ts result")
print(ts_left_aligned)
print("right ts result")
print(ts_right_aligned)
left ts result
TimeStamp: 1 Value: 0.0
TimeStamp: 3 Value: 1.0
TimeStamp: 8 Value: 3.0
TimeStamp: 9 Value: 2.5
right ts result
TimeStamp: 1 Value: 0.0
TimeStamp: 3 Value: 1.5
TimeStamp: 8 Value: 4.0
TimeStamp: 9 Value: 5.5
Forecasting
A key functionality provided by the time series library is forecasting. The library includes functions for simple as well as complex forecasting models, including ARIMA, Exponential, Holt-Winters, and BATS. The following example shows how to create a Holt-Winters model:
import random
model = tspy.forecasters.hws(samples_per_season=samples_per_season, initial_training_seasons=initial_training_seasons)
for i in range(100):
timestamp = i
value = random.randint(1,10)* 1.0
model.update_model(timestamp, value)
print(model)
Forecasting Model
Algorithm: HWSAdditive=5 (aLevel=0.001, bSlope=0.001, gSeas=0.001) level=6.087789839896166, slope=0.018901997884893912, seasonal(amp,per,avg)=(1.411203455586738,5, 0,-0.0037471500727535465)
# Is model init-ed
if model.is_initialized():
print(model.forecast_at(120))
6.334135728495107
ts = tspy.time_series([float(i) for i in range(10)])
print(ts)
TimeStamp: 0 Value: 0.0
TimeStamp: 1 Value: 1.0
TimeStamp: 2 Value: 2.0
TimeStamp: 3 Value: 3.0
TimeStamp: 4 Value: 4.0
TimeStamp: 5 Value: 5.0
TimeStamp: 6 Value: 6.0
TimeStamp: 7 Value: 7.0
TimeStamp: 8 Value: 8.0
TimeStamp: 9 Value: 9.0
num_predictions = 5
model = tspy.forecasters.auto(8)
confidence = .99
predictions = ts.forecast(num_predictions, model, confidence=confidence)
print(predictions.to_time_series())
TimeStamp: 10 Value: {value=10.0, lower_bound=10.0, upper_bound=10.0, error=0.0}
TimeStamp: 11 Value: {value=10.997862810553725, lower_bound=9.934621260488143, upper_bound=12.061104360619307, error=0.41277640121597475}
TimeStamp: 12 Value: {value=11.996821082897318, lower_bound=10.704895525154571, upper_bound=13.288746640640065, error=0.5015571318964149}
TimeStamp: 13 Value: {value=12.995779355240911, lower_bound=11.50957896664928, upper_bound=14.481979743832543, error=0.5769793776877866}
TimeStamp: 14 Value: {value=13.994737627584504, lower_bound=12.33653268707341, upper_bound=15.652942568095598, error=0.6437557559526337}
print(predictions.to_time_series().to_df())
timestamp value lower_bound upper_bound error
0 10 10.000000 10.000000 10.000000 0.000000
1 11 10.997863 9.934621 12.061104 0.412776
2 12 11.996821 10.704896 13.288747 0.501557
3 13 12.995779 11.509579 14.481980 0.576979
4 14 13.994738 12.336533 15.652943 0.643756
Time series SQL
The time series library is tightly integrated with Apache Spark. By using new data types in Spark Catalyst, you are able to perform time series SQL operations that scale out horizontally using Apache Spark. This enables you to easily use time series extensions in IBM Analytics Engine or in solutions that include IBM Analytics Engine functionality like the Watson Studio Spark environments.
SQL extensions cover most aspects of the time series functions, including segmentation, transformations, reducers, forecasting, and I/O. See [Analyzing time series data](https://cloud.ibm.com/docs/sql-query?topic=sql-query-ts_intro).
Learn more
To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/).
Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
| # Time series functions #
Time series functions are aggregate functions that operate on sequences of data values measured at points in time\.
The following sections describe some of the time series functions available in different time series packages\.
## Transforms ##
Transforms are functions that are applied on a time series resulting in another time series\. The time series library supports various types of transforms, including provided transforms (by using `from tspy.functions import transformers`) as well as user defined transforms\.
The following sample shows some provided transforms:
#Interpolation
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> periodicity = 2
>>> interp = interpolators.nearest(0.0)
>>> interp_ts = ts.resample(periodicity, interp)
>>> interp_ts.print()
TimeStamp: 0 Value: 1.0
TimeStamp: 2 Value: 3.0
TimeStamp: 4 Value: 5.0
#Fillna
>>> shift_ts = ts.shift(2)
print("shifted ts to add nulls")
print(shift_ts)
print("\nfilled ts to make nulls 0s")
null_filled_ts = shift_ts.fillna(interpolators.fill(0.0))
print(null_filled_ts)
shifted ts to add nulls
TimeStamp: 0 Value: null
TimeStamp: 1 Value: null
TimeStamp: 2 Value: 1.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 3.0
TimeStamp: 5 Value: 4.0
filled ts to make nulls 0s
TimeStamp: 0 Value: 0.0
TimeStamp: 1 Value: 0.0
TimeStamp: 2 Value: 1.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 3.0
TimeStamp: 5 Value: 4.0
# Additive White Gaussian Noise (AWGN)
>>> noise_ts = ts.transform(transformers.awgn(mean=0.0,sd=.03))
>>> print(noise_ts)
TimeStamp: 0 Value: 0.9962378841388397
TimeStamp: 1 Value: 1.9681980879378596
TimeStamp: 2 Value: 3.0289374962174405
TimeStamp: 3 Value: 3.990728648807705
TimeStamp: 4 Value: 4.935338359740761
TimeStamp: 5 Value: 6.03395072999318
## Segmentation ##
Segmentation or windowing is the process of splitting a time series into multiple segments\. The time series library supports various forms of segmentation and allows creating user\-defined segments as well\.
<!-- <ul> -->
* Window based segmentation
This type of segmentation of a time series is based on user specified segment sizes. The segments can be record based or time based. There are options that allow for creating tumbling as well as sliding window based segments.
>>> import tspy
>>> ts_orig = tspy.builder()
.add(tspy.observation(1,1.0))
.add(tspy.observation(2,2.0))
.add(tspy.observation(6,6.0))
.result().to_time_series()
>>> ts_orig
timestamp: 1 Value: 1.0
timestamp: 2 Value: 2.0
timestamp: 6 Value: 6.0
>>> ts = ts_orig.segment_by_time(3,1)
>>> ts
timestamp: 1 Value: original bounds: (1,3) actual bounds: (1,2) observations: [(1,1.0),(2,2.0)]
timestamp: 2 Value: original bounds: (2,4) actual bounds: (2,2) observations: [(2,2.0)]
timestamp: 3 Value: this segment is empty
timestamp: 4 Value: original bounds: (4,6) actual bounds: (6,6) observations: [(6,6.0)]
* Anchor based segmentation
Anchor-based segmentation creates a segment by anchoring on a specific condition, expressed as a lambda, which can be a simple value match. Examples include looking at the events that preceded a 500 error or examining the values observed after an anomaly. Variants of anchor-based segmentation include providing a range with multiple markers.
>>> import tspy
>>> ts_orig = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0])
>>> ts_orig
timestamp: 0 Value: 1.0
timestamp: 1 Value: 2.0
timestamp: 2 Value: 3.0
timestamp: 3 Value: 4.0
timestamp: 4 Value: 5.0
>>> ts = ts_orig.segment_by_anchor(lambda x: x % 2 == 0, 1, 2)
>>> ts
timestamp: 1 Value: original bounds: (0,3) actual bounds: (0,3) observations: [(0,1.0),(1,2.0),(2,3.0),(3,4.0)]
timestamp: 3 Value: original bounds: (2,5) actual bounds: (2,4) observations: [(2,3.0),(3,4.0),(4,5.0)]
* Segmenters
There are several specialized segmenters provided out of the box by importing the `segmenters` package (using `from tspy.functions import segmenters`). An example segmenter is one that uses regression to segment a time series:
>>> ts = tspy.time_series([1.0,2.0,3.0,4.0,5.0,2.0,1.0,-1.0,50.0,53.0,56.0])
>>> max_error = .5
>>> skip = 1
>>> reg_sts = ts.to_segments(segmenters.regression(max_error,skip,use_relative=True))
>>> reg_sts
timestamp: 0 Value: range: (0, 4) outliers: {}
timestamp: 5 Value: range: (5, 7) outliers: {}
timestamp: 8 Value: range: (8, 10) outliers: {}
<!-- </ul> -->
## Reducers ##
A reducer is a function that is applied to the values across a set of time series to produce a single value\. The time series `reducer` functions are similar to the reducer concept used by Hadoop/Spark\. This single value can be a collection, but more generally is a single object\. An example of a reducer function is averaging the values in a time series\.
Several `reducer` functions are supported, including:
<!-- <ul> -->
* Distance reducers
Distance reducers are a class of reducers that compute the distance between two time series. The library supports numeric as well as categorical distance functions on sequences. These include time-warping distance measurements such as the Itakura Parallelogram, the Sakoe-Chiba Band, non-constrained DTW, and non-time-warped DTW constraints. Distribution distances such as the Hungarian distance and Earth Mover's distance are also available.
For categorical time series distance measurements, you can use Damerau Levenshtein and Jaro-Winkler distance measures.
>>> from tspy.functions import *
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> dtw_distance = ts.reduce(ts2,reducers.dtw(lambda obs1, obs2: abs(obs1.value - obs2.value)))
>>> print(dtw_distance)
1.8557981638880405
* Math reducers
Several convenient math reducers for numeric time series are provided. These include basic ones such as average, sum, standard deviation, and moments. Entropy, kurtosis, FFT and variants of it, various correlations, and histogram are also included. A convenient basic summarization reducer is the `describe` function that provides basic information about the time series.
>>> from tspy.functions import *
>>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> corr = ts.reduce(ts2, reducers.correlation())
>>> print(corr)
0.9938941942380525
>>> adf = ts.reduce(reducers.adf())
>>> print(adf)
pValue: -3.45
satisfies test: false
>>> ts2 = ts.transform(transformers.awgn(sd=.3))
>>> granger = ts.reduce(ts2, reducers.granger(1))
>>> print(granger) #f_stat, p_value, R2
-1.7123613937876463,-3.874412217575385,1.0
* Another basic reducer that is very useful for getting a first order understanding of the time series is the describe reducer\. The following illustrates this reducer:
>>> desc = ts.describe()
>>> print(desc)
min inter-arrival-time: 1
max inter-arrival-time: 1
mean inter-arrival-time: 1.0
top: null
unique: 6
frequency: 1
first: TimeStamp: 0 Value: 1.0
last: TimeStamp: 5 Value: 6.0
count: 6
mean:3.5
std:1.707825127659933
min:1.0
max:6.0
25%:1.75
50%:3.5
75%:5.25
<!-- </ul> -->
## Temporal joins ##
The library includes functions for temporal joins or joining time series based on their timestamps\. The join functions are similar to those in a database, including left, right, outer, inner, left outer, right outer joins, and so on\. The following sample code shows some of these join functions:
# Create a collection of observations (materialized TimeSeries)
observations_left = tspy.observations(tspy.observation(1, 0.0), tspy.observation(3, 1.0), tspy.observation(8, 3.0), tspy.observation(9, 2.5))
observations_right = tspy.observations(tspy.observation(2, 2.0), tspy.observation(3, 1.5), tspy.observation(7, 4.0), tspy.observation(9, 5.5), tspy.observation(10, 4.5))
# Build TimeSeries from Observations
ts_left = observations_left.to_time_series()
ts_right = observations_right.to_time_series()
# Perform full join
ts_full = ts_left.full_join(ts_right)
print(ts_full)
TimeStamp: 1 Value: [0.0, null]
TimeStamp: 2 Value: [null, 2.0]
TimeStamp: 3 Value: [1.0, 1.5]
TimeStamp: 7 Value: [null, 4.0]
TimeStamp: 8 Value: [3.0, null]
TimeStamp: 9 Value: [2.5, 5.5]
TimeStamp: 10 Value: [null, 4.5]
# Perform left align with interpolation
ts_left_aligned, ts_right_aligned = ts_left.left_align(ts_right, interpolators.nearest(0.0))
print("left ts result")
print(ts_left_aligned)
print("right ts result")
print(ts_right_aligned)
left ts result
TimeStamp: 1 Value: 0.0
TimeStamp: 3 Value: 1.0
TimeStamp: 8 Value: 3.0
TimeStamp: 9 Value: 2.5
right ts result
TimeStamp: 1 Value: 0.0
TimeStamp: 3 Value: 1.5
TimeStamp: 8 Value: 4.0
TimeStamp: 9 Value: 5.5
## Forecasting ##
A key capability of the time series library is forecasting\. The library includes functions for simple as well as complex forecasting models, including ARIMA, Exponential, Holt\-Winters, and BATS\. The following example shows how to create a Holt\-Winters forecasting model:
import random
model = tspy.forecasters.hws(samples_per_season=samples_per_season, initial_training_seasons=initial_training_seasons)
for i in range(100):
timestamp = i
value = random.randint(1,10)* 1.0
model.update_model(timestamp, value)
print(model)
Forecasting Model
Algorithm: HWSAdditive=5 (aLevel=0.001, bSlope=0.001, gSeas=0.001) level=6.087789839896166, slope=0.018901997884893912, seasonal(amp,per,avg)=(1.411203455586738,5, 0,-0.0037471500727535465)
#Is model init-ed
if model.is_initialized():
print(model.forecast_at(120))
6.334135728495107
ts = tspy.time_series([float(i) for i in range(10)])
print(ts)
TimeStamp: 0 Value: 0.0
TimeStamp: 1 Value: 1.0
TimeStamp: 2 Value: 2.0
TimeStamp: 3 Value: 3.0
TimeStamp: 4 Value: 4.0
TimeStamp: 5 Value: 5.0
TimeStamp: 6 Value: 6.0
TimeStamp: 7 Value: 7.0
TimeStamp: 8 Value: 8.0
TimeStamp: 9 Value: 9.0
num_predictions = 5
model = tspy.forecasters.auto(8)
confidence = .99
predictions = ts.forecast(num_predictions, model, confidence=confidence)
print(predictions.to_time_series())
TimeStamp: 10 Value: {value=10.0, lower_bound=10.0, upper_bound=10.0, error=0.0}
TimeStamp: 11 Value: {value=10.997862810553725, lower_bound=9.934621260488143, upper_bound=12.061104360619307, error=0.41277640121597475}
TimeStamp: 12 Value: {value=11.996821082897318, lower_bound=10.704895525154571, upper_bound=13.288746640640065, error=0.5015571318964149}
TimeStamp: 13 Value: {value=12.995779355240911, lower_bound=11.50957896664928, upper_bound=14.481979743832543, error=0.5769793776877866}
TimeStamp: 14 Value: {value=13.994737627584504, lower_bound=12.33653268707341, upper_bound=15.652942568095598, error=0.6437557559526337}
print(predictions.to_time_series().to_df())
timestamp value lower_bound upper_bound error
0 10 10.000000 10.000000 10.000000 0.000000
1 11 10.997863 9.934621 12.061104 0.412776
2 12 11.996821 10.704896 13.288747 0.501557
3 13 12.995779 11.509579 14.481980 0.576979
4 14 13.994738 12.336533 15.652943 0.643756
## Time series SQL ##
The time series library is tightly integrated with Apache Spark\. By using new data types in Spark Catalyst, you are able to perform time series SQL operations that scale out horizontally using Apache Spark\. This enables you to easily use time series extensions in IBM Analytics Engine or in solutions that include IBM Analytics Engine functionality like the Watson Studio Spark environments\.
SQL extensions cover most aspects of the time series functions, including segmentation, transformations, reducers, forecasting, and I/O\. See [Analyzing time series data](https://cloud.ibm.com/docs/sql-query?topic=sql-query-ts_intro)\.
## Learn more ##
To use the `tspy` Python SDK, see the [`tspy` Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)\.
**Parent topic:**[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </article "role="article" "> -->
|
A6587CBE69B6227CE1D087CC141CCF13669F2060 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html?context=cdpaas&locale=en | Time series key functionality | Time series key functionality
The time series library provides various functions on univariate, multivariate, multi-key time series as well as numeric and categorical types.
The functionality provided by the library can be broadly categorized into:
* Time series I/O, for creating and saving time series data
* Time series functions, transforms, windowing or segmentation, and reducers
* Time series SQL and SQL extensions to Spark to enable executing scalable time series functions
Some of the key functionality is shown in the following sections using examples.
Time series I/O
The primary input and output (I/O) functionality for a time series is through a pandas DataFrame or a Python list. The following code sample shows constructing a time series from a DataFrame:
>>> import numpy as np
>>> import pandas as pd
>>> data = np.array([['', 'key', 'timestamp', "value"], ['', "a", 1, 27], ['', "b", 3, 4], ['', "a", 5, 17], ['', "a", 3, 7], ['', "b", 2, 45]])
>>> df = pd.DataFrame(data=data[1:, 1:], index=data[1:, 0], columns=data[0, 1:]).astype(dtype={'key': 'object', 'timestamp': 'int64', 'value': 'float64'})
>>> df
key timestamp value
a 1 27.0
b 3 4.0
a 5 17.0
a 3 7.0
b 2 45.0
Create a time series from a DataFrame, providing a timestamp and a value column:
>>> ts = tspy.time_series(df, ts_column="timestamp", value_column="value")
>>> ts
TimeStamp: 1 Value: 27.0
TimeStamp: 2 Value: 45.0
TimeStamp: 3 Value: 4.0
TimeStamp: 3 Value: 7.0
TimeStamp: 5 Value: 17.0
To revert from a time series back to a pandas DataFrame, use the to_df function:
>>> import tspy
>>> ts_orig = tspy.time_series([1.0, 2.0, 3.0])
>>> ts_orig
TimeStamp: 0 Value: 1
TimeStamp: 1 Value: 2
TimeStamp: 2 Value: 3
>>> df = ts_orig.to_df()
>>> df
timestamp value
0 0 1
1 1 2
2 2 3
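You can also build a time series from individual observations by using the tspy.observation and tspy.observations constructors that are shown in Time series functions. The following is a minimal sketch; the values and variable names are illustrative:
>>> import tspy
>>> observations = tspy.observations(tspy.observation(1, 27.0), tspy.observation(2, 45.0), tspy.observation(3, 4.0))
>>> ts_obs = observations.to_time_series()
>>> ts_obs
TimeStamp: 1 Value: 27.0
TimeStamp: 2 Value: 45.0
TimeStamp: 3 Value: 4.0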
Data model
Unlike some data types, such as spatial data, which are governed by standards like those of the Open Geospatial Consortium (OGC), time series data has no standard data model or data types. The challenge with time series data is the wide variety of functions that need to be supported, similar to the situation with Spark Resilient Distributed Datasets (RDDs).
The data model allows for a wide variety of operations, ranging across different forms of segmentation or windowing of time series, transformations or conversions of one time series to another, reducers that compute a static value from a time series, joins that combine multiple time series, and collectors of time series from different time zones. The time series library enables the plug-and-play of new functions while keeping the core data structure unchanged. The library also supports numeric and categorical typed time series.
With time zones and various human readable time formats, a key aspect of the data model is support for Time Reference System (TRS). Every time series is associated with a TRS (system default), which can be remapped to any specific choice of the user at any time, enabling easy transformation of a specific time series or a segment of a time series. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html).
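For example, you can attach a human-interpretable notion of time when you construct a time series by supplying a granularity and a start time, as shown in Using the time series library. This is a minimal sketch; the granularity, start time, and values are illustrative:
import datetime
import tspy

# Each index denotes one hour after the start time (illustrative values)
granularity = datetime.timedelta(hours=1)
start_time = datetime.datetime(2020, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc)
hourly_ts = tspy.time_series([5.0, 2.0, 4.0], granularity=granularity, start_time=start_time)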
Further, to handle large-scale time series, the library offers a lazy evaluation construct by providing a mechanism for identifying the maximal narrow temporal dependency. This construct is very similar to a Spark computation graph, which also loads data into memory on an as-needed basis and realizes the computations only when needed.
Time series data types
You can use multiple data types as an element of a time series, spanning numeric, categorical, array, and dictionary data structures.
The following data types are supported in a time series:
Data type Description
numeric Time series with univariate observations of numeric type, including double and integer. For example: [(1, 7.2), (3, 4.5), (5, 4.5), (5, 4.6), (5, 7.1), (7, 3.9), (9, 1.1)]
numeric array Time series with multivariate observations of numeric type, including double array and integer array. For example: [(1, [7.2, 8.74]), (3, [4.5, 9.44]), (5, [4.5, 10.12]), (5, [4.6, 12.91]), (5, [7.1, 9.90]), (7, [3.9, 3.76])]
string Time series with univariate observations of type string, for example: [(1, "a"), (3, "b"), (5, "c"), (5, "d"), (5, "e"), (7, "f"), (9, "g")]
string array Time series with multivariate observations of type string array, for example: [(1, ["a", "xq"]), (3, ["b", "zr"]), (5, ["c", "ms"]), (5, ["d", "rt"]), (5, ["e", "wu"]), (7, ["f", "vv"]), (9, ["g", "zw"])]
segment Time series of segments. The output of the segmentBy function can be any type, including numeric, string, numeric array, and string array. For example: [(1, [(1, 7.2), (3, 4.5)]), (5, [(5, 4.5), (5, 4.6), (5, 7.1)]), (7, [(7, 3.9), (9, 1.1)])]
dictionary Time series of dictionaries. A dictionary can have arbitrary types inside it
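For example, a categorical (string) time series can be constructed from a Python list in the same way as a numeric one. This sketch assumes that tspy.time_series accepts a list of strings; the values are illustrative:
import tspy

# A univariate categorical time series (assumes that list-of-strings input is accepted)
categorical_ts = tspy.time_series(["a", "b", "c", "c", "a"])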
Time series functions
You can use different functions in the provided time series packages to analyze time series data to extract meaningful information with which to create models that can be used to predict new values based on previously observed values. See [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html).
Learn more
To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/).
Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
| # Time series key functionality #
The time series library provides various functions on univariate, multivariate, multi\-key time series as well as numeric and categorical types\.
The functionality provided by the library can be broadly categorized into:
<!-- <ul> -->
* Time series I/O, for creating and saving time series data
* Time series functions, transforms, windowing or segmentation, and reducers
* Time series SQL and SQL extensions to Spark to enable executing scalable time series functions
<!-- </ul> -->
Some of the key functionality is shown in the following sections using examples\.
## Time series I/O ##
The primary input and output (I/O) functionality for a time series is through a pandas DataFrame or a Python list\. The following code sample shows constructing a time series from a DataFrame:
>>> import numpy as np
>>> import pandas as pd
>>> data = np.array([['', 'key', 'timestamp', "value"], ['', "a", 1, 27], ['', "b", 3, 4], ['', "a", 5, 17], ['', "a", 3, 7], ['', "b", 2, 45]])
>>> df = pd.DataFrame(data=data[1:, 1:], index=data[1:, 0], columns=data[0, 1:]).astype(dtype={'key': 'object', 'timestamp': 'int64', 'value': 'float64'})
>>> df
key timestamp value
a 1 27.0
b 3 4.0
a 5 17.0
a 3 7.0
b 2 45.0
#Create a timeseries from a dataframe, providing a timestamp and a value column
>>> ts = tspy.time_series(df, ts_column="timestamp", value_column="value")
>>> ts
TimeStamp: 1 Value: 27.0
TimeStamp: 2 Value: 45.0
TimeStamp: 3 Value: 4.0
TimeStamp: 3 Value: 7.0
TimeStamp: 5 Value: 17.0
To revert from a time series back to a pandas DataFrame, use the `to_df` function:
>>> import tspy
>>> ts_orig = tspy.time_series([1.0, 2.0, 3.0])
>>> ts_orig
TimeStamp: 0 Value: 1
TimeStamp: 1 Value: 2
TimeStamp: 2 Value: 3
>>> df = ts_orig.to_df()
>>> df
timestamp value
0 0 1
1 1 2
2 2 3
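You can also build a time series from individual observations by using the `tspy.observation` and `tspy.observations` constructors that are shown in [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html)\. The following is a minimal sketch; the values and variable names are illustrative:
>>> import tspy
>>> observations = tspy.observations(tspy.observation(1, 27.0), tspy.observation(2, 45.0), tspy.observation(3, 4.0))
>>> ts_obs = observations.to_time_series()
>>> ts_obs
TimeStamp: 1 Value: 27.0
TimeStamp: 2 Value: 45.0
TimeStamp: 3 Value: 4.0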
## Data model ##
Unlike some data types, such as spatial data, which are governed by standards like those of the Open Geospatial Consortium (OGC), time series data has no standard data model or data types\. The challenge with time series data is the wide variety of functions that need to be supported, similar to the situation with Spark Resilient Distributed Datasets (RDDs)\.
The data model allows for a wide variety of operations, ranging across different forms of segmentation or windowing of time series, transformations or conversions of one time series to another, reducers that compute a static value from a time series, joins that combine multiple time series, and collectors of time series from different time zones\. The time series library enables the plug\-and\-play of new functions while keeping the core data structure unchanged\. The library also supports numeric and categorical typed time series\.
With time zones and various human readable time formats, a key aspect of the data model is support for Time Reference System (TRS)\. Every time series is associated with a TRS (system default), which can be remapped to any specific choice of the user at any time, enabling easy transformation of a specific time series or a segment of a time series\. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html)\.
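For example, you can attach a human\-interpretable notion of time when you construct a time series by supplying a granularity and a start time, as shown in [Using the time series library](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html)\. This is a minimal sketch; the granularity, start time, and values are illustrative:
import datetime
import tspy

# Each index denotes one hour after the start time (illustrative values)
granularity = datetime.timedelta(hours=1)
start_time = datetime.datetime(2020, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc)
hourly_ts = tspy.time_series([5.0, 2.0, 4.0], granularity=granularity, start_time=start_time)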
Further, to handle large\-scale time series, the library offers a lazy evaluation construct by providing a mechanism for identifying the maximal narrow temporal dependency\. This construct is very similar to a Spark computation graph, which also loads data into memory on an as\-needed basis and realizes the computations only when needed\.
## Time series data types ##
You can use multiple data types as an element of a time series, spanning numeric, categorical, array, and dictionary data structures\.
The following data types are supported in a time series:
<!-- <table> -->
| Data type | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| numeric | Time series with univariate observations of numeric type including double and integer\. For example:`[(1, 7.2), (3, 4.5), (5, 4.5), (5, 4.6), (5, 7.1), (7, 3.9), (9, 1.1)]` |
| numeric array | Time series with multivariate observations of numeric type, including double array and integer array\. For example: `[(1, [7.2, 8.74]), (3, [4.5, 9.44]), (5, [4.5, 10.12]), (5, [4.6, 12.91]), (5, [7.1, 9.90]), (7, [3.9, 3.76])]` |
| string | Time series with univariate observations of type string, for example: `[(1, "a"), (3, "b"), (5, "c"), (5, "d"), (5, "e"), (7, "f"), (9, "g")]` |
| string array | Time series with multivariate observations of type string array, for example: `[(1, ["a", "xq"]), (3, ["b", "zr"]), (5, ["c", "ms"]), (5, ["d", "rt"]), (5, ["e", "wu"]), (7, ["f", "vv"]), (9, ["g", "zw"])]` |
| segment | Time series of segments\. The output of the `segmentBy` function can be any type, including numeric, string, numeric array, and string array\. For example: `[(1, [(1, 7.2), (3, 4.5)]), (5, [(5, 4.5), (5, 4.6), (5, 7.1)]), (7, [(7, 3.9), (9, 1.1)])]` |
| dictionary | Time series of dictionaries\. A dictionary can have arbitrary types inside it |
<!-- </table ""> -->
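For example, a categorical (string) time series can be constructed from a Python list in the same way as a numeric one\. This sketch assumes that `tspy.time_series` accepts a list of strings; the values are illustrative:
import tspy

# A univariate categorical time series (assumes that list-of-strings input is accepted)
categorical_ts = tspy.time_series(["a", "b", "c", "c", "a"])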
## Time series functions ##
You can use different functions in the provided time series packages to analyze time series data to extract meaningful information with which to create models that can be used to predict new values based on previously observed values\. See [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html)\.
## Learn more ##
To use the `tspy` Python SDK, see the [`tspy` Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)\.
**Parent topic:**[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </article "role="article" "> -->
|
F3C0AD81BBF56463510440F7F81EB146A6C0015C | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lazy-evaluation.html?context=cdpaas&locale=en | Time series lazy evaluation | Time series lazy evaluation
Lazy evaluation is an evaluation strategy that delays the evaluation of an expression until its value is needed. When combined with memoization, lazy evaluation strategy avoids repeated evaluations and can reduce the running time of certain functions by a significant factor.
The time series library uses lazy evaluation to process data. Notionally an execution graph is constructed on time series data whose evaluation is triggered only when its output is materialized. Assuming an object is moving in a one dimensional space, whose location is captured by x(t). You can determine the harsh acceleration/braking (h(t)) of this object by using its velocity (v(t)) and acceleration (a(t)) time series as follows:
# 1d location timeseries
x(t) = input location timeseries
# velocity - first derivative of x(t)
v(t) = x(t) - x(t-1)
# acceleration - second derivative of x(t)
a(t) = v(t) - v(t-1)
# harsh acceleration/braking using thresholds on acceleration
h(t) = +1 if a(t) > threshold_acceleration
= -1 if a(t) < threshold_deceleration
= 0 otherwise
This results in a simple execution graph of the form:
x(t) --> v(t) --> a(t) --> h(t)
Evaluations are triggered only when an action is performed, such as compute h(5...10), i.e. compute h(5), ..., h(10). The library captures narrow temporal dependencies between time series. In this example, h(5...10) requires a(5...10), which in turn requires v(4...10), which then requires x(3...10). Only the relevant portions of a(t), v(t) and x(t) are evaluated.
h(5...10) <-- a(5...10) <-- v(4...10) <-- x(3...10)
Furthermore, evaluations are memoized and can thus be reused in subsequent actions on h. For example, when a request for h(7...12) follows a request for h(5...10), the memoized values h(7...10) would be leveraged; further, h(11...12) would be evaluated using a(11...12), v(10...12) and x(9...12), which would in turn leverage v(10) and x(9...10) memoized from the prior computation.
In a more general example, you could define a smoothened velocity timeseries as follows:
# 1d location timeseries
x(t) = input location timeseries
# velocity - first derivative of x(t)
v(t) = x(t) - x(t-1)
# smoothened velocity
# alpha is the smoothing factor
# n is a smoothing history
v_smooth(t) = (v(t)*1.0 + v(t-1)*alpha + ... + v(t-n)*alpha^n) / (1 + alpha + ... + alpha^n)
# acceleration - second derivative of x(t)
a(t) = v_smooth(t) - v_smooth(t-1)
In this example h(l...u) has the following temporal dependency. Evaluation of h(l...u) would strictly adhere to this temporal dependency with memoization.
h(l...u) <-- a(l...u) <-- v_smooth(l-1...u) <-- v(l-n-1...u) <-- x(l-n-2...u)
An Example
The following example shows a python code snippet that implements harsh acceleration on a simple in-memory time series. The library includes several built-in transforms. In this example the difference transform is applied twice to the location time series to compute acceleration time series. A map operation is applied to the acceleration time series using a harsh lambda function, which is defined after the code sample, that maps acceleration to either +1 (harsh acceleration), -1 (harsh braking) and 0 (otherwise). The filter operation selects only instances wherein either harsh acceleration or harsh braking is observed. Prior to calling get_values, an execution graph is created, but no computations are performed. On calling get_values(5, 10), the evaluation is performed with memoization on the narrowest possible temporal dependency in the execution graph.
import tspy
from tspy.builders.functions import transformers
x = tspy.time_series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0, 22.0, 29.0, 28.0, 30.0, 29.0, 30.0, 30.0])
v = x.transform(transformers.difference())
a = v.transform(transformers.difference())
h = a.map(harsh).filter(lambda h: h != 0)
print(h[5, 10])
The harsh lambda is defined as follows:
def harsh(a):
threshold_acceleration = 2.0
threshold_braking = -2.0
if (a > threshold_acceleration):
return +1
elif (a < threshold_braking):
return -1
else:
return 0
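Because evaluations are memoized, an overlapping range query that follows the first one reuses previously computed values. The following sketch continues the example above; the ranges are illustrative:
# Evaluates h(5...10) and memoizes the intermediate values
print(h[5, 10])

# An overlapping query such as h(7...12) reuses the memoized h(7...10)
# and only evaluates the additional values h(11...12)
print(h[7, 12])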
Learn more
To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/).
Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
| # Time series lazy evaluation #
Lazy evaluation is an evaluation strategy that delays the evaluation of an expression until its value is needed\. When combined with memoization, lazy evaluation strategy avoids repeated evaluations and can reduce the running time of certain functions by a significant factor\.
The time series library uses lazy evaluation to process data\. Notionally an execution graph is constructed on time series data whose evaluation is triggered only when its output is materialized\. Assuming an object is moving in a one dimensional space, whose location is captured by x(t)\. You can determine the harsh acceleration/braking (`h(t)`) of this object by using its velocity (`v(t)`) and acceleration (`a(t)`) time series as follows:
# 1d location timeseries
x(t) = input location timeseries
# velocity - first derivative of x(t)
v(t) = x(t) - x(t-1)
# acceleration - second derivative of x(t)
a(t) = v(t) - v(t-1)
# harsh acceleration/braking using thresholds on acceleration
h(t) = +1 if a(t) > threshold_acceleration
= -1 if a(t) < threshold_deceleration
= 0 otherwise
This results in a simple execution graph of the form:
x(t) --> v(t) --> a(t) --> h(t)
Evaluations are triggered only when an action is performed, such as `compute h(5...10)`, i\.e\. `compute h(5), ..., h(10)`\. The library captures narrow temporal dependencies between time series\. In this example, `h(5...10)` requires `a(5...10)`, which in turn requires `v(4...10)`, which then requires `x(3...10)`\. Only the relevant portions of `a(t)`, `v(t)` and `x(t)` are evaluated\.
h(5...10) <-- a(5...10) <-- v(4...10) <-- x(3...10)
Furthermore, evaluations are memoized and can thus be reused in subsequent actions on `h`\. For example, when a request for `h(7...12)` follows a request for `h(5...10)`, the memoized values `h(7...10)` would be leveraged; further, `h(11...12)` would be evaluated using `a(11...12), v(10...12)` and `x(9...12)`, which would in turn leverage `v(10)` and `x(9...10)` memoized from the prior computation\.
In a more general example, you could define a smoothened velocity timeseries as follows:
# 1d location timeseries
x(t) = input location timeseries
# velocity - first derivative of x(t)
v(t) = x(t) - x(t-1)
# smoothened velocity
# alpha is the smoothing factor
# n is a smoothing history
v_smooth(t) = (v(t)*1.0 + v(t-1)*alpha + ... + v(t-n)*alpha^n) / (1 + alpha + ... + alpha^n)
# acceleration - second derivative of x(t)
a(t) = v_smooth(t) - v_smooth(t-1)
In this example `h(l...u)` has the following temporal dependency\. Evaluation of `h(l...u)` would strictly adhere to this temporal dependency with memoization\.
h(l...u) <-- a(l...u) <-- v_smooth(l-1...u) <-- v(l-n-1...u) <-- x(l-n-2...u)
## An Example ##
The following example shows a python code snippet that implements harsh acceleration on a simple in\-memory time series\. The library includes several built\-in transforms\. In this example the difference transform is applied twice to the location time series to compute acceleration time series\. A map operation is applied to the acceleration time series using a harsh lambda function, which is defined after the code sample, that maps acceleration to either `+1` (harsh acceleration), `-1` (harsh braking) and `0` (otherwise)\. The filter operation selects only instances wherein either harsh acceleration or harsh braking is observed\. Prior to calling `get_values`, an execution graph is created, but no computations are performed\. On calling `get_values(5, 10)`, the evaluation is performed with memoization on the narrowest possible temporal dependency in the execution graph\.
import tspy
from tspy.builders.functions import transformers
x = tspy.time_series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0, 22.0, 29.0, 28.0, 30.0, 29.0, 30.0, 30.0])
v = x.transform(transformers.difference())
a = v.transform(transformers.difference())
h = a.map(harsh).filter(lambda h: h != 0)
print(h[5, 10])
The harsh lambda is defined as follows:
def harsh(a):
threshold_acceleration = 2.0
threshold_braking = -2.0
if (a > threshold_acceleration):
return +1
elif (a < threshold_braking):
return -1
else:
return 0
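Because evaluations are memoized, an overlapping range query that follows the first one reuses previously computed values\. The following sketch continues the example above; the ranges are illustrative:
# Evaluates h(5...10) and memoizes the intermediate values
print(h[5, 10])

# An overlapping query such as h(7...12) reuses the memoized h(7...10)
# and only evaluates the additional values h(11...12)
print(h[7, 12])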
## Learn more ##
To use the `tspy` Python SDK, see the [`tspy` Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)\.
**Parent topic:**[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </article "role="article" "> -->
|
22D15F386DC333BC069EEA8671E895C97956E754 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html?context=cdpaas&locale=en | Using the time series library | Using the time series library
To get started working with the time series library, import the library to your Python notebook or application.
Use this command to import the time series library:
Import the package
import tspy
Creating a time series
To create a time series and use the library functions, you must decide on the data source. Supported data sources include:
* In-memory lists
* pandas DataFrames
* In-memory collections of observations (using the ObservationCollection construct)
* User-defined readers (using the TimeSeriesReader construct)
The following example shows ingesting data from an in-memory list:
ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0])
ts
The output is as follows:
TimeStamp: 0 Value: 5.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
TimeStamp: 3 Value: 6.0
TimeStamp: 4 Value: 6.0
TimeStamp: 5 Value: 7.0
You can also operate on many time-series at the same time by using the MultiTimeSeries construct. A MultiTimeSeries is essentially a dictionary of time series, where each time series has its own unique key. The time series are not aligned in time.
The MultiTimeSeries construct provides similar methods for transforming and ingesting as the single time series construct:
mts = tspy.multi_time_series({
"ts1": tspy.time_series([1.0, 2.0, 3.0]),
"ts2": tspy.time_series([5.0, 2.0, 4.0, 5.0])
})
The output is the following:
ts2 time series
------------------------------
TimeStamp: 0 Value: 5.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
TimeStamp: 3 Value: 5.0
ts1 time series
------------------------------
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 3.0
Interpreting time
By default, a time series uses a long data type to denote when a given observation was created, which is referred to as a time tick. A time reference system is used for time series with timestamps that are human interpretable. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html).
The following example shows how to create a simple time series where each index denotes a day after the start time of 1990-01-01:
import datetime
granularity = datetime.timedelta(days=1)
start_time = datetime.datetime(1990, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc)
ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0], granularity=granularity, start_time=start_time)
ts
The output is as follows:
TimeStamp: 1990-01-01T00:00Z Value: 5.0
TimeStamp: 1990-01-02T00:00Z Value: 2.0
TimeStamp: 1990-01-03T00:00Z Value: 4.0
TimeStamp: 1990-01-04T00:00Z Value: 6.0
TimeStamp: 1990-01-05T00:00Z Value: 6.0
TimeStamp: 1990-01-06T00:00Z Value: 7.0
Performing simple transformations
Transformations are functions which, when given one or more time series, return a new time series.
For example, to segment a time series into windows where each window is of size=3, sliding by 2 records, you can use the following method:
window_ts = ts.segment(3, 2)
window_ts
The output is as follows:
TimeStamp: 0 Value: original bounds: (0,2) actual bounds: (0,2) observations: [(0,5.0),(1,2.0),(2,4.0)]
TimeStamp: 2 Value: original bounds: (2,4) actual bounds: (2,4) observations: [(2,4.0),(3,6.0),(4,6.0)]
This example shows adding 1 to each value in a time series:
add_one_ts = ts.map(lambda x: x + 1)
add_one_ts
The output is as follows:
TimeStamp: 0 Value: 6.0
TimeStamp: 1 Value: 3.0
TimeStamp: 2 Value: 5.0
TimeStamp: 3 Value: 7.0
TimeStamp: 4 Value: 7.0
TimeStamp: 5 Value: 8.0
Or you can temporally left join a time series, for example ts with another time series ts2:
ts2 = tspy.time_series([1.0, 2.0, 3.0])
joined_ts = ts.left_join(ts2)
joined_ts
The output is as follows:
TimeStamp: 0 Value: [5.0, 1.0]
TimeStamp: 1 Value: [2.0, 2.0]
TimeStamp: 2 Value: [4.0, 3.0]
TimeStamp: 3 Value: [6.0, null]
TimeStamp: 4 Value: [6.0, null]
TimeStamp: 5 Value: [7.0, null]
Using transformers
A rich suite of built-in transformers is provided in the transformers package. Import the package to use the provided transformer functions:
from tspy.builders.functions import transformers
After you have imported the package, you can transform data in a time series by using the transform method.
For example, to perform a difference on a time-series:
ts_diff = ts.transform(transformers.difference())
Here the output is:
TimeStamp: 1 Value: -3.0
TimeStamp: 2 Value: 2.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 0.0
TimeStamp: 5 Value: 1.0
Using reducers
Similar to the transformers package, you can reduce a time series by using methods provided by the reducers package. You can import the reducers package as follows:
from tspy.builders.functions import reducers
After you have imported the package, use the reduce method to get the average over a time-series for example:
avg = ts.reduce(reducers.average())
avg
This outputs:
5.0
Reducers have a special property that enables them to be used alongside segmentation transformations (hourly sum, avg in the window prior to an error occurring, and others). Because the output of a segmentation + reducer is a time series, the transform method is used.
For example, to segment into windows of size 3 and get the average across each window, use:
avg_windows_ts = ts.segment(3).transform(reducers.average())
This results in:
TimeStamp: 0 Value: 3.6666666666666665
TimeStamp: 1 Value: 4.0
TimeStamp: 2 Value: 5.333333333333333
TimeStamp: 3 Value: 6.333333333333333
Graphing time series
Lazy evaluation is used when graphing a time series. When you graph a time series, you can do one of the following:
* Collect the observations of the time series, which returns a BoundTimeSeries
* Reduce the time series to a value or collection of values
* Perform save or print operations
For example, to collect and return all of the values of a timeseries:
observations = ts.materialize()
observations
This results in:
[(0,5.0),(1,2.0),(2,4.0),(3,6.0),(4,6.0),(5,7.0)]
To collect a range from a time series, use:
observations = ts[1:3] # same as ts.materialize(1, 3)
observations
Here the output is:
[(1,2.0),(2,4.0),(3,6.0)]
Note that a time series is optimized for range queries if the time series is periodic in nature.
Using the describe function on a time series also graphs the time series:
describe_obj = ts.describe()
describe_obj
The output is:
min inter-arrival-time: 1
max inter-arrival-time: 1
mean inter-arrival-time: 1.0
top: 6.0
unique: 5
frequency: 2
first: TimeStamp: 0 Value: 5.0
last: TimeStamp: 5 Value: 7.0
count: 6
mean:5.0
std:1.632993161855452
min:2.0
max:7.0
25%:3.5
50%:5.5
75%:6.25
Learn more
* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html)
* [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html)
* [Time series lazy evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lazy-evaluation.html)
* [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html)
* [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)
Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
| # Using the time series library #
To get started working with the time series library, import the library to your Python notebook or application\.
Use this command to import the time series library:
# Import the package
import tspy
## Creating a time series ##
To create a time series and use the library functions, you must decide on the data source\. Supported data sources include:
<!-- <ul> -->
* In\-memory lists
* pandas DataFrames
* In\-memory collections of observations (using the `ObservationCollection` construct)
* User\-defined readers (using the `TimeSeriesReader` construct)
<!-- </ul> -->
The following example shows ingesting data from an in\-memory list:
ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0])
ts
The output is as follows:
TimeStamp: 0 Value: 5.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
TimeStamp: 3 Value: 6.0
TimeStamp: 4 Value: 6.0
TimeStamp: 5 Value: 7.0
You can also operate on many time\-series at the same time by using the `MultiTimeSeries` construct\. A `MultiTimeSeries` is essentially a dictionary of time series, where each time series has its own unique key\. The time series are not aligned in time\.
The `MultiTimeSeries` construct provides similar methods for transforming and ingesting as the single time series construct:
mts = tspy.multi_time_series({
"ts1": tspy.time_series([1.0, 2.0, 3.0]),
"ts2": tspy.time_series([5.0, 2.0, 4.0, 5.0])
})
The output is the following:
ts2 time series
------------------------------
TimeStamp: 0 Value: 5.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 4.0
TimeStamp: 3 Value: 5.0
ts1 time series
------------------------------
TimeStamp: 0 Value: 1.0
TimeStamp: 1 Value: 2.0
TimeStamp: 2 Value: 3.0
## Interpreting time ##
By default, a time series uses a `long` data type to denote when a given observation was created, which is referred to as a time tick\. A time reference system is used for time series with timestamps that are human interpretable\. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html)\.
The following example shows how to create a simple time series where each index denotes a day after the start time of `1990-01-01`:
import datetime
granularity = datetime.timedelta(days=1)
start_time = datetime.datetime(1990, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc)
ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0], granularity=granularity, start_time=start_time)
ts
The output is as follows:
TimeStamp: 1990-01-01T00:00Z Value: 5.0
TimeStamp: 1990-01-02T00:00Z Value: 2.0
TimeStamp: 1990-01-03T00:00Z Value: 4.0
TimeStamp: 1990-01-04T00:00Z Value: 6.0
TimeStamp: 1990-01-05T00:00Z Value: 6.0
TimeStamp: 1990-01-06T00:00Z Value: 7.0
## Performing simple transformations ##
Transformations are functions which, when given one or more time series, return a new time series\.
For example, to segment a time series into windows where each window is of `size=3`, sliding by 2 records, you can use the following method:
window_ts = ts.segment(3, 2)
window_ts
The output is as follows:
TimeStamp: 0 Value: original bounds: (0,2) actual bounds: (0,2) observations: [(0,5.0),(1,2.0),(2,4.0)]
TimeStamp: 2 Value: original bounds: (2,4) actual bounds: (2,4) observations: [(2,4.0),(3,6.0),(4,6.0)]
This example shows adding 1 to each value in a time series:
add_one_ts = ts.map(lambda x: x + 1)
add_one_ts
The output is as follows:
TimeStamp: 0 Value: 6.0
TimeStamp: 1 Value: 3.0
TimeStamp: 2 Value: 5.0
TimeStamp: 3 Value: 7.0
TimeStamp: 4 Value: 7.0
TimeStamp: 5 Value: 8.0
Or you can temporally left join a time series, for example `ts` with another time series `ts2`:
ts2 = tspy.time_series([1.0, 2.0, 3.0])
joined_ts = ts.left_join(ts2)
joined_ts
The output is as follows:
TimeStamp: 0 Value: [5.0, 1.0]
TimeStamp: 1 Value: [2.0, 2.0]
TimeStamp: 2 Value: [4.0, 3.0]
TimeStamp: 3 Value: [6.0, null]
TimeStamp: 4 Value: [6.0, null]
TimeStamp: 5 Value: [7.0, null]
### Using transformers ###
A rich suite of built\-in transformers is provided in the transformers package\. Import the package to use the provided transformer functions:
from tspy.builders.functions import transformers
After you have imported the package, you can transform data in a time series by using the `transform` method\.
For example, to perform a difference on a time\-series:
ts_diff = ts.transform(transformers.difference())
Here the output is:
TimeStamp: 1 Value: -3.0
TimeStamp: 2 Value: 2.0
TimeStamp: 3 Value: 2.0
TimeStamp: 4 Value: 0.0
TimeStamp: 5 Value: 1.0
### Using reducers ###
Similar to the transformers package, you can reduce a time series by using methods provided by the reducers package\. You can import the reducers package as follows:
from tspy.builders.functions import reducers
After you have imported the package, use the `reduce` method to get the average over a time\-series for example:
avg = ts.reduce(reducers.average())
avg
This outputs:
5.0
Reducers have a special property that enables them to be used alongside segmentation transformations (hourly sum, avg in the window prior to an error occurring, and others)\. Because the output of a `segmentation + reducer` is a time series, the `transform` method is used\.
For example, to segment into windows of size 3 and get the average across each window, use:
avg_windows_ts = ts.segment(3).transform(reducers.average())
This results in:
TimeStamp: 0 Value: 3.6666666666666665
TimeStamp: 1 Value: 4.0
TimeStamp: 2 Value: 5.333333333333333
TimeStamp: 3 Value: 6.333333333333333
## Graphing time series ##
Lazy evaluation is used when graphing a time series\. When you graph a time series, you can do one of the following:
<!-- <ul> -->
* Collect the observations of the time series, which returns a `BoundTimeSeries`
* Reduce the time series to a value or collection of values
* Perform save or print operations
<!-- </ul> -->
For example, to collect and return all of the values of a timeseries:
observations = ts.materialize()
observations
This results in:
[(0,5.0),(1,2.0),(2,4.0),(3,6.0),(4,6.0),(5,7.0)]
To collect a range from a time series, use:
observations = ts[1:3] # same as ts.materialize(1, 3)
observations
Here the output is:
[(1,2.0),(2,4.0),(3,6.0)]
Note that a time series is optimized for range queries if the time series is periodic in nature\.
Using the `describe` function on a time series also graphs the time series:
describe_obj = ts.describe()
describe_obj
The output is:
min inter-arrival-time: 1
max inter-arrival-time: 1
mean inter-arrival-time: 1.0
top: 6.0
unique: 5
frequency: 2
first: TimeStamp: 0 Value: 5.0
last: TimeStamp: 5 Value: 7.0
count: 6
mean:5.0
std:1.632993161855452
min:2.0
max:7.0
25%:3.5
50%:5.5
75%:6.25
## Learn more ##
<!-- <ul> -->
* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html)
* [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html)
* [Time series lazy evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lazy-evaluation.html)
* [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html)
* [`tspy` Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/)
<!-- </ul> -->
**Parent topic:**[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </article "role="article" "> -->
|
D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html?context=cdpaas&locale=en | Time series analysis | Time series analysis
A time series is a sequence of data values measured at successive, though not necessarily regular, points in time. The time series library allows you to perform various key operations on time series data, including segmentation, forecasting, joins, transforms, and reducers.
The library supports various time series types, including numeric, categorical, and arrays. Examples of time series data include:
* Stock share prices and trading volumes
* Clickstream data
* Electrocardiogram (ECG) data
* Temperature or seismographic data
* Network performance measurements
* Network logs
* Electricity usage as recorded by a smart meter and reported via an Internet of Things data feed
An entry in a time series is called an observation. Each observation comprises a time tick, a 64-bit integer that indicates when the observation was made, and the data that was recorded for that observation. The recorded data can be either numerical, for example, a temperature or a stock share price, or categorical, for example, a geographic area. A time series can, but does not have to, be associated with a time reference system (TRS), which defines the granularity of each time tick and the start time.
The time series library is Python only.
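For example, with the tspy library an observation pairs a time tick with a recorded value, and a time series is built from a sequence of such values. This is a minimal sketch with illustrative values:
import tspy

# A single observation: time tick 1 with the numeric value 7.2
obs = tspy.observation(1, 7.2)

# A small numeric time series built from a list of values
ts = tspy.time_series([7.2, 4.5, 4.5, 7.1])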
Next step
* [Using the time series library](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html)
Learn more
* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html)
| # Time series analysis #
A time series is a sequence of data values measured at successive, though not necessarily regular, points in time\. The time series library allows you to perform various key operations on time series data, including segmentation, forecasting, joins, transforms, and reducers\.
The library supports various time series types, including numeric, categorical, and arrays\. Examples of time series data include:
<!-- <ul> -->
* Stock share prices and trading volumes
* Clickstream data
* Electrocardiogram (ECG) data
* Temperature or seismographic data
* Network performance measurements
* Network logs
* Electricity usage as recorded by a smart meter and reported via an Internet of Things data feed
<!-- </ul> -->
An entry in a time series is called an observation\. Each observation comprises a time tick, a 64\-bit integer that indicates when the observation was made, and the data that was recorded for that observation\. The recorded data can be either numerical, for example, a temperature or a stock share price, or categorical, for example, a geographic area\. A time series can, but does not have to, be associated with a time reference system (TRS), which defines the granularity of each time tick and the start time\.
The time series library is Python only\.
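For example, with the `tspy` library an observation pairs a time tick with a recorded value, and a time series is built from a sequence of such values\. This is a minimal sketch with illustrative values:
import tspy

# A single observation: time tick 1 with the numeric value 7.2
obs = tspy.observation(1, 7.2)

# A small numeric time series built from a list of values
ts = tspy.time_series([7.2, 4.5, 4.5, 7.1])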
## Next step ##
<!-- <ul> -->
* [Using the time series library](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
7E8D1F67FD96A81FF6D9459C1310919908000CBF | https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html?context=cdpaas&locale=en | Exporting synthetic data | Exporting synthetic data
Using Synthetic Data Generator, you can export synthetic data to remote data sources using connections or write data to a project (Delimited or SAV).
Double-click the node to open its properties. Various options are available, described as follows. After running the node, you can find the data at the export location you specified.
Exporting to a project
Under Export to, select This project and then select the project path. For File type, select either Delimited or SAV.
Exporting to a connection
Under Export to, select Save to a connection to open the Asset Browser and then select the connection to export to. For a list of supported data sources, see [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html).
Setting the field delimiter, quote character, and decimal symbol
Different countries use different symbols to separate the integer part from the fractional part of a number and to separate fields in data. For example, you might use a comma instead of a period to separate the integer part from the fractional part of numbers. And, rather than using commas to separate fields in your data, you might use colons or tabs. With a Data Asset import or export node, you can specify these symbols and other options. Double-click the node to open its properties and specify data formats as desired. 
| # Exporting synthetic data #
Using *Synthetic Data Generator*, you can export synthetic data to remote data sources using connections or write data to a project (**Delimited** or **SAV**)\.
Double\-click the node to open its properties\. Various options are available, described as follows\. After running the node, you can find the data at the export location you specified\.
## Exporting to a project ##
Under **Export to**, select **This project** and then select the project path\. For **File type**, select either **Delimited** or **SAV**\.
## Exporting to a connection ##
Under **Export to**, select **Save to a connection** to open the Asset Browser and then select the connection to export to\. For a list of supported data sources, see [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html)\.
## Setting the field delimiter, quote character, and decimal symbol ##
Different countries use different symbols to separate the integer part from the fractional part of a number and to separate fields in data\. For example, you might use a comma instead of a period to separate the integer part from the fractional part of numbers\. And, rather than using commas to separate fields in your data, you might use colons or tabs\. With a Data Asset import or export node, you can specify these symbols and other options\. Double\-click the node to open its properties and specify data formats as desired\. 
<!-- </article "role="article" "> -->
|
C52D7D525C33EB8FA5B5ACC8B16243223D78AC68 | https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html?context=cdpaas&locale=en | Creating synthetic data from a custom data schema | Creating synthetic data from a custom data schema
Using the Synthetic Data Generator graphical editor flow tool, you can generate a structured synthetic data set based on metadata, automatically or with user-specified statistical distributions. You can define the data within each table column, their distributions, and any correlations. You can then export and review your synthetic data.
Before you can use generate to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html#create-synthetic).
1. The Generate synthetic tabular data flow window opens. Select use case Create from custom data schema. Click Next. 
2. Select Generate options. You can use the Synthetic Data Generator graphical editor flow tool to specify the number of rows and add columns. You can define properties and specify fields, storage types, statistical distributions, and distribution parameters. Click Next. 
3. Select Export data to select the export file name and type. For more information, see [Exporting data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html). Click Next. 
4. Select Review to check your selection and make any updates before generating your synthetic data. Click Save and run. 
Learn more
[Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html)
| # Creating synthetic data from a custom data schema #
Using the *Synthetic Data Generator* graphical editor flow tool, you can generate a structured synthetic data set based on metadata, automatically or with user\-specified statistical distributions\. You can define the data within each table column, their distributions, and any correlations\. You can then export and review your synthetic data\.
Before you can use *generate* to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html#create-synthetic)\.
1\. The **Generate synthetic tabular data flow** window opens\. Select use case **Create from custom data schema**\. Click **Next**\. 
2\. Select **Generate options**\. You can use the *Synthetic Data Generator* graphical editor flow tool to specify the number of rows and add columns\. You can define properties and specify fields, storage types, statistical distributions, and distribution parameters\. Click **Next**\. 
3\. Select **Export data** to select the export file name and type\. For more information, see [Exporting data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html)\. Click **Next**\. 
4\. Select **Review** to check your selection and make any updates before generating your synthetic data\. Click **Save and run**\. 
## Learn more ##
[Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html)
<!-- </article "role="article" "> -->
|
204F36069EE071B185A1BCE8370946A50BDDCDD5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html?context=cdpaas&locale=en | Creating synthetic data from imported data | Creating synthetic data from imported data
Supported data sources for Synthetic Data Generator.
Using Synthetic Data Generator, you can connect to your data no matter where it lives, using either connectors or data files.
Data size
The Synthetic Data Generator environment can import up to 2.5GB of data.
Connectors
The following table lists the data sources that you can connect to using Synthetic Data Generator.
Connector Read Only Read & Write Notes
[Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) ✓ Replace the data set option isn't supported for this connection.
[Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) ✓ Replace the data set option isn't supported for this connection.
[Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) ✓
[Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) ✓
[Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) ✓
[Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) ✓
[Apache HDFS (formerly known as "Hortonworks HDFS")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) ✓
[Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html) ✓
[Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) ✓ ✓
[Cloud Object-Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) ✓
[Cloud Object-Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) ✓
[Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) ✓
[Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html) ✓
[Cognos-Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) ✓
[Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) ✓
[Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) ✓
[Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) ✓
[Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) ✓
[Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) ✓
[Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) ✓
[Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) ✓
[Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) ✓
[FTP (remote file system transfer)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) ✓
[Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) ✓
[Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) ✓
[Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html) ✓
[HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) ✓
[IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) ✓
[IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html) ✓
[IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) ✓
[IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓
[IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) ✓
[IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html) ✓
[Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) ✓
[Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html) ✓
[MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html)
[Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) ✓
[Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) ✓
[Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html) ✓
[Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html) ✓
[Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html) ✓
[Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) ✓ SQL pushback isn't supported when Active Directory is enabled.
[MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓
[MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) ✓
[Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) ✓
[OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html) ✓
[Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) ✓
[Planning Analytics (formerly known as "IBM TM1")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) ✓ Only the Replace the data set option is supported.
[PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) ✓
[Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html) ✓
[Salesforce.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html) ✓
[SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html) ✓
[SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html) ✓
[SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html) ✓
[Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) ✓
[Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html) ✓
[Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) ✓
Data files
In addition to using data from remote data sources or integrated databases, you can use data from files. You can work with data from the following types of files using Synthetic Data Generator.
Connector Read Only Read & Write
AVRO ✓
CSV/delimited ✓
Excel (XLS, XLSX) ✓
JSON ✓
ORC
Parquet
SAS ✓
SAV ✓
SHP
XML ✓
| # Creating synthetic data from imported data #
Supported data sources for Synthetic Data Generator\.
Using Synthetic Data Generator, you can connect to your data no matter where it lives, using either connectors or data files\.
## Data size ##
The Synthetic Data Generator environment can import up to ~2\.5GB of data\.
## Connectors ##
The following table lists the data sources that you can connect to using Synthetic Data Generator\.
<!-- <table> -->
| Connector | Read Only | Read & Write | Notes |
| ------------------------------------------------------------------------------------------------------ | --------- | ------------ | --------------------------------------------------------------------- |
| [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) | | ✓ | **Replace the data set** option isn't supported for this connection\. |
| [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) | | ✓ | **Replace the data set** option isn't supported for this connection\. |
| [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) | | ✓ | |
| [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) | | ✓ | |
| [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) | | ✓ | |
| [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) | | ✓ | |
| [Apache HDFS (formerly known as "Hortonworks HDFS")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) | | ✓ | |
| [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html) | ✓ | | |
| [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) | ✓ | ✓ | |
| [Cloud Object\-Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) | | ✓ | |
| [Cloud Object\-Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) | | ✓ | |
| [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) | | ✓ | |
| [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html) | ✓ | | |
| [Cognos\-Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) | ✓ | | |
| [Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) | ✓ | | |
| [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) | | ✓ | |
| [Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) | | ✓ | |
| [Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) | | ✓ | |
| [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) | | ✓ | |
| [Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) | | ✓ | |
| [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) | | ✓ | |
| [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) | | ✓ | |
| [FTP (remote file system transfer)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) | | ✓ | |
| [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) | | ✓ | |
| [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) | | ✓ | |
| [Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html) | | ✓ | |
| [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) | ✓ | | |
| [IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) | | ✓ | |
| [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html) | ✓ | | |
| [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) | | ✓ | |
| [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) | ✓ | | |
| [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) | | ✓ | |
| [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html) | ✓ | | |
| [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) | | ✓ | |
| [Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html) | ✓ | | |
| [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html) | | | |
| [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) | | ✓ | |
| [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) | | ✓ | |
| [Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html) | | ✓ | |
| [Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html) | | ✓ | |
| [Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html) | | ✓ | |
| [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) | | ✓ | SQL pushback isn't supported when Active Directory is enabled\. |
| [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) | ✓ | | |
| [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) | | ✓ | |
| [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) | | ✓ | |
| [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html) | | ✓ | |
| [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) | | ✓ | |
| [Planning Analytics (formerly known as "IBM TM1")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) | | ✓ | Only the **Replace the data set** option is supported\. |
| [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) | | ✓ | |
| [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html) | ✓ | | |
| [Salesforce\.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html) | ✓ | | |
| [SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html) | | ✓ | |
| [SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html) | ✓ | | |
| [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html) | | ✓ | |
| [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) | | ✓ | |
| [Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html) | ✓ | | |
| [Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) | | ✓ | |
<!-- </table ""> -->
## Data files ##
In addition to using data from remote data sources or integrated databases, you can use data from files\. You can work with data from the following types of files using Synthetic Data Generator\.
<!-- <table> -->
| Connector | Read Only | Read & Write |
| ----------------- | --------- | ------------ |
| AVRO | ✓ | |
| CSV/delimited | | ✓ |
| Excel (XLS, XLSX) | | ✓ |
| JSON | | ✓ |
| ORC | | |
| Parquet | | |
| SAS | ✓ | |
| SAV | | ✓ |
| SHP | | |
| XML | | ✓ |
<!-- </table ""> -->
<!-- </article "role="article" "> -->
|
971AE69D7D2A527C25F31A6C8D8D64EE68B48519 | https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html?context=cdpaas&locale=en | Creating synthetic data from production data | Creating synthetic data from production data
Using the Synthetic Data Generator graphical editor flow tool, you can generate a structured synthetic data set based on your production data. You can import data, anonymize, mimic (to generate synthetic data), export, and review your data.
Before you can use mimic and mask to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html#create-synthetic).
1. The Generate synthetic tabular data flow window opens. Select use case Leverage your existing data. Click Next. 
2. Select Import data. You can also drag-and-drop a data file into your project. You can also select data from a project. For more information, see [Importing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html). 
3. Once you have imported your data, you can use the Synthetic Data Generator graphical flow editor tool to anonymize your production data, masking the data. You can disguise column names, column values, or both, when working with data that is to be included in a model downstream of the node. For example, you can use bank customer data and hide marital status. 
4. You can then use the Synthetic Data Generator tool to mimic your production data. This will generate synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data. 
5. You can export your synthetic data and review it. For more information, see [Exporting synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html). 
Learn more
[Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html)
| # Creating synthetic data from production data #
Using the *Synthetic Data Generator* graphical editor flow tool, you can generate a structured synthetic data set based on your production data\. You can import data, *anonymize*, *mimic* (to generate synthetic data), export, and review your data\.
Before you can use *mimic* and *mask* to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html#create-synthetic)\.
1\. The **Generate synthetic tabular data flow** window opens\. Select use case **Leverage your existing data**\. Click **Next**\. 
2\. Select **Import data**\. You can also drag\-and\-drop a data file into your project\. You can also select data from a project\. For more information, see [Importing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html)\. 
3\. Once you have imported your data, you can use the *Synthetic Data Generator* graphical flow editor tool to *anonymize* your production data, masking the data\. You can disguise column names, column values, or both, when working with data that is to be included in a model downstream of the node\. For example, you can use bank customer data and hide marital status\. 
4\. You can then use the *Synthetic Data Generator* tool to *mimic* your production data\. This will generate synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data\. 
5\. You can export your synthetic data and review it\. For more information, see [Exporting synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html)\. 
## Learn more ##
[Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html)
<!-- </article "role="article" "> -->
|
30A8256A4972314DA32827A081B7541138B454A9 | https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html?context=cdpaas&locale=en | Creating Synthetic data | Creating Synthetic data
Use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms.
To create synthetic data, the first option is to use the Synthetic Data Generator graphical flow editor tool to mask and mimic production data, and then to load the result into a different location.
The second option is to use the Synthetic Data Generator graphical flow editor to generate synthetic data from a custom data schema using visual flows and modeling algorithms.
This image shows an overview of the Synthetic Data Generator graphical flow editor. 
Data format Learn more about [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html).
Data size : The Synthetic Data Generator environment can import up to 2.5GB of data.
Prerequisites
Before you can create synthetic data, you need [to create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html).
Create synthetic data
1. Access the Synthetic Data Generator tool from within a project. To select a tool and create a new asset, click New asset.
2. Select All > Prepare Data > Generate synthetic tabular data from the What do you want to do? window. 
3. The Generate synthetic tabular data window opens. Add a name for the asset and a description (optional). Click Create. The flow will open and it might take a minute to create a new session for the flow. 
4. The Welcome to Synthetic Data Generator wizard opens. You can choose to get started as a first-time or experienced user.

5. If you choose to get started as a first-time user, the Generate synthetic tabular data flow window opens. 
Learn more
* [Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html)
* [Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html)
* Try the [Generate synthetic tabular data tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html)
| # Creating Synthetic data #
Use the graphical flow editor tool *Synthetic Data Generator* to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms\.
To create synthetic data, the first option is to use the *Synthetic Data Generator* graphical flow editor tool to *mask* and *mimic* production data, and then to load the result into a different location\.
The second option is to use the *Synthetic Data Generator* graphical flow editor to *generate* synthetic data from a custom data schema using visual flows and modeling algorithms\.
This image shows an overview of the *Synthetic Data Generator* graphical flow editor\. 
**Data format** Learn more about [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html)\.
**Data size** : The *Synthetic Data Generator* environment can import up to ~2\.5GB of data\.
## Prerequisites ##
Before you can create synthetic data, you need [to create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)\.
## Create synthetic data ##
1\. Access the *Synthetic Data Generator* tool from within a project\. To select a tool and create a new asset, click **New asset**\.
2\. Select **All > Prepare Data > Generate synthetic tabular data** from the *What do you want to do?* window\. 
3\. The **Generate synthetic tabular data** window opens\. Add a name for the asset and a description (optional)\. Click **Create**\. The flow will open and it might take a minute to create a new session for the flow\. 
4\. The **Welcome to Synthetic Data Generator** wizard opens\. You can choose to get started as a first\-time or experienced user\.

5\. If you choose to get started as a first\-time user, the **Generate synthetic tabular data flow** window opens\. 
## Learn more ##
<!-- <ul> -->
* [Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html)
* [Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html)
* Try the [Generate synthetic tabular data tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
448502B5D06CD5BCAA58F569AA43AA2E0394A794 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en | Troubleshoot Watson Machine Learning | Troubleshoot Watson Machine Learning
Here are the answers to common troubleshooting questions about using IBM Watson Machine Learning.
Getting help and support for Watson Machine Learning
If you have problems or questions when using Watson Machine Learning, you can get help by searching for information or by asking questions through a forum. You can also open a support ticket.
When using the forums to ask a question, tag your question so that it is seen by the Watson Machine Learning development teams.
If you have technical questions about Watson Machine Learning, post your question on [Stack Overflow ](http://stackoverflow.com/search?q=machine-learning+ibm-bluemix) and tag your question with "ibm-bluemix" and "machine-learning".
For questions about the service and getting started instructions, use the [IBM developerWorks dW Answers ](https://developer.ibm.com/answers/topics/machine-learning/?smartspace=bluemix) forum. Include the "machine-learning" and "bluemix" tags.
Contents
* [Authorization token has not been provided](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_authorization_token)
* [Invalid authorization token](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_authorization_token)
* [Authorization token and instance_id which was used in the request are not the same](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_matching_authorization_token)
* [Authorization token is expired](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_expired_authorization_token)
* [Public key needed for authentication is not available](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_public_key)
* [Operation timed out after {{timeout}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_operation_timeout)
* [Unhandled exception of type {{type}} with {{status}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_status)
* [Unhandled exception of type {{type}} with {{response}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_response)
* [Unhandled exception of type {{type}} with {{json}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_json)
* [Unhandled exception of type {{type}} with {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_message)
* [Requested object could not be found](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_found)
* [Underlying database reported too many requests](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_too_many_cloudant_requests)
* [The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment. It needs to be specified at least in one of the places](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_evaluation_definition)
* [Data module not found in IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#fl_data_module_missing)
* [Evaluation requires learning configuration specified for the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_learning_configuration)
* [Evaluation requires spark instance to be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_spark_definition_for_evaluation)
* [Model does not contain any version](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_latest_model_version)
* [Patch operation can only modify existing learning configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_non_existing_learning_configuration)
* [Patch operation expects exactly one replace operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_multiple_ops)
* [The given payload is missing required fields: FIELD or the values of the fields are corrupted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_request_payload)
* [Provided evaluation method: METHOD is not supported. Supported values: VALUE](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_evaluation_method_not_supported)
* [There can be only one active evaluation per model. Request could not be completed because of existing active evaluation: {{url}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_active_evaluation_conflict)
* [The deployment type {{type}} is not supported](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_supported_deployment_type)
* [Incorrect input: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_deserialization_error)
* [Insufficient data - metric {{name}} could not be calculated](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_metric)
* [For type {{type}} spark instance must be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_prediction_spark_definition)
* [Action {{action}} has failed with message {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_http_client_error)
* [Path {{path}} is not allowed. Only allowed path for patch stream is /status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_wrong_stream_patch_path)
* [Patch operation is not allowed for instance of type {{$type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_not_supported)
* [Data connection {{data}} is invalid for feedback_data_ref](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_feedback_data_connection)
* [Path {{path}} is not allowed. Only allowed path for patch model is /deployed_version/url or /deployed_version/href for V2](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_model_path_not_allowed)
* [Parsing failure: {{msg}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_parsing_error)
* [Runtime environment for selected model: {{env}} is not supported for learning configuration. Supported environments: - [{{supported_envs}}]](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_runtime_env_not_supported)
* [Current plan \'{{plan}}\' only allows {{limit}} deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_deployments_plan_limit_reached)
* [Database connection definition is not valid ({{code}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_sql_error)
* [There were problems while connecting underlying {{system}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_stream_tcp_error)
* [Error extracting X-Spark-Service-Instance header: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_spark_header_deserialization_error)
* [This functionality is forbidden for non beta users](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_beta_user)
* [{{code}} {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_underlying_api_error)
* [Rate limit exceeded](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_rate_limit_exceeded)
* [Invalid query parameter {{paramName}} value: {{value}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_query_parameter_value)
* [Invalid token type: {{type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_token_type)
* [Invalid token format. Bearer token format should be used](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_token_format)
* [Input JSON file is missing or invalid: 400](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_invalid_input)
* [Authorization token has expired: 401](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_expired_authorization_token)
* [Unknown deployment identification:404](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_unkown_depid)
* [Internal server error:500](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_internal_error)
* [Invalid type for ml_artifact: Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_invalid_type_artifact)
* [ValueError: Training_data_ref name and connection cannot be None, if Pipeline Artifact is not given.](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#pipeline_error)
Authorization token has not been provided.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The authorization token has not been provided in the Authorization header.
How to fix it
Pass the authorization token in the Authorization header.
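For illustration, a minimal sketch of sending the token with a REST request, assuming a bearer token stored in token; the endpoint URL and version parameter are placeholders and depend on your region and API version:

```python
import requests

# Placeholder values for illustration only; substitute your own token and endpoint.
token = "<your bearer token>"
url = "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-09-01"

# The token is passed in the Authorization header with the Bearer scheme.
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```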
Invalid authorization token.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The authorization token that was provided cannot be decoded or parsed.
How to fix it
Pass the correct authorization token in the Authorization header.
Authorization token and instance_id which was used in the request are not the same.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The authorization token was not generated for the service instance against which it was used.
How to fix it
Pass an authorization token in the Authorization header that corresponds to the service instance that is being used.
Authorization token is expired.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The authorization token has expired.
How to fix it
Pass an unexpired authorization token in the Authorization header.
Public key needed for authentication is not available.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
This is an internal service issue.
How to fix it
The issue needs to be fixed by the support team.
Operation timed out after {{timeout}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
A timeout occurred while performing the requested operation.
How to fix it
Try to invoke the desired operation again.
Unhandled exception of type {{type}} with {{status}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
This is an internal service issue.
How to fix it
Try to invoke the desired operation again. If it keeps occurring, it needs to be fixed by the support team.
Unhandled exception of type {{type}} with {{response}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
This is an internal service issue.
How to fix it
Try to invoke the desired operation again. If it keeps occurring, it needs to be fixed by the support team.
Unhandled exception of type {{type}} with {{json}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
This is an internal service issue.
How to fix it
Try to invoke the desired operation again. If it keeps occurring, it needs to be fixed by the support team.
Unhandled exception of type {{type}} with {{message}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
This is an internal service issue.
How to fix it
Try to invoke the desired operation again. If it keeps occurring, it needs to be fixed by the support team.
Requested object could not be found.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The requested resource could not be found.
How to fix it
Ensure that you are referring to the existing resource.
Underlying database reported too many requests.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The user has sent too many requests in a given amount of time.
How to fix it
Try to invoke the desired operation again.
The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment. It needs to be specified at least in one of the places.
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The learning configuration does not contain all of the required information.
How to fix it
Provide the definition in the learning configuration.
Evaluation requires learning configuration specified for the model.
What's happening
It is not possible to create a learning iteration.
Why it's happening
There is no learning configuration defined for the model.
How to fix it
Create a learning configuration and try to create the learning iteration again.
Evaluation requires spark instance to be provided in X-Spark-Service-Instance header
What's happening
The REST API cannot be invoked successfully.
Why it's happening
Not all of the required information is in the learning configuration.
How to fix it
Provide spark_service in the learning configuration or in the X-Spark-Service-Instance header.
Model does not contain any version.
What's happening
It is not possible to create a deployment or to set the learning configuration.
Why it's happening
There is an inconsistency related to the persistence of the model.
How to fix it
Try to persist the model again and then perform the action again.
Data module not found in IBM Federated Learning.
What's happening
The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it. You might see the following error message:
ModuleNotFoundError: No module named 'ibmfl.util.datasets'
Why it's happening
Possibly an outdated DataHandler.
How to fix it
Review and update your DataHandler to conform to the latest spec. See the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py), or ensure that your sample versions are up to date.
Patch operation can only modify existing learning configuration.
What's happening
It is not possible to invoke the patch REST API method to patch the learning configuration.
Why it's happening
There is no learning configuration set for this model, or the model does not exist.
How to fix it
Ensure that the model exists and already has a learning configuration set.
Patch operation expects exactly one replace operation.
What's happening
The deployment cannot be patched.
Why it's happening
The patch payload contains more than one operation, or the patch operation is not a replace operation.
How to fix it
Use only one operation in the patch payload, and make sure that it is a replace operation.
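For illustration, a minimal sketch of a patch body that contains exactly one replace operation; the path and value shown are placeholders, and the path that is allowed depends on what you are patching:

```python
import json

# A patch array with exactly one operation, and that operation is "replace".
# The path and value below are illustrative placeholders.
patch_payload = [
    {
        "op": "replace",
        "path": "/status",
        "value": "start"
    }
]

print(json.dumps(patch_payload, indent=2))
```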
The given payload is missing required fields: FIELD or the values of the fields are corrupted.
What's happening
It is not possible to process an action that requires access to the underlying data set.
Why it's happening
The access to the data set is not properly defined.
How to fix it
Correct the access definition for the data set.
Provided evaluation method: METHOD is not supported. Supported values: VALUE.
What's happening
It is not possible to create the learning configuration.
Why it's happening
The wrong evaluation method was used to create the learning configuration.
How to fix it
Use a supported evaluation method, which is one of: regression, binary, multiclass.
There can be only one active evaluation per model. Request could not be completed because of existing active evaluation: {{url}}
What's happening
It is not possible to create another learning iteration.
Why it's happening
There can be only one running evaluation for the model.
How to fix it
See the already running evaluation, or wait until it ends and start a new one.
The deployment type {{type}} is not supported.
What's happening
It is not possible to create the deployment.
Why it's happening
An unsupported deployment type was used.
How to fix it
Use a supported deployment type.
Incorrect input: ({{message}})
What's happening
The REST API cannot be invoked successfully.
Why it's happening
There is an issue with parsing the JSON.
How to fix it
Ensure that the correct JSON is passed in the request.
Insufficient data - metric {{name}} could not be calculated
What's happening
The learning iteration has failed.
Why it's happening
The value for a metric with a defined threshold could not be calculated because of insufficient feedback data.
How to fix it
Review and improve the data in the feedback_data_ref data source in the learning configuration.
For type {{type}} spark instance must be provided in X-Spark-Service-Instance header
What's happening
The deployment cannot be created.
Why it's happening
Batch and streaming deployments require a spark instance to be provided.
How to fix it
Provide a spark instance in the X-Spark-Service-Instance header.
Action {{action}} has failed with message {{message}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
There was an issue with invoking the underlying service.
How to fix it
If there is a suggestion for how to fix the issue, follow it. Contact the support team if there is no suggestion in the message or if the suggestion does not solve the issue.
Path {{path}} is not allowed. Only allowed path for patch stream is /status
What's happening
It is not possible to patch the stream deployment.
Why it's happening
The wrong path was used to patch the stream deployment.
How to fix it
Patch the stream deployment with the supported path option, which is /status (it allows you to start or stop stream processing).
Patch operation is not allowed for instance of type {{$type}}
What's happening
It is not possible to patch the deployment.
Why it's happening
The wrong deployment type is being patched.
How to fix it
Patch the stream deployment type.
Data connection {{data}} is invalid for feedback_data_ref
What's happening
It is not possible to create the learning configuration for the model.
Why it's happening
An unsupported data source was used when defining feedback_data_ref.
How to fix it
Use only the supported data source type, which is dashdb.
Path {{path}} is not allowed. Only allowed path for patch model is /deployed_version/url or /deployed_version/href for V2
What's happening
It is not possible to patch the model.
Why it's happening
The wrong path was used when patching the model.
How to fix it
Patch the model with a supported path, which allows you to update the version of the deployed model.
Parsing failure: {{msg}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
The requested payload could not be parsed successfully.
How to fix it
Ensure that your request payload is correct and can be parsed correctly.
Runtime environment for selected model: {{env}} is not supported for learning configuration. Supported environments: [{{supported_envs}}].
What's happening
It is not possible to create the learning configuration.
Why it's happening
The model for which you tried to create the learning_configuration is not supported.
How to fix it
Create the learning configuration for a model that has a supported runtime.
Current plan \'{{plan}}\' only allows {{limit}} deployments
What's happening
It is not possible to create the deployment.
Why it's happening
The limit on the number of deployments was reached for the current plan.
How to fix it
Upgrade to a plan that does not have this limitation.
Database connection definition is not valid ({{code}})
What's happening
It is not possible to use the learning configuration functionality.
Why it's happening
Database connection definition is not valid.
How to fix it
Try to fix the issue that is described by the code returned by the underlying database.
There were problems while connecting underlying {{system}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
There was an issue connecting to the underlying system. It might be a temporary network issue.
How to fix it
Try to invoke the desired operation again. If it keeps occurring, contact the support team.
Error extracting X-Spark-Service-Instance header: ({{message}})
What's happening
It is not possible to invoke a REST API that requires Spark credentials.
Why it's happening
There is an issue with base-64 decoding or parsing the Spark credentials.
How to fix it
Ensure that the Spark credentials were correctly base-64 encoded. For more information, see the documentation.
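A minimal sketch of base-64 encoding Spark service credentials for the X-Spark-Service-Instance header; the credential field names shown are illustrative placeholders, and the fields that your Spark service expects may differ:

```python
import base64
import json

# Illustrative credential fields; replace them with the credentials for your Spark service instance.
spark_credentials = {
    "tenant_id": "<tenant_id>",
    "tenant_secret": "<tenant_secret>",
    "cluster_master_url": "<cluster_master_url>",
    "instance_id": "<instance_id>",
}

# The header value is the base-64 encoding of the JSON-serialized credentials.
encoded = base64.b64encode(json.dumps(spark_credentials).encode("utf-8")).decode("utf-8")
headers = {"X-Spark-Service-Instance": encoded}
```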
This functionality is forbidden for non beta users.
What's happening
The desired REST API cannot be invoked successfully.
Why it's happening
The REST API that was invoked is currently in beta.
How to fix it
If you are interested in participating, add yourself to the wait list. The details can be found in the documentation.
{{code}} {{message}}
What's happening
The REST API cannot be invoked successfully.
Why it's happening
There was an issue with invoking the underlying service.
How to fix it
If there is a suggestion for how to fix the issue, follow it. Contact the support team if there is no suggestion in the message or if the suggestion does not solve the issue.
Rate limit exceeded.
What's happening
Rate limit exceeded.
Why it's happening
The rate limit for the current plan has been exceeded.
How to fix it
To solve this problem, acquire another plan with a greater rate limit.
Invalid query parameter {{paramName}} value: {{value}}
What's happening
A validation error occurred because an incorrect value was passed for the query parameter.
Why it's happening
An error occurred while getting the result for the query.
How to fix it
Correct the query parameter value. The details can be found in the documentation.
Invalid token type: {{type}}
What's happening
An error occurred regarding the token type.
Why it's happening
An error occurred during authorization.
How to fix it
The token should start with the Bearer prefix.
Invalid token format. Bearer token format should be used.
What's happening
An error occurred regarding the token format.
Why it's happening
An error occurred during authorization.
How to fix it
The token should be a bearer token and should start with the Bearer prefix.
Input JSON file is missing or invalid: 400
What's happening
The following message displays when you try to score online: Input JSON file is missing or invalid.
Why it's happening
This message displays when the scoring input payload doesn't match the expected input type that is required for scoring the model. Specifically, the following reasons may apply:
* The input payload is empty.
* The input payload schema is not valid.
* The input data types do not match the expected data types.
How to fix it
Correct the input payload. Make sure that the payload has correct syntax, a valid schema, and proper data types. After you make corrections, try to score online again. For syntax issues, verify the JSON file by using the jsonlint command.
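As a quick local check, you can confirm that the payload parses as JSON before you send it; the field names and values below are illustrative and must match your model's actual schema:

```python
import json

# Illustrative online-scoring payload; the fields and values depend on your model's schema.
payload = """
{
  "input_data": [
    {
      "fields": ["age", "income"],
      "values": [[35, 52000]]
    }
  ]
}
"""

try:
    parsed = json.loads(payload)  # raises json.JSONDecodeError on syntax errors
    print("Payload parses as valid JSON:", parsed)
except json.JSONDecodeError as err:
    print("Invalid JSON:", err)
```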
Authorization token has expired: 401
What's happening
The following message displays when you try to score online: Authorization failed.
Why it's happening
This message displays when the token that is used for scoring has expired.
How to fix it
Re-generate the token for this IBM Watson Machine Learning instance and then retry. If you still see this issue, contact IBM Support.
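For illustration, one common way to obtain a fresh access token is to exchange an IBM Cloud API key for an IAM token; this sketch assumes that IAM tokens apply to your instance, so check the documentation for the token mechanism that your service uses:

```python
import requests

# Exchange an IBM Cloud API key for a new IAM access token.
api_key = "<your IBM Cloud API key>"  # placeholder
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    },
)
response.raise_for_status()
token = response.json()["access_token"]
```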
Unknown deployment identification:404
What's happening
The following message displays when you try to score online: Unknown deployment identification.
Why it's happening
This message displays when the deployment ID that is used for scoring does not exist.
How to fix it
Make sure that you are providing the correct deployment ID. If not, deploy the model to get a deployment ID and then try scoring again.
Internal server error:500
What's happening
The following message displays when you try to score online: Internal server error.
Why it's happening
This message displays if the downstream data flow on which the online scoring depends fails.
How to fix it
After waiting for a period of time, try to score online again. If it fails again, contact IBM Support.
Invalid type for ml_artifact: Pipeline
What's happening
The following message displays when you try to publish a Spark model by using the Common API client library on your workstation.
Why it's happening
This message displays if you have an invalid pyspark setup in your operating system.
How to fix it
Set up the system environment paths according to the following instructions:
SPARK_HOME={installed_spark_path}
JAVA_HOME={installed_java_path}
PYTHONPATH=$SPARK_HOME/python/
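If you prefer to set these variables from within Python before importing pyspark, a minimal sketch follows; the paths are placeholders for your local Spark and Java installations:

```python
import os

# Placeholder paths; point these at your local installations.
os.environ["SPARK_HOME"] = "/usr/local/spark"
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk"
os.environ["PYTHONPATH"] = os.path.join(os.environ["SPARK_HOME"], "python")
```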
ValueError: Training_data_ref name and connection cannot be None, if Pipeline Artifact is not given.
What's happening
The training data set is missing or has not been properly referenced.
Why it's happening
The Pipeline Artifact is a training data set in this instance.
How to fix it
When persisting a Spark PipelineModel, you must supply a training data set. If you don't, the client reports that it doesn't support PipelineModels, rather than saying that a PipelineModel must be accompanied by the training set.
| # Troubleshoot Watson Machine Learning #
Here are the answers to common troubleshooting questions about using IBM Watson Machine Learning\.
## Getting help and support for Watson Machine Learning ##
If you have problems or questions when using Watson Machine Learning, you can get help by searching for information or by asking questions through a forum\. You can also open a support ticket\.
When using the forums to ask a question, tag your question so that it is seen by the Watson Machine Learning development teams\.
If you have technical questions about Watson Machine Learning, post your question on [Stack Overflow ](http://stackoverflow.com/search?q=machine-learning+ibm-bluemix) and tag your question with "ibm\-bluemix" and "machine\-learning"\.
For questions about the service and getting started instructions, use the [IBM developerWorks dW Answers ](https://developer.ibm.com/answers/topics/machine-learning/?smartspace=bluemix) forum\. Include the "machine\-learning" and "bluemix" tags\.
## Contents ##
<!-- <ul> -->
* [Authorization token has not been provided](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_authorization_token)
* [Invalid authorization token](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_authorization_token)
* [Authorization token and instance\_id which was used in the request are not the same](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_matching_authorization_token)
* [Authorization token is expired](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_expired_authorization_token)
* [Public key needed for authentication is not available](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_public_key)
* [Operation timed out after \{\{timeout\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_operation_timeout)
* [Unhandled exception of type \{\{type\}\} with \{\{status\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_status)
* [Unhandled exception of type \{\{type\}\} with \{\{response\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_response)
* [Unhandled exception of type \{\{type\}\} with \{\{json\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_json)
* [Unhandled exception of type \{\{type\}\} with \{\{message\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_unhandled_exception_with_message)
* [Requested object could not be found](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_found)
* [Underlying database reported too many requests](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_too_many_cloudant_requests)
* [The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment\. It needs to be specified at least in one of the places](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_evaluation_definition)
* [Data module not found in IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#fl_data_module_missing)
* [Evaluation requires learning configuration specified for the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_learning_configuration)
* [Evaluation requires spark instance to be provided in `X-Spark-Service-Instance` header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_spark_definition_for_evaluation)
* [Model does not contain any version](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_latest_model_version)
* [Patch operation can only modify existing learning configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_non_existing_learning_configuration)
* [Patch operation expects exactly one replace operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_multiple_ops)
* [The given payload is missing required fields: FIELD or the values of the fields are corrupted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_request_payload)
* [Provided evaluation method: METHOD is not supported\. Supported values: VALUE](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_evaluation_method_not_supported)
* [There can be only one active evaluation per model\. Request could not be completed because of existing active evaluation: \{\{url\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_active_evaluation_conflict)
* [The deployment type \{\{type\}\} is not supported](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_supported_deployment_type)
* [Incorrect input: (\{\{message\}\})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_deserialization_error)
* [Insufficient data \- metric \{\{name\}\} could not be calculated](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_metric)
* [For type \{\{type\}\} spark instance must be provided in `X-Spark-Service-Instance` header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_missing_prediction_spark_definition)
* [Action \{\{action\}\} has failed with message \{\{message\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_http_client_error)
* [Path `{{path}}` is not allowed\. Only allowed path for patch stream is `/status`](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_wrong_stream_patch_path)
* [Patch operation is not allowed for instance of type `{{$type}}`](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_not_supported)
* [Data connection `{{data}}` is invalid for feedback\_data\_ref](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_feedback_data_connection)
* [Path \{\{path\}\} is not allowed\. Only allowed path for patch model is `/deployed_version/url` or `/deployed_version/href` for V2](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_patch_model_path_not_allowed)
* [Parsing failure: \{\{msg\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_parsing_error)
* [Runtime environment for selected model: \{\{env\}\} is not supported for `learning configuration`\. Supported environments: \- \[\{\{supported\_envs\}\}\]](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_runtime_env_not_supported)
* [Current plan \\'\{\{plan\}\}\\' only allows \{\{limit\}\} deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_deployments_plan_limit_reached)
* [Database connection definition is not valid (\{\{code\}\})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_sql_error)
* [There were problems while connecting underlying \{\{system\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_stream_tcp_error)
* [Error extracting X\-Spark\-Service\-Instance header: (\{\{message\}\})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_spark_header_deserialization_error)
* [This functionality is forbidden for non beta users](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_not_beta_user)
* [\{\{code\}\} \{\{message\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_underlying_api_error)
* [Rate limit exceeded](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_rate_limit_exceeded)
* [Invalid query parameter `{{paramName}}` value: \{\{value\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_query_parameter_value)
* [Invalid token type: \{\{type\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_token_type)
* [Invalid token format\. Bearer token format should be used](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#ts_invalid_token_format)
* [Input JSON file is missing or invalid: 400](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_invalid_input)
* [Authorization token has expired: 401](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_expired_authorization_token)
* [Unknown deployment identification:404](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_unkown_depid)
* [Internal server error:500](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_internal_error)
* [Invalid type for ml\_artifact: Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#os_invalid_type_artifact)
* [ValueError: Training\_data\_ref name and connection cannot be None, if Pipeline Artifact is not given\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=en#pipeline_error)
<!-- </ul> -->
## Authorization token has not been provided\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The authorization token was not provided in the `Authorization` header\.
### How to fix it ###
Pass an authorization token in the `Authorization` header\.
## Invalid authorization token\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The authorization token that was provided cannot be decoded or parsed\.
### How to fix it ###
Pass a correct authorization token in the `Authorization` header\.
## Authorization token and instance\_id which was used in the request are not the same\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The authorization token that was used was not generated for the service instance against which it was used\.
### How to fix it ###
Pass an authorization token in the `Authorization` header that corresponds to the service instance that is being used\.
## Authorization token is expired\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The authorization token is expired\.
### How to fix it ###
Pass a valid, unexpired authorization token in the `Authorization` header\.
## Public key needed for authentication is not available\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
This is an internal service issue\.
### How to fix it ###
The issue needs to be fixed by the support team\.
## Operation timed out after \{\{timeout\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
A timeout occurred while performing the requested operation\.
### How to fix it ###
Try to invoke the desired operation again\.
## Unhandled exception of type \{\{type\}\} with \{\{status\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
This is an internal service issue\.
### How to fix it ###
Try to invoke the desired operation again\. If the error persists, it needs to be fixed by the support team\.
## Unhandled exception of type \{\{type\}\} with \{\{response\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
This is an internal service issue\.
### How to fix it ###
Try to invoke the desired operation again\. If the error persists, it needs to be fixed by the support team\.
## Unhandled exception of type \{\{type\}\} with \{\{json\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
This is an internal service issue\.
### How to fix it ###
Try to invoke the desired operation again\. If the error persists, it needs to be fixed by the support team\.
## Unhandled exception of type \{\{type\}\} with \{\{message\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
This is an internal service issue\.
### How to fix it ###
Try to invoke the desired operation again\. If the error persists, it needs to be fixed by the support team\.
## Requested object could not be found\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The requested resource could not be found\.
### How to fix it ###
Ensure that you are referring to an existing resource\.
## Underlying database reported too many requests\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The user has sent too many requests in a given amount of time\.
### How to fix it ###
Wait a moment, and then try to invoke the desired operation again\.
## The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment\. It needs to be specified at least in one of the places\. ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The learning configuration does not contain all of the required information\.
### How to fix it ###
Provide the `definition` in the `learning configuration`\.
## Evaluation requires learning configuration specified for the model\. ##
### What's happening ###
It is not possible to create a `learning iteration`\.
### Why it's happening ###
There is no `learning configuration` defined for the model\.
### How to fix it ###
Create a `learning configuration` and try to create the `learning iteration` again\.
## Evaluation requires spark instance to be provided in `X-Spark-Service-Instance` header ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The `learning configuration` does not contain all of the required information\.
### How to fix it ###
Provide the `spark_service` in the learning configuration or in the `X-Spark-Service-Instance` header\.
## Model does not contain any version\. ##
### What's happening ###
It is not possible to create a deployment or to set a learning configuration\.
### Why it's happening ###
There is an inconsistency related to the persistence of the model\.
### How to fix it ###
Persist the model again, and then try to perform the action again\.
## Data module not found in IBM Federated Learning\. ##
### What's happening ###
The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it\. You might see the following error message:
ModuleNotFoundError: No module named 'ibmfl.util.datasets'
### Why it's happening ###
The DataHandler is possibly outdated\.
### How to fix it ###
Review and update your DataHandler to conform to the latest spec\. Here is the link to the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py)\. Also ensure that your sample versions are up\-to\-date\.
## Patch operation can only modify existing learning configuration\. ##
### What's happening ###
It is not possible to invoke the patch REST API method to patch the learning configuration\.
### Why it's happening ###
There is no `learning configuration` set for this model, or the model does not exist\.
### How to fix it ###
Ensure that the model exists and already has a learning configuration set\.
## Patch operation expects exactly one replace operation\. ##
### What's happening ###
The deployment cannot be patched\.
### Why it's happening ###
The patch payload contains more than one operation, or the patch operation is different from `replace`\.
### How to fix it ###
Use only one operation in the patch payload, and make sure that it is a `replace` operation, as shown in the sketch that follows\.
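For illustration, the following minimal Python sketch sends a patch payload that contains exactly one `replace` operation\. The endpoint path, the patched path, and the value are assumptions for the example, not the documented API\.
```python
import requests

token = "<access token>"  # obtain this from your authentication flow
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

# Exactly one operation, and it must be "replace"
patch_payload = [
    {"op": "replace", "path": "/status", "value": "stopped"},
]

response = requests.patch(
    "https://<your-service-endpoint>/<deployment-path>",  # hypothetical endpoint
    json=patch_payload,
    headers=headers,
)
print(response.status_code)
```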
## The given payload is missing required fields: FIELD or the values of the fields are corrupted\. ##
### What's happening ###
It is not possible to process the action that is related to access to the underlying data set\.
### Why it's happening ###
The access to the data set is not properly defined\.
### How to fix it ###
Correct the access definition for the data set\.
## Provided evaluation method: METHOD is not supported\. Supported values: VALUE\. ##
### What's happening ###
It is not possible to create the learning configuration\.
### Why it's happening ###
The wrong evaluation method was used to create the learning configuration\.
### How to fix it ###
Use a supported evaluation method, which is one of: `regression`, `binary`, `multiclass`\.
## There can be only one active evaluation per model\. Request could not be completed because of existing active evaluation: \{\{url\}\} ##
### What's happening ###
It is not possible to create another learning iteration\.
### Why it's happening ###
There can be only one running evaluation for the model\.
### How to fix it ###
Check the evaluation that is already running, or wait until it ends and then start a new one\.
## The deployment type \{\{type\}\} is not supported\. ##
### What's happening ###
It is not possible to create the deployment\.
### Why it's happening ###
An unsupported deployment type was used\.
### How to fix it ###
Use a supported deployment type\.
## Incorrect input: (\{\{message\}\}) ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
There is an issue with parsing the JSON\.
### How to fix it ###
Ensure that correct JSON is passed in the request\.
## Insufficient data \- metric \{\{name\}\} could not be calculated ##
### What's happening ###
The learning iteration has failed\.
### Why it's happening ###
The value for a metric with a defined threshold could not be calculated because of insufficient feedback data\.
### How to fix it ###
Review and improve the data in the `feedback_data_ref` data source in the `learning configuration`\.
## For type \{\{type\}\} spark instance must be provided in `X-Spark-Service-Instance` header ##
### What's happening ###
The deployment cannot be created\.
### Why it's happening ###
`batch` and `streaming` deployments require a Spark instance to be provided\.
### How to fix it ###
Provide the Spark instance in the `X-Spark-Service-Instance` header\.
## Action \{\{action\}\} has failed with message \{\{message\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
There was an issue with invoking the underlying service\.
### How to fix it ###
If the message contains a suggestion for how to fix the issue, follow it\. Contact the support team if there is no suggestion in the message or if the suggestion does not solve the issue\.
## Path `{{path}}` is not allowed\. Only allowed path for patch stream is `/status` ##
### What's happening ###
It is not possible to patch the stream deployment\.
### Why it's happening ###
The wrong path was used to patch the `stream` deployment\.
### How to fix it ###
Patch the `stream` deployment with the supported path option, which is `/status` (it allows you to start or stop stream processing)\.
## Patch operation is not allowed for instance of type `{{$type}}` ##
### What's happening ###
It is not possible to patch the deployment\.
### Why it's happening ###
The wrong deployment type is being patched\.
### How to fix it ###
Patch the `stream` deployment type\.
## Data connection `{{data}}` is invalid for feedback\_data\_ref ##
### What's happening ###
It is not possible to create a `learning configuration` for the model\.
### Why it's happening ###
An unsupported data source was used when defining `feedback_data_ref`\.
### How to fix it ###
Use only the supported data source type, which is `dashdb`\.
## Path \{\{path\}\} is not allowed\. Only allowed path for patch model is `/deployed_version/url` or `/deployed_version/href` for V2 ##
### What's happening ###
It is not possible to patch the model\.
### Why it's happening ###
The wrong path was used during patching of the model\.
### How to fix it ###
Patch the model with a supported path, which allows you to update the version of the deployed model\.
## Parsing failure: \{\{msg\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
The requested payload could not be parsed successfully\.
### How to fix it ###
Ensure that your request payload is correct and can be parsed correctly\.
## Runtime environment for selected model: \{\{env\}\} is not supported for `learning configuration`\. Supported environments: \[\{\{supported\_envs\}\}\]\. ##
### What's happening ###
It is not possible to create the `learning configuration`\.
### Why it's happening ###
The model for which you tried to create the `learning_configuration` is not supported\.
### How to fix it ###
Create the `learning configuration` for a model that has a supported runtime\.
## Current plan \\'\{\{plan\}\}\\' only allows \{\{limit\}\} deployments ##
### What's happening ###
It is not possible to create the deployment\.
### Why it's happening ###
The limit for the number of deployments was reached for the current plan\.
### How to fix it ###
Upgrade to a plan that does not have such a limitation\.
## Database connection definition is not valid (\{\{code\}\}) ##
### What's happening ###
It is not possible to utilize the `learning configuration` functionality\.
### Why it's happening ###
Database connection definition is not valid\.
### How to fix it ###
Try to fix the issue that is described by the `code` returned by the underlying database\.
## There were problems while connecting underlying \{\{system\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
There was an issue connecting to the underlying system\. It might be a temporary network issue\.
### How to fix it ###
Try to invoke the desired operation again\. If the error persists, contact the support team\.
## Error extracting X\-Spark\-Service\-Instance header: (\{\{message\}\}) ##
### What's happening ###
It is not possible to invoke a REST API that requires Spark credentials\.
### Why it's happening ###
There is an issue with base\-64 decoding or parsing Spark credentials\.
### How to fix it ###
Ensure that the correct Spark credentials were properly base\-64 encoded\. For more information, see the documentation\.
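As an illustration only, the following minimal Python sketch base\-64 encodes a Spark credentials document and passes it in the `X-Spark-Service-Instance` header\. The structure of the credentials JSON and the endpoint are assumptions, not the documented format; use the credentials document that comes with your Spark service instance\.
```python
import base64
import json

import requests

token = "<access token>"  # obtained from your authentication flow

# Hypothetical structure for illustration; use your real Spark service credentials document.
spark_credentials = {
    "credentials": {"tenant_id": "<tenant-id>", "instance_id": "<instance-id>"},
    "version": "2.1",
}
encoded = base64.b64encode(json.dumps(spark_credentials).encode("utf-8")).decode("utf-8")

headers = {
    "Authorization": f"Bearer {token}",
    "X-Spark-Service-Instance": encoded,  # must be valid base-64 of valid JSON
}
response = requests.post("https://<your-service-endpoint>/<api-path>", headers=headers, json={})
print(response.status_code)
```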
## This functionality is forbidden for non beta users\. ##
### What's happening ###
The desired REST API cannot be invoked successfully\.
### Why it's happening ###
The REST API that was invoked is currently in beta\.
### How to fix it ###
If you are interested in participating, add yourself to the wait list\. The details can be found in the documentation\.
## \{\{code\}\} \{\{message\}\} ##
### What's happening ###
The REST API cannot be invoked successfully\.
### Why it's happening ###
There was an issue with invoking the underlying service\.
### How to fix it ###
If the message contains a suggestion for how to fix the issue, follow it\. Contact the support team if there is no suggestion in the message or if the suggestion does not solve the issue\.
## Rate limit exceeded\. ##
### What's happening ###
Rate limit exceeded\.
### Why it's happening ###
The rate limit for the current plan has been exceeded\.
### How to fix it ###
To solve this problem, acquire another plan with a greater rate limit\.
## Invalid query parameter `{{paramName}}` value: \{\{value\}\} ##
### What's happening ###
A validation error occurred because an incorrect value was passed for the query parameter\.
### Why it's happening ###
The result for the query could not be retrieved\.
### How to fix it ###
Correct the query parameter value\. The details can be found in the documentation\.
## Invalid token type: \{\{type\}\} ##
### What's happening ###
An error occurred regarding the token type\.
### Why it's happening ###
There is an error in authorization\.
### How to fix it ###
The token should start with the `Bearer` prefix\.
## Invalid token format\. Bearer token format should be used\. ##
### What's happening ###
An error occurred regarding the token format\.
### Why it's happening ###
There is an error in authorization\.
### How to fix it ###
The token should be a bearer token and should start with the `Bearer` prefix, as shown in the sketch that follows\.
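As a minimal example, the following Python sketch shows the expected `Authorization` header format with the `Bearer` prefix\. The endpoint URL is a placeholder, not a documented path\.
```python
import requests

token = "<access token>"  # an IAM or service token obtained from your authentication flow
headers = {
    "Authorization": f"Bearer {token}",  # note the required "Bearer " prefix
    "Content-Type": "application/json",
}
response = requests.get("https://<your-service-endpoint>/<api-path>", headers=headers)
print(response.status_code)
```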
## Input JSON file is missing or invalid: 400 ##
### What's happening ###
The following message displays when you try to score online: **Input JSON file is missing or invalid**\.
### Why it's happening ###
This message displays when the scoring input payload doesn't match the expected input type that is required for scoring the model\. Specifically, the following reasons may apply:
<!-- <ul> -->
* The input payload is empty\.
* The input payload schema is not valid\.
* The input data types do not match the expected data types\.
<!-- </ul> -->
### How to fix it ###
Correct the input payload\. Make sure that the payload has correct syntax, a valid schema, and proper data types\. After you make corrections, try to score online again\. For syntax issues, verify the JSON file by using the `jsonlint` command\.
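As an additional quick local check, you can parse the file with Python's built\-in `json` module, as in the following sketch (the file name is hypothetical)\.
```python
import json

# Quick syntax check of the scoring payload before retrying online scoring.
with open("scoring_payload.json") as f:  # hypothetical file name
    try:
        json.load(f)
        print("Payload is syntactically valid JSON")
    except json.JSONDecodeError as err:
        print(f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
```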
## Authorization token has expired: 401 ##
### What's happening ###
The following message displays when you try to score online: **Authorization failed**\.
### Why it's happening ###
This message displays when the token that is used for scoring has expired\.
### How to fix it ###
Re\-generate the token for this IBM Watson Machine Learning instance and then retry\. If you still see this issue, contact IBM Support\.
## Unknown deployment identification:404 ##
### What's happening ###
The following message displays when you try to score online: **Unknown deployment identification**\.
### Why it's happening ###
This message displays when the deployment ID that is used for scoring does not exist\.
### How to fix it ###
Make sure you are providing the correct deployment ID\. If not, deploy the model with the deployment ID and then try scoring it again\.
## Internal server error:500 ##
### What's happening ###
The following message displays when you try to score online: **Internal server error**
### Why it's happening ###
This message displays if the downstream data flow on which the online scoring depends fails\.
### How to fix it ###
After waiting for a period of time, try to score online again\. If it fails again, contact IBM Support\.
## Invalid type for ml\_artifact: Pipeline ##
### What's happening ###
The following message displays when you try to publish a Spark model by using the Common API client library on your workstation\.
### Why it's happening ###
This message displays if you have an invalid pyspark setup in your operating system\.
### How to fix it ###
Set up the system environment paths as follows:
SPARK_HOME={installed_spark_path}
JAVA_HOME={installed_java_path}
PYTHONPATH=$SPARK_HOME/python/
## ValueError: Training\_data\_ref name and connection cannot be None, if Pipeline Artifact is not given\. ##
### What's happening ###
The training data set is missing or has not been properly referenced\.
### Why it's happening ###
The Pipeline Artifact is a training data set in this instance\.
### How to fix it ###
When you persist a Spark PipelineModel, you must supply a training data set\. If you don't, the client reports that it doesn't support PipelineModels, rather than saying that a PipelineModel must be accompanied by the training data set\.
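The following minimal sketch shows one way to supply the training data when storing the PipelineModel, assuming the legacy `watson-machine-learning-client` Python library\. Parameter names can differ between client versions, so treat it as illustrative only\.
```python
from watson_machine_learning_client import WatsonMachineLearningAPIClient

client = WatsonMachineLearningAPIClient(wml_credentials)  # wml_credentials defined elsewhere

model_props = {client.repository.ModelMetaNames.NAME: "My Spark pipeline model"}

stored_model = client.repository.store_model(
    model=pipeline_model,    # the fitted Spark PipelineModel
    meta_props=model_props,
    training_data=train_df,  # the Spark DataFrame that was used for training
    pipeline=pipeline,       # the (unfitted) Pipeline definition
)
```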
<!-- </article "role="article" "> -->
|
4A7F60F563F15CC32060C5F17CB44699A221AD5E | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html?context=cdpaas&locale=en | IBM Cloud services status | IBM Cloud services status
If you're having a problem with one of your services, go to the IBM Cloud Status page. The Status page shows unplanned incidents, planned maintenance, announcements, and security bulletin notifications about key events that affect the IBM Cloud platform, infrastructure, and major services.
You can find the Status page by logging in to the IBM Cloud console. Click Support from the menu bar, and then click View cloud status from the Support Center. Or, you can access the page directly at [IBM Cloud - Status](https://cloud.ibm.com/status?type=incident&component=ibm-cloud-platform&selected=status). Search for the service to view its status.
Learn more
[Viewing cloud status](https://cloud.ibm.com/docs/get-support?topic=get-support-viewing-cloud-status)
| # IBM Cloud services status #
If you're having a problem with one of your services, go to the IBM Cloud Status page\. The Status page shows unplanned incidents, planned maintenance, announcements, and security bulletin notifications about key events that affect the IBM Cloud platform, infrastructure, and major services\.
You can find the Status page by logging in to the IBM Cloud console\. Click **Support** from the menu bar, and then click **View cloud status** from the Support Center\. Or, you can access the page directly at [IBM Cloud \- Status](https://cloud.ibm.com/status?type=incident&component=ibm-cloud-platform&selected=status)\. Search for the service to view its status\.
## Learn more ##
[Viewing cloud status](https://cloud.ibm.com/docs/get-support?topic=get-support-viewing-cloud-status)
<!-- </article "role="article" "> -->
|
5A6081124D93ACD0A12843F64984257A02BB3871 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html?context=cdpaas&locale=en | Troubleshooting connections | Troubleshooting connections
Use these solutions to resolve problems that you might encounter with connections.
IBM Db2 for z/OS: Error retrieving the schema list when you try to connect to a Db2 for z/OS server
When you test the connection to a Db2 for z/OS server and the connection cannot retrieve the schema list, you might receive the following error:
CDICC7002E: The assets request failed: CDICO2064E: The metadata for the column TABLE_SCHEM could not
be obtained: Sql error: [jcc] [10300] Invalid parameter: Unknown column name
TABLE_SCHEM. ERRORCODE=-4460, SQLSTATE=null
Workaround: On the Db2 for z/OS server, set the DESCSTAT subsystem parameter to No. For more information, see [DESCRIBE FOR STATIC field (DESCSTAT subsystem parameter)](https://www.ibm.com/docs/SSEPEK_13.0.0/inst/src/tpc/db2z_ipf_descstat.html).
Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
| # Troubleshooting connections #
Use these solutions to resolve problems that you might encounter with connections\.
## IBM Db2 for z/OS: Error retrieving the schema list when you try to connect to a Db2 for z/OS server ##
When you test the connection to a Db2 for z/OS server and the connection cannot retrieve the schema list, you might receive the following error:
CDICC7002E: The assets request failed: CDICO2064E: The metadata for the column TABLE_SCHEM could not
be obtained: Sql error: [jcc] [10300] Invalid parameter: Unknown column name
TABLE_SCHEM. ERRORCODE=-4460, SQLSTATE=null
**Workaround:** On the Db2 for z/OS server, set the **DESCSTAT** subsystem parameter to `No`\. For more information, see [DESCRIBE FOR STATIC field (DESCSTAT subsystem parameter)](https://www.ibm.com/docs/SSEPEK_13.0.0/inst/src/tpc/db2z_ipf_descstat.html)\.
**Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
<!-- </article "role="article" "> -->
|
5D1BCA52E974C3F4DE54366A242DF751E73ACBD2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en | Troubleshooting Cloud Object Storage for projects | Troubleshooting Cloud Object Storage for projects
Use these solutions to resolve issues you might experience when using Cloud Object Storage with projects in IBM watsonx. Many errors that occur when creating projects can be resolved by correctly configuring Cloud Object Storage. For instructions, see [Setting up Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html).
Possible error messages:
* [Error retrieving Administrator API key token for your Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enkey-token)
* [Unable to configure credentials for your project in the selected Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=encredentials)
* [User login from given IP address is not permitted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enrestricted-ip)
* [Project cannot be created](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enproject-failed)
Cannot retrieve API key
Symptoms
When you create a project, the following error occurs:
Error retrieving Administrator API key token for your Cloud Object Storage instance
Possible Causes
* You have not been assigned the Editor role in the IBM Cloud account.
Possible Resolutions
The account administrator must complete the following tasks:
* Invite users to the IBM Cloud account and assign the Editor role. See [Add non-administrative users to your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.htmlusers).
Unable to configure credentials
Symptoms
When you create a project and associate it to a Cloud Object Storage instance, the following error occurs:
Unable to configure credentials for your project in the selected Cloud Object Storage instance.
Possible Causes
* You have exceeded the access policy limit for the account.
* For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance.
Possible Resolutions
For exceeding access policies:
1. Verify that you are the owner of the Cloud Object Storage instance or that the owner has granted you Administrator and Manager roles for this service instance. Otherwise, ask your IBM Cloud administrator to fix this problem.
2. Check the total number of access policies to determine whether you have reached a limit. See [IBM Cloud IAM limits](https://cloud.ibm.com/docs/account?topic=account-known-issuesiam_limits) for the limit information.
3. Delete at least 4 or more unused access policies for the service ID.
See [Reducing time and effort managing access](https://cloud.ibm.com/docs/account?topic=account-account_setuplimit-policies) for strategies that you can use to ensure that you don't reach the limit.
For exceeding 25 GB limit for a Lite account:
For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance. Possible resolutions are to upgrade to a billable account, delete stored assets for the current account, or wait until the first of the month when the limit resets. See [Set up a billable account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.htmlpaid-account).
Login not permitted from IP address
Symptoms
When you create or work with a project, the following error occurs:
User login from given IP address is not permitted. The user has configured IP address restriction for login. The given IP address 'XX.XXX.XXX.XX' is not contained in the list of allowed IP addresses.
Possible Causes
Restrict IP address access has been configured to allow specific IP addresses access to Watson Studio. The IP address of the computer you are using is not allowed.
Possible Resolutions
Add the IP address to the allowed IP addresses, if your security qualifications allow it. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses).
Project cannot be created
Symptoms
When you create a project, the following error occurs:
Project cannot be created.
Possible Causes
The Cloud Object Storage instance is not available because the Global location is not enabled for your services. Cloud Object Storage requires the Global location.
Possible Resolutions
Enable the Global location in your account profile. From your account, click your avatar and select Profile and settings to open your IBM watsonx profile. Under Service Filters > Locations, check the Global location as well as other locations where services are present. See [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlprofile).
| # Troubleshooting Cloud Object Storage for projects #
Use these solutions to resolve issues you might experience when using Cloud Object Storage with projects in IBM watsonx\. Many errors that occur when creating projects can be resolved by correctly configuring Cloud Object Storage\. For instructions, see [Setting up Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)\.
Possible error messages:
<!-- <ul> -->
* [Error retrieving Administrator API key token for your Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#key-token)
* [Unable to configure credentials for your project in the selected Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#credentials)
* [User login from given IP address is not permitted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#restricted-ip)
* [Project cannot be created](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=en#project-failed)
<!-- </ul> -->
## Cannot retrieve API key ##
### Symptoms ###
When you create a project, the following error occurs:
Error retrieving Administrator API key token for your Cloud Object Storage instance
### Possible Causes ###
<!-- <ul> -->
* You have not been assigned the **Editor** role in the IBM Cloud account\.
<!-- </ul> -->
### Possible Resolutions ###
The account administrator must complete the following tasks:
<!-- <ul> -->
* Invite users to the IBM Cloud account and assign the **Editor** role\. See [Add non\-administrative users to your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html#users)\.
<!-- </ul> -->
## Unable to configure credentials ##
### Symptoms ###
When you create a project and associate it to a Cloud Object Storage instance, the following error occurs:
Unable to configure credentials for your project in the selected Cloud Object Storage instance.
### Possible Causes ###
<!-- <ul> -->
* You have exceeded the access policy limit for the account\.
* For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance\.
<!-- </ul> -->
### Possible Resolutions ###
**For exceeding access policies:**
<!-- <ol> -->
1. Verify that you are the owner of the Cloud Object Storage instance or that the owner has granted you **Administrator** and **Manager** roles for this service instance\. Otherwise, ask your IBM Cloud administrator to fix this problem\.
2. Check the total number of access policies to determine whether you have reached a limit\. See [IBM Cloud IAM limits](https://cloud.ibm.com/docs/account?topic=account-known-issues#iam_limits) for the limit information\.
3. Delete at least 4 or more unused access policies for the service ID\.
<!-- </ol> -->
See [Reducing time and effort managing access](https://cloud.ibm.com/docs/account?topic=account-account_setup#limit-policies) for strategies that you can use to ensure that you don't reach the limit\.
**For exceeding 25 GB limit for a Lite account:**
For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance\. Possible resolutions are to upgrade to a billable account, delete stored assets for the current account, or wait until the first of the month when the limit resets\. See [Set up a billable account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html#paid-account)\.
## Login not permitted from IP address ##
### Symptoms ###
When you create or work with a project, the following error occurs:
User login from given IP address is not permitted. The user has configured IP address restriction for login. The given IP address 'XX.XXX.XXX.XX' is not contained in the list of allowed IP addresses.
### Possible Causes ###
**Restrict IP address access** has been configured to allow specific IP addresses access to Watson Studio\. The IP address of the computer you are using is not allowed\.
### Possible Resolutions ###
Add the IP address to the allowed IP addresses, if your security qualifications allow it\. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html#allow-specific-ip-addresses)\.
## Project cannot be created ##
### Symptoms ###
When you create a project, the following error occurs:
Project cannot be created.
### Possible Causes ###
The Cloud Object Storage instance is not available because the **Global** location is not enabled for your services\. Cloud Object Storage requires the **Global** location\.
### Possible Resolutions ###
Enable the **Global** location in your account profile\. From your account, click your avatar and select **Profile and settings** to open your IBM watsonx profile\. Under **Service Filters > Locations**, check the **Global** location as well as other locations where services are present\. See [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#profile)\.
<!-- </article "role="article" "> -->
|
3E24051D290E000441A4FDB326D73BB81505BD05 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot.html?context=cdpaas&locale=en | Troubleshooting | Troubleshooting
If you encounter an issue in IBM watsonx, use the following resources to resolve the problem.
* [View IBM Cloud service status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html)
* [Troubleshoot connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html)
* [Troubleshoot Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html)
* [Troubleshoot Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html)
* [Troubleshoot IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html)
* [Troubleshoot Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html)
* [Troubleshoot Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html)
* [Troubleshoot Watson Studio on IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html)
* [Known issues](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html)
* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)
| # Troubleshooting #
If you encounter an issue in IBM watsonx, use the following resources to resolve the problem\.
<!-- <ul> -->
* [View IBM Cloud service status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html)
* [Troubleshoot connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html)
* [Troubleshoot Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html)
* [Troubleshoot Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html)
* [Troubleshoot IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html)
* [Troubleshoot Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html)
* [Troubleshoot Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html)
* [Troubleshoot Watson Studio on IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html)
* [Known issues](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html)
* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
D0907278CA0EA55B0E0ED9E834810D502A817AF0 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html?context=cdpaas&locale=en | Troubleshooting Synthetic Data Generator | Troubleshooting Synthetic Data Generator
Use this information to resolve questions about using Synthetic Data Generator.
Typeless columns ignored for an Import node
When you use an Import node that contains Typeless columns, these columns will be ignored when you use the Mimic node. After pressing the Read Values button, the Typeless columns will be automatically set to Pass and will not be present in the final dataset.
Suggested workaround:
Add a new column in the Generate node for the missing column(s).
Size limit notice
The Synthetic Data Generator environment can import up to 2.5GB of data.
Suggested workaround:
If you receive a related error message or your data fails to import, please reduce the amount of data and try again.
Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string
For example, preview of data asset using Import node gives the following error:
Node:
Import
WDP Connector Error: CDICO9999E: Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column.
This is expected behavior. In this particular case, the first 1,000 rows were binary (0s or 1s). The value at row 1,029 was 3. For most flat files, Synthetic Data Generator reads the first 1,000 records to infer the data type. In this case, Synthetic Data Generator inferred binary values (0 or 1). When Synthetic Data Generator read a value of 3 at row 1,029, it threw an error, because 3 is not a binary value.
Suggested workarounds:
1. Users can adjust their Infer_record_count parameter to include more data, choosing 2000 rows instead (or more).
2. Users can update the value in the first 1000 rows that is causing the error, if this is an error in the data.
Error Mimic Data set no available input record.
The Mimic node requires the input dataset to have at least one valid record (a record without any missing values). If your dataset is empty, or if the dataset does not contain at least one valid record, clicking Run selection gives the following error message:
Node:
Mimic
Mimic Data set no available input record.
Suggested workarounds:
1. Fix your dataset so that there is at least one record (row) that contains a value for every column and then try again.
2. Click Read values from the Import node and run your flow again. 
Error: Incorrect number of fields detected in the server data model. or WDP Connector Execution Error
Creating a new flow using a .synth file, then doing a migration of the Import node with a newly uploaded file to the project, and then running the flow, gives one or both of the following errors:
Error: Incorrect number of fields detected in the server data model.
or
WDP Connector Execution Error
This error is caused by using different data sets (and therefore different data models) when creating the flow and when migrating the data.
Suggested workaround:
Run the Mimic node that creates the Generate node a second time.
Error: Valid variable does not exist in metadata
Doing a migration of the Import node and then running the flow fails and gives the error:
Error: Valid variable does not exist in metadata
Suggested workaround:
Make sure that in your Import node you have at least one field that is not Typeless. For example, in the screen capture below, the only field in the Import node is Typeless. At least one field that is not Typeless should be added to the Import node to avoid this error. 
| # Troubleshooting Synthetic Data Generator #
Use this information to resolve questions about using *Synthetic Data Generator*\.
## Typeless columns ignored for an Import node ##
When you use an **Import** node that contains **Typeless** columns, these columns will be ignored when you use the **Mimic** node\. After pressing the **Read Values** button, the **Typeless** columns will be automatically set to **Pass** and will not be present in the final dataset\.
Suggested workaround:
Add a new column in the **Generate** node for the missing column(s)\.
## Size limit notice ##
The *Synthetic Data Generator* environment can import up to ~2\.5GB of data\.
Suggested workaround:
If you receive a related error message or your data fails to import, please reduce the amount of data and try again\.
## Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string ##
For example, preview of data asset using **Import** node gives the following error:
Node:
Import
WDP Connector Error: CDICO9999E: Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column.
This is expected behavior\. In this particular case, the first 1,000 rows were binary (0s or 1s)\. The value at row 1,029 was 3\. For most flat files, *Synthetic Data Generator* reads the first 1,000 records to infer the data type\. In this case, *Synthetic Data Generator* inferred binary values (0 or 1)\. When *Synthetic Data Generator* read a value of 3 at row 1,029, it threw an error, because 3 is not a binary value\.
Suggested workarounds:
<!-- <ol> -->
1. Users can adjust their `Infer_record_count` parameter to include more data, choosing 2000 rows instead (or more)\.
2. Users can update the value in the first 1000 rows that is causing the error, if this is an error in the data\.
<!-- </ol> -->
## Error Mimic Data set no available input record\. ##
The **Mimic** node requires the input dataset to have at least one valid record (a record without any missing values)\. If your dataset is empty, or if the dataset does not contain at least one valid record, clicking **Run selection** gives the following error message:
Node:
Mimic
Mimic Data set no available input record.
Suggested workarounds:
<!-- <ol> -->
1. Fix your dataset so that there is at least one record (row) that contains a value for every column and then try again\.
2. Click **Read values** from the **Import** node and run your flow again\. 
<!-- </ol> -->
## Error: Incorrect number of fields detected in the server data model\. or WDP Connector Execution Error ##
Creating a new flow using a `.synth` file, then doing a migration of the **Import** node with a newly uploaded file to the project, and then running the flow, gives one or both of the following errors:
Error: Incorrect number of fields detected in the server data model.
or
WDP Connector Execution Error
This error is caused by using different data sets (and therefore different data models) when creating the flow and when migrating the data\.
Suggested workaround:
Run the **Mimic** node that creates the **Generate** node a second time\.
## Error: Valid variable does not exist in metadata ##
Doing a migration of the **Import** node and then running the flow fails and gives the error:
Error: Valid variable does not exist in metadata
Suggested workaround:
Make sure that in your **Import** node you have at least one field that is not **Typeless**\. For example, in the screen capture below, the only field in the **Import** node is **Typeless**\. At least one field that is not **Typeless** should be added to the **Import** node to avoid this error\. 
<!-- </article "role="article" "> -->
|
0B35E778B109957EE1CC48FA8E46ED7A1633E380 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en | Troubleshooting Watson OpenScale | Troubleshooting Watson OpenScale
You can use the following techniques to work around problems with IBM Watson OpenScale.
* [When I use AutoAI, why am I getting an error about mismatched data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-autoai-binary)
* [Why am I getting errors during model configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-xgboost-wml-model-details)
* [Why are my class labels missing when I use XGBoost?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-xgboost-multiclass)
* [Why are the payload analytics not displaying properly?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-payloadfileformat)
* [Error: An error occurred while computing feature importance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-wos-equals-sign-explainability)
* [Why are some of my active debias records missing?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-payloadlogging-1000k-limit)
* [Watson OpenScale does not show any available schemas](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-available-schemas)
* [A monitor run fails with an OutOfResources exception error message](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-resources-exception)
When I use AutoAI, why am I getting an error about mismatched data?
You receive an error message about mismatched data when using AutoAI for binary classification. Note that AutoAI is only supported in IBM Watson OpenScale for IBM Cloud Pak for Data.
For binary classification type, AutoAI automatically sets the data type of the prediction column to boolean.
To fix this, implement one of the following solutions:
* Change the label column values in the training data to integer values, such as 0 or 1 depending on the outcome.
* Change the label column values in the training data to string value, such as A and B.
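For example, the following minimal pandas sketch maps a boolean label column to integer values; the file and column names are assumptions for illustration only.
import pandas as pd

df = pd.read_csv("training_data.csv")                # hypothetical training data file
df["label"] = df["label"].map({False: 0, True: 1})   # or use string labels such as "A" and "B"
df.to_csv("training_data_fixed.csv", index=False)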
Why am I getting errors during model configuration?
The following error messages appear when you are configuring model details: Field feature_fields references column <name>, which is missing in input_schema of the model. Feature not found in input schema.
The preceding messages while completing the Model details section during configuration indicate a mismatch between the model input schema and the model training data schema:
To fix the issue, you must determine which of the following conditions is causing the error and take corrective action: If you use IBM Watson Machine Learning as your machine learning provider and the model type is XGBoost/scikit-learn, refer to the Machine Learning [Python SDK documentation](https://ibm.github.io/watson-machine-learning-sdk/repository) for important information about how to store the model. To generate the drift detection model, you must use scikit-learn version 0.20.2 in notebooks. For all other cases, you must ensure that the training data column names match the input schema column names.
Why are my class labels missing when I use XGBoost?
Native XGBoost multiclass classification does not return class labels.
By default, for binary and multiple class models, the XGBoost framework does not return class labels.
For XGBoost binary and multiple class models, you must update the model to return class labels.
Why are the payload analytics not displaying properly?
Payload analytics does not display properly and the following error message displays: AIQDT0044E Forbidden character " in column name <column name>
For proper processing of payload analytics, Watson OpenScale does not support column names with double quotation marks (") in the payload. This affects both scoring payload and feedback data in CSV and JSON formats.
Remove double quotation marks (") from the column names of the payload file.
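For example, the following minimal pandas sketch strips double quotation marks from the column names before the payload is logged; the file name is an assumption for illustration.
import pandas as pd

df = pd.read_csv("payload.csv")                              # hypothetical payload file
df.columns = [name.replace('"', '') for name in df.columns]  # remove double quotation marks
df.to_csv("payload_clean.csv", index=False)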
Error: An error occurred while computing feature importance
You receive the following error message during processing: Error: An error occurred while computing feature importance.
Having an equals sign (=) in the column name of a dataset causes an issue with explainability.
Remove the equals sign (=) from the column name and send the dataset through processing again.
Why are some of my active debias records missing?
Active debias records do not reach the payload logging table.
When you use the active debias API, there is a limit of 1000 records that can be sent at one time for payload logging.
To avoid loss of data, you must use the active debias API to score in chunks of 1000 records or fewer.
For more information, see [Reviewing debiased transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-timechart.html).
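For example, the following minimal Python sketch splits the records into chunks of at most 1000 before calling your own debias scoring function; the scoring call itself is a placeholder, not the documented API.
def score_in_chunks(records, score_chunk, chunk_size=1000):
    # score_chunk is your own function that sends one batch to the active debias API
    results = []
    for start in range(0, len(records), chunk_size):
        chunk = records[start:start + chunk_size]  # at most 1000 records per request
        results.extend(score_chunk(chunk))
    return results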
Watson OpenScale does not show any available schemas
When a user attempts to retrieve schema information for Watson OpenScale, none are available. Checking directly in Db2, without reference to Watson OpenScale, also returns no available schemas for the database userid.
Insufficient permissions for the database userid are causing database connection issues for Watson OpenScale.
Make sure the database user has the correct permissions needed for Watson OpenScale.
A monitor run fails with an OutOfResources exception error message
You receive an OutOfResources exception error message.
There is no longer a limit on the number of rows that you can have in the feedback payload, scoring payload, or business payload tables. However, the 50,000-record limit now applies to the number of records that you can run through the quality and bias monitors each billing period.
After you reach your limit, you must either upgrade to a Standard plan or wait for the next billing period.
Missing deployments
A deployed model does not show up as a deployment that can be selected to create a subscription.
There are different reasons that a deployment does not show up in the list of available deployed models. If the model is not a supported type of model because it uses an unsupported algorithm or framework, it won't appear. Your machine learning provider might not be configured properly. It could also be that there are issues with permissions.
Use the following steps to resolve this issue:
1. Check that the model is a supported type. Not sure? For more information, see [Supported machine learning engines, frameworks, and models](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html).
2. Check that a machine learning provider exists in the Watson OpenScale configuration for the specific deployment space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
3. Check that the CP4D admin user has permission to access the deployment space.
Watson OpenScale evaluation might fail due to large number of subscriptions
If a Watson OpenScale instance contains too many subscriptions, such as 100 subscriptions, your quality evaluations might fail. You can view the details of the failure in the log for the data mart service pod that displays the following error message:
"Failure converting response to expected model EntityStreamSizeException: actual entity size (Some(8644836)) exceeded content length limit (8388608 bytes)! You can configure this by setting akka.http.[server|client].parsing.max-content-length or calling HttpEntity.withSizeLimit before materializing the dataBytes stream".
You can use the oc get pod -l component=aios-datamart command to find the name of the pod. You can also use the oc logs <pod name> command to view the log for the pod.
To fix this error, you can use the following command to increase the maximum request body size by editing the "ADDITIONAL_JVM_OPTIONS" environment variable:
oc patch woservice <release name> -p '{"spec": {"datamart": {"additional_jvm_options":"-Dakka.http.client.parsing.max-content-length=100m"} }}' --type=merge
The release name is "aiopenscale" if you don't customize the release name when you install Watson OpenScale.
Microsoft Azure ML Studio
* Of the two types of Azure Machine Learning web services, only the New type is supported by Watson OpenScale. The Classic type is not supported.
* Default input name must be used: In the Azure web service, the default input name is "input1". Currently, this field is mandated for Watson OpenScale and, if it is missing, Watson OpenScale will not work.
If your Azure web service does not use the default name, change the input field name to "input1", then redeploy your web service and reconfigure your OpenScale machine learning provider settings.
* If calls to Microsoft Azure ML Studio to list the machine learning models cause the response to time out, for example when you have many web services, you must increase the timeout values. You might need to work around this issue by changing the /etc/haproxy/haproxy.cfg configuration setting:
* Log in to the load balancer node and update /etc/haproxy/haproxy.cfg to set the client and server timeout from 1m to 5m:
timeout client 5m
timeout server 5m
* Run systemctl restart haproxy to restart the HAProxy load balancer.
If you are using a load balancer other than HAProxy, you might need to adjust the timeout values in a similar fashion.
Uploading feedback data fails in production subscription after importing settings
After importing the settings from your pre-production space to your production space, you might have problems uploading feedback data. This happens when the datatypes do not match precisely. When you import settings, the feedback table references the payload table for its column types. You can avoid this issue by making sure that the payload data has the most precise value type first. For example, you must prioritize a double datatype over an integer datatype.
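For example, in the following hypothetical payload sketch, the first logged record uses a double value (2.0) for a numeric field, so the column type is inferred as double and later integer values still fit; the field names are assumptions for illustration.
payload_rows = [
    {"age": 41, "amount": 2.0},  # first record: double value, so the column type is double
    {"age": 35, "amount": 3},    # later records can contain integer values safely
]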
Microsoft Azure Machine Learning Service
When you perform model evaluation, you might encounter issues where Watson OpenScale is not able to communicate with Azure Machine Learning Service when it needs to invoke deployment scoring endpoints. Security tools that enforce your enterprise security policies, such as Symantec Blue Coat, might prevent such access.
Watson OpenScale fails to create a new Hive table for the batch deployment subscription
When you choose to create a new Apache Hive table with the Parquet format during your Watson OpenScale batch deployment configuration, the following error might occur:
Attribute name "table name" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.;
This error occurs if Watson OpenScale fails to run the CREATE TABLE SQL operation due to white space in a column name. To avoid this error, you can remove any white space from your column names or change the Apache Hive format to csv.
Watson OpenScale setup might fail with default Db2 database
When you set up Watson OpenScale and specify the default Db2 database, the setup might fail to complete.
To fix this issue, you must run the following command in Cloud Pak for Data to update Db2:
db2 update db cfg using DFT_EXTENT_SZ 32
After you run the command, you must create a new Db2 database to set up Watson OpenScale.
Parent topic:[Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot.html)
| # Troubleshooting Watson OpenScale #
You can use the following techniques to work around problems with IBM Watson OpenScale\.
<!-- <ul> -->
* [When I use AutoAI, why am I getting an error about mismatched data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-autoai-binary)
* [Why am I getting errors during model configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-xgboost-wml-model-details)
* [Why are my class labels missing when I use XGBoost?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-xgboost-multiclass)
* [Why are the payload analytics not displaying properly?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-payloadfileformat)
* [Error: An error occurred while computing feature importance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-wos-equals-sign-explainability)
* [Why are some of my active debias records missing?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-trouble-common-payloadlogging-1000k-limit)
* [Watson OpenScale does not show any available schemas](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-available-schemas)
* [A monitor run fails with an `OutOfResources exception` error message](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=en#ts-resources-exception)
<!-- </ul> -->
## When I use AutoAI, why am I getting an error about mismatched data? ##
You receive an error message about mismatched data when using AutoAI for binary classification\. Note that AutoAI is only supported in IBM Watson OpenScale for IBM Cloud Pak for Data\.
For binary classification type, AutoAI automatically sets the data type of the prediction column to boolean\.
To fix this, implement one of the following solutions:
<!-- <ul> -->
* Change the label column values in the training data to integer values, such as `0` or `1`, depending on the outcome (a minimal sketch follows this list)\.
* Change the label column values in the training data to string values, such as `A` and `B`\.
<!-- </ul> -->
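For example, the following minimal sketch (assuming a pandas DataFrame named `training_df` with a hypothetical `label` column) maps boolean outcomes to integer values before you train with AutoAI:

```python
# Minimal sketch: map boolean label values to 0/1 integers before training with AutoAI.
# "training_df" and its "label" column are hypothetical placeholders for your data.
import pandas as pd

training_df = pd.DataFrame({"feature": [0.2, 0.7, 0.4],
                            "label":   [True, False, True]})

training_df["label"] = training_df["label"].map({True: 1, False: 0})
# Alternatively, map to string values: .map({True: "A", False: "B"})
print(training_df.dtypes)
```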
## Why am I getting errors during model configuration? ##
The following error messages appear when you are configuring model details: **Field `feature_fields` references column `<name>`, which is missing in `input_schema` of the model\. Feature not found in input schema\.**
These messages, which appear while you complete the **Model details** section during configuration, indicate a mismatch between the model input schema and the model training data schema\.
To fix the issue, you must determine which of the following conditions is causing the error and take corrective action:

<!-- <ul> -->

 * If you use IBM Watson Machine Learning as your machine learning provider and the model type is XGBoost/scikit\-learn, refer to the Machine Learning [Python SDK documentation](https://ibm.github.io/watson-machine-learning-sdk/#repository) for important information about how to store the model\. To generate the drift detection model, you must use scikit\-learn version 0\.20\.2 in notebooks\.
 * For all other cases, ensure that the training data column names match the input schema column names (a quick comparison sketch follows this list)\.

<!-- </ul> -->
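The following minimal sketch compares the training data column names with the input schema field names; the DataFrame and `input_schema_fields` values are hypothetical:

```python
# Minimal sketch: compare training data column names with the model input schema
# field names. The DataFrame and "input_schema_fields" are hypothetical; in practice,
# load your training data and copy the field names from your model's stored metadata.
import pandas as pd

training_df = pd.DataFrame({"AGE": [35], "INCOME": [42000], "MARITALSTATUS": ["single"]})
input_schema_fields = ["AGE", "INCOME", "MARITAL_STATUS"]

print("In input schema but not in training data:",
      set(input_schema_fields) - set(training_df.columns))
print("In training data but not in input schema:",
      set(training_df.columns) - set(input_schema_fields))
```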
## Why are my class labels missing when I use XGBoost? ##
Native XGBoost multiclass classification does not return class labels\.
By default, for binary and multiclass models, the XGBoost framework does not return class labels\.
For XGBoost binary and multiclass models, you must update the model to return class labels\.
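For example, if you use the native XGBoost API, one approach is to map the predicted class probabilities back to labels before you return or log predictions. The following is a minimal sketch; the label values and probabilities are hypothetical stand-ins for your own model output:

```python
# Minimal sketch: convert native XGBoost multiclass probability output to class labels.
# The label values are hypothetical; use the labels from your own training data.
import numpy as np

labels = np.array(["low", "medium", "high"])   # hypothetical class labels

# probabilities = booster.predict(dmatrix)     # native Booster output: (n_rows, n_classes)
probabilities = np.array([[0.1, 0.7, 0.2],     # stand-in for the booster output
                          [0.8, 0.1, 0.1]])

predicted_labels = labels[np.argmax(probabilities, axis=1)]
print(predicted_labels)                        # ['medium' 'low']
```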
## Why are the payload analytics not displaying properly? ##
Payload analytics does not display properly and the following error message displays: **AIQDT0044E Forbidden character `"` in column name `<column name>`**
For proper processing of payload analytics, Watson OpenScale does not support column names with double quotation marks (") in the payload\. This affects both scoring payload and feedback data in CSV and JSON formats\.
Remove double quotation marks (") from the column names of the payload file\.
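For example, a minimal pandas sketch (with hypothetical column names) that strips double quotation marks from column names before the data is sent:

```python
# Minimal sketch (pandas, hypothetical column names): strip double quotation marks
# from column names before sending scoring payload or feedback data.
import pandas as pd

df = pd.DataFrame({'"age"': [34], 'income': [52000]})
df.columns = [name.replace('"', '') for name in df.columns]
print(list(df.columns))   # ['age', 'income']
```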
## Error: An error occurred while computing feature importance ##
You receive the following error message during processing: `Error: An error occurred while computing feature importance`\.
Having an equals sign (=) in the column name of a dataset causes an issue with explainability\.
Remove the equals sign (=) from the column name and send the dataset through processing again\.
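For example, a minimal pandas sketch (with a hypothetical column name) that renames a column containing an equals sign before the dataset is processed again:

```python
# Minimal sketch (pandas, hypothetical column name): rename a column that contains
# an equals sign before running explainability.
import pandas as pd

df = pd.DataFrame({"status=active": [1, 0], "age": [34, 51]})
df = df.rename(columns={"status=active": "status_active"})
print(list(df.columns))   # ['status_active', 'age']
```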
## Why are some of my active debias records missing? ##
Active debias records do not reach the payload logging table\.
When you use the active debias API, there is a limit of 1000 records that can be sent at one time for payload logging\.
To avoid loss of data, you must use the active debias API to score in chunks of 1000 records or fewer\.
For more information, see [Reviewing debiased transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-timechart.html)\.
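For example, a minimal sketch that scores records in chunks of at most 1000; `score_with_active_debias` is a hypothetical placeholder for your call to the active debias API:

```python
# Minimal sketch: send records to the active debias API in chunks of 1000 or fewer.
# "score_with_active_debias" is a hypothetical placeholder for your API call.
CHUNK_SIZE = 1000

def score_in_chunks(records, score_with_active_debias):
    results = []
    for start in range(0, len(records), CHUNK_SIZE):
        chunk = records[start:start + CHUNK_SIZE]
        results.extend(score_with_active_debias(chunk))
    return results
```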
## Watson OpenScale does not show any available schemas ##
When a user attempts to retrieve schema information for Watson OpenScale, none are available\. Checking directly in Db2, without reference to Watson OpenScale, which schemas are available for the database user ID also returns none\.
Insufficient permissions for the database user ID cause database connection issues for Watson OpenScale\.
Make sure that the database user has the permissions that Watson OpenScale requires\.
## A monitor run fails with an `OutOfResources exception` error message ##
You receive an `OutOfResources exception` error message\.
Although there is no longer a limit on the number of rows that you can have in the feedback payload, scoring payload, or business payload tables, a 50,000\-record limit applies to the number of records that you can run through the quality and bias monitors each billing period\.
After you reach your limit, you must either upgrade to a Standard plan or wait for the next billing period\.
## Missing deployments ##
A deployed model does not show up as a deployment that can be selected to create a subscription\.
There are different reasons why a deployment might not show up in the list of available deployed models\. If the model is not a supported type because it uses an unsupported algorithm or framework, it won't appear\. Your machine learning provider might not be configured properly, or there might be issues with permissions\.
Use the following steps to resolve this issue:
<!-- <ol> -->
1. Check that the model is a supported type\. Not sure? For more information, see [Supported machine learning engines, frameworks, and models](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html)\.
2. Check that a machine learning provider exists in the Watson OpenScale configuration for the specific deployment space\. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\.
3. Check that the CP4D `admin` user has permission to access the deployment space\.
<!-- </ol> -->
### Watson OpenScale evaluation might fail due to large number of subscriptions ###
If a Watson OpenScale instance contains too many subscriptions, such as 100 subscriptions, your quality evaluations might fail\. You can view the details of the failure in the log for the data mart service pod that displays the following error message:
"Failure converting response to expected model EntityStreamSizeException: actual entity size (Some(8644836)) exceeded content length limit (8388608 bytes)! You can configure this by setting akka.http.[server|client].parsing.max-content-length or calling HttpEntity.withSizeLimit before materializing the dataBytes stream".
You can use the `oc get pod -l component=aios-datamart` command to find the name of the pod\. You can also use the `oc logs <pod name>` command to view the log for the pod\.
To fix this error, you can use the following command to increase the maximum request body size by editing the `"ADDITIONAL_JVM_OPTIONS"` environment variable:
oc patch woservice <release name> -p '{"spec": {"datamart": {"additional_jvm_options":"-Dakka.http.client.parsing.max-content-length=100m"} }}' --type=merge
The release name is `"aiopenscale"` if you don't customize the release name when you install Watson OpenScale\.
### Microsoft Azure ML Studio ###
<!-- <ul> -->
* Of the two types of Azure Machine Learning web services, only the `New` type is supported by Watson OpenScale\. The `Classic` type is not supported\.
* *Default input name must be used*: In the Azure web service, the default input name is `"input1"`\. Currently, this field is mandated for Watson OpenScale and, if it is missing, Watson OpenScale will not work\.
If your Azure web service does not use the default name, change the input field name to `"input1"`, then redeploy your web service and reconfigure your OpenScale machine learning provider settings.
* If calls to Microsoft Azure ML Studio to list the machine learning models cause the response to time out, for example when you have many web services, you must increase the timeout values\. You might need to work around this issue by changing the `/etc/haproxy/haproxy.cfg` configuration setting:
<!-- <ul> -->
* Log in to the load balancer node and update `/etc/haproxy/haproxy.cfg` to set the client and server timeout from `1m` to `5m`:
timeout client 5m
timeout server 5m
* Run `systemctl restart haproxy` to restart the HAProxy load balancer.
<!-- </ul> -->
<!-- </ul> -->
If you are using a load balancer other than HAProxy, you might need to adjust the timeout values in a similar fashion\.
### Uploading feedback data fails in production subscription after importing settings ###
After you import the settings from your pre\-production space to your production space, you might have problems uploading feedback data\. This problem happens when the data types do not match precisely\. When you import settings, the feedback table references the payload table for its column types\. You can avoid this issue by making sure that the payload data has the most precise value type first\. For example, you must prioritize a double data type over an integer data type\.
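For example, the following minimal sketch (assuming a pandas DataFrame of payload data with hypothetical columns) casts integer columns to double precision before the payload is logged, so that the payload table, and the feedback table that inherits its column types, uses the more precise type:

```python
# Minimal sketch (pandas, hypothetical columns): cast integer columns to float64 so
# the payload table is created with the more precise double type, which the feedback
# table later inherits.
import pandas as pd

scoring_df = pd.DataFrame({
    "age": [31, 42],            # inferred as an integer column
    "balance": [1500.0, 80.25]  # already a double (float64) column
})

for column in scoring_df.columns:
    if pd.api.types.is_integer_dtype(scoring_df[column]):
        scoring_df[column] = scoring_df[column].astype("float64")

print(scoring_df.dtypes)
```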
### Microsoft Azure Machine Learning Service ###
When you run a model evaluation, Watson OpenScale might not be able to communicate with Azure Machine Learning Service when it needs to invoke the deployment scoring endpoints\. Security tools that enforce your enterprise security policies, such as Symantec Blue Coat, might prevent such access\.
### Watson OpenScale fails to create a new Hive table for the batch deployment subscription ###
When you choose to create a new Apache Hive table with the `Parquet` format during your Watson OpenScale batch deployment configuration, the following error might occur:
Attribute name "table name" contains invalid character(s) among " ,;{}()\\n\\t=". Please use alias to rename it.;
This error occurs if Watson OpenScale fails to run the `CREATE TABLE` SQL operation due to white space in a column name\. To avoid this error, you can remove any white space from your column names or change the Apache Hive format to `csv`\.
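For example, a minimal pandas sketch (with hypothetical column names) that replaces white space in column names before Watson OpenScale creates the Parquet-format table:

```python
# Minimal sketch (pandas, hypothetical column names): replace white space in column
# names before Watson OpenScale runs CREATE TABLE for the Parquet-format Hive table.
import pandas as pd

df = pd.DataFrame({"customer id": [1, 2], "account balance": [100.0, 250.5]})
df.columns = [name.replace(" ", "_") for name in df.columns]
print(list(df.columns))   # ['customer_id', 'account_balance']
```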
### Watson OpenScale setup might fail with default Db2 database ###
When you set up Watson OpenScale and specify the default Db2 database, the setup might fail to complete\.
To fix this issue, you must run the following command in Cloud Pak for Data to update Db2:
db2 update db cfg using DFT_EXTENT_SZ 32
After you run the command, you must create a new Db2 database to set up Watson OpenScale\.
**Parent topic:**[Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot.html)
<!-- </article "role="article" "> -->
|
93A3A5E1A633EB2AB616759DFB76DC433ABD4D38 | https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html?context=cdpaas&locale=en | Troubleshooting Watson Studio on IBM Cloud | Troubleshooting Watson Studio on IBM Cloud
You can use the following techniques to work around problems you might encounter with Watson Studio on IBM Cloud.
Project limit exceeded
Symptoms
When you create a project, the following error occurs:
The number of projects created by the authenticated user exceeds the designated limit.
Possible Causes
The number of projects an authenticated user can create per data center (region) is 100. The limit applies only to projects that a user creates. Projects for which the user is listed as a collaborator are not included in this limit.
Possible Resolutions
Although most customers do not reach this limit, possible resolutions include:
* Delete projects.
* Any authenticated user can request a project limit increase by contacting [IBM Cloud Support](https://www.ibm.com/cloud/support), provided that an adequate justification is specified.
Blank screen when loading
Symptoms
A blank screen appears when you open Watson Studio.
Possible Causes
A cached version is loading.
Possible Resolutions
1. Clear the browser cache and cookies and re-open Watson Studio.
2. Try a different type of browser. For example, switch from Firefox to Chrome.
3. If the blank screen still occurs, [open a support case](https://cloud.ibm.com/unifiedsupport/supportcenter), generate a .har file, compress it, and upload the compressed .har file to the support case.
| # Troubleshooting Watson Studio on IBM Cloud #
You can use the following techniques to work around problems you might encounter with Watson Studio on IBM Cloud\.
## Project limit exceeded ##
### Symptoms ###
When you create a project, the following error occurs:
The number of projects created by the authenticated user exceeds the designated limit.
### Possible Causes ###
The number of projects an authenticated user can create per data center (region) is 100\. The limit applies only to projects that a user creates\. Projects for which the user is listed as a collaborator are not included in this limit\.
### Possible Resolutions ###
Although most customers do not reach this limit, possible resolutions include:
<!-- <ul> -->
* Delete projects\.
* Any authenticated user can request a project limit increase by contacting [IBM Cloud Support](https://www.ibm.com/cloud/support), provided that an adequate justification is specified\.
<!-- </ul> -->
## Blank screen when loading ##
### Symptoms ###
A blank screen appears when you open Watson Studio\.
### Possible Causes ###
A cached version is loading\.
### Possible Resolutions ###
<!-- <ol> -->
1. Clear the browser cache and cookies and re\-open Watson Studio\.
2. Try a different type of browser\. For example, switch from Firefox to Chrome\.
3. If the blank screen still occurs, [open a support case](https://cloud.ibm.com/unifiedsupport/supportcenter), generate a \.har file, compress it, and upload the compressed \.har file to the support case\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
F7B2DD759B6FC618D53AD49053C24EF8D35105C5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html?context=cdpaas&locale=en | Deploying and managing assets | Deploying and managing assets
Use Watson Machine Learning to deploy models and solutions so that you can put them into productive use, then monitor the deployed assets for fairness and explainability. You can also automate the AI lifecycle to keep your deployed assets current.
Completing the AI lifecycle
After you prepare your data and build and train models or solutions, you complete the AI lifecycle by deploying and monitoring your assets.

Deployment is the final stage of the lifecycle of a model or script, where you run your models and code. Watson Machine Learning provides the tools that you need to deploy an asset, such as a predictive model or a Python function. You can also deploy foundation model assets, such as prompt templates, to put them into production.
Following deployment, you can use model management tools to evaluate your models. IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production.
Finally, you can use IBM Watson Pipelines to manage your ModelOps processes. Create a pipeline that automates parts of the AI lifecycle, such as training and deploying a machine learning model.
Next steps
* To learn more about how to manage assets in a deployment space, see [Manage assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
* To learn more about how to deploy assets from a deployment space, see [Deploy assets from a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html).
* To learn more about how to deploy by using the [Python client](https://ibm.github.io/watson-machine-learning-sdk/), see [Sample notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
| # Deploying and managing assets #
Use Watson Machine Learning to deploy models and solutions so that you can put them into productive use, then monitor the deployed assets for fairness and explainability\. You can also automate the AI lifecycle to keep your deployed assets current\.
## Completing the AI lifecycle ##
After you prepare your data and build and train models or solutions, you complete the AI lifecycle by deploying and monitoring your assets\.

Deployment is the final stage of the lifecycle of a model or script, where you run your models and code\. Watson Machine Learning provides the tools that you need to deploy an asset, such as a predictive model or a Python function\. You can also deploy foundation model assets, such as prompt templates, to put them into production\.
Following deployment, you can use model management tools to evaluate your models\. IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant\. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production\.
Finally, you can use IBM Watson Pipelines to manage your ModelOps processes\. Create a pipeline that automates parts of the AI lifecycle, such as training and deploying a machine learning model\.
## Next steps ##
<!-- <ul> -->
* To learn more about how to manage assets in a deployment space, see [Manage assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\.
* To learn more about how to deploy assets from a deployment space, see [Deploy assets from a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)\.
* To learn more about how to deploy by using the [Python client](https://ibm.github.io/watson-machine-learning-sdk/), see [Sample notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
F003581774D3028EF53E61A002C20A6D36BA8E00 | https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en | Glossary | Glossary
This glossary provides terms and definitions for watsonx.ai and watsonx.governance.
The following cross-references are used in this glossary:
* See refers you from a nonpreferred term to the preferred term or from an abbreviation to the spelled-out form.
* See also refers you to a related or contrasting term.
[A](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossa)[B](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossb)[C](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossc)[D](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossd)[E](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosse)[F](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossf)[G](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossg)[H](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossh)[I](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossi)[J](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossj)[K](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossk)[L](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossl)[M](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossm)[N](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossn)[O](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosso)[P](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossp)[R](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossr)[S](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosss)[T](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosst)[U](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossu)[V](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossv)[W](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossw)[Z](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossz)
A
accelerator
In high-performance computing, a specialized circuit that is used to take some of the computational load from the CPU, increasing the efficiency of the system. For example, in deep learning, GPU-accelerated computing is often employed to offload part of the compute workload to a GPU while the main application runs off the CPU. See also [graphics processing unit](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8987320).
accountability
The expectation that organizations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks. This includes determining who is responsible for an AI mistake which may require legal experts to determine liability on a case-by-case basis.
activation function
A function that defines a neural unit's output given a set of incoming activations from other neurons.
active learning
A model for machine learning in which the system requests more labeled data only when it needs it.
active metadata
Metadata that is automatically updated based on analysis by machine learning processes. For example, profiling and data quality analysis automatically update metadata for data assets.
active runtime
An instance of an environment that is running to provide compute resources to analytical assets.
agent
An algorithm or a program that interacts with an environment to learn optimal actions or decisions, typically using reinforcement learning, to achieve a specific goal.
AI
See [artificial intelligence](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3448902).
AI accelerator
Specialized silicon hardware designed to efficiently execute AI-related tasks like deep learning, machine learning, and neural networks for faster, energy-efficient computing. It can be a dedicated unit in a core, a separate chiplet on a multi-module chip or a separate card.
AI ethics
A multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. Examples of AI ethics issues are data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse.
AI governance
An organization's act of governing, through its corporate instructions, staff, processes and systems to direct, evaluate, monitor, and take corrective action throughout the AI lifecycle, to provide assurance that the AI system is operating as the organization intends, as its stakeholders expect, and as required by relevant regulation.
AI safety
The field of research aiming to ensure artificial intelligence systems operate in a manner that is beneficial to humanity and don't inadvertently cause harm, addressing issues like reliability, fairness, transparency, and alignment of AI systems with human values.
AI system
See [artificial intelligence system](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10065431).
algorithm
A formula applied to data to determine optimal ways to solve analytical problems.
analytics
The science of studying data in order to find meaningful patterns in the data and draw conclusions based on those patterns.
appropriate trust
In an AI system, an amount of trust that is calibrated to its accuracy, reliability, and credibility.
artificial intelligence (AI)
The capability to acquire, process, create and apply knowledge in the form of a model to make predictions, recommendations or decisions.
artificial intelligence system (AI system)
A system that can make predictions, recommendations or decisions that influence physical or virtual environments, and whose outputs or behaviors are not necessarily pre-determined by its developer or user. AI systems are typically trained with large quantities of structured or unstructured data, and might be designed to operate with varying levels of autonomy or none, to achieve human-defined objectives.
asset
An item that contains information about data, other valuable information, or code that works with data. See also [data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx6094928).
attention mechanism
A mechanism in deep learning models that determines which parts of the input a model focuses on when producing output.
AutoAI experiment
An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates.
B
batch deployment
A method to deploy models that processes input data from a file, data connection, or connected data in a storage bucket, then writes the output to a selected destination.
bias
Systematic error in an AI system that has been designed, intentionally or not, in a way that may generate unfair decisions. Bias can be present both in the AI system and in the data used to train and test it. AI bias can emerge in an AI system as a result of cultural expectations; technical limitations; or unanticipated deployment contexts. See also [fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3565572).
bias detection
The process of calculating fairness to metrics to detect when AI models are delivering unfair outcomes based on certain attributes.
bias mitigation
Reducing biases in AI models by curating training data and applying fairness techniques.
binary classification
A classification model with two classes. Predictions are a binary choice of one of the two classes.
C
classification model
A predictive model that predicts data in distinct categories. Classifications can be binary, with two classes of data, or multi-class when there are more than 2 categories.
cleanse
To ensure that all values in a data set are consistent and correctly recorded.
CNN
See [convolutional neural network](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10297974).
computational linguistics
Interdisciplinary field that explores approaches for computationally modeling natural languages.
compute resource
The hardware and software resources that are defined by an environment template to run assets in tools.
confusion matrix
A performance measurement that compares a model's positive and negative predicted outcomes with the positive and negative actual outcomes.
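For example, a minimal scikit-learn sketch (the library choice and the label values are illustrative assumptions) that builds a confusion matrix for a binary classifier:

```python
# Minimal sketch: build a confusion matrix from actual and predicted binary outcomes.
# The outcome values are hypothetical.
from sklearn.metrics import confusion_matrix

actual    = [1, 0, 1, 1, 0, 1]   # actual outcomes
predicted = [1, 0, 0, 1, 0, 1]   # model's predicted outcomes

# Rows are actual classes, columns are predicted classes:
# [[true negatives, false positives],
#  [false negatives, true positives]]
print(confusion_matrix(actual, predicted))
```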
connected data asset
A pointer to data that is accessed through a connection to an external data source.
connected folder asset
A pointer to a folder in IBM Cloud Object Storage.
connection
The information required to connect to a database. The actual information that is required varies according to the DBMS and connection method.
connection asset
An asset that contains information that enables connecting to a data source.
constraint
* In databases, a relationship between tables.
* In Decision Optimization, a condition that must be satisfied by the solution of a problem.
continuous learning
Automating the tasks of monitoring model performance, retraining with new data, and redeploying to ensure prediction quality.
convolutional neural network (CNN)
A class of neural network commonly used in computer vision tasks that uses convolutional layers to process image data.
Core ML deployment
The process of downloading a deployment in Core ML format for use in iOS apps.
corpus
A collection of source documents that are used to train a machine learning model.
cross-validation
A technique for testing how well a model generalizes in the absence of a hold-out test sample. Cross-validation divides the training data into a number of subsets, and then builds the same number of models, with each subset held out in turn. Each of those models is tested on the holdout sample, and the average accuracy of the models on those holdout samples is used to estimate the accuracy of the model when applied to new data.
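For example, a minimal scikit-learn sketch (the data set and estimator are illustrative assumptions) that estimates accuracy with 5-fold cross-validation:

```python
# Minimal sketch: 5-fold cross-validation holds out each of 5 subsets in turn,
# trains on the remaining 4, and averages the accuracy of the 5 resulting models
# to estimate performance on new data. The data set and estimator are assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())   # average accuracy across the 5 held-out subsets
```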
curate
To select, collect, preserve, and maintain content relevant to a specific topic. Curation establishes, maintains, and adds value to data; it transforms data into trusted information and knowledge.
D
data asset
An asset that points to data, for example, to an uploaded file. Connections and connected data assets are also considered data assets. See also [asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2172042).
data imputation
The substitution of missing values in a data set with estimated or explicit values.
data lake
A large-scale data storage repository that stores raw data in any format in a flat architecture. Data lakes hold structured and unstructured data as well as binary data for the purpose of processing and analysis.
data lakehouse
A unified data storage and processing architecture that combines the flexibility of a data lake with the structured querying and performance optimizations of a data warehouse, enabling scalable and efficient data analysis for AI and analytics applications.
data mining
The process of collecting critical business information from a data source, correlating the information, and uncovering associations, patterns, and trends. See also [predictive analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx5067245).
Data Refinery flow
A set of steps that cleanse and shape data to produce a new data asset.
data science
The analysis and visualization of structured and unstructured data to discover insights and knowledge.
data set
A collection of data, usually in the form of rows (records) and columns (fields) and contained in a file or database table.
data source
A repository, queue, or feed for reading data, such as a Db2 database.
data table
A collection of data, usually in the form of rows (records) and columns (fields) and contained in a table.
data warehouse
A large, centralized repository of data collected from various sources that is used for reporting and data analysis. It primarily stores structured and semi-structured data, enabling businesses to make informed decisions.
DDL
See [distributed deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443383).
decision boundary
A division of data points in a space into distinct groups or classifications.
decoder-only model
A model that generates output text word by word by inference from the input sequence. Decoder-only models are used for tasks such as generating text and answering questions.
deep learning
A computational model that uses multiple layers of interconnected nodes, which are organized into hierarchical layers, to transform input data (first layer) through a series of computations to produce an output (final layer). Deep learning is inspired by the structure and function of the human brain. See also [distributed deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443383).
deep neural network
A neural network with multiple hidden layers, allowing for more complex representations of the data.
deployment
A model or application package that is available for use.
deployment space
A workspace where models are deployed and deployments are managed.
deterministic
Describes a characteristic of computing systems when their outputs are completely determined by their inputs.
DevOps
A software methodology that integrates application development and IT operations so that teams can deliver code faster to production and iterate continuously based on market feedback.
discriminative AI
A class of algorithm that focuses on finding a boundary that separates different classes in the data.
distributed deep learning (DDL)
An approach to deep learning training that leverages the methods of distributed computing. In a DDL environment, compute workload is distributed between the central processing unit and graphics processing unit. See also [deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443378).
DOcplex
A Python API for modeling and solving Decision Optimization problems.
E
embedding
A numerical representation of a unit of information, such as a word or a sentence, as a vector of real-valued numbers. Embeddings are learned, low-dimensional representations of higher-dimensional data. See also [encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2426645), [representation](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx6075962).
emergence
A property of foundation models in which the model exhibits behaviors that were not explicitly trained.
emergent behavior
A behavior exhibited by a foundation model that was not explicitly constructed.
encoder-decoder model
A model for both understanding input text and for generating output text based on the input text. Encoder-decoder models are used for tasks such as summarization or translation.
encoder-only model
A model that understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Encoder-only models are used for tasks such as classifying customer feedback and extracting information from large documents.
encoding
The representation of a unit of information, such as a character or a word, as a set of numbers. See also [embedding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298004), [positional encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298071).
endpoint URL
A network destination address that identifies resources, such as services and objects. For example, an endpoint URL is used to identify the location of a model or function deployment when a user sends payload data to the deployment.
environment
The compute resources for running jobs.
environment runtime
An instantiation of the environment template to run analytical assets.
environment template
A definition that specifies hardware and software resources to instantiate environment runtimes.
exogenous feature
A feature that can influence the predictive model but cannot be influenced in return. For example, temperatures can affect predicted ice cream sales, but ice cream sales cannot influence temperatures.
experiment
A model training process that considers a series of training definitions and parameters to determine the most accurate model configuration.
explainability
* The ability of human users to trace, audit, and understand predictions that are made in applications that use AI systems.
* The ability of an AI system to provide insights that humans can use to understand the causes of the system's predictions.
F
fairness
In an AI system, the equitable treatment of individuals or groups of individuals. The choice of a specific notion of equity for an AI system depends on the context in which it is used. See also [bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2803778).
feature
A property or characteristic of an item within a data set, for example, a column in a spreadsheet. In some cases, features are engineered as combinations of other features in the data set.
feature engineering
The process of selecting, transforming, and creating new features from raw data to improve the performance and predictive power of machine learning models.
feature group
A set of columns of a particular data asset along with the metadata that is used for machine learning.
feature selection
Identifying the columns of data that best support an accurate prediction or score in a machine learning model.
feature store
A centralized repository or system that manages and organizes features, providing a scalable and efficient way to store, retrieve, and share feature data across machine learning pipelines and applications.
feature transformation
In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type.
federated learning
The training of a common machine learning model that uses multiple data sources that are not moved, joined, or shared. The result is a better-trained model without compromising data security.
few-shot prompting
A prompting technique in which a small number of examples are provided to the model to demonstrate how to complete the task.
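For example, the following illustrative sketch builds a few-shot prompt as a Python string; the task and examples are hypothetical:

```python
# Illustrative sketch only: a few-shot prompt embeds a handful of worked examples
# before the new input so that the model can infer the task. The examples are hypothetical.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: The screen cracked after a week.
Sentiment: Negative

Review: Setup was quick and painless.
Sentiment:"""

print(few_shot_prompt)
```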
fine tuning
The process of adapting a pre-trained model to perform a specific task by conducting additional training. Fine tuning may involve (1) updating the model’s existing parameters, known as full fine tuning, or (2) updating a subset of the model’s existing parameters or adding new parameters to the model and training them while freezing the model’s existing parameters, known as parameter-efficient fine tuning.
flow
A collection of nodes that define a set of steps for processing data or training a model.
foundation model
An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models that are trained on unlabeled data using self-supervision. As large scale models, foundation models can include billions of parameters.
G
Gantt chart
A graphical representation of a project timeline and duration in which schedule data is displayed as horizontal bars along a time scale.
gen AI
See [generative AI](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298036).
generative AI (gen AI)
A class of AI algorithms that can produce various types of content including text, source code, imagery, audio, and synthetic data.
generative variability
The characteristic of generative models to produce varied outputs, even when the input to the model is held constant. See also [probabilistic](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298081).
GPU
See [graphics processing unit](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8987320).
graphical builder
A tool for creating analytical assets by visually coding. A canvas is an area on which to place objects or nodes that can be connected to create a flow.
graphics processing unit (GPU)
A specialized processor designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are heavily utilized in machine learning due to their parallel processing capabilities. See also [accelerator](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2048370).
H
hallucination
A response from a foundation model that includes off-topic, repetitive, incorrect, or fabricated content. Hallucinations involving fabricating details can happen when a model is prompted to generate text, but the model doesn't have enough related text to draw upon to generate a result that contains the correct details.
hold-out set
A set of labeled data that is intentionally withheld from both the training and validation sets, serving as an unbiased assessment of the final model's performance on unseen data.
homogenization
The trend in machine learning research in which a small number of deep neural net architectures, such as the transformer, are achieving state-of-the-art results across a wide variety of tasks.
HPO
See [hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9895660).
human oversight
Human involvement in reviewing decisions rendered by an AI system, enabling human autonomy and accountability of decision.
hyperparameter
In machine learning, a parameter whose value is set before training as a way to increase model accuracy.
hyperparameter optimization (HPO)
The process for setting hyperparameter values to the settings that provide the most accurate model.
I
image
A software package that contains a set of libraries.
incremental learning
The process of training a model using data that is continually updated without forgetting data obtained from the preceding tasks. This technique is used to train a model with batches of data from a large training data source.
inferencing
The process of running live data through a trained AI model to make a prediction or solve a task.
ingest
* To feed data into a system for the purpose of creating a base of knowledge.
* To continuously add a high-volume of real-time data to a database.
insight
An accurate or deep understanding of something. Insights are derived using cognitive analytics to provide current snapshots and predictions of customer behaviors and attitudes.
intelligent AI
Artificial intelligence systems that can understand, learn, adapt, and implement knowledge, demonstrating abilities like decision-making, problem-solving, and understanding complex concepts, much like human intelligence.
intent
A purpose or goal expressed by customer input to a chatbot, such as answering a question or processing a bill payment.
J
job
A separately executable unit of work.
K
knowledge base
See [corpus](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3954167).
L
label
A class or category assigned to a data point in supervised learning. Labels can be derived from data but are often applied by human labelers or annotators.
labeled data
Raw data that is assigned labels to add context or meaning so that it can be used to train machine learning models. For example, numeric values might be labeled as zip codes or ages to provide context for model inputs and outputs.
large language model (LLM)
A language model with a large number of parameters, trained on a large quantity of text.
LLM
See [large language model](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298052).
M
machine learning (ML)
A branch of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving the accuracy of AI models.
machine learning framework
The libraries and runtime for training and deploying a model.
machine learning model
An AI model that is trained on a set of data to develop algorithms that it can use to analyze and learn from new data.
mental model
An individual’s understanding of how a system works and how their actions affect system outcomes. When these expectations do not match the actual capabilities of a system, it can lead to frustration, abandonment, or misuse.
misalignment
A discrepancy between the goals or behaviors that an AI system is optimized to achieve and the true, often complex, objectives of its human users or designers.
ML
See [machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8397498).
MLOps
* The practice for collaboration between data scientists and operations professionals to help manage production machine learning (or deep learning) lifecycle. MLOps looks to increase automation and improve the quality of production ML while also focusing on business and regulatory requirements. It involves model development, training, validation, deployment, monitoring, and management and uses methods like CI/CD.
* A methodology that takes a machine learning model from development to production.
model
* In a machine learning context, a set of functions and algorithms that have been trained and tested on a data set to provide predictions or decisions.
* In Decision Optimization, a mathematical formulation of a problem that can be solved with CPLEX optimization engines using different data sets.
ModelOps
A methodology for managing the full lifecycle of an AI model, including training, deployment, scoring, evaluation, retraining, and updating.
monitored group
A class of data that is monitored to determine if the results from a predictive model differ significantly from the results of the reference group. Groups are commonly monitored based on characteristics that include race, gender, or age.
multiclass classification model
A classification task with more than two classes. For example, where a binary classification model predicts yes or no values, a multi-class model predicts yes, no, maybe, or not applicable.
multivariate time series
A time series experiment that contains two or more changing variables. For example, a time series model that forecasts the electricity usage of three clients.
N
natural language processing (NLP)
A field of artificial intelligence and linguistics that studies the problems inherent in the processing and manipulation of natural language, with an aim to increase the ability of computers to understand human languages.
natural language processing library
A library that provides basic natural language processing functions for syntax analysis and out-of-the-box pre-trained models for a wide variety of text processing tasks.
neural network
A mathematical model for predicting or classifying cases by using a complex mathematical scheme that simulates an abstract version of brain cells. A neural network is trained by presenting it with a large number of observed cases, one at a time, and allowing it to update itself repeatedly until it learns the task.
NLP
See [natural language processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2031058).
node
In an SPSS Modeler flow, the graphical representation of a data operation.
notebook
An interactive document that contains executable code, descriptive text for that code, and the results of any code that is run.
notebook kernel
The part of the notebook editor that executes code and returns the computational results.
O
object storage
A method of storing data, typically used in the cloud, in which data is stored as discrete units, or objects, in a storage pool or repository that does not use a file hierarchy but that stores all objects at the same level.
one-shot learning
A model for deep learning that is based on the premise that most human learning takes place upon receiving just one or two examples. This model is similar to unsupervised learning.
one-shot prompting
A prompting technique in which a single example is provided to the model to demonstrate how to complete the task.
online deployment
Method of accessing a model or Python code deployment through an API endpoint as a web service to generate predictions online, in real time.
ontology
An explicit formal specification of the representation of the objects, concepts, and other entities that can exist in some area of interest and the relationships among them.
operational asset
An asset that runs code in a tool or a job.
optimization
The process of finding the most appropriate solution to a precisely defined problem while respecting the imposed constraints and limitations. For example, determining how to allocate resources or how to find the best elements or combinations from a large set of alternatives.
Optimization Programming Language
A modeling language for expressing model formulations of optimization problems in a format that can be solved by CPLEX optimization engines such as IBM CPLEX.
optimized metric
A metric used to measure the performance of the model. For example, accuracy is the typical metric used to measure the performance of a binary classification model.
orchestration
The process of creating an end-to-end flow that can train, run, deploy, test, and evaluate a machine learning model, and uses automation to coordinate the system, often using microservices.
overreliance
A user's acceptance of an incorrect recommendation made by an AI model. See also [reliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299283), [underreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299288).
P
parameter
* A configurable part of the model that is internal to a model and whose values are estimated or learned from data. Parameters are aspects of the model that are adjusted during the training process to help the model accurately predict the output. The model's performance and predictive power largely depend on the values of these parameters.
* A real-valued weight between 0.0 and 1.0 indicating the strength of connection between two neurons in a neural network.
party
In Federated Learning, an entity that contributes data for training a common model. The data is not moved or combined but each party gets the benefit of the federated training.
payload
The data that is passed to a deployment to get back a score, prediction, or solution.
payload logging
The capture of payload data and deployment output to monitor ongoing health of AI in business applications.
pipeline
* In Watson Pipelines, an end-to-end flow of assets from creation through deployment.
* In AutoAI, a candidate model.
pipeline leaderboard
In AutoAI, a table that shows the list of automatically generated candidate models, as pipelines, ranked according to the specified criteria.
policy
A strategy or rule that an agent follows to determine the next action based on the current state.
positional encoding
An encoding of an ordered sequence of data that includes positional information, such as encoding of words in a sentence that includes each word's position within the sentence. See also [encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2426645).
predictive analytics
A business process and a set of related technologies that are concerned with the prediction of future possibilities and trends. Predictive analytics applies such diverse disciplines as probability, statistics, machine learning, and artificial intelligence to business problems to find the best action for a specific situation. See also [data mining](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2114083).
pretrained model
An AI model that was previously trained on a large data set to accomplish a specific task. Pretrained models are used instead of building a model from scratch.
pretraining
The process of training a machine learning model on a large dataset before fine-tuning it for a specific task.
privacy
Assurance that information about an individual is protected from unauthorized access and inappropriate use.
probabilistic
The characteristic of being subject to randomness; non-deterministic. Probabilistic models do not produce the same outputs given the same inputs. See also [generative variability](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298041).
project
A collaborative workspace for working with data and other assets.
prompt
* Data, such as text or an image, that prepares, instructs, or conditions a foundation model's output.
* A component of an action that indicates that user input is required for a field before making a transition to an output screen.
prompt engineering
The process of designing natural language prompts for a language model to perform a specific task.
prompting
The process of providing input to a foundation model to induce it to produce output.
prompt tuning
An efficient, low-cost way of adapting a pre-trained model to new tasks without retraining the model or updating its weights. Prompt tuning involves learning a small number of new parameters that are appended to a model’s prompt, while freezing the model’s existing parameters.
pruning
The process of simplifying, shrinking, or trimming a decision tree or neural network. This is done by removing less important nodes or layers, reducing complexity to prevent overfitting and improve model generalization while maintaining its predictive power.
Python
A programming language that is used in data science and AI.
Python function
A function that contains Python code to support a model in production.
R
R
An extensible scripting language that is used in data science and AI that offers a wide variety of analytic, statistical, and graphical functions and techniques.
RAG
See [retrieval augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299275).
random seed
A number used to initialize a pseudorandom number generator. Random seeds enable reproducibility for processes that rely on random number generation.
reference group
A group that is identified as most likely to receive a positive result in a predictive model. The results can be compared to a monitored group to look for potential bias in outcomes.
refine
To cleanse and shape data.
regression model
A model that relates a dependent variable to one or more independent variables.
reinforcement learning
A machine learning technique in which an agent learns to make sequential decisions in an environment to maximize a reward signal. Inspired by trial and error learning, agents interact with the environment, receive feedback, and adjust their actions to achieve optimal policies.
reinforcement learning on human feedback (RLHF)
A method of aligning a language model's responses to the instructions given in a prompt. RLHF requires human annotators to rank multiple outputs from the model. These rankings are then used to train a reward model using reinforcement learning. The reward model is then used to fine-tune the large language model's output.
reliance
In AI systems, a user’s acceptance of a recommendation made by, or the output generated by, an AI model. See also [overreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299271), [underreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299288).
representation
An encoding of a unit of information, often as a vector of real-valued numbers. See also [embedding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x10298004).
retrieval augmented generation (RAG)
A technique in which a large language model is augmented with knowledge from external sources to generate text. In the retrieval step, relevant documents from an external source are identified from the user’s query. In the generation step, portions of those documents are included in the LLM prompt to generate a response grounded in the retrieved documents.
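As a rough, illustrative sketch only (not the watsonx implementation), the hypothetical helpers below show the two steps: `retrieve` picks the document with the most word overlap with the query, and `build_prompt` splices it into a grounded prompt that would then be sent to a large language model:

```python
documents = [
    "The watsonx glossary defines terms for watsonx.ai and watsonx.governance.",
    "A transformer uses self-attention to predict the next token in a sequence.",
]

def retrieve(query, docs):
    """Toy retrieval step: return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query, context):
    """Generation step: ground the prompt in the retrieved document."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

question = "What does a transformer use to predict the next token?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # this grounded prompt would then be sent to a large language model
```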
reward
A signal used to guide an agent, typically a reinforcement learning agent, that provides feedback on the goodness of a decision.
RLHF
See [reinforcement learning on human feedback](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x10298109).
runtime environment
The predefined or custom hardware and software configuration that is used to run tools or jobs, such as notebooks.
S
scoring
* In machine learning, the process of measuring the confidence of a predicted outcome.
* The process of computing how closely the attributes for an incoming identity match the attributes of an existing entity.
script
A file that contains Python or R scripts to support a model in production.
self-attention
An attention mechanism that uses information from the input data itself to determine what parts of the input to focus on when generating output.
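A minimal numerical sketch of scaled dot-product self-attention (assuming NumPy is available); production models add learned query, key, and value projections and multiple attention heads:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product attention with queries, keys, and values all taken from x."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                             # similarity of each token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ x                                        # weighted mix of token representations

tokens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])       # three toy token embeddings
print(self_attention(tokens))
```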
self-supervised learning
A machine learning training method in which a model learns from unlabeled data by masking tokens in an input sequence and then trying to predict them. An example is "I like __ sprouts".
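A toy sketch (illustrative only, with a hypothetical helper) of deriving a masked-token training pair from unlabeled text, where the hidden word becomes the prediction target:

```python
import random

def make_masked_example(sentence, mask_token="[MASK]"):
    """Hide one word so a model can be trained to predict it from the surrounding context."""
    words = sentence.split()
    position = random.randrange(len(words))
    target = words[position]
    words[position] = mask_token
    return " ".join(words), target

masked_input, label = make_masked_example("I like Brussels sprouts")
print(masked_input, "->", label)  # for example: "I like [MASK] sprouts -> Brussels"
```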
sentience
The capacity to have subjective experiences and feelings, or consciousness. It involves the ability to perceive, reason, and experience sensations such as pain and pleasure.
sentiment analysis
Examination of the sentiment or emotion expressed in text, such as determining if a movie review is positive or negative.
shape
To customize data by filtering, sorting, or removing columns; joining tables; or performing operations that include calculations, data groupings, hierarchies, and more.
small data
Data that is accessible and comprehensible by humans. See also [structured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x2490040).
SQL pushback
In SPSS Modeler, the process of performing many data preparation and mining operations directly in the database through SQL code.
structured data
Data that resides in fixed fields within a record or file. Relational databases and spreadsheets are examples of structured data. See also [unstructured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x2490044), [small data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x8317275).
structured information
Items stored in structured resources, such as search engine indices, databases, or knowledge bases.
supervised learning
A machine learning training method in which a model is trained on a labeled dataset to make predictions on new data.
T
temperature
A parameter in a generative model that specifies the amount of variation in the generation process. Higher temperatures result in greater variability in the model's output.
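An illustrative sketch (with a hypothetical helper) of temperature scaling applied to raw model scores before sampling; a higher temperature flattens the probability distribution, which increases variability in the output:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Convert raw scores to probabilities, scaled by the temperature."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s - max(scaled)) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]  # raw scores for three candidate tokens
print(softmax_with_temperature(scores, 0.5))  # sharper: probability concentrates on the top token
print(softmax_with_temperature(scores, 2.0))  # flatter: sampled output varies more
```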
text classification
A model that automatically identifies and classifies text into specified categories.
time series
A set of values of a variable at periodic points in time.
time series model
A model that tracks and predicts data over time.
token
A discrete unit of meaning or analysis in a text, such as a word or subword.
tokenization
The process used in natural language processing to split a string of text into smaller units, such as words or subwords.
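A minimal word-level illustration using only the standard library; production systems typically use learned subword tokenizers instead:

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z0-9]+", text.lower())

print(tokenize("Tokenization splits a string of text into smaller units."))
# ['tokenization', 'splits', 'a', 'string', 'of', 'text', 'into', 'smaller', 'units']
```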
trained model
A model that is trained with actual data and is ready to be deployed to predict outcomes when presented with new data.
training
The initial stage of model building, involving a subset of the source data. The model learns by example from the known data. The model can then be tested against a further, different subset for which the outcome is already known.
training data
A set of annotated documents that can be used to train machine learning models.
training set
A set of labeled data that is used to train a machine learning model by exposing it to examples and their corresponding labels, enabling the model to learn patterns and make predictions.
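For example, a simple random split of labeled examples into training and validation sets, sketched with only the standard library (in practice a separate hold-out test set is usually reserved as well):

```python
import random

labeled_data = [(f"example_{i}", i % 2) for i in range(10)]  # (input, label) pairs

random.shuffle(labeled_data)
split = int(0.8 * len(labeled_data))
training_set = labeled_data[:split]    # used to fit the model
validation_set = labeled_data[split:]  # used for hyperparameter tuning and model selection
print(len(training_set), len(validation_set))  # 8 2
```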
transfer learning
A machine learning strategy in which a trained model is applied to a completely new problem.
transformer
A neural network architecture that uses positional encodings and the self-attention mechanism to predict the next token in a sequence of tokens.
transparency
Sharing appropriate information with stakeholders on how an AI system has been designed and developed. Examples of this information are what data is collected, how it will be used and stored, and who has access to it; and test results for accuracy, robustness and bias.
trust calibration
The process of evaluating and adjusting one’s trust in an AI system based on factors such as its accuracy, reliability, and credibility.
Turing test
Proposed by Alan Turing in 1950, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
U
underreliance
A user's rejection of a correct recommendation made by an AI model. See also [overreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x10299271), [reliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x10299283).
univariate time series
Time series experiment that contains only one changing variable. For example, a time series model forecasting the temperature has a single prediction column of the temperature.
unstructured data
Any data that is stored in an unstructured format rather than in fixed fields. Data in a word processing document is an example of unstructured data. See also [structured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=en#x2490040).
unstructured information
Data that is not contained in a fixed location, such as a natural language text document.
unsupervised learning
* A machine learning training method in which a model is not provided with labeled data and must find patterns or structure in the data on its own.
* A model for deep learning that allows raw, unlabeled data to be used to train a system with little to no human effort.
V
validation set
A separate set of labeled data that is used to evaluate the performance and generalization ability of a machine learning model during the training process, assisting in hyperparameter tuning and model selection.
vector
A one-dimensional, ordered list of numbers, such as [1, 2, 5] or [0.7, 0.2, -1.0].
virtual agent
A pretrained chat bot that can process natural language to respond and complete simple business transactions, or route more complicated requests to a human with subject matter expertise.
visualization
A graph, chart, plot, table, map, or any other visual representation of data.
W
weight
A coefficient for a node that transforms input data within the network's layer. Weight is a parameter that an AI model learns through training, adjusting its value to reduce errors in the model's predictions.
Z
zero-shot prompt
A prompting technique in which the model completes a task without being given a specific example of how to do it.
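For instance, a zero-shot prompt contains only the task instruction and the input, with no worked example of the task (contrast with few-shot prompting); the wording below is purely illustrative:

```python
zero_shot_prompt = (
    "Classify the sentiment of the following review as Positive or Negative.\n"
    "Review: The battery life on this laptop is outstanding.\n"
    "Sentiment:"
)
print(zero_shot_prompt)
```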