<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Name
Data preparation by using a template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
Step1: Load the component using KFP SDK
Step2: Sample
Note
Step3: Set sample parameters
Step4: Example pipeline that uses the component
Step5: Compile the pipeline
Step6: Submit the pipeline for execution
Step7: Inspect the output
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
Explanation: Name
Data preparation by using a template to submit a job to Cloud Dataflow
Labels
GCP, Cloud Dataflow, Kubeflow, Pipeline
Summary
A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.
Details
Intended use
Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.
Runtime arguments
Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :----------| :----------| :---------- | :----------|
project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |
gcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | |
launch_parameters | The parameters that are required to launch the template. The schema is defined in LaunchTemplateParameters. The parameter jobName is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as LaunchTemplateParameters | None |
location | The regional endpoint to which the job request is directed.| Yes | GCPRegion | | None |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure.| Yes | GCSPath | | None |
validate_only | If True, the request is validated but not executed. | Yes | Boolean | | False |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |
Input data schema
The input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates.
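For example, the Google-provided word count template used in the sample below is stored at gs://dataflow-templates/latest/Word_Count.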
Output
Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.
Caution & requirements
To use the component, the following requirements must be met:
- Cloud Dataflow API is enabled.
- The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.
- The Kubeflow user service account is a member of:
- roles/dataflow.developer role of the project.
- roles/storage.objectViewer role of the Cloud Storage Object gcs_path.
- roles/storage.objectCreator role of the Cloud Storage Object staging_dir.
Detailed description
You can execute the template locally by following the instructions in Executing Templates. See the sample code below to learn how to execute the template.
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
End of explanation
import kfp.components as comp
dataflow_template_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op)
Explanation: Load the component using KFP SDK
End of explanation
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
Explanation: Sample
Note: The following sample code works in an IPython notebook or directly in Python code.
In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:
End of explanation
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
Explanation: Set sample parameters
End of explanation
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataflow launch template pipeline',
description='Dataflow launch template pipeline'
)
def pipeline(
project_id = PROJECT_ID,
gcs_path = 'gs://dataflow-templates/latest/Word_Count',
launch_parameters = json.dumps({
'parameters': {
'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',
'output': OUTPUT_PATH
}
}),
location = '',
validate_only = 'False',
staging_dir = GCS_WORKING_DIR,
wait_interval = 30):
dataflow_template_op(
project_id = project_id,
gcs_path = gcs_path,
launch_parameters = launch_parameters,
location = location,
validate_only = validate_only,
staging_dir = staging_dir,
wait_interval = wait_interval)
Explanation: Example pipeline that uses the component
End of explanation
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
Explanation: Compile the pipeline
End of explanation
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
Explanation: Submit the pipeline for execution
End of explanation
!gsutil cat $OUTPUT_PATH*
Explanation: Inspect the output
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The regular approach to remapping fails...
Check grid structure
Step1: Correctly build the regridder
Step2: So the error could be fixed by breaking your full grid to several well-defined 2D tiles... or maybe it's easier to just use CDO...
CDO remapbil instead...
<ASSISTANT_TASK:>
Python Code:
ds['lon']
# multiple tiles are stacked into a single 2D array
# just passing this will crash ESMPy
plt.scatter(ds['lon'], ds['lat'], s=0.2)
# Get a more well-defined 2D mesh (subset of the full grid)
plt.scatter(ds['lon'][:80,:], ds['lat'][:80,:], s=0.2)
Explanation: The regular approach to remapping fails...
Check grid structure
End of explanation
ds_subset = ds.isel(i=slice(0,80), j=slice(0,80))
regridder = xe.Regridder(ds_subset, ds_out, 'bilinear')
dr_out = regridder(ds['thetao'][:, :, 0:80, 0:80])
dr_out
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
dr_out[0, 0, ::].plot.pcolormesh(ax=ax, x='lon', y='lat');
ax.coastlines();
Explanation: Correctly build the regridder
End of explanation
cdo_infile = '/g/data/r87/dbi599/temp/thetao_Omon_IPSL-CM5A-LR_historical_r1i1p1_1850-01-01_susan-grid-cdo.nc'
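# Note: the CDO output loaded below can be produced with CDO's bilinear remapping, e.g.
# (a sketch; the grid description and file names here are assumptions):
# cdo remapbil,target_grid_description.txt thetao_in.nc thetao_out_remapped.nc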
ds_cdo = xr.open_dataset(cdo_infile, decode_times=False)
ds_cdo
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
ds_cdo['thetao'][0, 0, ::].plot.pcolormesh(ax=ax);
ax.coastlines();
ds_cdo['lev'].data[0]
test_file = '/g/data/r87/dbi599/DRSv2/CMIP5/IPSL-CM5A-LR/historical/mon/ocean/r1i1p1/thetao/latest/thetao_Omon_IPSL-CM5A-LR_historical_r1i1p1_185001-189912_susan-horiz-grid.nc'
ds_test = xr.open_dataset(test_file, decode_times=False)
ds_test
plt.figure(figsize=(12,2));
ax = plt.axes(projection=ccrs.PlateCarree());
ds_test['thetao'][0, 0, ::].plot.pcolormesh(ax=ax);
ax.coastlines();
type(ds_test['thetao'].data)
Explanation: So the error could be fixed by breaking your full grid to several well-defined 2D tiles... or maybe it's easier to just use CDO...
CDO remapbil instead...
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
TP2 - Object recognition using neural networks and convolutional neural networks
M4108C/M4109C - INFOgr2D
Student 1
Step1: Your response
Step2: Your comment
Step3: We halved the number of training images.
I-2. Fully-connected NNs on CIFAR-10
1) Design a fully connected NN named 'modelCifar_nn1' including 2 layers of 256 and 512 neurons with the sigmoid activation function. Train this model with 10 epochs and batch_size = 500 (remember to pre-process them before). Test the model and report the following results
Step4: Your comment
Step5: Your observation and comment
Step6: Your observation and comment
Step7: Your observation and comment
Step8: Your observation and comment
Step9: 3) Now describe your pre-processed data for training and validation
Step10: Your observation and comments
Step11: Result, observation and comment
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
np.random.seed(7)
import keras
from keras.datasets import cifar10
# load and split data into training and test sets --> it may take some times with your own laptop
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# describe your data (use print function)
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
print("train label : ",y_train.shape)
print("test label : ",y_test.shape)
nclass = len(np.unique(y_train))
print("number of classes:",nclass)
Explanation: TP2 - Object recognition using neural networks and convolutional neural networks
M4108C/M4109C - INFOgr2D
Student 1: Antoine Gicquel
<br>
For submission: <font style="color:blue"> TP2_nom1_nom2.ipynb </font>, Due: <font style="color:blue"> 18/03/2018 </font>
Introduction
In this lab, we design and observe the performance of the fully connected neural networks (NNs) as well as the convolutional neural networks (CNNs) for object regconition task. All implementations should be in Keras with Tensorflow backend. This lab includes three parts:
In the first part, we perform object recognition using NNs and CNNs on the CIFAR-10 dataset (import from Keras).
In the second part, we work on the image data which are imported from disk.
The last part includes some advanced exercices.
Read and respond to each question. Use the print() function to show results in code cells and write your comments/responses using Markdown cells.
IMPORTANT: Every result should be commented!
NOTE: (max 20 pts)
- part I: 10 pts
- part II: 6 pts
- part III: 2 pts
- clarity and presentation: 2 pts
Part I. Object recognition using CIFAR-10 dataset <font color='red'> (10 pts)<font/>
I-1. The CIFAR-10 data
1) Load CIFAR dataset and describe its information (number of training/test images, image size, number of classes, class names, etc.) <font color='red'> (1 pts)<font/>
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
labels = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
for i in range(0,9):
plt.subplot(3, 3, i+1)
plt.imshow(x_train[i], cmap=plt.get_cmap('gray')); plt.axis('off')
print(labels[y_train[i][0]])
Explanation: Your response:
There are 50,000 training images of size 32x32 with 3 color channels, and 10,000 test images.
2) Display some image samples with their class labels using matplotlib.pyplot <font color='red'> (1 pts)<font/>
End of explanation
x_train = x_train[0:25000,:]
y_train = y_train[0:25000]
print("train size : ",x_train.shape)
print("train label : ",y_train.shape)
Explanation: Your comment:
The labels are listed from the top left to the right, matching the corresponding images.
Here are the 9 images.
3) (If necessary) Reduce the number of training images (using half of them for example) for quick training and small-GPU computer
End of explanation
# pre-process your data
x_train = x_train.reshape(x_train.shape[0], 32*32*3)
x_test = x_test.reshape(x_test.shape[0], 32*32*3)
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
from keras.utils import np_utils
y_train_cat = np_utils.to_categorical(y_train, nclass)
y_test_cat = np_utils.to_categorical(y_test, nclass)
y_train_cat.shape
print("train size : ",x_train.shape)
print("test size : ",x_test.shape)
Explanation: We halved the number of training images.
I-2. Fully-connected NNs on CIFAR-10
1) Design a fully connected NN named 'modelCifar_nn1' including 2 layers of 256 and 512 neurons with the sigmoid activation function. Train this model with 10 epochs and batch_size = 500 (remember to pre-process them before). Test the model and report the following results:
- number of total parameters (explain how to compute?)
- training and testing time
- test loss and accuracy
- number of iterations to complete one epoch (explain how to compute?)
<font color='red'> (2 pts)<font/>
<br/>
Explanation:<br/>
-> one epoch = one forward pass and one backward pass of all the training examples<br/>
-> batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.<br/>
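For example, with 25,000 training images and batch_size = 500, one epoch takes 25,000 / 500 = 50 iterations.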
End of explanation
# Define the model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
modelCifar_nn1 = Sequential()
modelCifar_nn1.add(Dense(256, input_shape=(3072,),activation='sigmoid'))
modelCifar_nn1.add(Dense(512, activation='sigmoid'))
modelCifar_nn1.add(Dense(10,activation='softmax')) #Last layer has nclass nodes
modelCifar_nn1.summary()
# compile and train the model
import time
# compile the model
modelCifar_nn1.compile(loss='categorical_crossentropy', optimizer =RMSprop(lr=0.001), metrics=["accuracy"])
# train the model
start_t_mod= time.time()
modelCifar_nn1.fit(x_train, y_train_cat, batch_size=500, epochs = 10)
finish_t_mod = time.time()
train_time = finish_t_mod - start_t_mod  # avoid shadowing the imported time module
print("training time :", train_time)
# evaluate the model
score = modelCifar_nn1.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your comment:
The integer labels are converted to categorical (one-hot) vectors, and the pixel values are rescaled.
End of explanation
# Define the model
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
modelCifar_nn2 = Sequential()
modelCifar_nn2.add(Dense(256, input_shape=(3072,),activation='relu'))
modelCifar_nn2.add(Dense(512, activation='relu'))
modelCifar_nn2.add(Dense(10,activation='softmax')) #Last layer has nclass nodes
modelCifar_nn2.summary()
# compile and train the model
import time
# compile the model
modelCifar_nn2.compile(loss = 'categorical_crossentropy', optimizer = RMSprop(lr=0.001), metrics = ["accuracy"])
# train the model
start_t_mod= time.time()
modelCifar_nn2.fit(x_train, y_train_cat, batch_size = 500, epochs = 10)
finish_t_mod = time.time()
train_time = finish_t_mod - start_t_mod  # avoid shadowing the imported time module
print("training time :", train_time)
# evaluate the model
score = modelCifar_nn2.evaluate(x_test, y_test_cat)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your observation and comment:
Accuracy is 43% with the sigmoid model after 10 epochs.
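As a check on the parameter count: the first layer has 3072×256 + 256 = 786,688 parameters, the second 256×512 + 512 = 131,584, and the output layer 512×10 + 10 = 5,130, i.e. 923,402 in total (each dense layer contributes inputs×outputs weights plus one bias per output).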
2) Design the NN model named modelCifar_nn2 by replacing the sigmoid activation with the ReLu activation. Train and test this model. Compare to the first one. <font color='red'> (1 pts)<font/>
End of explanation
# reload and pre-process your data
(x2_train, y2_train), (x2_test, y2_test) = cifar10.load_data()
#x2_train = x_train[0:25000,:]
#y2_train = y_train[0:25000]
x2_train = x2_train.astype('float32')
x2_test = x2_test.astype('float32')
x2_train = x2_train / 255.0
x2_test = x2_test / 255.0
# one hot encode outputs
y2_train = np_utils.to_categorical(y2_train)  # use the freshly reloaded labels, not the truncated y_train
y2_test = np_utils.to_categorical(y2_test)
print("train 2 size : ",x2_train.shape)
print("test 2 size : ",x2_test.shape)
print("train 2 label : ",y2_train.shape)
print("test 2 label : ",y2_test.shape)
# Define the model
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers import Flatten
from keras.constraints import maxnorm
modelCifar_cnn1 = Sequential()
modelCifar_cnn1.add(Conv2D(16, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn1.add(Dropout(0.2))
modelCifar_cnn1.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn1.add(Flatten())
modelCifar_cnn1.add(Dense(128, activation='relu', kernel_constraint=maxnorm(y2_test.shape[1])))
modelCifar_cnn1.add(Dropout(0.5))
modelCifar_cnn1.add(Dense(10, activation='softmax'))
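# A quick sanity check on the parameter count (input 32x32x3, 'same' padding):
# Conv_1: 3*3*3*16 + 16 = 448;  Conv_2: 3*3*16*32 + 32 = 4,640
# after two 2x2 poolings the feature map has 8*8*32 = 2,048 values, so
# Dense(128): 2048*128 + 128 = 262,272;  output Dense(10): 128*10 + 10 = 1,290
# Total: 268,650 trainable parameters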
# compile and train the model
import time
from keras.optimizers import SGD
# compile the model
#modelCifar_cnn1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
epochs = 10
lrate = 0.01
decay = lrate/epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])
#modelCifar_cnn1.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# train the model
start_t_mod= time.time()
modelCifar_cnn1.fit(x2_train, y2_train, validation_data=(x2_test, y2_test), epochs=epochs, batch_size=500)
finish_t_mod = time.time()
train_time = finish_t_mod - start_t_mod  # avoid shadowing the imported time module
print("training time :", train_time)
# evaluate the model
score = modelCifar_cnn1.evaluate(x2_test, y2_test)  # y2_test is already one-hot encoded
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Explanation: Your observation and comment: Accuracy is 20% with the ReLU model after 10 epochs.
I-2. CNNs on CIFAR-10
1) Now design a CNN named modelCifar_cnn1 consisting of 2 convolutional layers + one fully-connected layer as follows:
- Conv_1: 16 filters of size 3x3, no padding, no stride, activation Relu
- maxPool_1: size 2x2
- Conv_2: 32 filters of size 3x3, no padding, no stride, activation Relu
- maxPool_2: size 2x2
- fc layer (Dense) 128 nodes
- [Do not forget Flatten() and final output dense layer with 'softmax' activation]
Reload and preprocess the data. Train this model with 10 epochs and batch_size = 500. Test the model and report the following results:
- number of total parameters (explain how to compute?)
- training and testing time
- test loss and accuracy
<font color='red'> (2 pts)<font/>
End of explanation
# Define the model
# modelCifar_cnn2 = Sequential()
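# A minimal sketch for modelCifar_cnn2, assuming the same layout as modelCifar_cnn1
# but with 5x5 filters (these hyperparameters are illustrative, not a graded answer):
modelCifar_cnn2 = Sequential()
modelCifar_cnn2.add(Conv2D(16, (5, 5), input_shape=(32, 32, 3), padding='same', activation='relu'))
modelCifar_cnn2.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn2.add(Conv2D(32, (5, 5), padding='same', activation='relu'))
modelCifar_cnn2.add(MaxPooling2D(pool_size=(2, 2)))
modelCifar_cnn2.add(Flatten())
modelCifar_cnn2.add(Dense(128, activation='relu'))
modelCifar_cnn2.add(Dense(10, activation='softmax'))
modelCifar_cnn2.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])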
Explanation: Your observation and comment:
2) Now modify the modelCifar_cnn1 by changing the filter size of 2 convolutional layers to 5x5. The new model is called modelCifar_cnn2. Train and test the model. Compare to the first CNN. <font color='red'> (1 pts)<font/>
End of explanation
from keras.preprocessing.image import ImageDataGenerator
batchSize = 100
datagen = ImageDataGenerator(rescale=1./255)
train_datagen = datagen.flow_from_directory(
'dataTP2/train', # this is your target directory which includes the training images
target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation
batch_size = batchSize,
class_mode = 'categorical')
validation_datagen = datagen.flow_from_directory(
'dataTP2/validation', # this is your target directory which includes the validation images
target_size = (50, 50), # all images will be resized to 50x50 pixels for fast computation
batch_size = batchSize,
class_mode = 'categorical')
Explanation: Your observation and comment:
*3) Compare the two CNNs with the two NNs in section I-1 in terms of accuracy, loss, number of parameters, calculation time, etc. * <font color='red'> (2 pts)<font/>
Fill the following table for comparison:
| Models | Number of parameters | Training time | Accuracy |
| ---------------|:---------------------:|:--------------:|:--------:|
| modelCifar_nn1 |
| modelCifar_nn2 |
| modelCifar_cnn1|
| modelCifar_cnn2|
Your observation and comment:
Part II - Cat and Dog classification <font color='red'> (6 pts)<font/>
In this part, we design and train CNNs on our data (import from disk). We will work on a small dataset including only 2 classes (cat and dog). Each one has 1000 images for training and 200 for validation.
You can download the data from:
(https://drive.google.com/open?id=15cQfeAuDY1CRuOduF5LZwWZ4koL6Dti9)
1) Describe the downloaded data: numer of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>
Your response:
2) Show some cat and dog images from the train set. Comment. <font color='red'> (1 pts)<font/>
Now we import the ImageDataGenerator module of Keras. This module can be used to pre-process the images and to perform data augmentation. We use 'flow_from_directory()' to generate batches of image data (and their labels) directly from our images in their respective folders (from disk).
End of explanation
# Define the model
# modelPart2_cnn1 = Sequential()
# train with .fit_generator
# modelPart2_cnn1.fit_generator(...)
# Define the model
# modelPart2_cnn2 = Sequential()
# train with .fit_generator
# modelPart2_cnn2.fit_generator(...)
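# A minimal sketch of the generator-based training call for either model (the step
# counts assume 2000 training and 400 validation images with batchSize = 100, per the data above):
# modelPart2_cnn1.fit_generator(
#     train_datagen,
#     steps_per_epoch=2000 // batchSize,
#     epochs=10,
#     validation_data=validation_datagen,
#     validation_steps=400 // batchSize)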
Explanation: 3) Now describe your pre-processed data for training and validation: numer of training and validation images, number of classes, class names? Do the images have the same size? <font color='red'> (1 pts)<font/>
Your response:
4) Redefine, train and validate the 2 CNNs in Part I (namely modelPart2_cnn1, modelPart2_cnn2) on the new data using model.fit_generator instead of model.fit. Observe and compare the results. <font color='red'> (3 pts)<font/>
End of explanation
# Define new model
# modelCifar_cnn3 = Sequential()
# train and test
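# A minimal sketch of data augmentation for modelCifar_cnn3 (the augmentation
# parameters below are illustrative assumptions):
# aug = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
# modelCifar_cnn3.fit_generator(aug.flow(x2_train, y2_train, batch_size=500),
#                               steps_per_epoch=len(x2_train) // 500, epochs=10)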
Explanation: Your observation and comments:
Part III - Advances <font color='red'> (2 pts)<font/>
In this part, you are free to improve your CNN performance using Data augmentation, Dropout, batch normalization, etc. Define at least 2 more CNNs to improve the classification performance of the CIFAR-10 dataset based on the first CNN (modelCifar_cnn1). That means you are not allowed to add more layers, change the number of filters or filter size, etc. Only the use of Data augmentation, Dropout, batch normalization is allowed. To use these techniques, further reading is required.
For each one, you are required to define the model, train, test and report the results.
End of explanation
# Define new model
# modelCifar_cnn4 = Sequential()
# train and test
Explanation: Result, observation and comment:
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Train tensorflow or keras model on GCP or Kubeflow from Notebooks
This notebook introduces you to using Kubeflow Fairing to train models on Kubeflow on Google Kubernetes Engine (GKE) and on Google Cloud AI Platform Training. This notebook demonstrates how to
Step1: Define the model logic
Step2: Train a Keras model in a notebook
Step3: Specify an image registry that will hold the image built by fairing
Step4: Deploy the training job to kubeflow cluster
Step5: Deploy distributed training job to kubeflow cluster
Step6: Deploy the training job as CMLE training job
Doesn’t support CMLE distributed training
Step7: Inspect training process with tensorboard
Step8: Deploy the trained model to Kubeflow for predictions
<ASSISTANT_TASK:>
Python Code:
import os
import logging
import tensorflow as tf
import fairing
import numpy as np
from datetime import datetime
from fairing.cloud import gcp
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry instead of GCR
# For local notebook, GCP_PROJECT should be set explicitly
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
GCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/'
# This is for local notebook instead of that in kubeflow cluster
# os.environ['GOOGLE_APPLICATION_CREDENTIALS']=
Explanation: Train tensorflow or keras model on GCP or Kubeflow from Notebooks
This notebook introduces you to using Kubeflow Fairing to train models on Kubeflow on Google Kubernetes Engine (GKE) and on Google Cloud AI Platform Training. This notebook demonstrates how to:
Train a Keras model in a local notebook,
Use Kubeflow Fairing to train a Keras model remotely on a Kubeflow cluster,
Use Kubeflow Fairing to train a Keras model remotely on AI Platform Training,
Use Kubeflow Fairing to deploy a trained model to Kubeflow, and call the deployed endpoint for predictions.
You need Python 3.6 to use Kubeflow Fairing.
Setups
Pre-conditions
Deployed a kubeflow cluster through https://deploy.kubeflow.cloud/
Have the following environment variables ready:
PROJECT_ID # project hosting the Kubeflow cluster or used for running AI Platform training
DEPLOYMENT_NAME # Kubeflow deployment name, the same as the cluster name after deployment
GCP_BUCKET # google cloud storage bucket
Create service account
bash
export SA_NAME=[service account name]
gcloud iam service-accounts create ${SA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
--role 'roles/editor'
gcloud iam service-accounts keys create ~/key.json \
--iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
Authorize Docker for Google Container Registry (GCR)
bash
gcloud auth configure-docker
Update local kubeconfig (for submitting jobs to the Kubeflow cluster)
bash
export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name, i.e. the Kubernetes cluster name
export ZONE=us-central1-c
gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE}
Set the environmental variable: GOOGLE_APPLICATION_CREDENTIALS
bash
export GOOGLE_APPLICATION_CREDENTIALS=....
python
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=...
Install the latest version of fairing
python
pip install git+https://github.com/kubeflow/fairing@master
Please note that the above configuration is required for a notebook service running outside the Kubeflow environment. The examples demonstrated in this notebook were also fully tested on a notebook service outside the Kubeflow cluster.
The environment variables (e.g., service account, project, etc.) should have been pre-configured while setting up the cluster.
End of explanation
def gcs_copy(src_path, dst_path):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
def gcs_download(src_path, file_name):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
class TensorflowModel(object):
def __init__(self):
self.model_file = "mnist_model.h5"
self.model = None
def build(self):
self.model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
self.model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(self.model.summary())
def save_model(self):
self.model.save(self.model_file)
gcs_copy(self.model_file, GCP_Bucket + self.model_file)
def train(self):
self.build()
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir=GCP_Bucket + 'logs/'
+ datetime.now().date().__str__())
]
self.model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
self.save_model()
def predict(self, X):
if not self.model:
self.model = tf.keras.models.load_model(self.model_file)
# Do any preprocessing here if needed
prediction = self.model.predict(X)  # Keras predict takes the input array as the first positional argument
return prediction
Explanation: Define the model logic
End of explanation
TensorflowModel().train()
Explanation: Train a Keras model in a notebook
End of explanation
# In this demo, I use gsutil, therefore i compile a special image to install GoogleCloudSDK as based image
base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}
!docker push {base_image}
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
BASE_IMAGE = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-tf'.format(GCP_PROJECT)
Explanation: Specify an image registry that will hold the image built by fairing
End of explanation
from fairing import TrainJob
from fairing.backends import GKEBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GKEBackend())
train_job.submit()
Explanation: Deploy the training job to kubeflow cluster
End of explanation
fairing.config.set_builder(name='docker', registry=DOCKER_REGISTRY,
base_image=BASE_IMAGE, push=True)
fairing.config.set_deployer(name='tfjob', worker_count=1, ps_count=1)
run_fn = fairing.config.fn(TensorflowModel)
run_fn()
Explanation: Deploy distributed training job to kubeflow cluster
End of explanation
from fairing import TrainJob
from fairing.backends import GCPManagedBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())
train_job.submit()
Explanation: Deploy the training job as CMLE training job
Doesn’t support CMLE distributed training
End of explanation
# ! tensorboard --logdir=gs://kubeflow-demo-g/logs --host=localhost --port=8777
Explanation: Inspect training process with tensorboard
End of explanation
from fairing import PredictionEndpoint
from fairing.backends import KubeflowGKEBackend
# The trained_ames_model.joblib is exported during the above local training
endpoint = PredictionEndpoint(TensorflowModel, BASE_IMAGE, input_files=['mnist_model.h5', "requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend())
endpoint.create()
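# Hypothetically, the deployed endpoint can be queried before deleting it, e.g.
# (the method name follows Fairing's examples; treat it as an assumption):
# endpoint.predict_nparray(x_test[:10])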
endpoint.delete()
Explanation: Deploy the trained model to Kubeflow for predictions
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Product Matching
This notebook is presented in the Product Matching webinar, one of many interesting webinars given by Turi. Check out upcoming webinars here. <br>
We will use GraphLab Create to perform product matching between textual descriptions of products from different sources, a task also known as Record Linkage. The data is available here.
The notebook is organized into the following sections
Step1: <a id="eval"></a> Evaluation functions
Step2: <a id="model"></a> Record Linker model
Step3: <a id="feature"></a> Feature Engineering
Step4: <b>In product matching, numbers can be highly helpful as they can represent model identifiers, versions, etc.</b>
Step5: <a id="k"></a> Exploring k
Step6: <b>
We can see that the extracted numbers positively affect our model. <br>
There are many more possible features to explore, and different distance functions to try. Read more in our userguide.
</b>
<a id="r"></a> Exploring the radius
<b>Sometimes not all k results make sense, or we just don't want to present a user with too many possibilities. <br>
For this we have the radius parameter which serves as a distance threshold. </b>
<ASSISTANT_TASK:>
Python Code:
import graphlab as gl
import re
import matplotlib.pyplot as plt
gl.canvas.set_target('ipynb')
%matplotlib inline
amazon = gl.SFrame.read_csv('Amazon.csv', verbose=False)
google = gl.SFrame.read_csv('GoogleProducts.csv', verbose=False)
truth = gl.SFrame.read_csv('Amzon_GoogleProducts_perfectMapping.csv', verbose=False)
print 'Amazon length: ', amazon.num_rows()
amazon.head(2)
print 'Google length: ', google.num_rows()
google.head(2)
print 'Truth length: ', truth.num_rows()
truth.head(2)
def transform(truth, amazon, google):
'''Transform the data into a more manageable format'''
# For the sake of this webinar we will look only at the names of the products
amazon = amazon[['id', 'title']]
google = google[['id', 'name']]
# Add a unique numeric label
amazon = amazon.add_row_number(column_name='label')
google = google.add_row_number(column_name='label')
# Change labels in truth based on the new numerical labels
truth = truth.join(amazon, on={'idAmazon' : 'id'})
truth = truth.join(google, on={'idGoogleBase' : 'id'})
# Rename some columns
amazon = amazon.rename({'title' : 'name'})
truth = truth.rename({
'label' : 'amazon label',
'title' : 'amazon name',
'label.1' : 'google label',
'name' : 'google name'
})
# Remove some others
truth.remove_columns(['idGoogleBase', 'idAmazon'])
amazon = amazon.remove_column('id')
google = google.remove_column('id')
return truth, amazon, google
truth, amazon, google = transform(truth, amazon, google)
amazon.head(3)
google.head(3)
truth.head(3)
Explanation: Product Matching
This notebook is presented in the Product Matching webinar, one of many interesting webinars given by Turi. Check out upcoming webinars here. <br>
We will use GraphLab Create to perform product matching between textual descriptions of products from different sources, a task also known as Record Linkage. The data is available here.
The notebook is organized into the following sections:
- <a href="#load">Loading and cleaning the data</a>
- <a href="#eval">Evaluation functions</a>
- <a href="#model">Record Linker model</a>
- <a href="#feature">Feature Engineering</a>
- <a href="#k">Exploring parameters</a>
<a id="load"></a> Loading and cleaning the data
End of explanation
def accuracy_at(results, truth):
'''Compute the accuracy at k of a record linkage model, given a true mapping'''
joined = truth.join(results, on={'google label' : 'query_label'})
num_correct_labels = (joined['amazon label'] == joined['reference_label']).sum()
return num_correct_labels / float(truth.num_rows())
def get_matches(results, amazon, google):
'''Return the results of a record linkage model in a readable format'''
joined = results.join(amazon, on={'reference_label' : 'label'}).join(google, on={'query_label' : 'label'})
joined = joined[['name', 'name.1', 'distance', 'rank']]
joined = joined.rename({'name' : 'amazon name', 'name.1' : 'google name'})
return joined
Explanation: <a id="eval"></a> Evaluation functions
End of explanation
base_linker = gl.record_linker.create(amazon, features=['name'])
results = base_linker.link(google, k=3)
results
print 'Accuracy@3', accuracy_at(results, truth)
get_matches(results, amazon, google)
Explanation: <a id="model"></a> Record Linker model
End of explanation
# Example of features that the record linker create
amazon['3 char'] = gl.text_analytics.count_ngrams(amazon['name'], n=3, method='character')
amazon.head(3)
# Remove the feture for the sake of cleanliness
amazon = amazon.remove_column('3 char')
Explanation: <a id="feature"></a> Feature Engineering
End of explanation
from collections import Counter
# Extract numbers from the name
amazon['numbers'] = amazon['name'].apply(lambda name: dict(Counter(re.findall('\d+\.*\d*', name))))
google['numbers'] = google['name'].apply(lambda name: dict(Counter(re.findall('\d+\.*\d*', name))))
amazon.head(5)
# Create a record linker using the extracted numeric features
num_linker = gl.record_linker.create(amazon, features=['name', 'numbers'])
results = num_linker.link(google, k=3, verbose=False)
print 'Accuracy@3', accuracy_at(results, truth)
get_matches(results, amazon, google)
Explanation: <b>In product matching, numbers can be highly helpful as they can represent model identifiers, versions, etc.</b>
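For instance, two listings that differ only in a model number (say, a hypothetical 'EOS 450D' vs 'EOS 400D') are distinguished mainly by those digits.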
End of explanation
# Calculate accuracy at k for k between 1 and 10 for both models
k_range = range(1, 11)
base_accuracy = [accuracy_at(base_linker.link(google, k, verbose=False), truth) for k in k_range]
num_accuracy = [accuracy_at(num_linker.link(google, k, verbose=False), truth) for k in k_range]
# Plot the results
plt.style.use('ggplot')
plt.title('Accuracy@k')
plt.ylabel('Accuracy')
plt.xlabel('k')
plt.plot(k_range, base_accuracy, marker='o', color='b', label='Base Linker')
plt.plot(k_range, num_accuracy, marker='o', color='g', label='Number Linker')
plt.legend(loc=4)
None
Explanation: <a id="k"></a> Exploring k
End of explanation
results = num_linker.link(google, k=10, verbose=False)
print 'Accuracy:', accuracy_at(results, truth)
print 'Possible number of results to go through:', len(results)
results['distance'].show()
results = num_linker.link(google, k=None, radius=1.61, verbose=False)
print 'Accuracy:', accuracy_at(results, truth)
print 'Possible number of results to go through:', len(results)
Explanation: <b>
We can see that the extracted numbers positively affect our model. <br>
There are many more possible features to explore, and different distance functions to try. Read more in our userguide.
</b>
<a id="r"></a> Exploring the radius
<b>Sometimes not all k results make sense, or we just don't want to present a user with too many possibilities. <br>
For this we have the radius parameter which serves as a distance threshold. </b>
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Grading
We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submitting function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want.
Step2: Implementing EM for GMM
For debugging, we will use samples from a Gaussian mixture model with unknown mean, variance, and priors. We also added initial values of parameters for grading purposes.
Step4: Reminder
Remember that the EM algorithm is a coordinate descent optimization of the variational lower bound $\mathcal{L}(\theta, q) = \int q(T) \log\frac{p(X, T|\theta)}{q(T)}dT\to \max$.
<b>E-step</b>
Step6: M-step
In M-step we need to maximize $\mathbb{E}_{q(T)}\log p(X,T | \theta)$ with respect to $\theta$. In our model this means that we need to find optimal values of $\pi$, $\mu$, $\Sigma$. To do so, you need to compute the derivatives and
set them to zero. You should start by deriving formulas for $\mu$ as it is the easiest part. Then move on to $\Sigma$. Here it is crucial to optimize the function w.r.t. $\Lambda = \Sigma^{-1}$ and then invert the obtained result. Finally, to compute $\pi$, you will need <a href="https
Step8: Loss function
Finally, we need some function to track convergence. We will use the variational lower bound $\mathcal{L}$ for this purpose. We will stop our EM iterations when $\mathcal{L}$ saturates. Usually, you will need only about 10-20 iterations to converge. It is also useful to check that this function never decreases during training. If it does, you have a bug in your code.
<b>Task 3
Step9: Bringing it all together
Now that we have E step, M step and VLB, we can implement the training loop. We will initialize values of $\pi$, $\mu$ and $\Sigma$ to some random numbers, train until $\mathcal{L}$ stops changing, and return the resulting points. We also know that the EM algorithm converges to local optima. To find a better local optimum, we will restart the algorithm multiple times from different (random) starting positions. Each training trial should stop either when the maximum number of iterations is reached or when the relative improvement is smaller than the given tolerance ($|\frac{\mathcal{L}_i-\mathcal{L}_{i-1}}{\mathcal{L}_{i-1}}| \le \text{rtol}$).
Remember that the initial (random) values of $\pi$ that you generate must be non-negative and sum up to 1. Also, $\Sigma$ matrices must be symmetric and positive semi-definite. If you don't know how to generate those matrices, you can use $\Sigma=I$ as initialization.
You will also sometimes get numerical errors because of component collapsing. The easiest way to deal with these problems is to restart the procedure.
<b>Task 4
Step10: If you implemented all the steps correctly, your algorithm should converge in about 20 iterations. Let's plot the clusters to see it. We will assign a cluster label as the most probable cluster index. This can be found using a matrix $\gamma$ computed on last E-step.
Step11: Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. <b>Note
Step12: If you want to submit these answers, run cell below
<ASSISTANT_TASK:>
Python Code:
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
print("Downloading Colab files")
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/bayesian-methods-for-ml/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.load_data_week2()
import numpy as np
from numpy.linalg import slogdet, det, solve
from scipy.stats import multivariate_normal
import matplotlib.pyplot as plt
import time
from sklearn.datasets import load_digits
from w2_grader import EMGrader
%matplotlib inline
Explanation: <a href="https://colab.research.google.com/github/saketkc/notebooks/blob/master/python/coursera-BayesianML/02_EM_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
First things first
Click File -> Save a copy in Drive and click Open in new tab in the pop-up window to save your progress in Google Drive.
Expectation-maximization algorithm
In this assignment, we will derive and implement formulas for Gaussian Mixture Model — one of the most commonly used methods for performing soft clustering of the data.
Setup
Loading auxiliary files and importing the necessary libraries.
End of explanation
grader = EMGrader()
Explanation: Grading
We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submitting function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want.
End of explanation
samples = np.load('samples.npz')
X = samples['data']
pi0 = samples['pi0']
mu0 = samples['mu0']
sigma0 = samples['sigma0']
plt.scatter(X[:, 0], X[:, 1], c='grey', s=30)
plt.axis('equal')
plt.show()
Explanation: Implementing EM for GMM
For debugging, we will use samples from a Gaussian mixture model with unknown mean, variance, and priors. We also added initial values of parameters for grading purposes.
End of explanation
def E_step(X, pi, mu, sigma):
'''Performs E-step on GMM model
Each input is numpy array:
X: (N x d), data points
pi: (C), mixture component weights
mu: (C x d), mixture component means
sigma: (C x d x d), mixture component covariance matrices
Returns:
gamma: (N x C), probabilities of clusters for objects
'''
N = X.shape[0] # number of objects
C = pi.shape[0] # number of clusters
d = mu.shape[1] # dimension of each object
gamma = np.zeros((N, C)) # distribution q(T)
### YOUR CODE HERE
for c in range(C):
gamma[:, c] = 0.5*np.diag(np.dot((X-mu[c,:]), np.linalg.solve(sigma[c,:,:], (X-mu[c,:]).T)))
gamma_min = np.amin(gamma, axis=1, keepdims=True)  # row-wise minimum of the quadratic form
gamma = gamma-gamma_min  # so that exp(-gamma) stays in (0, 1] and cannot overflow
sigma_det = np.linalg.det(sigma)
gamma = pi * np.exp(-gamma) * np.power(np.power(2*np.pi, d)*sigma_det, -0.5)  # (2*pi)^d uses the data dimension; this constant cancels in the row normalization below
gamma = gamma/np.sum(gamma, axis=1, keepdims=True)
return gamma
gamma = E_step(X, pi0, mu0, sigma0)
grader.submit_e_step(gamma)
Explanation: Reminder
Remember that the EM algorithm is a coordinate descent optimization of the variational lower bound $\mathcal{L}(\theta, q) = \int q(T) \log\frac{p(X, T|\theta)}{q(T)}dT\to \max$.
<b>E-step</b>:<br>
$\mathcal{L}(\theta, q) \to \max\limits_{q} \Leftrightarrow \mathcal{KL} [q(T) \,\|\, p(T|X, \theta)] \to \min \limits_{q\in Q} \Rightarrow q(T) = p(T|X, \theta)$<br>
<b>M-step</b>:<br>
$\mathcal{L}(\theta, q) \to \max\limits_{\theta} \Leftrightarrow \mathbb{E}_{q(T)}\log p(X,T | \theta) \to \max\limits_{\theta}$
For GMM, $\theta$ is a set of parameters that consists of mean vectors $\mu_c$, covariance matrices $\Sigma_c$ and priors $\pi_c$ for each component.
Latent variables $T$ are indices of components to which each data point is assigned, i.e. $t_i$ is the cluster index for object $x_i$.
The joint distribution can be written as follows: $\log p(T, X \mid \theta) = \sum\limits_{i=1}^N \log p(t_i, x_i \mid \theta) = \sum\limits_{i=1}^N \sum\limits_{c=1}^C q(t_i = c) \log \left (\pi_c \, f_{\!\mathcal{N}}(x_i \mid \mu_c, \Sigma_c)\right)$,
where $f_{\!\mathcal{N}}(x \mid \mu_c, \Sigma_c) = \frac{1}{\sqrt{(2\pi)^n|\boldsymbol\Sigma_c|}}
\exp\left(-\frac{1}{2}({x}-{\mu_c})^T{\boldsymbol\Sigma_c}^{-1}({x}-{\mu_c})
\right)$ is the probability density function (pdf) of the normal distribution $\mathcal{N}(x_i \mid \mu_c, \Sigma_c)$.
E-step
In this step we need to estimate the posterior distribution over the latent variables with fixed values of parameters: $q_i(t_i) = p(t_i \mid x_i, \theta)$. We assume that $t_i$ equals the cluster index of the true component of the $x_i$ object. To do so we need to compute $\gamma_{ic} = p(t_i = c \mid x_i, \theta)$. Note that $\sum\limits_{c=1}^C\gamma_{ic}=1$.
<b>Important trick 1:</b> It is important to avoid numerical errors. At some point you will have to compute the formula of the following form: $\frac{e^{y_i}}{\sum_j e^{y_j}}$, which is called softmax. When you compute exponents of large numbers, some numbers may become infinity. You can avoid this by dividing numerator and denominator by $e^{\max(y)}$: $\frac{e^{y_i-\max(y)}}{\sum_j e^{y_j - \max(y)}}$. After this transformation maximum value in the denominator will be equal to one. All other terms will contribute smaller values. So, to compute desired formula you first subtract maximum value from each component in vector $\mathbf{y}$ and then compute everything else as before.
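As a tiny illustration of trick 1 (the values here are made up):
y = np.array([1000., 1001.])
stable_softmax = np.exp(y - y.max()) / np.exp(y - y.max()).sum()  # naive np.exp(y) would overflow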
<b>Important trick 2:</b> You will probably need to compute formula of the form $A^{-1}x$ at some point. You would normally inverse $A$ and then multiply it by $x$. A bit faster and more numerically accurate way to do this is to directly solve equation $Ay = x$ by using a special function. Its solution is $y=A^{-1}x$, but the equation $Ay = x$ can be solved by methods which do not explicitely invert the matrix. You can use np.linalg.solve for this.
<b>Other useful functions: </b> <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.slogdet.html">slogdet</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.det.html#numpy.linalg.det">det</a>
<b>Task 1:</b> Implement E-step for GMM using template below.
End of explanation
def M_step(X, gamma):
'''Performs M-step on GMM model
Each input is numpy array:
X: (N x d), data points
gamma: (N x C), distribution q(T)
Returns:
pi: (C)
mu: (C x d)
sigma: (C x d x d)
'''
N = X.shape[0] # number of objects
C = gamma.shape[1] # number of clusters
d = X.shape[1] # dimension of each object
pi = np.zeros(C)
mu = np.zeros((C,d))
sigma = np.zeros((C,d,d))
pi = gamma.mean(axis=0)
mu = np.dot(gamma.T,X)/gamma.sum(axis=0)[:, np.newaxis]
for c in range(C):
sigma[c,:,:] = np.dot((X-mu[c]).T, gamma[:,c].reshape(N,1) * (X-mu[c]))/gamma.sum(axis=0)[c]
return pi, mu, sigma
gamma = E_step(X, pi0, mu0, sigma0)
pi, mu, sigma = M_step(X, gamma)
grader.submit_m_step(pi, mu, sigma)
Explanation: M-step
In M-step we need to maximize $\mathbb{E}_{q(T)}\log p(X,T | \theta)$ with respect to $\theta$. In our model this means that we need to find optimal values of $\pi$, $\mu$, $\Sigma$. To do so, you need to compute the derivatives and
set them to zero. You should start by deriving formulas for $\mu$ as it is the easiest part. Then move on to $\Sigma$. Here it is crucial to optimize the function w.r.t. $\Lambda = \Sigma^{-1}$ and then invert the obtained result. Finally, to compute $\pi$, you will need the <a href="https://www3.nd.edu/~jstiver/FIN360/Constrained%20Optimization.pdf">Lagrange Multipliers technique</a> to satisfy the constraint $\sum\limits_{i=1}^{n}\pi_i = 1$.
<br>
<b>Important note:</b> You will need to compute derivatives of scalars with respect to matrices. To refresh this technique from previous courses, see the <a href="https://en.wikipedia.org/wiki/Matrix_calculus">wiki article</a> about it. Main formulas of matrix derivatives can be found in <a href="http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf">Chapter 2 of The Matrix Cookbook</a>. For example, there you may find that $\frac{\partial}{\partial A}\log |A| = A^{-T}$.
<b>Task 2:</b> Implement M-step for GMM using template below.
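For reference, setting these derivatives to zero yields the standard closed-form updates that the template should implement (a well-known result, stated here for convenience):
$$\pi_c = \frac{1}{N}\sum_{i=1}^{N}\gamma_{ic}, \qquad \mu_c = \frac{\sum_{i=1}^{N}\gamma_{ic}\,x_i}{\sum_{i=1}^{N}\gamma_{ic}}, \qquad \Sigma_c = \frac{\sum_{i=1}^{N}\gamma_{ic}\,(x_i-\mu_c)(x_i-\mu_c)^T}{\sum_{i=1}^{N}\gamma_{ic}}$$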
End of explanation
def compute_vlb(X, pi, mu, sigma, gamma):
'''
Each input is numpy array:
X: (N x d), data points
gamma: (N x C), distribution q(T)
pi: (C)
mu: (C x d)
sigma: (C x d x d)
Returns value of variational lower bound
'''
N = X.shape[0] # number of objects
C = gamma.shape[1] # number of clusters
d = X.shape[1] # dimension of each object
### YOUR CODE HERE
loss = np.zeros(N)
for c in range(C):
mvn = multivariate_normal(mu[c, :], sigma[c, :, :], allow_singular=True)
loss += ((np.log(pi[c]) + mvn.logpdf(X)) - np.log(gamma[:,c]))*gamma[:,c]
loss = np.sum(loss)
return loss
pi, mu, sigma = pi0, mu0, sigma0
gamma = E_step(X, pi, mu, sigma)
pi, mu, sigma = M_step(X, gamma)
loss = compute_vlb(X, pi, mu, sigma, gamma)
grader.submit_VLB(loss)
Explanation: Loss function
Finally, we need some function to track convergence. We will use the variational lower bound $\mathcal{L}$ for this purpose. We will stop our EM iterations when $\mathcal{L}$ saturates. Usually, you will need only about 10-20 iterations to converge. It is also useful to check that this function never decreases during training. If it does, you have a bug in your code.
<b>Task 3:</b> Implement a function that will compute $\mathcal{L}$ using template below.
$$\mathcal{L} = \sum_{i=1}^{N} \sum_{c=1}^{C} q(t_i =c) (\log \pi_c + \log f_{\!\mathcal{N}}(x_i \mid \mu_c, \Sigma_c)) - \sum_{i=1}^{N} \sum_{c=1}^{C} q(t_i =c) \log q(t_i =c)$$
End of explanation
def train_EM(X, C, rtol=1e-3, max_iter=100, restarts=10):
'''
Starts with random initialization *restarts* times
Runs optimization until saturation with *rtol* reached
or *max_iter* iterations were made.
X: (N, d), data points
C: int, number of clusters
'''
N = X.shape[0] # number of objects
d = X.shape[1] # dimension of each object
best_loss = None
best_pi = None
best_mu = None
best_sigma = None
for _ in range(restarts):
try:
### YOUR CODE HERE
# X: (N x d), data points
# gamma: (N x C), distribution q(T)
# pi: (C)
# mu: (C x d)
# sigma: (C x d x d)
pi = np.array([1/C]*C)
mu = np.random.randn(C,d)
#sigmas = np.eye(d)
#sigma = np.array([sigmas]*C)
sigma = np.zeros((C, d, d))
for c in range(C):
sigma[c] = np.eye(d) * np.random.uniform(1, C)
prev_loss = None
for i in range(max_iter):
gamma = E_step(X, pi, mu, sigma)
pi, mu, sigma = M_step(X, gamma)
loss = compute_vlb(X, pi, mu, sigma, gamma)
# stop early once the relative improvement of the lower bound drops below rtol
if prev_loss is not None and abs((loss - prev_loss) / prev_loss) <= rtol:
break
prev_loss = loss
# the lower bound is maximized, so keep the restart with the highest final value
if best_loss is None or loss > best_loss:
best_loss = loss
best_pi = pi
best_mu = mu
best_sigma = sigma
except np.linalg.LinAlgError:
print("Singular matrix: components collapsed")
pass
return best_loss, best_pi, best_mu, best_sigma
best_loss, best_pi, best_mu, best_sigma = train_EM(X, 3)
grader.submit_EM(best_loss)
Explanation: Bringing it all together
Now that we have E step, M step and VLB, we can implement the training loop. We will initialize values of $\pi$, $\mu$ and $\Sigma$ to some random numbers, train until $\mathcal{L}$ stops changing, and return the resulting points. We also know that the EM algorithm converges to local optima. To find a better local optimum, we will restart the algorithm multiple times from different (random) starting positions. Each training trial should stop either when the maximum number of iterations is reached or when the relative improvement is smaller than the given tolerance ($|\frac{\mathcal{L}_i-\mathcal{L}_{i-1}}{\mathcal{L}_{i-1}}| \le \text{rtol}$).
Remember that the initial (random) values of $\pi$ that you generate must be non-negative and sum up to 1. Also, $\Sigma$ matrices must be symmetric and positive semi-definite. If you don't know how to generate those matrices, you can use $\Sigma=I$ as initialization.
You will also sometimes get numerical errors because of component collapsing. The easiest way to deal with these problems is to restart the procedure.
<b>Task 4:</b> Implement training procedure
End of explanation
gamma = E_step(X, best_pi, best_mu, best_sigma)
labels = gamma.argmax(axis=1)
colors = np.array([(31, 119, 180), (255, 127, 14), (44, 160, 44)]) / 255.
plt.scatter(X[:, 0], X[:, 1], c=colors[labels], s=30)
plt.axis('equal')
plt.show()
Explanation: If you implemented all the steps correctly, your algorithm should converge in about 20 iterations. Let's plot the clusters to see it. We will assign a cluster label as the most probable cluster index. This can be found using a matrix $\gamma$ computed on last E-step.
End of explanation
STUDENT_EMAIL = ""
STUDENT_TOKEN = ""
grader.status()
Explanation: Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. <b>Note:</b> The token expires 30 minutes after generation.
End of explanation
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
Explanation: If you want to submit these answers, run cell below
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Writing Low-Level TensorFlow Code
Learning Objectives
Practice defining and performing basic operations on constant Tensors
Use Tensorflow's automatic differentiation capability
Learn how to train a linear regression from scratch with TensorFLow
Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
At last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values can not be changed, while variables values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x use one of the following methods: x.assign(new_value), x.assign_add(value_to_be_added), or x.assign_sub(value_to_be_subtracted).
Step2: Point-wise operations
Tensorflow offers similar point-wise tensor operations as numpy does
Step3: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
Step4: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function
Step5: Let's also create a test dataset to evaluate our models
Step6: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set
Step7: Using mean squared error, our loss is
Step8: This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
Step9: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of tf.GradientTape instance which will record gradient information
Step10: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Step11: This is indeed much better!
Bonus
Try modeling a non-linear function such as | <ASSISTANT_TASK:>
Python Code:
# Here we'll import data processing libraries like Numpy and Tensorflow
import numpy as np
import tensorflow as tf
# Use matplotlib for visualizing the model
from matplotlib import pyplot as plt
# Here we'll show the currently installed version of TensorFlow
print(tf.__version__)
Explanation: Writing Low-Level TensorFlow Code
Learning Objectives
Practice defining and performing basic operations on constant Tensors
Use Tensorflow's automatic differentiation capability
Learn how to train a linear regression from scratch with TensorFlow
Introduction
In this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with Python built-in lists and NumPy arrays.
Then we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is tf.GradientTape, which we will describe.
Finally, we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model.
As a bonus exercise, we will do the same for data generated from a non-linear model, forcing us to manually engineer non-linear features to improve our linear model's performance.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# Creates a constant tensor from a tensor-like object.
x = tf.constant([2, 3, 4])
x
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
# The .assign() method assigns a new value to the referenced variable.
x.assign(45.8)
x
# The .assign_add() method updates the referenced variable by adding the value to it.
x.assign_add(4)
x
# The .assign_sub() method updates the referenced variable by subtracting the value from it.
x.assign_sub(3)
x
Explanation: Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable).
Constant values cannot be changed, while variable values can be.
The main difference is that instances of tf.Variable have methods allowing us to change
their values while tensors constructed with tf.constant don't have these methods, and
therefore their values can not be changed. When you want to change the value of a tf.Variable
x, use one of the following methods:
x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
End of explanation
# Creates a constant tensor from a tensor-like object.
a = tf.constant([5, 3, 8]) # TODO 1a
b = tf.constant([3, -1, 2])
# Using the .add() method components of a tensor will be added.
c = tf.add(a, b)
d = a + b
# Let's output the value of `c` and `d`.
print("c:", c)
print("d:", d)
# Creates a constant tensor from a tensor-like object.
a = tf.constant([5, 3, 8]) # TODO 1b
b = tf.constant([3, -1, 2])
# Using the .multiply() method components of a tensor will be multiplied.
c = tf.multiply(a, b)
d = a * b
# Let's output the value of `c` and `d`.
print("c:", c)
print("d:", d)
# TODO 1c
# tf.math.exp expects floats so we need to explicitly give the type
a = tf.constant([5, 3, 8], dtype=tf.float32)
b = tf.math.exp(a)
# Let's output the value of `b`.
print("b:", b)
Explanation: Point-wise operations
TensorFlow offers point-wise tensor operations similar to those in NumPy:
tf.add allows us to add the components of a tensor
tf.multiply allows us to multiply the components of a tensor
tf.subtract allows us to subtract the components of a tensor
tf.math.* contains the usual math operations to be applied on the components of a tensor
and many more...
Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
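For instance, subtraction can be written either way (a small sketch of our own, mirroring the cells above):
python
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
print(tf.subtract(a, b))  # [2 4 6]
print(a - b)              # same result via the overloaded operator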
End of explanation
# native python list
a_py = [1, 2]
b_py = [3, 4]
# Using the .add() method components of a tensor will be added.
tf.add(a_py, b_py)
# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
# Using the .add() method components of a tensor will be added.
tf.add(a_np, b_np)
# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
# Using the .add() method components of a tensor will be added.
tf.add(a_tf, b_tf)
# Here using the .numpy() method we'll convert a `native TF tensor` to a `NumPy array`.
a_tf.numpy()
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
# Creates a constant tensor from a tensor-like object.
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
# Let's output the value of `X` and `Y`.
print("X:{}".format(X))
print("Y:{}".format(Y))
Explanation: Linear Regression
Now let's use low-level TensorFlow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high-level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
# Creates a constant tensor from a tensor-like object.
X_test = tf.constant(range(10, 20), dtype=tf.float32)
Y_test = 2 * X_test + 10
# Let's output the value of `X_test` and `Y_test`.
print("X_test:{}".format(X_test))
print("Y_test:{}".format(Y_test))
Explanation: Let's also create a test dataset to evaluate our models:
End of explanation
# The numpy().mean() will compute the arithmetic mean or average of the given data (array elements) along the specified axis.
y_mean = Y.numpy().mean()
# Let's define predict_mean() function.
def predict_mean(X):
y_hat = [y_mean] * len(X)
return y_hat
Y_hat = predict_mean(X_test)
Explanation: Loss Function
The simplest model we can build is a model that for each value of x returns the sample mean of the training set:
End of explanation
# Let's evaluate the loss.
errors = (Y_hat - Y)**2
loss = tf.reduce_mean(errors)
loss.numpy()
Explanation: Using mean squared error, our loss is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
For this simple model the loss is then:
End of explanation
# Let's define loss_mse() function which is taking arguments as coefficients of the model
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
Explanation: This value for the MSE loss above will give us a baseline to compare how a more complex model is doing.
Now, if $\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
End of explanation
# Let's define compute_gradients() procedure for computing the loss gradients with respect to the model weights:
# TODO 2
def compute_gradients(X, Y, w0, w1):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, w0, w1)
return tape.gradient(loss, [w0, w1])
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
# Let's output the value of `dw0`.
print("dw0:", dw0.numpy())
# Let's output the value of `dw1`.
print("dw1", dw1.numpy())
Explanation: Gradient Function
To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables.
For that we need to wrap our loss computation within the context of tf.GradientTape instance which will record gradient information:
python
with tf.GradientTape() as tape:
loss = # computation
This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable:
python
gradients = tape.gradient(loss, [w0, w1])
End of explanation
# TODO 3
STEPS = 1000
LEARNING_RATE = .02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"
# The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape.
w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)
for step in range(0, STEPS + 1):
dw0, dw1 = compute_gradients(X, Y, w0, w1)
w0.assign_sub(dw0 * LEARNING_RATE)
w1.assign_sub(dw1 * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(X, Y, w0, w1)
print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
# Here we can compare the test loss for this linear regression to the test loss from the baseline model.
# Its output will always be the mean of the training set:
loss = loss_mse(X_test, Y_test, w0, w1)
loss.numpy()
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
End of explanation
X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)
Y = X * tf.exp(-X**2)
%matplotlib inline
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(X, Y)
# Let's make_features() procedure.
def make_features(X):
# The tf.ones_like() method will create a tensor of all ones that has the same shape as the input.
f1 = tf.ones_like(X)
f2 = X
# The tf.square() method will compute square of input tensor element-wise.
f3 = tf.square(X)
# The tf.sqrt() method will compute element-wise square root of the input tensor.
f4 = tf.sqrt(X)
# The tf.exp() method will compute exponential of input tensor element-wise.
f5 = tf.exp(X)
# The tf.stack() method will stacks a list of rank-R tensors into one rank-(R+1) tensor.
return tf.stack([f1, f2, f3, f4, f5], axis=1)
# Let's define predict() procedure that will remove dimensions of size 1 from the shape of a tensor.
def predict(X, W):
return tf.squeeze(X @ W, -1)
# Let's define loss_mse() procedure that will evaluate the mean of elements across dimensions of a tensor.
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
errors = (Y_hat - Y)**2
return tf.reduce_mean(errors)
# Let's define compute_gradients() procedure for computing the loss gradients.
def compute_gradients(X, Y, W):
with tf.GradientTape() as tape:
loss = loss_mse(X, Y, W)
return tape.gradient(loss, W)
STEPS = 2000
LEARNING_RATE = .02
Xf = make_features(X)
n_weights = Xf.shape[1]
W = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)
# For plotting
steps, losses = [], []
plt.figure()
for step in range(1, STEPS + 1):
dW = compute_gradients(Xf, Y, W)
W.assign_sub(dW * LEARNING_RATE)
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
print("STEP: {} MSE: {}".format(STEPS, loss_mse(Xf, Y, W)))
# The .figure() method will create a new figure, or activate an existing figure.
plt.figure()
# The .plot() is a versatile function, and will take an arbitrary number of arguments. For example, to plot x versus y.
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
# The .legend() method will place a legend on the axes.
plt.legend()
Explanation: This is indeed much better!
Bonus
Try modeling a non-linear function such as: $y=xe^{-x^2}$
End of explanation
<END_TASK> |
15,707 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Working on new API
The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in the Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end.
In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget.
Step1: Make widget using new API
Step2: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets.
Step3: Above, we made a new widget visualizing this region of interest.
Generate random DataFrame
Here we will generate a DataFrame with random data and visualize it using the widget.
Step4: Above, we selected a region of interest using the front-end brush crop tool and exported it to a DataFrame, df_random. Below we will visualize it using a new widget. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from clustergrammer_widget import *
net = Network(clustergrammer_widget)
Explanation: Working on new API
The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in the Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end.
In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget.
End of explanation
net.load_file('rc_two_cats.txt')
net.cluster()
net.widget()
Explanation: Make widget using new API
End of explanation
df_genes = net.widget_df()
df_genes.shape
net.load_df(df_genes)
net.cluster()
net.widget()
Explanation: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets.
End of explanation
# generate random matrix
num_rows = 500
num_cols = 10
np.random.seed(seed=100)
mat = np.random.rand(num_rows, num_cols)
# make row and col labels
rows = range(num_rows)
cols = range(num_cols)
rows = [str(i) for i in rows]
cols = [str(i) for i in cols]
# make dataframe
df = pd.DataFrame(data=mat, columns=cols, index=rows)
net.load_df(df)
net.cluster()
net.widget()
Explanation: Above, we made a new widget visualizing this region of interest.
Generate random DataFrame
Here we will generate a DataFrame with random data and visualize it using the widget.
End of explanation
df_random = net.widget_df()
df_random.shape
net.load_df(df_random)
net.cluster()
net.widget()
Explanation: Above, we selected a region of interest using the front-end brush crop tool and exported it to a DataFrame, df_random. Below we will visualize it using a new widget.
End of explanation
<END_TASK> |
15,708 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
LAB 01
Step1: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
Step2: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the query below
Step3: Verify table creation
Verify that you created the dataset.
Step4: Baseline Model
Step5: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE
Step6: NOTE
Step7: Model 1
Step8: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
Step9: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
Step10: Model 2
Step11: Model 3 | <ASSISTANT_TASK:>
Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
Explanation: LAB 01: Basic Feature Engineering in BQML
Learning Objectives
Create SQL statements to evaluate the model
Extract temporal features
Perform a feature cross on temporal features
Introduction
In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model.
In this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Set up environment variables and load necessary libraries
End of explanation
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
echo "\nHere are your current datasets:"
bq ls
fi
Explanation: The source dataset
Our dataset is hosted in BigQuery. The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
Explanation: Create the training data table
Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the query below: this clause allows us to TRAIN on a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post.
Note: The dataset in the create table code below is the one created previously, e.g. "feat_eng". The table name is "feateng_training_data". Run the query to create the table.
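As a rough sketch of why the MOD clause gives a stable, repeatable sample (our own illustration; the actual hashing is done by FARM_FINGERPRINT inside BigQuery):
python
# Hashing pickup_datetime assigns each row a stable pseudo-random bucket in [0, 9999];
# keeping only bucket 1 selects the same ~0.01% of rows on every run.
sample_fraction = 1 / 10000
print("{:.4%} of ~55M rows is about {:,.0f} rows".format(
    sample_fraction, 55_000_000 * sample_fraction))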
End of explanation
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
Explanation: Verify table creation
Verify that you created the dataset.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
Explanation: Baseline Model: Create the baseline model
Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques.
When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source.
Now we create the SQL statement to create the baseline model.
End of explanation
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
Explanation: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.
You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Evaluate the baseline model
Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.
NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.
Review the learning and eval statistics for the baseline_model.
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
Explanation: NOTE: Because you performed a linear regression, the results include the following columns:
mean_absolute_error
mean_squared_error
mean_squared_log_error
median_absolute_error
r2_score
explained_variance
Resource for an explanation of the Regression Metrics.
Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.
Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.
R2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.
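As a tiny numeric illustration of RMSE (our own example, not part of the lab):
python
import numpy as np
predicted = np.array([7.0, 9.5, 12.0])  # hypothetical fare predictions ($)
actual = np.array([8.0, 9.0, 13.0])     # observed fares ($)
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(rmse)  # ~0.87, i.e. the typical error is about $0.87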
Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
Explanation: Model 1: EXTRACT dayofweek from the pickup_datetime feature.
As you recall, dayofweek represents the 7 days of the week as an integer. In BigQuery, EXTRACT(DAYOFWEEK FROM ...) returns a value in the range [1, 7], with Sunday as the first day of the week.
If you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned would be integer.
Next, we create a model titled "model_1" from the benchmark model and extract out the DayofWeek.
End of explanation
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.
Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
Explanation: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS dayofweek,
EXTRACT(HOUR
FROM
pickup_datetime) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
Explanation: Model 2: EXTRACT hourofday from the pickup_datetime feature
As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.
Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am.
Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
#pickup_datetime,
#EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
#EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
CONCAT(CAST(EXTRACT(DAYOFWEEK
FROM
pickup_datetime) AS STRING), CAST(EXTRACT(HOUR
FROM
pickup_datetime) AS STRING)) AS hourofday,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
Explanation: Model 3: Feature cross dayofweek and hourofday using CONCAT
First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).
Note: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.
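To build intuition for the crossed, string-cast feature, here is a small Python sketch (our own illustration; the actual transformation happens in the SQL below):
python
# Casting to STRING and concatenating turns each (dayofweek, hourofday) pair
# into a single categorical label, so the model learns a weight per combination.
labels = {str(d) + str(h) for d in range(1, 8) for h in range(24)}
print(len(labels))  # 168 distinct day-hour categories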
Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
End of explanation
<END_TASK> |
15,709 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Intuit craft demonstration
Kyle Willett (Fellow, Insight Data Science)
<font color='red'>Create a reasonable definition(s) of rule performance.</font>
The definition of rule performance I used involves three related measures of success
Step3: Descriptive statistics
Step6: Plot the distribution of money involved per transaction. There are ~900 cases where no money was exchanged, but an alert was still triggered. These are potential bad data points, and might need to be removed from the sample. I'd consult with other members of the team to determine whether that would be appropriate. The distribution of money spent is roughly log-normal.
Step8: No - every case in the cases table is associated with at least one rule triggering an alert.
Step9: Defining metrics
So they're asking for a dashboard that predicts "rules performance". I have individual cases, some of which had funds withheld because of rules performance, and then some fraction of those which were flagged as actual bad cases following judgement by a human.
So the rules performance is strictly whether a case is likely to have funds automatically withheld and forwarded to a human for review. The badmerch label is another level on top of that; the current success ratio should be a measure of how successful the automated system is.
Based on the outcomes above, a 4
Step11: I have a label that predicts both a specific rule and its associated class for each transaction. So a reasonable ordered set of priorities might be
Step12: So the distribution of outcomes is very different depending on the overall rule type. Let's look at the actual numbers in each category.
Step14: This data splits the number of alerts by the category of the triggering rule and the ultimate outcome. In every category, the most common outcome is that funds were not withheld and there was no corresponding loss. However, the ratio of outcomes varies strongly by rule type. For rules on compliance, more than 80% of cases are benign and flagged as such. The benign fraction drops to 61% for financial risk and 56% for fraud. So the type of rule being broken is strongly correlated with the likelihood of a bad transaction.
Results
Step15: This is one of the initial plots in the mock dashboard. It shows the overall performance of each rule sorted by outcome. Rule 17 stands out because it has only a single triggered alert in the dataset (agent placed funds on hold, but there was no fraud involved - a false positive).
Good rules are ones dominated by true positives and where every other category is low; a high true negative rate would indicate that the agents are being accurate, but that the rule is overly sensitive (e.g., Rule 31). The best at this by eye is Rule 18.
Next, we'll calculate our metrics of choice (precision, recall, F1) for the dataset when split by rule.
Step16: This is a good overall summary; we have three metrics for each rule, of which the combined F1 is considered to be the most important. For any rule, we can look at the corresponding plot in the dashboard and examine whether F1 is above a chosen threshold value (labeled here as 0.5). Reading from left to right in the top row, for example, Rule 1 is performing well, Rule 2 is acceptable, Rules 3-5 are performing below the desired accuracy, etc.
Splitting by rule type
Step17: Financial risk rules are the largest category, and are mostly cases that were true negatives (money not held and it wasn't a bad transaction). The false negative rate is slightly larger than the true positive, though, indicating that financial risks are missing more than half of the genuinely bad transactions. Fraud rules also have true negatives as the most common category, but a significantly lower false negative rate compared to true positives. So these types are less likely to be missed by the agents. Compliance rules trigger the fewest total number of alerts; the rates of anything except a true negative are all low (81% of these alerts are benign).
Step20: Grouping by type; fraud rules have by a significant margin the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Compliance is poor across the board.
Cumulative performance of metrics split by rule
The above dashboard is a useful start, since we've defined a metric and looked at how it differs for each rule. However, the data being used was collected over a period of several months, and the data should be examined for variations in the metrics as a function of time. This would examine whether a rule is performing well (and if it improves or degrades with more data), the response of the risk agents to different triggers, and possibly variations in the population of merchants submitting cases.
We'll look at this analysis in the context of an expanding window - for every point in a time series of data, we use data up to and including that point. This gives the cumulative performance as a function of time, which is useful for looking at how the performance of a given rule stabilizes.
Step21: This will be the second set of plots in our dashboard. This shows the results over an expanding window covering the full length of time in the dataset, where the value of the three metrics (precision, recall, F1) track how the rules are performing with respect to the analysts and true outcomes over time.
By definition, data over an expanding window should stabilize as more data comes in and the variance decreases (assuming that the rule definitions, performance of risk agents, and underlying merchant behavior is all the same). Large amounts of recent variation would indicate that we don't know whether the rule is performing well yet.
To assess how much the rules are varying in performance, we'll measure the stability of each metric, weighted more heavily toward the most recent results. A simple measure which we will use is the largest absolute deviation over the second half of the data.
Step22: Six out of the thirty rules have a variation $\Delta_\mathrm{max,abs} < 0.1$ in the second half of the current data. Of those, two (Rules 7 and 26) have only a handful of datapoints, and estimates of the true accuracy are very uncertain. Two others (Rules 2 and 30) have more data, although less than 100 points each. Rule 2 has very different behavior starting a few weeks from the end, sharply increasing both its precision and recall. This could indicate either a difference in merchant tendencies or a re-definition of the existing rule. Rule 30 has shown a gradual improvement from an early nadir, which might be a sign of a set of bad/unlikely transactions earlier and now regressing to the mean. Rule 4 basically only has data in the second half of the set (not stabilized yet) and Rule 5 has a gradually decreasing recall, which may be a counterexample to the trend in Rule 30.
The remainder of the rules (especially those with a few hundred data points) are relatively stable over the expanding window. So we can broadly categorize rule performance into three categories
Step24: Cumulative performance of metrics split by rule type
Step25: Analysis
Step26: Rolling performance of metrics split by rule type
Step27: Co-occurrence and effectiveness of rules
Are there any rules that occur together at very high rates (indicating that the model is too complicated)?
Step28: Rules 8, 14, 15, and 27 all have fairly strong co-occurrences with other rules in the set. These would be good candidates to check for the overall F1 scores and evaluate whether they're a necessary trigger for the system.
Other questions that I'd explore in the data given more time | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import pandas as pd # Requires v 0.18.0
import numpy as np
import seaborn as sns
sns.set_style("whitegrid")
dbname = 'risk'
username = 'willettk'
# Note: password must be entered to run, but don't put this anywhere public.
psswd = ''
engine = create_engine('postgresql://%s:%s@localhost/%s'%(username,psswd,dbname))
# Check if database exists
database_exists(engine.url)
# Load the risk databases from CSV files
cases = pd.read_csv('risk_data/cases.csv',
parse_dates=['alertDate'],
infer_datetime_format=True)
cases.rename(columns=lambda x: x.lower(), inplace=True)
print cases.dtypes
cases.head()
rules = pd.read_csv('risk_data/rules.csv')
rules.rename(columns=lambda x: x.lower(), inplace=True)
rules.head()
categories = pd.read_csv('risk_data/ruleCategories.csv')
categories.rename(columns=lambda x: x.lower(), inplace=True)
categories.head()
# Insert tables into PostgreSQL
cases.to_sql('cases', engine, if_exists='replace', index=False)
rules.to_sql('rules', engine, if_exists='replace', index=False)
categories.to_sql('categories', engine, if_exists='replace', index=False)
# As when setting up PSQL, the connection will need the password for the database entered here
con = psycopg2.connect(database = dbname, user = username, host='localhost', password=psswd)
Explanation: Intuit craft demonstration
Kyle Willett (Fellow, Insight Data Science)
<font color='red'>Create a reasonable definition(s) of rule performance.</font>
The definition of rule performance I used involves three related measures of success: precision, recall, and the combined F1-score. This primarily evaluates the decisions of the risk agents, where a (true) positive result occurred when a rule was triggered, the agent withheld funds, and the case was ultimately labeled as fraud. A poorly-performing rule is one where a trigger either meant the agent released the funds the majority of the time (which would mean the rule is too sensitive to false positives) or with a high rate of released funds for cases labeled as fraud (which would mean that the agents do not recognize the merits of this rule).
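For reference, with $TP$, $FP$, and $FN$ the counts of true positives, false positives, and false negatives:
\begin{equation}
\mathrm{precision} = \frac{TP}{TP+FP}, \qquad \mathrm{recall} = \frac{TP}{TP+FN}, \qquad F_1 = 2\cdot\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}
\end{equation}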
I calculated precision, recall, and F1 for each rule based on the rates of held funds and bad cases. Roughly 1/3 of the rules (11/30) have high marks for both precision and recall and have triggered a sufficient number of alerts that their performance is fairly well characterized. A second group of rules (10/30) have enough triggers to measure their performance, but low precision and recall scores; these should be re-assessed and potentially modified to lower the number of false detections. The remaining 9/30 rules either have very few triggers and/or exhibit rapidly changing behavior, and need more triggers before their effectiveness can be evaluated.
I also looked at performance grouped by the overall rule type. Fraud rules have by a significant amount the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Metrics for both fraud and financial risk stabilized after $\sim1$ month of collecting data. Compliance metrics are poor both in precision and recall, and have been mildly but steadily decreasing over the last two months of data.
<font color='red'>Build a mockup of a dashboard(s) that tracks rules performance (by rule and by RiskAlertCategory) in whatever way you think is appropriate (there may be multiple ways to assess performance).</font>
The dashboards I built are static plots in Python/Jupyter notebook, although the queries are run against a SQL database (PostgreSQL for this example) that can be updated with new data. The dashboard can be easily updated with more recent data. The key plot shows the precision, recall, and F1 scores split by rule and plotted over an expanding time window (taking in all data up to the current point). It shows the relative stability and performance of each rule simultaneously.
For the daily business of risk agents, the performance is plotted over a rolling window so that agents can assess the recent performance of each rule as well.
<font color='red'>Assess the overall decision making process (which includes Risk Agents’ decisions).</font>
Evaluation of the decision making process relies heavily on two pieces of data that are not included in this set. The first set would be the actual true negatives: cases of daily transactions that did not trigger a rule. Information on this would provide a baseline on the sensitivity of a particular rule to both holding funds and ultimate investigation into whether a transaction is fraudulent.
Secondly, there is no information on the risk agent handling each of the individual cases. This information is potentially important because of the human factor involved; a particular risk agent, for example, will have varying levels of accuracy (either overall or with respect to particular rules), each of which could be modeled. If so, that would allow better assessment of the rule performance since the effect of a particular user agent can be marginalized. This information could also be attached via anonymized ID in the same case table.
Import data to SQL database
End of explanation
# How many different rules are there, grouped by type?
sql_query = """
SELECT ruletype,COUNT(ruletype)
FROM categories
GROUP BY ruleType;
"""
pd.read_sql_query(sql_query,con).head()
# Are there cases triggered without any money involved in the transaction?
sql_query = """
SELECT COUNT(caseid)
FROM cases
WHERE amount = 0;
"""
pd.read_sql_query(sql_query,con).head()
Explanation: Descriptive statistics
End of explanation
pl = np.log10(cases.amount+1).hist(bins=50)
pl.set_xlabel("log(Transaction mount per triggered case [$])")
pl.set_ylabel("Count")
pl.axvline(np.log10(cases.amount.median()),color='r',lw=2,ls='--')
pl.set_title("Median transaction is ${:.2f}".format(cases.amount.median()));
cases.amount.max()
# What are the distributions of outcomes with regard to holds and bad merchants?
sql_query = """
SELECT held, badmerch, COUNT(badmerch) as c
FROM cases
GROUP BY held,badmerch;
"""
p = pd.read_sql_query(sql_query,con)
p.head()
# How many total cases are there?
print "Total number of cases in this data set: {}".format(len(cases))
# Does the number of rules violations equal the number of helds?
print len(rules)
print sum(cases.held)
# Are there rules violations that don't correspond to cases in the table?
sql_query = """
SELECT COUNT(rules.caseid)
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
WHERE cases.caseid IS NULL;
"""
pd.read_sql_query(sql_query,con).head()
Explanation: Plot the distribution of money involved per transaction. There are ~900 cases where no money was exchanged, but an alert was still triggered. These are potential bad data points, and might need to be removed from the sample. I'd consult with other members of the team to determine whether that would be appropriate. The distribution of money spent is roughly log-normal.
End of explanation
# Look at the distribution of rule types for benign cases
sql_query = """
SELECT ruletype,sum(count) FROM
(SELECT X.count, categories.ruletype FROM
(SELECT rules.ruleid, COUNT(rules.ruleid)
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
WHERE cases.held = 0
AND cases.badmerch = 0
GROUP BY rules.ruleid) X
JOIN categories ON categories.ruleid = X.ruleid
) Y
GROUP BY ruletype
;
"""
ruletypes_clean = pd.read_sql_query(sql_query,con)
ax = sns.barplot(x="ruletype", y="sum", data=ruletypes_clean)
Explanation: No - every case in the cases table is associated with at least one rule triggering an alert.
End of explanation
# Define helper functions for computing metrics of rule performance
def get_precision(TP,FP):
return TP* 1./ (TP + FP)
def get_recall(TP,FN):
return TP * 1./(TP + FN)
def get_accuracy(TP,FP,TN,FN):
return (TP + TN) * 1./ (TN+FN+FP+TP)
def get_f1(TP,FP,TN,FN):
precision = get_precision(TP,FP)
recall = get_recall(TP,FN)
return 2*precision*recall / (precision+recall)
# Print metrics for entire dataset
TN,FN,FP,TP = p.c / sum(p.c)
print "Precision: {:.3f}".format(get_precision(TP,FP))
print "Recall: {:.3f}".format(get_recall(TP,FN))
print "Accuracy: {:.3f}".format(get_accuracy(TP,FP,TN,FN))
print "F1: {:.3f}".format(get_f1(TP,FP,TN,FN))
Explanation: Defining metrics
So they're asking for a dashboard that predicts "rules performance". I have individual cases, some of which had funds withheld because of rules performance, and then some fraction of those which were flagged as actual bad cases following judgement by a human.
So the rules performance is strictly whether a case is likely to have funds automatically withheld and forwarded to a human for review. The badmerch label is another level on top of that; the current success ratio should be a measure of how successful the automated system is.
Based on the outcomes above, a 4:1 ratio might not be considered particularly successful.
66% of cases were not held and ultimately were good. (TN)
15% of cases were not held, but turned out to be bad. (FN)
6% of cases were held but turned out to be OK. (FP)
12% of cases were held and did turn out to be bad. (TP)
End of explanation
sql_query = """
SELECT X.ruleid, X.caseid, X.outcome, categories.ruletype FROM
(SELECT rules.ruleid, rules.caseid,
CASE
WHEN cases.held = 0 and cases.badMerch = 0 THEN 'not held, good'
WHEN cases.held = 0 and cases.badMerch = 1 THEN 'not held, bad'
WHEN cases.held = 1 and cases.badMerch = 0 THEN 'held, good'
WHEN cases.held = 1 and cases.badMerch = 1 THEN 'held, bad'
END outcome
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
) X
JOIN categories ON categories.ruleid = X.ruleid
;
"""
allcases = pd.read_sql_query(sql_query,con)
fig,ax = plt.subplots(1,1,figsize=(10,6))
sns.countplot(x="ruletype", hue="outcome", data=allcases, ax=ax);
Explanation: I have a label that predicts both a specific rule and its associated class for each transaction. So a reasonable ordered set of priorities might be:
predict whether \$\$ will be held (ie, a rule is triggered)
predict what type of rule will be triggered
predict which specific rule will be triggered
predict whether a triggered case will ultimately be determined to be fraudulent
I'll need to engineer some of my own features here (ie, for each case I could do something like number of past cases, number of past bad cases, average money in transactions, average time between transactions, etc). Whatever interesting/potential combinations I can get from the ID, time, cost, and history.
Then I need to turn that into a "dashboard" - that could be both a visualization of past results and/or some mockup of a "current" day's activity and who my results would flag.
Outcome as a function of rule type
The next step in the analysis will be to make some plots and assess how the rules being triggered vary by rule and rule type.
End of explanation
for g in allcases.groupby("ruletype"):
for gg in g[1].groupby("outcome"):
print "{:15}, {:15}, {:2.1f}%".format(g[0],gg[0],len(gg[1]) * 100./len(g[1]))
print ""
Explanation: So the distribution of outcomes is very different depending on the overall rule type. Let's look at the actual numbers in each category.
End of explanation
# Retrieve the outcomes of all triggered cases and encode those outcomes as numeric data
sql_query = """
SELECT X.ruleid, X.caseid, X.outcome, categories.ruletype FROM
(SELECT rules.ruleid, rules.caseid,
CASE
WHEN cases.held = 0 and cases.badMerch = 0 THEN 0
WHEN cases.held = 0 and cases.badMerch = 1 THEN 1
WHEN cases.held = 1 and cases.badMerch = 0 THEN 2
WHEN cases.held = 1 and cases.badMerch = 1 THEN 3
END outcome
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
) X
JOIN categories ON categories.ruleid = X.ruleid
;
"""
all_numeric = pd.read_sql_query(sql_query,con)
# Plot results as a grid of bar charts, separated by rule.
# Color indicates the overall rule type
ruleorder = list(categories[categories.ruletype=="Fraud"].ruleid.values) + \
list(categories[categories.ruletype=="Financial Risk"].ruleid.values) + \
list(categories[categories.ruletype=="Compliance"].ruleid.values)
grid = sns.FacetGrid(all_numeric,
col="ruleid",
hue="ruletype",
col_order = ruleorder,
col_wrap=8, size=2, aspect=1,
xlim=(0,3))
grid.map(plt.hist, "outcome", normed=True)
grid.set(xticks=[0,1,2,3])
grid.set_xticklabels(['TN','FN','FP','TP']);
Explanation: This data splits the number of alerts by the category of the triggering rule and the ultimate outcome. In every category, the most common outcome is that funds were not withheld and there was no corresponding loss. However, the ratio of outcomes varies strongly by rule type. For rules on compliance, more than 80% of cases are benign and flagged as such. The benign fraction drops to 61% for financial risk and 56% for fraud. So the type of rule being broken is strongly correlated with the likelihood of a bad transaction.
Results: assessing performance
The challenge from Intuit is specifically to assess rule performance. I interpret that as evaluating individually whether each of these rules is doing well, based on the ultimate accuracy.
The approach I'll begin with is to look at the rates of the various outcomes for each rule as a function of some metric (precision, accuracy, F1).
Splitting by rule: performance metrics
End of explanation
metric,value,ruleid = [],[],[]
for g in all_numeric.groupby('ruleid'):
outcomes = {}
for gg in g[1].groupby('outcome'):
outcomes[gg[0]] = len(gg[1])
TN,FN,FP,TP = [outcomes.setdefault(i, 0) for i in range(4)]
p_ = get_precision(TP,FP) if (TP + FP) > 0 and TP > 0 else 0.
r_ = get_recall(TP,FN) if (TP + FN) > 0 and TP > 0 else 0.
if p_ > 0. and r_ > 0.:
f_ = get_f1(TP,FP,TN,FN)
else:
f_ = 0.
value.append(p_)
value.append(r_)
value.append(f_)
metric.append('precision')
metric.append('recall')
metric.append('f1')
ruleid.extend([g[0],]*3)
m = pd.DataFrame(index = range(len(metric)))
m['metric'] = pd.Series(metric)
m['value'] = pd.Series(value)
m['ruleid'] = pd.Series(ruleid)
# Plot the metrics for the overall data split by rule
grid = sns.FacetGrid(m,
col="ruleid",
col_wrap=8, size=2, aspect=1)
grid.map(sns.barplot, "metric","value","metric",palette=sns.color_palette("Set1"))
grid.map(plt.axhline, y=0.5, ls="--", c="0.5",lw=1);
Explanation: This is one of the initial plots in the mock dashboard. It shows the overall performance of each rule sorted by outcome. Rule 17 stands out because it has only a single triggered alert in the dataset (agent placed funds on hold, but there was no fraud involved - a false positive).
Good rules are ones dominated by true positives and where every other category is low; a high true negative rate would indicate that the agents are being accurate, but that the rule is overly sensitive (e.g., Rule 31). The best at this by eye is Rule 18.
Next, we'll calculate our metrics of choice (precision, recall, F1) for the dataset when split by rule.
End of explanation
# Plot the counts of each outcome split by rule type.
grid = sns.FacetGrid(all_numeric,
col="ruletype", hue="outcome",
col_wrap=3, size=5, aspect=1,
xlim=(0,3))
grid.map(plt.hist, "outcome")
grid.set(xticks=[0,1,2,3,4])
grid.set_xticklabels(['TN','FN','FP','TP']);
Explanation: This is a good overall summary; we have three metrics for each rule, of which the combined F1 is considered to be the most important. For any rule, we can look at the corresponding plot in the dashboard and examine whether F1 is above a chosen threshold value (labeled here as 0.5). Reading from left to right in the top row, for example, Rule 1 is performing well, Rule 2 is acceptable, Rules 3-5 are performing below the desired accuracy, etc.
Splitting by rule type: performance metrics
Repeat the same analysis as above, but split by rule type (Fraud, Financial Risk, Compliance) instead of the rules themselves.
End of explanation
# Calculate precision, recall, F1 for data by rule type
rt_metric,rt_value,rt_ruletype = [],[],[]
for g in all_numeric.groupby('ruletype'):
outcomes = {}
for gg in g[1].groupby('outcome'):
outcomes[gg[0]] = len(gg[1])
TN,FN,FP,TP = [outcomes.setdefault(i, 0) for i in range(4)]
p_ = get_precision(TP,FP) if (TP + FP) > 0 and TP > 0 else 0.
r_ = get_recall(TP,FN) if (TP + FN) > 0 and TP > 0 else 0.
if p_ > 0. and r_ > 0.:
f_ = get_f1(TP,FP,TN,FN)
else:
f_ = 0.
rt_value.append(p_)
rt_value.append(r_)
rt_value.append(f_)
rt_metric.append('precision')
rt_metric.append('recall')
rt_metric.append('f1')
rt_ruletype.extend([g[0],]*3)
rtm = pd.DataFrame(index = range(len(rt_metric)))
rtm['metric'] = pd.Series(rt_metric)
rtm['value'] = pd.Series(rt_value)
rtm['ruletype'] = pd.Series(rt_ruletype)
# Plot the overall precision, recall, F1 for the dataset split by rule type
grid = sns.FacetGrid(rtm,
col="ruletype",
col_wrap=3, size=5, aspect=1)
grid.map(sns.barplot, "metric","value","metric",palette=sns.color_palette("Set1"))
grid.map(plt.axhline, y=0.5, ls="--", c="0.5",lw=1);
Explanation: Financial risk rules are the largest category, and are mostly cases that were true negatives (money not held and it wasn't a bad transaction). The false negative rate is slightly larger than the true positive, though, indicating that financial risks are missing more than half of the genuinely bad transactions. Fraud rules also have true negatives as the most common category, but a significantly lower false negative rate compared to true positives. So these types are less likely to be missed by the agents. Compliance rules trigger the fewest total number of alerts; the rates of anything except a true negative are all low (81% of these alerts are benign).
End of explanation
# Compute precision, recall, and F1 over an expanding time window
def ex_precision(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
if (TP + FP) > 0.:
return TP * 1./ (TP + FP)
else:
return 0.
def ex_recall(ts):
TP = (ts.badmerch & ts.held).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
if (TP + FN) > 0.:
return TP * 1./(TP + FN)
else:
return 0.
def ex_f1(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
num = 2*TP
den = 2*TP + FP + FN
if den > 0.:
return num * 1./den
else:
return 0.
# Make the expanded window with associated metrics by looping over every row in the dataframe
def make_expanded(ts,window=1):
expanding_precision = pd.concat([(pd.Series(ex_precision(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
expanding_recall = pd.concat([(pd.Series(ex_recall(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
expanding_f1 = pd.concat([(pd.Series(ex_f1(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
ex = pd.DataFrame(data={"precision":expanding_precision.values,
"recall":expanding_recall.values,
"f1":expanding_f1.values,
},
index=ts.index[1:])
return ex
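# Side note (editorial sketch, not used below): the same expanding metrics can
# be computed without the O(n^2) Python loop by accumulating the confusion
# counts with cumulative sums.
def make_expanded_vectorized(ts):
    held = ts.held.astype(bool)
    bad = ts.badmerch.astype(bool)
    TP = (bad & held).cumsum()
    FP = (held & ~bad).cumsum()
    FN = (bad & ~held).cumsum()
    precision = (TP * 1. / (TP + FP)).fillna(0.)
    recall = (TP * 1. / (TP + FN)).fillna(0.)
    f1 = (2. * TP / (2. * TP + FP + FN)).fillna(0.)
    return pd.DataFrame({"precision": precision, "recall": recall, "f1": f1},
                        index=ts.index)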
# Run the expanded window for all cases, sorted by ruleid
sql_query = """
SELECT cases.*, rules.ruleid
FROM cases
JOIN rules ON rules.caseid = cases.caseid
ORDER BY ruleid, alertdate;
"""
casejoined = pd.read_sql_query(sql_query,con)
exdict = {}
for g in casejoined.groupby("ruleid"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
try:
exdict[ruleid] = make_expanded(ts)
except ValueError:
print "No true positives in Rule {} ({} trigger); cannot compute expanded window.".format(ruleid,len(df))
ruleid = 4
# Quick code to make single plots for presentation
pl = sns.barplot(x="metric",y="value",data=m[m.ruleid==ruleid])
pl.axhline(y=0.5, ls="--", c="0.5",lw=1)
pl.set_title("RuleID = {}".format(ruleid),fontsize=20);
pl = exdict[ruleid].plot(legend=True)
pl.set_title("RuleID = {}".format(ruleid),fontsize=20)
pl.set_ylim(0,1.05)
pl.set_ylabel("metrics",fontsize=12);
# Plot results in a grid
fig,axarr = plt.subplots(5,6,figsize=(15,15))
rules_sorted = sorted(exdict.keys())
for ruleid,ax in zip(rules_sorted,axarr.ravel()):
ex = exdict[ruleid]
    pl = ex.plot(ax=ax, legend=(ruleid == 6))
pl.set_title("ruleid = {}".format(ruleid))
pl.set_ylim(0,1.05)
pl.set_xticklabels([""])
Explanation: Grouping by type; fraud rules have by a significant amount the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Compliance is poor across the board.
Cumulative performance of metrics split by rule
The above dashboard is a useful start, since we've defined a metric and looked at how it differs for each rule. However, the data being used was collected over a period of several months, and the data should be examined for variations in the metrics as a function of time. This would examine whether a rule is performing well (and if it improves or degrades with more data), the response of the risk agents to different triggers, and possibly variations in the population of merchants submitting cases.
We'll look at this analysis in the context of an expanding window - for every point in a time series of data, we use data up to and including that point. This gives the cumulative performance as a function of time, which is useful for looking at how the performance of a given rule stabilizes.
End of explanation
# Rank rule performance by deltamax: the largest absolute deviation in the second half of the dataset.
l = []
for ruleid in exdict:
ex = exdict[ruleid]
ex_2ndhalf = ex.iloc[len(ex)//2:]
f1diff = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
if np.isfinite(f1diff):
l.append((ruleid,f1diff,len(ex_2ndhalf)))
else:
print "No variation for Rule {:2} in the second half (median is zero).".format(ruleid)
lsorted = sorted(l, key=lambda x: x[1],reverse=True)
for ll in lsorted:
print "Rule {:2} varies by {:.2f} in the second half ({:4} data points)".format(*ll)
Explanation: This will be the second set of plots in our dashboard. It shows the results over an expanding window covering the full length of time in the dataset, where the values of the three metrics (precision, recall, F1) track how the rules are performing with respect to the analysts and true outcomes over time.
By definition, data over an expanding window should stabilize as more data comes in and the variance decreases (assuming that the rule definitions, performance of risk agents, and underlying merchant behavior is all the same). Large amounts of recent variation would indicate that we don't know whether the rule is performing well yet.
To assess how much the rules are varying in performance, we'll measure the stability of each metric, weighted more heavily toward the most recent results. A simple measure which we will use is the largest absolute deviation over the second half of the data.
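Concretely (in our notation), writing $\mathcal{T}_2$ for the second half of the time series:
$$ \Delta_\mathrm{max,abs} = \max_{t \in \mathcal{T}_2} F_1(t) - \min_{t \in \mathcal{T}_2} F_1(t) . $$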
End of explanation
# Sort and print the rules matching the criteria for stability and high performance.
stable_good = []
stable_bad = []
unstable = []
for ruleid in exdict:
ex = exdict[ruleid]
ex_2ndhalf = ex.iloc[len(ex)//2:]
deltamax = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
f1 = ex.iloc[len(ex)-1].f1
    stable = (deltamax < 0.1) and (len(ex)//2 > 10)
    good = f1 >= 0.5
if stable and good:
stable_good.append(ruleid)
elif stable:
stable_bad.append(ruleid)
else:
unstable.append(ruleid)
print "{:2} rules {} are performing well.".format(len(stable_good),stable_good)
print "{:2} rules {} are not performing well.".format(len(stable_bad),stable_bad)
print "{:2} rules {} are unstable and cannot be evaluated yet.".format(len(unstable),unstable)
Explanation: Six out of the thirty rules have a variation $\Delta_\mathrm{max,abs} < 0.1$ in the second half of the current data. Of those, two (Rules 7 and 26) have only a handful of datapoints, so estimates of their true accuracy are very uncertain. Two others (Rules 2 and 30) have more data, although less than 100 points each. Rule 2 shows very different behavior starting a few weeks before the end of the series, sharply increasing both its precision and recall. This could indicate either a difference in merchant tendencies or a re-definition of the existing rule. Rule 30 has shown a gradual improvement from an early nadir, which might be a sign of a set of bad/unlikely transactions early on, with the rule now regressing to the mean. Rule 4 basically only has data in the second half of the set (not stabilized yet) and Rule 5 has a gradually decreasing recall, which may be a counterexample to the trend in Rule 30.
The remainder of the rules (especially for those with a few hundred data points) are relatively stable over the expanding window. So we can broadly categorize rule performance in three categories:
rules that are performing well
rules that are not performing well
rules for which behavior is not stable/well-determined
We'll define a well-performing rule as one whose cumulative score is $F1 \ge 0.5$, and a stable rule as one with $N_\mathrm{cases}>10$ and $\Delta_\mathrm{max,abs} < 0.1$.
End of explanation
# Compute the change in performance by rule type over an expanding time window
sql_query = """
SELECT cases.*, categories.ruletype
FROM cases
JOIN rules ON rules.caseid = cases.caseid
JOIN categories ON categories.ruleid = rules.ruleid
ORDER BY categories.ruletype, alertdate;
"""
rtjoined = pd.read_sql_query(sql_query,con)
# Get the dataframes
rtd = {}
for g in rtjoined.groupby("ruletype"):
ruletype = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
try:
rtd[ruletype] = make_expanded(ts)
except ValueError:
print "Problems with {}".format(ruletype)
# Plot results in a grid
fig,axarr = plt.subplots(1,3,figsize=(15,6))
rules_sorted = sorted(rtd.keys())
for ruletype,ax in zip(rules_sorted,axarr.ravel()):
ex = rtd[ruletype]
pl = ex.plot(ax=ax)
pl.set_title("ruletype = {}".format(ruletype))
pl.set_ylim(0,1.05)
# Rank rules by the largest absolute deviation in the second half of the dataset.
l = []
for ruletype in rtd:
ex = rtd[ruletype]
ex_2ndhalf = ex.iloc[len(ex)//2:]
f1diff = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
l.append((ruletype,f1diff,len(ex_2ndhalf)))
    print('')
lsorted = sorted(l, key=lambda x: x[1],reverse=True)
for ll in lsorted:
print "{:15} rules vary by {:.2f} in the second half ({:4} data points)".format(*ll)
Explanation: Cumulative performance of metrics split by rule type
End of explanation
ts = pd.DataFrame(data={"amount":cases.amount.values,
"held":cases.held.values,
"badmerch":cases.badmerch.values},
index=cases.alertdate.values)
r = ts.rolling(window=7,min_periods=1)
# Make a rolling window with associated metrics by looping over every row in the dataframe
def r_precision(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
if (TP + FP) > 0.:
return TP * 1./ (TP + FP)
else:
return np.nan
def r_recall(ts):
TP = (ts.badmerch & ts.held).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
if (TP + FN) > 0.:
return TP * 1./(TP + FN)
else:
return np.nan
def r_f1(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
num = 2*TP
den = 2*TP + FP + FN
if den > 0.:
return num * 1./den
else:
return np.nan
def make_rolling(ts,window):
rolling_precision = pd.concat([(pd.Series(r_precision(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
    rolling_recall = pd.concat([(pd.Series(r_recall(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
rolling_f1 = pd.concat([(pd.Series(r_f1(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
r = pd.DataFrame(data={"precision":rolling_precision.values,
"recall":rolling_recall.values,
"f1":rolling_f1.values,
},
index=rolling_f1.index)
return r
# Run the rolling window for all cases, sorted by rule
rdict = {}
for g in casejoined.groupby("ruleid"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
ts_sorted = ts.sort_index()
try:
rdict[ruleid] = make_rolling(ts_sorted,window=50)
except ValueError:
print "No true positives in Rule {} over interval ({} triggers); cannot compute rolling window.".format(ruleid,len(df))
# Empty dataframe
rdict[ruleid] = pd.DataFrame([0,]*len(df),index=[[casejoined.alertdate.min(),]*(len(df)-1) + [casejoined.alertdate.max()]])
# Plot the dashboard with rolling windows
fig,axarr = plt.subplots(5,6,figsize=(15,12))
for ax,r in zip(axarr.ravel(),rdict):
rp = rdict[r].plot(xlim=(casejoined.alertdate.min(),casejoined.alertdate.max()),
ylim=(0,1.05),
ax=ax,
                       legend=(r == 1))
if r < 25:
rp.set_xticklabels([""])
rp.set_title("ruleid = {}; N={}".format(r,len(rdict[r])));
Explanation: Analysis: all three of the rule types have a variation $\Delta_\mathrm{max,abs} \le 0.05$ in the second half of the current data. Since all three rule types have at least hundreds of data points distributed over time, this stability is mostly expected. Compliance rules still show the largest deviations; there was a large amount of early variance, and the curve is now more stable but still mildly decreasing. Both fraud and financial risk have been quite stable after roughly the first month of data.
Rolling performance of metrics split by rule
The analysis above is useful from an overall perspective about whether a rule has been historically justified. For data scientists and risk analysts, however, it is also critical to look only at recent data so that action can be taken if performance starts to drastically change. Expanding windows do not work well for this since the data are weighted over all input and it will take time for variations to affect the integrated totals. Instead, we will run a similar analysis on a rolling window to look for changes on a weekly timescale.
End of explanation
# Same rolling analysis, but by rule type
rtrdict = {}
for g in rtjoined.groupby("ruletype"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
ts_sorted = ts.sort_index()
try:
rtrdict[ruleid] = make_rolling(ts_sorted,window=200)
except ValueError:
print "No true positives in Rule {} over interval ({} triggers); cannot compute rolling window.".format(ruleid,len(df))
# Empty dataframe
rtrdict[ruleid] = pd.DataFrame([0,]*len(df),index=[[casejoined.alertdate.min(),]*(len(df)-1) + [casejoined.alertdate.max()]])
# Plot the dashboard with rolling windows by rule type
fig,axarr = plt.subplots(1,3,figsize=(15,6))
for ax,r in zip(axarr.ravel(),["Compliance","Financial Risk","Fraud"]):
rp = rtrdict[r].plot(xlim=(rtjoined.alertdate.min(),rtjoined.alertdate.max()),
ylim=(0,1.05),
ax=ax)
rp.set_title("Rule type = {}; N={}".format(r,len(rtrdict[r])));
Explanation: Rolling performance of metrics split by rule type
End of explanation
# Compute the co-occurrence matrix for triggering rules
df = pd.DataFrame(index=rules.caseid.unique())
rule_count_arr = np.zeros((len(rules.caseid.unique()),30),dtype=int)
for idx,g in enumerate(rules.groupby('caseid')):
g1 = g[1]
for r in g1.ruleid.values:
# Numbering is a little off because there's no Rule 28 in the dataset.
if r < 28:
rule_count_arr[idx,r-1] = 1
else:
rule_count_arr[idx,r-2] = 1
# Create pandas DataFrame and rename the columns to the actual rule IDs
df = pd.DataFrame(data=rule_count_arr,
index=rules.caseid.unique(),
                  columns=sorted(rules.ruleid.unique()))
# Co-occurrence matrix is the product of the matrix and its transpose
coocc = df.T.dot(df)
coocc.head()
# Plot the co-occurrence matrix and mask the diagonal and upper triangle values
# (mirrored on the bottom half of the matrix)
fig,ax = plt.subplots(1,1,figsize=(14,10))
mask = np.zeros_like(coocc)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
sns.heatmap(coocc,
mask = mask,
annot=True, fmt="d",
vmax = 100,
square=True,
ax=ax)
ax.set_xlabel('Rule',fontsize=16)
ax.set_ylabel('Rule',fontsize=16);
Explanation: Co-occurrence and effectiveness of rules
Are there any rules that occur together at very high rates (indicating that the model is too complicated)?
End of explanation
# How much money did bad transactions cost Insight in this dataset?
print "Bad money in transactions totals ${:.2f}.".format(cases[(cases.held == 0) & (cases.badmerch == 1)].amount.sum())
Explanation: Rules 8, 14, 15, and 27 all have fairly strong co-occurrences with other rules in the set. These would be good candidates to check for the overall F1 scores and evaluate whether they're a necessary trigger for the system.
Other questions that I'd explore in the data given more time:
What fraction of the triggers for each rule are co-occurrences? (see the sketch below)
What are the F1 scores produced by combinations of rules?
How do the combined F1 scores compare to the scores when triggered individually?
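A quick sketch for the first question (ours, reusing the indicator matrix df built above):
# Fraction of each rule's triggers that co-occur with at least one other rule.
multi_rule_cases = df[df.sum(axis=1) > 1]
cooccurrence_fraction = multi_rule_cases.sum(axis=0) * 1. / df.sum(axis=0)
print(cooccurrence_fraction.sort_values(ascending=False).head(10))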
Predicting whether a transaction is fraudulent
None of this analysis has actually attempted to predict the effectiveness of the rule system in place; right now, it only evaluates it with respect to the hold decisions and the ultimate labels. Given more time, it should be very tractable to build a machine learning classifier that:
predicts the likelihood of being a bad transaction (based on merchant, past history, timestamp, and amount)
predicts the likelihood of funds being withheld given that it is a bad transaction
recommends triggering the rule by minimizing the false negative rate while maintaining specificity
I'd start with a simple logistic regression model and assess performance with cross-validation on later times in the dataset; depending on the accuracy, more flexible models such as random forests or support vector machines would also be good candidates. If none of those achieve the desired accuracy, the next step would be to try a neural net.
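A minimal sketch of such a baseline (ours; the feature choices are illustrative, not from the original analysis):
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# Time-ordered split: train on earlier cases, validate on later ones.
cj = casejoined.sort_values('alertdate')
X = pd.get_dummies(cj[['amount', 'ruleid']], columns=['ruleid'])
y = cj['badmerch']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)
clf = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy: {:.3f}".format(clf.score(X_test, y_test)))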
End of explanation
<END_TASK> |
15,710 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Policies
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Python Policies
The interface for Python policies is defined in policies/py_policy.PyPolicy. The main methods are
Step3: The most important method is action(time_step) which maps a time_step containing an observation from the environment to a PolicyStep named tuple containing the following attributes
Step4: Example 2
Step5: TensorFlow Policies
TensorFlow policies follow the same interface as Python policies. Let us look at a few examples.
Example 1
Step6: Example 2
Step7: In TensorFlow most network layers are designed for batch operations, so we expect the input time_steps to be batched, and the output of the network will be batched as well. Also the network is responsible for producing actions in the correct range of the given action_spec. This is conventionally done using e.g. a tanh activation for the final layer to produce actions in [-1, 1] and then scaling and shifting this to the correct range as the input action_spec (e.g. see tf_agents/agents/ddpg/networks.actor_network()).
Now, we can create an actor policy using the above network.
Step8: We can apply it to any batch of time_steps that follow time_step_spec
Step9: In the above example, we created the policy using an action network that produces an action tensor. In this case, policy.distribution(time_step) is a deterministic (delta) distribution around the output of policy.action(time_step). One way to produce a stochastic policy is to wrap the actor policy in a policy wrapper that adds noise to the actions. Another way is to create the actor policy using an action distribution network instead of an action network as shown below.
Using an action distribution network
Step10: Note that in the above, actions are clipped to the range of the given action spec [-1, 1]. This is because a constructor argument of ActorPolicy clip=True by default. Setting this to false will return unclipped actions produced by the network.
Stochastic policies can be converted to deterministic policies using, for example, a GreedyPolicy wrapper which chooses stochastic_policy.distribution().mode() as its action, and a deterministic/delta distribution around this greedy action as its distribution().
Example 3
Step11: Policy Wrappers
A policy wrapper can be used to wrap and modify a given policy, e.g. add noise. Policy wrappers are a subclass of Policy (Python/TensorFlow) and can therefore be used just like any other policy.
Example | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
from tf_agents.specs import array_spec
from tf_agents.specs import tensor_spec
from tf_agents.networks import network
from tf_agents.policies import py_policy
from tf_agents.policies import random_py_policy
from tf_agents.policies import scripted_py_policy
from tf_agents.policies import tf_policy
from tf_agents.policies import random_tf_policy
from tf_agents.policies import actor_policy
from tf_agents.policies import q_policy
from tf_agents.policies import greedy_policy
from tf_agents.trajectories import time_step as ts
Explanation: Policies
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/agents/tutorials/3_policies_tutorial">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/3_policies_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/3_policies_tutorial.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/3_policies_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Introduction
In Reinforcement Learning terminology, policies map an observation from the environment to an action or a distribution over actions. In TF-Agents, observations from the environment are contained in a named tuple TimeStep('step_type', 'discount', 'reward', 'observation'), and policies map timesteps to actions or distributions over actions. Most policies use timestep.observation, some policies use timestep.step_type (e.g. to reset the state at the beginning of an episode in stateful policies), but timestep.discount and timestep.reward are usually ignored.
Policies are related to other components in TF-Agents in the following way. Most policies have a neural network to compute actions and/or distributions over actions from TimeSteps. Agents can contain one or more policies for different purposes, e.g. a main policy that is being trained for deployment, and a noisy policy for data collection. Policies can be saved/restored, and can be used independently of the agent for data collection, evaluation etc.
Some policies are easier to write in TensorFlow (e.g. those with a neural network), whereas others are easier to write in Python (e.g. following a script of actions). So in TF-Agents, we allow both Python and TensorFlow policies. Moreover, policies written in TensorFlow might have to be used in a Python environment, or vice versa, e.g. a TensorFlow policy is used for training but later deployed in a production Python environment. To make this easier, we provide wrappers for converting between Python and TensorFlow policies.
Another interesting class of policies are policy wrappers, which modify a given policy in a certain way, e.g. add a particular type of noise, make a greedy or epsilon-greedy version of a stochastic policy, randomly mix multiple policies etc.
Setup
If you haven't installed tf-agents yet, run:
End of explanation
class Base(object):
@abc.abstractmethod
def __init__(self, time_step_spec, action_spec, policy_state_spec=()):
self._time_step_spec = time_step_spec
self._action_spec = action_spec
self._policy_state_spec = policy_state_spec
@abc.abstractmethod
def reset(self, policy_state=()):
# return initial_policy_state.
pass
@abc.abstractmethod
def action(self, time_step, policy_state=()):
# return a PolicyStep(action, state, info) named tuple.
pass
@abc.abstractmethod
def distribution(self, time_step, policy_state=()):
# Not implemented in python, only for TF policies.
pass
@abc.abstractmethod
def update(self, policy):
# update self to be similar to the input `policy`.
pass
@property
def time_step_spec(self):
return self._time_step_spec
@property
def action_spec(self):
return self._action_spec
@property
def policy_state_spec(self):
return self._policy_state_spec
Explanation: Python Policies
The interface for Python policies is defined in policies/py_policy.PyPolicy. The main methods are:
End of explanation
action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)
my_random_py_policy = random_py_policy.RandomPyPolicy(time_step_spec=None,
action_spec=action_spec)
time_step = None
action_step = my_random_py_policy.action(time_step)
print(action_step)
action_step = my_random_py_policy.action(time_step)
print(action_step)
Explanation: The most important method is action(time_step) which maps a time_step containing an observation from the environment to a PolicyStep named tuple containing the following attributes:
action: The action to be applied to the environment.
state: The state of the policy (e.g. RNN state) to be fed into the next call to action.
info: Optional side information such as action log probabilities.
The time_step_spec and action_spec are specifications for the input time step and the output action. Policies also have a reset function which is typically used for resetting the state in stateful policies. The update(new_policy) function updates self towards new_policy.
Now, let us look at a couple of examples of Python policies.
Example 1: Random Python Policy
A simple example of a PyPolicy is the RandomPyPolicy which generates random actions for the discrete/continuous given action_spec. The input time_step is ignored.
End of explanation
action_spec = array_spec.BoundedArraySpec((2,), np.int32, -10, 10)
action_script = [(1, np.array([5, 2], dtype=np.int32)),
(0, np.array([0, 0], dtype=np.int32)), # Setting `num_repeats` to 0 will skip this action.
(2, np.array([1, 2], dtype=np.int32)),
(1, np.array([3, 4], dtype=np.int32))]
my_scripted_py_policy = scripted_py_policy.ScriptedPyPolicy(
time_step_spec=None, action_spec=action_spec, action_script=action_script)
policy_state = my_scripted_py_policy.get_initial_state()
time_step = None
print('Executing scripted policy...')
action_step = my_scripted_py_policy.action(time_step, policy_state)
print(action_step)
action_step= my_scripted_py_policy.action(time_step, action_step.state)
print(action_step)
action_step = my_scripted_py_policy.action(time_step, action_step.state)
print(action_step)
print('Resetting my_scripted_py_policy...')
policy_state = my_scripted_py_policy.get_initial_state()
action_step = my_scripted_py_policy.action(time_step, policy_state)
print(action_step)
Explanation: Example 2: Scripted Python Policy
A scripted policy plays back a script of actions represented as a list of (num_repeats, action) tuples. Every time the action function is called, it returns the next action from the list until the specified number of repeats is done, and then moves on to the next action in the list. The reset method can be called to start executing from the beginning of the list.
End of explanation
action_spec = tensor_spec.BoundedTensorSpec(
(2,), tf.float32, minimum=-1, maximum=3)
input_tensor_spec = tensor_spec.TensorSpec((2,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
my_random_tf_policy = random_tf_policy.RandomTFPolicy(
action_spec=action_spec, time_step_spec=time_step_spec)
observation = tf.ones(time_step_spec.observation.shape)
time_step = ts.restart(observation)
action_step = my_random_tf_policy.action(time_step)
print('Action:')
print(action_step.action)
Explanation: TensorFlow Policies
TensorFlow policies follow the same interface as Python policies. Let us look at a few examples.
Example 1: Random TF Policy
A RandomTFPolicy can be used to generate random actions according to a given discrete/continuous action_spec. The input time_step is ignored.
End of explanation
class ActionNet(network.Network):
def __init__(self, input_tensor_spec, output_tensor_spec):
super(ActionNet, self).__init__(
input_tensor_spec=input_tensor_spec,
state_spec=(),
name='ActionNet')
self._output_tensor_spec = output_tensor_spec
self._sub_layers = [
tf.keras.layers.Dense(
            output_tensor_spec.shape.num_elements(), activation=tf.nn.tanh),
]
def call(self, observations, step_type, network_state):
del step_type
output = tf.cast(observations, dtype=tf.float32)
for layer in self._sub_layers:
output = layer(output)
actions = tf.reshape(output, [-1] + self._output_tensor_spec.shape.as_list())
# Scale and shift actions to the correct range if necessary.
return actions, network_state
Explanation: Example 2: Actor Policy
An actor policy can be created using either a network that maps time_steps to actions or a network that maps time_steps to distributions over actions.
Using an action network
Let us define a network as follows:
End of explanation
input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
action_spec = tensor_spec.BoundedTensorSpec((3,),
tf.float32,
minimum=-1,
maximum=1)
action_net = ActionNet(input_tensor_spec, action_spec)
my_actor_policy = actor_policy.ActorPolicy(
time_step_spec=time_step_spec,
action_spec=action_spec,
actor_network=action_net)
Explanation: In TensorFlow most network layers are designed for batch operations, so we expect the input time_steps to be batched, and the output of the network will be batched as well. Also the network is responsible for producing actions in the correct range of the given action_spec. This is conventionally done using e.g. a tanh activation for the final layer to produce actions in [-1, 1] and then scaling and shifting this to the correct range as the input action_spec (e.g. see tf_agents/agents/ddpg/networks.actor_network()).
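As a concrete sketch of that conventional scale-and-shift step (ours; it assumes a BoundedTensorSpec like the one defined above):
# Map tanh outputs in [-1, 1] onto [spec.minimum, spec.maximum].
def scale_to_spec(tanh_actions, spec):
    means = (spec.maximum + spec.minimum) / 2.0
    magnitudes = (spec.maximum - spec.minimum) / 2.0
    return means + magnitudes * tanh_actions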
Now, we can create an actor policy using the above network.
End of explanation
batch_size = 2
observations = tf.ones([batch_size] + time_step_spec.observation.shape.as_list())
time_step = ts.restart(observations, batch_size)
action_step = my_actor_policy.action(time_step)
print('Action:')
print(action_step.action)
distribution_step = my_actor_policy.distribution(time_step)
print('Action distribution:')
print(distribution_step.action)
Explanation: We can apply it to any batch of time_steps that follow time_step_spec:
End of explanation
class ActionDistributionNet(ActionNet):
def call(self, observations, step_type, network_state):
action_means, network_state = super(ActionDistributionNet, self).call(
observations, step_type, network_state)
action_std = tf.ones_like(action_means)
return tfp.distributions.MultivariateNormalDiag(action_means, action_std), network_state
action_distribution_net = ActionDistributionNet(input_tensor_spec, action_spec)
my_actor_policy = actor_policy.ActorPolicy(
time_step_spec=time_step_spec,
action_spec=action_spec,
actor_network=action_distribution_net)
action_step = my_actor_policy.action(time_step)
print('Action:')
print(action_step.action)
distribution_step = my_actor_policy.distribution(time_step)
print('Action distribution:')
print(distribution_step.action)
Explanation: In the above example, we created the policy using an action network that produces an action tensor. In this case, policy.distribution(time_step) is a deterministic (delta) distribution around the output of policy.action(time_step). One way to produce a stochastic policy is to wrap the actor policy in a policy wrapper that adds noise to the actions. Another way is to create the actor policy using an action distribution network instead of an action network as shown below.
Using an action distribution network
End of explanation
input_tensor_spec = tensor_spec.TensorSpec((4,), tf.float32)
time_step_spec = ts.time_step_spec(input_tensor_spec)
action_spec = tensor_spec.BoundedTensorSpec((),
tf.int32,
minimum=0,
maximum=2)
num_actions = action_spec.maximum - action_spec.minimum + 1
class QNetwork(network.Network):
def __init__(self, input_tensor_spec, action_spec, num_actions=num_actions, name=None):
super(QNetwork, self).__init__(
input_tensor_spec=input_tensor_spec,
state_spec=(),
name=name)
self._sub_layers = [
tf.keras.layers.Dense(num_actions),
]
def call(self, inputs, step_type=None, network_state=()):
del step_type
inputs = tf.cast(inputs, tf.float32)
for layer in self._sub_layers:
inputs = layer(inputs)
return inputs, network_state
batch_size = 2
observation = tf.ones([batch_size] + time_step_spec.observation.shape.as_list())
time_steps = ts.restart(observation, batch_size=batch_size)
my_q_network = QNetwork(
input_tensor_spec=input_tensor_spec,
action_spec=action_spec)
my_q_policy = q_policy.QPolicy(
time_step_spec, action_spec, q_network=my_q_network)
action_step = my_q_policy.action(time_steps)
distribution_step = my_q_policy.distribution(time_steps)
print('Action:')
print(action_step.action)
print('Action distribution:')
print(distribution_step.action)
Explanation: Note that in the above, actions are clipped to the range of the given action spec [-1, 1]. This is because a constructor argument of ActorPolicy clip=True by default. Setting this to false will return unclipped actions produced by the network.
Stochastic policies can be converted to deterministic policies using, for example, a GreedyPolicy wrapper which chooses stochastic_policy.distribution().mode() as its action, and a deterministic/delta distribution around this greedy action as its distribution().
Example 3: Q Policy
A Q policy is used in agents like DQN and is based on a Q network that predicts a Q value for each discrete action. For a given time step, the action distribution in the Q Policy is a categorical distribution created using the q values as logits.
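Conceptually, the distribution step amounts to the following (a sketch, not the exact library internals):
q_values, _ = my_q_network(observation)
action_distribution = tfp.distributions.Categorical(logits=q_values)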
End of explanation
my_greedy_policy = greedy_policy.GreedyPolicy(my_q_policy)
action_step = my_greedy_policy.action(time_steps)
print('Action:')
print(action_step.action)
distribution_step = my_greedy_policy.distribution(time_steps)
print('Action distribution:')
print(distribution_step.action)
Explanation: Policy Wrappers
A policy wrapper can be used to wrap and modify a given policy, e.g. add noise. Policy wrappers are a subclass of Policy (Python/TensorFlow) and can therefore be used just like any other policy.
Example: Greedy Policy
A greedy wrapper can be used to wrap any TensorFlow policy that implements distribution(). GreedyPolicy.action() will return wrapped_policy.distribution().mode() and GreedyPolicy.distribution() is a deterministic/delta distribution around GreedyPolicy.action():
End of explanation
<END_TASK> |
15,711 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in an interactive Python environment with a mixture of bash command-line tools.
Here is the outline of what we are going to do
Step1: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have a regular Python session running. Thus Python variables are accessible only through your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see the keyboard shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands
Step2: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy
Step3: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with the help of bash. Forward and reverse reads, respectively
Step4: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded
Step5: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
Step6: 1.3 Iterative mapping
First of all, we need to import useful Python packages
Step7: Then we need to set some parameters and prepare our environment
Step8: Let's take a look at .sam files that were created during iterative mapping
Step9: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into a single dataset (.hdf5 file)
Step10: Let's take a look at the created file
Step11: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted; the next step is to filter out possible methodological artefacts
Step12: Nice visualisation of the data
Step13: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments; now we would like to work with interactions of genomic bins.
Step14: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
Step15: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
Step16: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
Step17: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069
Step18: 7.2 Topologically associating domains (TADs)
For TADs calling we will use lavaburst package. The code below is based on this example. | <ASSISTANT_TASK:>
Python Code:
# This is regular Python comment inside Jupyter "Code" cell.
# You can easily run "Hello world" in the "Code" cell (focus on the cell and press Shift+Enter):
print("Hello world!")
Explanation: <a id="navigation"></a>
Hi-C data analysis
Welcome to the Jupyter notebook dedicated to Hi-C data analysis. Here we will be working in an interactive Python environment with a mixture of bash command-line tools.
Here is the outline of what we are going to do:
Notebook basics
Reads mapping
Data filtering
Binning
Hi-C data visualisation
Iterative correction
Compartments and TADs
If you have any questions, please, contact Aleksandra Galitsyna ([email protected])
<a id="basics"></a>
0. Notebook basics
If you are new to Python and Jupyter notebook, please, take a quick look through this small list of tips.
First of all, Jupyter notebook is organised in cells, which may contain text, comments and code blocks of any size.
End of explanation
%%bash
echo "Current directory is: "; pwd
echo "List of files in the current directory is: "; ls
Explanation: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
You can define functions, classes, run pipelines and visualisations, run thousands of code lines inside a Jupyter cell.
But usually, it is convenient to write simple and clean blocks of code.
Note that behind this interactive notebook you have a regular Python session running. Thus Python variables are accessible only through your history of actions in the notebook. To create a variable, you have to execute the corresponding block of code. All your variables will be lost when you restart the kernel of the notebook.
You can pause or stop the kernel, save notebook (.ipynb) file, copy and insert cells via buttons in the toolbar. Please, take a look at these useful buttons.
Also, try pressing 'Esc' and then 'h'. You will see the keyboard shortcuts help.
Jupyter notebook allows you to create "magical" cells. We will use %%bash, %%capture, %matplotlib. For example, %%bash magic makes it easier to access bash commands:
End of explanation
# Module import under custom name
import numpy as np
# You've started asking questions about it
np?
Explanation: If you are not sure about the function, class or variable then use its name with '?' at the end to get available documentation. Here is an example for common module numpy:
End of explanation
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R1.fastq'
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R2.fastq'
Explanation: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed Go top shortcut for you in each section so that you can navigate quickly throughout the notebook.
<a id="mapping"></a>
1. Reads mapping
Go top
1.1 Input raw data
Hi-C results in paired-end sequencing, where each pair represents one possible contact. The analysis starts with raw sequencing data (.fastq files).
I've downloaded raw files from Flyamer et al. 2017 (GEO ID GSE80006) and placed them in the DATA/FASTQ/ directory.
We can view these files easily with the help of bash. Forward and reverse reads, respectively
End of explanation
%%bash
ls ../GENOMES/HG19_FASTA
Explanation: 1.2 Genome
Now we have to map these reads to the genome of interest (Homo sapiens hg19 downloaded from UCSC in this case).
We are going to use only chromosome 1 to minimise computational time.
The genome is also pre-downloaded:
End of explanation
#%%bash
#bowtie2-build /home/jovyan/GENOMES/HG19_FASTA/chr1.fa /home/jovyan/GENOMES/HG19_IND/hg19_chr1
#Time consuming step
%%bash
ls ../GENOMES/HG19_IND
Explanation: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
End of explanation
import os
from hiclib import mapping
from mirnylib import h5dict, genome
Explanation: 1.3 Iterative mapping
First of all, we need to import useful Python packages:
End of explanation
%%bash
which bowtie2
# Bowtie 2 path
%%bash
pwd
# Current working directory path
# Setting parameters and environmental variables
bowtie_path = '/opt/conda/bin/bowtie2'
enzyme = 'DpnII'
bowtie_index_path = '/home/jovyan/GENOMES/HG19_IND/hg19_chr1'
fasta_path = '/home/jovyan/GENOMES/HG19_FASTA/'
chrms = ['1']
# Reading the genome
genome_db = genome.Genome(fasta_path, readChrms=chrms)
# Creating directories for further data processing
if not os.path.exists('tmp/'):
    os.mkdir('tmp/')
if not os.path.exists('../DATA/SAM/'):
os.mkdir('../DATA/SAM/')
# Set parameters for iterative mapping
min_seq_len = 25
len_step = 5
nthreads = 2
temp_dir = 'tmp'
bowtie_flags = '--very-sensitive'
infile1 = '/home/jovyan/DATA/FASTQ/K562_B-bulk_R1.fastq'
infile2 = '/home/jovyan/DATA/FASTQ/K562_B-bulk_R2.fastq'
out1 = '/home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam'
out2 = '/home/jovyan/DATA/SAM/K562_B-bulk_R2.chr1.sam'
# Iterative mapping itself. Time consuming step!
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile1,
out_sam_path = out1,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile2,
out_sam_path = out2,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
Explanation: Then we need to set some parameters and prepare our environment:
End of explanation
%%bash
ls /home/jovyan/DATA/SAM/
%%bash
head -n 10 /home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam.25
Explanation: Let's take a look at .sam files that were created during iterative mapping:
End of explanation
# Create the directory for output
if not os.path.exists('../DATA/HDF5/'):
os.mkdir('../DATA/HDF5/')
# Define file name for output
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
# Open output file
mapped_reads = h5dict.h5dict(out)
# Parse mapping data and write to output file
mapping.parse_sam(
sam_basename1 = out1,
sam_basename2 = out2,
out_dict = mapped_reads,
genome_db = genome_db,
enzyme_name = enzyme,
save_seqs = False,
keep_ids = False)
Explanation: 1.4 Making sense of mapping output
For each read length and orientation, we have a file. Now we need to merge them into a single dataset (.hdf5 file):
End of explanation
%%bash
ls /home/jovyan/DATA/HDF5/
import h5py
# Reading the file
a = h5py.File('/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5')
# "a" variable has dictionary-like structure, we can view its keys, for example:
list( a.keys() )
# Mapping positions for forward reads are stored under 'cuts1' key:
a['cuts1'].value
Explanation: Let's take a look at the created file:
End of explanation
from hiclib import fragmentHiC
inp = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments_filtered.hdf5'
# Create output file
fragments = fragmentHiC.HiCdataset(
filename = out,
genome = genome_db,
maximumMoleculeLength= 500,
mode = 'w')
# Parse input data
fragments.parseInputData(
dictLike=inp)
# Filtering
fragments.filterRsiteStart(offset=5) # reads map too close to restriction site
fragments.filterDuplicates() # remove PCR duplicates
fragments.filterLarge() # remove too large restriction fragments
fragments.filterExtreme(cutH=0.005, cutL=0) # remove fragments with too high and low counts
# Some hidden filteres were also applied, we can check them all:
fragments.printMetadata()
Explanation: <a id="filtering"></a>
2. Data filtering
Go top
The raw Hi-C data is mapped and interpreted; the next step is to filter out possible methodological artefacts:
End of explanation
import pandas as pd
df_stat = pd.DataFrame(list(fragments.metadata.items()), columns=['Feature', 'Count'])
df_stat
df_stat['Percent of total'] = 100 * df_stat['Count'] / df_stat.loc[2, 'Count']
df_stat
Explanation: Nice visualisation of the data:
End of explanation
# Define file name for binned data. Note "{}" prepared for string formatting
out_bin = '/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'
res_kb = [100, 20] # Several resolutions in Kb
for res in res_kb:
print(res)
outmap = out_bin.format(str(res)+'kb') # String formatting
fragments.saveHeatmap(outmap, res*1000) # Save heatmap
del fragments # delete unwanted object
Explanation: <a id="binning"></a>
3. Data binning
Go top
The previous analysis involved interactions of restriction fragments; now we would like to work with interactions of genomic bins.
End of explanation
# Importing visualisation modules
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
%matplotlib inline
from hiclib.binnedData import binnedDataAnalysis
res = 100 # Resolution in Kb
# prepare to read the data
data_hic = binnedDataAnalysis(resolution=res*1000, genome=genome_db)
# read the data
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'.format(str(res)+'kb'),'hic')
mtx = data_hic.dataDict['hic']
# show heatmap
plt.figure(figsize=[15,15])
plt.imshow(mtx[0:200, 0:200], cmap='jet', interpolation='None')
Explanation: <a id="visualisation"></a>
4. Hi-C data visualisation
Go top
Let's take a look at the resulting heat maps.
End of explanation
# Additional data filtering
data_hic.removeDiagonal()
data_hic.removePoorRegions()
data_hic.removeZeros()
data_hic.iterativeCorrectWithoutSS(force=True)
data_hic.restoreZeros()
mtx = data_hic.dataDict['hic']
plt.figure(figsize=[15,15])
plt.imshow(mtx[200:500, 200:500], cmap='jet', interpolation='None')
Explanation: <a id="correction"></a>
5. Iterative correction
Go top
The next typical step is data correction for unequal amplification and accessibility of genomic regions.
We will use iterative correction.
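For intuition, the underlying matrix-balancing idea can be sketched in a few lines of plain numpy (an editorial sketch; hiclib's iterativeCorrectWithoutSS used in the cell above is the real implementation):
# Repeatedly divide rows/columns by their relative coverage until balanced.
def iterative_correction_sketch(m, n_iter=50):
    m = np.asarray(m, dtype=float).copy()
    bias = np.ones(len(m))
    for _ in range(n_iter):
        coverage = m.sum(axis=1)
        coverage = coverage / np.mean(coverage[coverage > 0])
        coverage[coverage == 0] = 1.  # leave empty bins untouched
        m /= coverage[:, None]
        m /= coverage[None, :]
        bias *= coverage
    return m, bias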
End of explanation
# Load compartments computed previously based on K562 dataset from Rao et al. 2014
eig = np.loadtxt('/home/jovyan/DATA/ANNOT/comp_K562_100Kb_chr1.tsv')
eig
from matplotlib import gridspec
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto')
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.ylabel('Eigenvector value')
plt.xlabel('Genomic position')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Mb'.format(x * res // 1000) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: <a id="meta"></a>
7. Compartments and TADs
Go top
7.1 Comparison with compartments
Compartments can usually be found in whole-genome datasets, but we have only chromosome 1. Still, we can try to find some visual signs of compartments.
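The eigenvector loaded above was precomputed; for reference, here is a typical way to derive such a compartment profile from a corrected contact map (a sketch of the standard recipe, not the exact pipeline used to produce the file):
# PCA-style compartment signal: leading eigenvector of the correlation
# matrix of the observed/expected-normalized contact map.
def compartment_eig_sketch(m):
    n = len(m)
    exp_by_dist = np.array([np.diag(m, k).mean() for k in range(n)])
    exp_by_dist[exp_by_dist == 0] = 1.  # avoid division by zero on empty diagonals
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    corr = np.nan_to_num(np.corrcoef(m / exp_by_dist[dist]))
    vals, vecs = np.linalg.eigh(corr)
    return vecs[:, -1]  # eigenvector with the largest eigenvalue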
End of explanation
mtx_Rao = np.genfromtxt('../DATA/ANNOT/Rao_K562_chr1.csv', delimiter=',')
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx_Rao[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto', vmax=1000)
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.ylabel('Eigenvector value')
plt.xlabel('Genomic position')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Mb'.format(x * res // 1000) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
Explanation: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069:
End of explanation
# Import Python package
import lavaburst
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gam=[0.15, 0.25, 0.5, 0.75, 1.0] # set of parameters gamma for TADs calling
segments_dict = {}
for gam_current in gam:
print(gam_current)
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
segments_dict[gam_current] = segments.copy()
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx, n_diags=100)
start_tmp = 0
end_tmp = 500
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
blues = sns.cubehelix_palette(0.4, gamma=0.5, rot=-0.3, dark=0.1, light=0.9, as_cmap=True)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap=blues)
cmap = mpl.cm.get_cmap('brg')
gammas = segments_dict.keys()
for n, gamma in enumerate(gammas):
segments = segments_dict[gamma]
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
a = segments[-1]
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5, label=gamma)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
ax.set_xlim([0,end_tmp-start_tmp])
ax.set_ylim([100,-100])
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_aspect(0.5)
#Let's check what are median TAD sized with different parameters:
for gam_current in gam:
segments = segments_dict[gam_current]
tad_lens = segments[:,1]-segments[:,0]
good_lens = (tad_lens>=200/res)&(tad_lens<100)
print(res*1000*np.mean(tad_lens[good_lens]))
Explanation: 7.2 Topologically associating domains (TADs)
For TADs calling we will use lavaburst package. The code below is based on this example.
End of explanation
<END_TASK> |
15,712 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example 3
Step3: Heat currents
Following Ref. [2], we consider two possible definitions of the heat currents from the qubits into the baths.
The so-called bath heat currents are $j_{\text{B}}^K = \partial_t \langle H_{\text{B}}^K \rangle$ and the system heat currents are $j_{\text{S}}^K = \mathrm i\, \langle [H_{\text{S}}, Q_K] X_{\text{B}}^K \rangle$.
As shown in Ref. [2], they can be expressed in terms of the HEOM ADOs as follows
Step4: Note that at long times, we expect $j_{\text{B}}^1 = -j_{\text{B}}^2$ and $j_{\text{S}}^1 = -j_{\text{S}}^2$ due to energy conservation. At long times, we also expect $j_{\text{B}}^1 = j_{\text{S}}^1$ and $j_{\text{B}}^2 = j_{\text{S}}^2$ since the coupling operators commute, $[Q_1, Q_2] = 0$. Hence, all four currents should agree in the long-time limit (up to a sign). This long-time value is what was analyzed in Ref. [2].
Simulations
For our simulations, we will represent the bath spectral densities using the first term of their Padé decompositions, and we will use $7$ levels of the HEOM hierarchy.
Step5: Time Evolution
We fix $J_{12} = 0.1 \epsilon$ (as in Fig. 3(a-ii) of Ref. [2]) and choose the fixed coupling strength $\lambda_1 = \lambda_2 = J_{12}\, /\, (2\epsilon)$ (corresponding to $\bar\zeta = 1$ in Ref. [2]).
Using these values, we will study the time evolution of the system state and the heat currents.
Step6: We first plot $\langle \sigma_z^1 \rangle$ to see the time evolution of the system state
Step7: We find a rather quick thermalization of the system state. For the heat currents, however, it takes a somewhat longer time until they converge to their long-time values
Step8: Steady-state currents
Here, we try to reproduce the HEOM curves in Fig. 3(a) of Ref. [1] by varying the coupling strength and finding the steady state for each coupling strength.
Step9: Create Plot | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import qutip as qt
from qutip.nonmarkov.heom import HEOMSolver, DrudeLorentzPadeBath, BathExponent
from ipywidgets import IntProgress
from IPython.display import display
# Qubit parameters
epsilon = 1
# System operators
H1 = epsilon / 2 * qt.tensor(qt.sigmaz() + qt.identity(2), qt.identity(2))
H2 = epsilon / 2 * qt.tensor(qt.identity(2), qt.sigmaz() + qt.identity(2))
H12 = lambda J12 : J12 * (qt.tensor(qt.sigmap(), qt.sigmam()) + qt.tensor(qt.sigmam(), qt.sigmap()))
Hsys = lambda J12 : H1 + H2 + H12(J12)
# Cutoff frequencies
gamma1 = 2
gamma2 = 2
# Temperatures
Tbar = 2
Delta_T = 0.01 * Tbar
T1 = Tbar + Delta_T
T2 = Tbar - Delta_T
# Coupling operators
Q1 = qt.tensor(qt.sigmax(), qt.identity(2))
Q2 = qt.tensor(qt.identity(2), qt.sigmax())
Explanation: Example 3: Quantum Heat Transport
Setup
In this notebook, we apply the QuTiP HEOM solver to a quantum system coupled to two bosonic baths and demonstrate how to extract information about the system-bath heat currents from the auxiliary density operators (ADOs).
We consider the setup described in Ref. [1], which consists of two coupled qubits, each connected to its own heat bath.
The Hamiltonian of the qubits is given by
$$ \begin{aligned} H_{\text{S}} &= H_1 + H_2 + H_{12} , \quad\text{ where }\\
H_K &= \frac{\epsilon}{2} \bigl(\sigma_z^K + 1\bigr) \quad (K=1,2) \quad\text{ and }\quad H_{12} = J_{12} \bigl( \sigma_+^1 \sigma_-^2 + \sigma_-^1 \sigma_+^2 \bigr) . \end{aligned} $$
Here, $\sigma^K_{x,y,z,\pm}$ denotes the usual Pauli matrices for the K-th qubit, $\epsilon$ is the eigenfrequency of the qubits and $J_{12}$ the coupling constant.
Each qubit is coupled to its own bath; therefore, the total Hamiltonian is
$$ H_{\text{tot}} = H_{\text{S}} + \sum_{K=1,2} \bigl( H_{\text{B}}^K + Q_K \otimes X_{\text{B}}^K \bigr) , $$
where $H_{\text{B}}^K$ is the free Hamiltonian of the K-th bath and $X_{\text{B}}^K$ its coupling operator, and $Q_K = \sigma_x^K$ are the system coupling operators.
We assume that the bath spectral densities are given by Drude distributions
$$ J_K(\omega) = \frac{2 \lambda_K \gamma_K \omega}{\omega^2 + \gamma_K^2} , $$
where $\lambda_K$ is the free coupling strength and $\gamma_K$ the cutoff frequency.
We begin by defining the system and bath parameters.
We use the parameter values from Fig. 3(a) of Ref. [1].
Note that we set $\hbar$ and $k_B$ to one and we will measure all frequencies and energies in units of $\epsilon$.
[1] Kato and Tanimura, J. Chem. Phys. 143, 064107 (2015).
End of explanation
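As a quick visual sanity check (a minimal sketch, not part of the original notebook; it only uses gamma1 from above, while lam_demo is an illustrative value, not one taken from the text), the Drude spectral density can be evaluated directly:
w = np.linspace(0.01, 10, 500)
lam_demo = 0.05 # illustrative coupling strength (an assumption)
J_drude = 2 * lam_demo * gamma1 * w / (w**2 + gamma1**2)
plt.plot(w, J_drude)
plt.xlabel(r'$\omega$')
plt.ylabel(r'$J(\omega)$')
print(J_drude.max()) # analytically, the peak sits at omega = gamma1 with height lam_demo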
def bath_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
'''Bath heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
'''
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
cI0 = 0 # imaginary part of bath auto-correlation function (t=0)
for label in l1_labels:
[exp] = ado_state.exps(label)
result += exp.vk * (coupling_op * ado_state.extract(label)).tr()
if exp.type == BathExponent.types['I']:
cI0 += exp.ck
elif exp.type == BathExponent.types['RI']:
cI0 += exp.ck2
result -= 2 * cI0 * (coupling_op * coupling_op * ado_state.rho).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
def system_heat_current(bath_tag, ado_state, hamiltonian, coupling_op, delta=0):
'''System heat current from the system into the heat bath with the given tag.
Parameters
----------
bath_tag : str, tuple or any other object
Tag of the heat bath corresponding to the current of interest.
ado_state : HierarchyADOsState
Current state of the system and the environment (encoded in the ADOs).
hamiltonian : Qobj
System Hamiltonian at the current time.
coupling_op : Qobj
System coupling operator at the current time.
delta : float
The prefactor of the \delta(t) term in the correlation function (the Ishizaki-Tanimura terminator).
'''
l1_labels = ado_state.filter(level=1, tags=[bath_tag])
a_op = 1j * (hamiltonian * coupling_op - coupling_op * hamiltonian)
result = 0
for label in l1_labels:
result += (a_op * ado_state.extract(label)).tr()
if delta != 0:
result -= 1j * delta * ((a_op * coupling_op - coupling_op * a_op) * ado_state.rho).tr()
return result
Explanation: Heat currents
Following Ref. [2], we consider two possible definitions of the heat currents from the qubits into the baths.
The so-called bath heat currents are $j_{\text{B}}^K = \partial_t \langle H_{\text{B}}^K \rangle$ and the system heat currents are $j_{\text{S}}^K = \mathrm i\, \langle [H_{\text{S}}, Q_K] X_{\text{B}}^K \rangle$.
As shown in Ref. [2], they can be expressed in terms of the HEOM ADOs as follows:
$$ \begin{aligned}
j_{\text{B}}^K &= \!\!\sum_{\substack{\mathbf n \\ \text{Level 1} \\ \text{Bath $K$}}}\!\! \nu[\mathbf n] \operatorname{tr}\bigl[ Q_K \rho_{\mathbf n} \bigr] - 2 C_I^K(0) \operatorname{tr}\bigl[ Q_K^2 \rho \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] , \\[.5em]
j_{\text{S}}^K &= \mathrm i \!\!\sum_{\substack{\mathbf n \\ \text{Level 1} \\ \text{Bath $K$}}}\!\! \operatorname{tr}\bigl[ [H_{\text{S}}, Q_K]\, \rho_{\mathbf n} \bigr] + \Gamma_{\text{T}}^K \operatorname{tr}\bigl[ [[H_{\text{S}}, Q_K], Q_K]\, \rho \bigr] .
\end{aligned} $$
The sums run over all level-$1$ multi-indices $\mathbf n$ with one excitation corresponding to the K-th bath, $\nu[\mathbf n]$ is the corresponding (negative) exponent of the bath auto-correlation function $C^K(t)$, and $\Gamma_{\text{T}}^K$ is the Ishizaki-Tanimura terminator (i.e., a correction term accounting for the error introduced by approximating the correlation function with a finite sum of exponential terms).
In the expression for the bath heat currents, we left out terms involving $[Q_1, Q_2]$, which is zero in this example.
[2] Kato and Tanimura, J. Chem. Phys. 145, 224105 (2016).
In QuTiP, these currents can be conveniently calculated as follows:
End of explanation
Nk = 1
NC = 7
options = qt.Options(nsteps=1500, store_states=False, atol=1e-12, rtol=1e-12)
Explanation: Note that at long times, we expect $j_{\text{B}}^1 = -j_{\text{B}}^2$ and $j_{\text{S}}^1 = -j_{\text{S}}^2$ due to energy conservation. At long times, we also expect $j_{\text{B}}^1 = j_{\text{S}}^1$ and $j_{\text{B}}^2 = j_{\text{S}}^2$ since the coupling operators commute, $[Q_1, Q_2] = 0$. Hence, all four currents should agree in the long-time limit (up to a sign). This long-time value is what was analyzed in Ref. [2].
Simulations
For our simulations, we will represent the bath spectral densities using the first term of their Padé decompositions, and we will use $7$ levels of the HEOM hierarchy.
End of explanation
# fix qubit-qubit and qubit-bath coupling strengths
J12 = 0.1
lambda1 = J12 / 2
lambda2 = J12 / 2
# initial state: the maximally mixed state of the two qubits
rho0 = qt.tensor(qt.identity(2), qt.identity(2)) / 4
# simulation time span
tlist = np.linspace(0, 50, 250)
bath1 = DrudeLorentzPadeBath(Q1, lambda1, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, lambda2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
result = solver.run(rho0, tlist, e_ops=[qt.tensor(qt.sigmaz(), qt.identity(2)),
lambda t, ado: bath_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: bath_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta),
lambda t, ado: system_heat_current('bath 1', ado, Hsys(J12), Q1, b1delta),
lambda t, ado: system_heat_current('bath 2', ado, Hsys(J12), Q2, b2delta)])
Explanation: Time Evolution
We fix $J_{12} = 0.1 \epsilon$ (as in Fig. 3(a-ii) of Ref. [2]) and choose the fixed coupling strength $\lambda_1 = \lambda_2 = J_{12}\, /\, (2\epsilon)$ (corresponding to $\bar\zeta = 1$ in Ref. [2]).
Using these values, we will study the time evolution of the system state and the heat currents.
End of explanation
fig, axes = plt.subplots(figsize=(8,8))
axes.plot(tlist, result.expect[0], 'r', linewidth=2)
axes.set_xlabel('t', fontsize=28)
axes.set_ylabel(r"$\langle \sigma_z^1 \rangle$", fontsize=28)
pass
Explanation: We first plot $\langle \sigma_z^1 \rangle$ to see the time evolution of the system state:
End of explanation
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 8))
ax1.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax1.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax1.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax1.set_xlabel('t', fontsize=28)
ax1.set_ylabel('j', fontsize=28)
ax1.set_ylim((-0.05, 0.05))
ax1.legend(loc=0, fontsize=12)
ax2.plot(tlist, -np.real(result.expect[1]), color='darkorange', label='BHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[2]), '--', color='darkorange', label='BHC (system -> bath 2)')
ax2.plot(tlist, -np.real(result.expect[3]), color='dodgerblue', label='SHC (bath 1 -> system)')
ax2.plot(tlist, np.real(result.expect[4]), '--', color='dodgerblue', label='SHC (system -> bath 2)')
ax2.set_xlabel('t', fontsize=28)
ax2.set_xlim((20, 50))
ax2.set_ylim((0, 0.0002))
ax2.legend(loc=0, fontsize=12)
pass
Explanation: We find a rather quick thermalization of the system state. For the heat currents, however, it takes a somewhat longer time until they converge to their long-time values:
End of explanation
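A minimal consistency check of the long-time relations discussed earlier (a sketch; it assumes result from the cell above is still in scope):
jB1 = np.real(result.expect[1][-1])
jB2 = np.real(result.expect[2][-1])
jS1 = np.real(result.expect[3][-1])
print("jB1 + jB2 =", jB1 + jB2) # energy conservation: should be ~0
print("jB1 - jS1 =", jB1 - jS1) # bath vs. system heat current: should be ~0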
def heat_currents(J12, zeta_bar):
bath1 = DrudeLorentzPadeBath(Q1, zeta_bar * J12 / 2, gamma1, T1, Nk, tag='bath 1')
bath2 = DrudeLorentzPadeBath(Q2, zeta_bar * J12 / 2, gamma2, T2, Nk, tag='bath 2')
b1delta, b1term = bath1.terminator()
b2delta, b2term = bath2.terminator()
solver = HEOMSolver(qt.liouvillian(Hsys(J12)) + b1term + b2term,
[bath1, bath2], max_depth=NC, options=options)
_, steady_ados = solver.steady_state()
return bath_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
bath_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta), \
system_heat_current('bath 1', steady_ados, Hsys(J12), Q1, b1delta), \
system_heat_current('bath 2', steady_ados, Hsys(J12), Q2, b2delta)
# Define number of points to use for final plot
plot_points = 100
progress = IntProgress(min=0, max=(3*plot_points))
display(progress)
zeta_bars = []
j1s = [] # J12 = 0.01
j2s = [] # J12 = 0.1
j3s = [] # J12 = 0.5
# --- J12 = 0.01 ---
NC = 7
# xrange chosen so that 20 is maximum, centered around 1 on a log scale
for zb in np.logspace(-np.log(20), np.log(20), plot_points, base=np.e):
j1, _, _, _ = heat_currents(0.01, zb) # in the steady state the four currents agree up to sign, so one suffices
zeta_bars.append(zb)
j1s.append(j1)
progress.value += 1
# --- J12 = 0.1 ---
for zb in zeta_bars:
# higher HEOM cut-off is necessary for large coupling strength
if zb < 10:
NC = 7
else:
NC = 12
j2, _, _, _ = heat_currents(0.1, zb)
j2s.append(j2)
progress.value += 1
# --- J12 = 0.5 ---
for zb in zeta_bars:
if zb < 5:
NC = 7
elif zb < 10:
NC = 15
else:
NC = 20
j3, _, _, _ = heat_currents(0.5, zb)
j3s.append(j3)
progress.value += 1
progress.close()
np.save('data/qhb_zb.npy', zeta_bars)
np.save('data/qhb_j1.npy', j1s)
np.save('data/qhb_j2.npy', j2s)
np.save('data/qhb_j3.npy', j3s)
Explanation: Steady-state currents
Here, we try to reproduce the HEOM curves in Fig. 3(a) of Ref. [1] by varying the coupling strength and finding the steady state for each coupling strength.
End of explanation
zeta_bars = np.load('data/qhb_zb.npy')
j1s = np.load('data/qhb_j1.npy')
j2s = np.load('data/qhb_j2.npy')
j3s = np.load('data/qhb_j3.npy')
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
fig, axes = plt.subplots(figsize=(12,7))
axes.plot(zeta_bars, -1000 * 100 * np.real(j1s), 'b', linewidth=2, label=r"$J_{12} = 0.01\, \epsilon$")
axes.plot(zeta_bars, -1000 * 10 * np.real(j2s), 'r--', linewidth=2, label=r"$J_{12} = 0.1\, \epsilon$")
axes.plot(zeta_bars, -1000 * 2 * np.real(j3s), 'g-.', linewidth=2, label=r"$J_{12} = 0.5\, \epsilon$")
axes.set_xscale('log')
axes.set_xlabel(r"$\bar\zeta$", fontsize=30)
axes.set_xlim((zeta_bars[0], zeta_bars[-1]))
axes.set_ylabel(r"$j_{\mathrm{ss}}\; /\; (\epsilon J_{12}) \times 10^3$", fontsize=30)
axes.set_ylim((0, 2))
axes.legend(loc=0)
#fig.savefig("figures/figHeat.pdf")
pass
from qutip.ipynbtools import version_table
version_table()
Explanation: Create Plot
End of explanation
<END_TASK> |
15,713 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
Let's say I have a 1d numpy array like this | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1.5, -0.4, 1.3])
vals, idx = np.unique(a, return_inverse=True)
b = np.zeros((a.size, vals.size))
b[np.arange(a.size), idx] = 1
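A quick sanity check of the solution above (expected output shown in the comments):
print(b)
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]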
<END_TASK> |
15,714 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
odm2api sample
Largely from https
Step1: SamplingFeatures tests
Step2: Back to the rest of the demo | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import os
import matplotlib.pyplot as plt
from matplotlib import dates
from odm2api.ODMconnection import dbconnection
from odm2api.ODM2.services.readService import *
# Create a connection to the ODM2 database
# ----------------------------------------
odm2db_fpth = '/home/mayorga/Desktop/TylerYeats/ODM2-LittleBear1.sqlite'
session_factory = dbconnection.createConnection('sqlite', odm2db_fpth, 2.0)
read = ReadODM2(session_factory)
# Run some basic sample queries.
# ------------------------------
# Get all of the variables from the database and print their names to the console
allVars = read.getVariables()
for x in allVars:
print x.VariableCode + ": " + x.VariableNameCV
# Get all of the people from the database
allPeople = read.getPeople()
for x in allPeople:
print x.PersonFirstName + " " + x.PersonLastName
try:
print "\n-------- Information about an Affiliation ---------"
allaff = read.getAllAffiliations()
for x in allaff:
print x.PersonObj.PersonFirstName + ": " + str(x.OrganizationID)
except Exception as e:
print "Unable to demo getAllAffiliations", e
allaff = read.getAllAffiliations()
type(allaff)
Explanation: odm2api sample
Largely from https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/Sample.py
- 2/7/2016. Tested successfully with sfgeometry_em_1 branch, with my overhauls. Using odm2api_dev env.
- 2/1 - 1/31. Errors with SamplingFeatures code, with latest odm2api from master (on env odm2api_jan31test). The code also fails the same way with the odm2api env, but it does still run fine with the odm2api_jan21 env! I'm investigating the differences between those two envs.
- 1/22-20,9/2016.
Emilio Mayorga
End of explanation
from odm2api.ODM2.models import SamplingFeatures
read._session.query(SamplingFeatures).filter_by(SamplingFeatureTypeCV='Site').all()
read.getSamplingFeaturesByType('Site')
# Get all of the SamplingFeatures from the database that are Sites
try:
siteFeatures = read.getSamplingFeaturesByType('Site')
numSites = len(siteFeatures)
for x in siteFeatures:
print x.SamplingFeatureCode + ": " + x.SamplingFeatureName
except Exception as e:
print "Unable to demo getSamplingFeaturesByType", e
read.getSamplingFeatures()
read.getSamplingFeatureByCode('USU-LBR-Mendon')
# Now get the SamplingFeature object for a SamplingFeature code
sf = read.getSamplingFeatureByCode('USU-LBR-Mendon')
# vars(sf)
# 1/31/2016: Leads to error with latest from odm2api master:
# "TypeError: vars() argument must have __dict__ attribute"
print sf, "\n"
print type(sf)
print type(sf.FeatureGeometry)
vars(sf.FeatureGeometry)
sf.FeatureGeometry.__doc__
sf.FeatureGeometry.geom_wkb, sf.FeatureGeometry.geom_wkt
type(sf.shape()), sf.shape().wkt
Explanation: SamplingFeatures tests
End of explanation
# Drill down and get objects linked by foreign keys
print "\n------------ Foreign Key Example --------- \n",
try:
# Call getResults, but return only the first result
firstResult = read.getResults()[0]
action_firstResult = firstResult.FeatureActionObj.ActionObj
print "The FeatureAction object for the Result is: ", firstResult.FeatureActionObj
print "The Action object for the Result is: ", action_firstResult
print ("\nThe following are some of the attributes for the Action that created the Result: \n" +
"ActionTypeCV: " + action_firstResult.ActionTypeCV + "\n" +
"ActionDescription: " + action_firstResult.ActionDescription + "\n" +
"BeginDateTime: " + str(action_firstResult.BeginDateTime) + "\n" +
"EndDateTime: " + str(action_firstResult.EndDateTime) + "\n" +
"MethodName: " + action_firstResult.MethodObj.MethodName + "\n" +
"MethodDescription: " + action_firstResult.MethodObj.MethodDescription)
except Exception as e:
print "Unable to demo Foreign Key Example: ", e
# Now get a particular Result using a ResultID
print "\n------- Example of Retrieving Attributes of a Time Series Result -------"
try:
tsResult = read.getTimeSeriesResultByResultId(1)
# Get the site information by drilling down
sf_tsResult = tsResult.ResultObj.FeatureActionObj.SamplingFeatureObj
print(
"Some of the attributes for the TimeSeriesResult retrieved using getTimeSeriesResultByResultID(): \n" +
"ResultTypeCV: " + tsResult.ResultObj.ResultTypeCV + "\n" +
# Get the ProcessingLevel from the TimeSeriesResult's ProcessingLevel object
"ProcessingLevel: " + tsResult.ResultObj.ProcessingLevelObj.Definition + "\n" +
"SampledMedium: " + tsResult.ResultObj.SampledMediumCV + "\n" +
# Get the variable information from the TimeSeriesResult's Variable object
"Variable: " + tsResult.ResultObj.VariableObj.VariableCode + ": " + tsResult.ResultObj.VariableObj.VariableNameCV + "\n" +
"AggregationStatistic: " + tsResult.AggregationStatisticCV + "\n" +
# Get the site information by drilling down
"Elevation_m: " + str(sf_tsResult.Elevation_m) + "\n" +
"SamplingFeature: " + sf_tsResult.SamplingFeatureCode + " - " +
sf_tsResult.SamplingFeatureName)
except Exception as e:
print "Unable to demo Example of retrieving Attributes of a time Series Result: ", e
tsResult = read.getTimeSeriesResultByResultId(1)
type(tsResult), vars(tsResult)
vars(tsResult.ResultObj)
# Get the values for a particular TimeSeriesResult
print "\n-------- Example of Retrieving Time Series Result Values ---------"
tsValues = read.getTimeSeriesResultValuesByResultId(1) # Return type is a pandas dataframe
# Print a few Time Series Values to the console
# tsValues.set_index('ValueDateTime', inplace=True)
tsValues.head()
# Plot the time series
try:
fig = plt.figure()
ax = fig.add_subplot(111)
tsValues.plot(x='ValueDateTime', y='DataValue', kind='line',
title=tsResult.ResultObj.VariableObj.VariableNameCV + " at " +
tsResult.ResultObj.FeatureActionObj.SamplingFeatureObj.SamplingFeatureName,
ax=ax)
ax.set_ylabel(tsResult.ResultObj.VariableObj.VariableNameCV + " (" +
tsResult.ResultObj.UnitsObj.UnitsAbbreviation + ")")
ax.set_xlabel("Date/Time")
ax.xaxis.set_minor_locator(dates.MonthLocator())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
ax.xaxis.set_major_locator(dates.YearLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n%Y'))
ax.grid(True)
except Exception as e:
print "Unable to demo plotting of tsValues: ", e
Explanation: Back to the rest of the demo
End of explanation
<END_TASK> |
15,715 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Random Sample
Consider comments made since min_timestamp. Take n random comments from non-bot users.
Params
Step2: Query | <ASSISTANT_TASK:>
Python Code:
n = 50000
min_timestamp = '2000-01-01T00:00:00Z' # start of time
Explanation: Random Sample
Consider comments made since min_timestamp. Take n random comments from non-bot users.
Params
End of explanation
import time # assumed not already imported in an earlier cell
t1 = time.time()
query = '''
SELECT
*
FROM
enwiki.article_talk_diff_no_bot_sample
WHERE
rev_timestamp > '%(min_timestamp)s'
AND ns = 'article'
LIMIT %(n)d
'''
params = {
'n': int(n * 1.7),
'min_timestamp': min_timestamp
}
df = query_hive_ssh(query % params, '../../data/raw_random_sample.tsv', priority = True, quoting=3, delete = True)
df.columns = [c.split('.')[1] for c in df.columns]
t2 = time.time()
print('Query and Download Time:', (t2-t1) / 60.0)
dfc = clean(df[300:500])
show_comments(dfc, 100)
Explanation: Query
End of explanation
<END_TASK> |
15,716 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 Least squares and linear basis functions models
1.1 Least squares
Step2: Load the data
Here we will reuse the dataset height_weight_genders.csv from previous exercise section to check the correctness of your implementation. Please compare it with your previous result.
Step3: Test it here
Step5: 1.2 Least squares with a linear basis function model
Start from this section, we will use the dataset dataEx3.csv.
Implement polynomial basis functions
Step7: Let us play with polynomial regression. Note that we will use your implemented function compute_mse. Please copy and paste your implementation from exercise02.
Step8: Run polynomial regression
Step10: 2 Evaluating model prediction performance
Let us show the train and test splits for various polynomial degrees. First of all, please fill in the function split_data()
Step12: Then, test your split_data function below.
Step13: Demo time
Step16: Ridge Regression
Please fill in the function below.
Step17: Demo time | <ASSISTANT_TASK:>
Python Code:
import numpy as np
def least_squares(y, tx):
calculate the least squares solution.
a = tx.T.dot(tx)
b = tx.T.dot(y)
return np.linalg.solve(a, b)
Explanation: 1 Least squares and linear basis functions models
1.1 Least squares
End of explanation
from helpers import *
def test_your_least_squares():
height, weight, gender = load_data_from_ex02(sub_sample=False, add_outlier=False)
x, mean_x, std_x = standardize(height)
y, tx = build_model_data(x, weight)
weight = least_squares(y, tx)
print(weight)
Explanation: Load the data
Here we will reuse the dataset height_weight_genders.csv from previous exercise section to check the correctness of your implementation. Please compare it with your previous result.
End of explanation
test_your_least_squares()
Explanation: Test it here
End of explanation
# load dataset
x, y = load_data()
print("shape of x {}".format(x.shape))
print("shape of y {}".format(y.shape))
def build_poly(x, degree):
'''polynomial basis functions for input data x, for j=0 up to j=degree.'''
poly = np.ones((len(x), 1))
for deg in range(1, degree+1):
poly = np.c_[poly, np.power(x, deg)]
return poly
Explanation: 1.2 Least squares with a linear basis function model
Start from this section, we will use the dataset dataEx3.csv.
Implement polynomial basis functions
End of explanation
import matplotlib.pyplot as plt
from costs import compute_mse
from plots import *
def polynomial_regression():
'''Constructing the polynomial basis function expansion of the data,
and then running least squares regression.'''
# define parameters
degrees = [1, 3, 7, 12]
# define the structure of the figure
num_row = 2
num_col = 2
f, axs = plt.subplots(num_row, num_col)
for ind, degree in enumerate(degrees):
# form dataset to do polynomial regression.
tx = build_poly(x, degree)
# least squares
weights = least_squares(y, tx)
# compute RMSE
rmse = np.sqrt(2 * compute_mse(y, tx, weights))
print("Processing {i}th experiment, degree={d}, rmse={loss}".format(
i=ind + 1, d=degree, loss=rmse))
# plot fit
plot_fitted_curve(
y, x, weights, degree, axs[ind // num_col][ind % num_col])
plt.tight_layout()
plt.savefig("visualize_polynomial_regression")
plt.show()
Explanation: Let us play with polynomial regression. Note that we will use your implemented function compute_mse. Please copy and paste your implementation from exercise02.
End of explanation
polynomial_regression()
Explanation: Run polynomial regression
End of explanation
def split_data(x, y, ratio, seed=1):
'''split the dataset based on the split ratio.'''
# set seed
np.random.seed(seed)
# generate random indices
num_row = len(y)
indices = np.random.permutation(num_row)
index_split = int(np.floor(ratio * num_row))
index_tr = indices[: index_split]
index_te = indices[index_split:]
# create split
x_tr = x[index_tr]
x_te = x[index_te]
y_tr = y[index_tr]
y_te = y[index_te]
return x_tr, x_te, y_tr, y_te
Explanation: 2 Evaluating model prediction performance
Let us show the train and test splits for various polynomial degrees. First of all, please fill in the function split_data()
End of explanation
def train_test_split_demo(x, y, degree, ratio, seed):
'''polynomial regression with different split ratios and different degrees.'''
x_tr, x_te, y_tr, y_te = split_data(x, y, ratio, seed)
# form tx
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
weight = least_squares(y_tr, tx_tr)
# calculate RMSE for train and test data.
rmse_tr = np.sqrt(2 * compute_mse(y_tr, tx_tr, weight))
rmse_te = np.sqrt(2 * compute_mse(y_te, tx_te, weight))
print("proportion={p}, degree={d}, Training RMSE={tr:.3f}, Testing RMSE={te:.3f}".format(
p=ratio, d=degree, tr=rmse_tr, te=rmse_te))
Explanation: Then, test your split_data function below.
End of explanation
seed = 6
degrees = [1, 3, 7, 12]
split_ratios = [0.9, 0.5, 0.1]
for split_ratio in split_ratios:
for degree in degrees:
train_test_split_demo(x, y, degree, split_ratio, seed)
Explanation: Demo time
End of explanation
def ridge_regression(y, tx, lambda_):
'''implement ridge regression.'''
aI = 2 * tx.shape[0] * lambda_ * np.identity(tx.shape[1])
a = tx.T.dot(tx) + aI
b = tx.T.dot(y)
return np.linalg.solve(a, b)
def ridge_regression_demo(x, y, degree, ratio, seed):
'''ridge regression demo.'''
# define parameter
lambdas = np.logspace(-5, 0, 15)
# split data
x_tr, x_te, y_tr, y_te = split_data(x, y, ratio, seed)
# form tx
tx_tr = build_poly(x_tr, degree)
tx_te = build_poly(x_te, degree)
# ridge regression with different lambda
rmse_tr = []
rmse_te = []
for ind, lambda_ in enumerate(lambdas):
# ridge regression
weight = ridge_regression(y_tr, tx_tr, lambda_)
rmse_tr.append(np.sqrt(2 * compute_mse(y_tr, tx_tr, weight)))
rmse_te.append(np.sqrt(2 * compute_mse(y_te, tx_te, weight)))
print("proportion={p}, degree={d}, lambda={l:.3f}, Training RMSE={tr:.3f}, Testing RMSE={te:.3f}".format(
p=ratio, d=degree, l=lambda_, tr=rmse_tr[ind], te=rmse_te[ind]))
plot_train_test(rmse_tr, rmse_te, lambdas, degree)
Explanation: Ridge Regression
Please fill in the function below.
End of explanation
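For reference, a restatement of the closed form the function above solves (with $N$ = tx.shape[0] training points, matching the 2*N*lambda scaling in the code; note the penalty here also covers the constant feature's coefficient, as in the code):
$$ \mathbf{w}^\star = \bigl(X^\top X + 2 N \lambda I\bigr)^{-1} X^\top \mathbf{y}, $$
which minimizes $\frac{1}{2N}\|\mathbf{y} - X\mathbf{w}\|^2 + \lambda \|\mathbf{w}\|^2$.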
seed = 56
degree = 7
split_ratio = 0.5
ridge_regression_demo(x, y, degree, split_ratio, seed)
Explanation: Demo time
End of explanation
<END_TASK> |
15,717 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have a dataframe with one of its column having a list at each index. I want to reversed each list and concatenate these lists into one string like '3,2,1,5,4'. I am using | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame(dict(col1=[[1, 2, 3],[4,5]]))
def g(df):
for i in df.index:
df.loc[i, 'col1'] = df.loc[i, 'col1'][::-1]
L = df.col1.sum()
L = map(lambda x:str(x), L)
return ','.join(L)
result = g(df.copy())
<END_TASK> |
15,718 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load and process review dataset
1. Load the dataset into a data frame named products.
Step1: 2. data transformations
Step2: 3. Compute word counts (only for important_words)
Step3: 4. Show 'perfect' word counts
Step4: Train-Validation split
5. split the data into a train-validation split with 80%
Step5: 6. Convert train_data and validation_data into multi-dimensional arrays.
Using the function given in #8 of Module 3 assignment or Programming Assignment 2
Step6: Building on logistic regression with no L2 penalty assignment
7. Compute predictions given by the link function.
Take two parameters
Step8: Adding L2 penalty
9. Adding L2 penalty to the derivative
errors
Step9: 1. Quiz question
Step10: 2. Quiz question
Step11: Explore effects of L2 regularization
12. train models with different L2
Now that we have written up all the pieces needed for an L2 solver with logistic regression, let's explore the benefits of using L2 regularization while analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Let us train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation. Train 6 models with L2 penalty values 0, 4, 10, 1e2, 1e3, and 1e5. Use the following values for the other parameters
Step12: Compare coefficients
13. Analysis coefficient without penalty
Step13: 14. observe the effect of increasing L2 penalty on the 10 words
Step15: 3. Quiz Question
Step16: 4. Quiz question
Step17: 5. Quiz question | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import json
products = pd.read_csv('amazon_baby_subset.csv')
Explanation: Load and process review dataset
1. Load the dataset into a data frame named products.
End of explanation
products = products.fillna({'review':''}) # fill in N/A's in the review column
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
products.head(3)
Explanation: 2. data transformations:
fill n/a values in the review column with empty strings
Remove punctuation
End of explanation
with open('important_words.json') as important_words_file:
important_words = json.load(important_words_file)
print important_words[:3]
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: 3. Compute word counts (only for important_words)
End of explanation
products['perfect'][:3]
Explanation: 4. Show 'perfect' word counts
End of explanation
with open('module-4-assignment-train-idx.json') as train_data_file:
train_data_idx = json.load(train_data_file)
with open('module-4-assignment-validation-idx.json') as validation_data_file:
validation_data_idx = json.load(validation_data_file)
print train_data_idx[:3]
print validation_data_idx[:3]
print len(train_data_idx)
print len(validation_data_idx)
train_data = products.iloc[train_data_idx]
train_data.head(2)
validation_data = products.iloc[validation_data_idx]
validation_data.head(2)
Explanation: Train-Validation split
5. split the data into a train-validation split with 80%
End of explanation
def get_numpy_data(dataframe, features, label):
dataframe['constant'] = 1
features = ['constant'] + features
features_frame = dataframe[features]
feature_matrix = features_frame.as_matrix()
label_sarray = dataframe[label]
label_array = label_sarray.as_matrix()
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
print feature_matrix_train.shape
print feature_matrix_valid.shape
Explanation: 6. Convert train_data and validation_data into multi-dimensional arrays.
Using the function given in #8 of Module 3 assignment or Programming Assignment 2
End of explanation
'''
feature_matrix: N * D (intercept term included)
coefficients: D * 1
predictions: N * 1
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
score = np.dot(feature_matrix, coefficients) # N * 1
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1.0/(1+np.exp(-score))
# return predictions
return predictions
Explanation: Building on logistic regression with no L2 penalty assignment
7. Compute predictions given by the link function.
Take two parameters: feature_matrix and coefficients.
First compute the dot product of feature_matrix and coefficients.
Then compute the link function P(y=+1|x,w).
Return the predictions given by the link function.
End of explanation
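For reference, the link function computed above in closed form (score is the dot product of the feature vector and the coefficients):
$$ P(y = +1 \mid x, w) = \frac{1}{1 + e^{-w^\top x}}, $$
which is exactly 1.0/(1+np.exp(-score)) in the code.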
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
# errors: N x 1
# feature: N x 1
# derivative: scalar
# coefficient: scalar
derivative = np.dot(np.transpose(errors), feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative -= 2 * l2_penalty * coefficient
return derivative
Explanation: Adding L2 penalty
9. Adding L2 penalty to the derivative
errors: vector whose i-th value contains
feature: vector whose i-th value contains
coefficient: the current value of the j-th coefficient.
l2_penalty: the L2 penalty constant λ
feature_is_constant: a Boolean value indicating whether the j-th feature is constant or not.
The function should do the following:
Take the five parameters as above.
Compute the dot product of errors and feature and save the result to derivative.
If feature_is_constant is False, subtract the L2 penalty term from derivative. Otherwise, do nothing.
Return derivative.
End of explanation
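For reference, the regularized derivative the function above implements, with $x_{ij}$ the $j$-th feature of example $i$ (the $-2\lambda w_j$ term is dropped for the intercept, i.e., when feature_is_constant is True):
$$ \frac{\partial \ell\ell(\mathbf{w})}{\partial w_j} = \sum_{i=1}^N x_{ij}\,\bigl(\mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w})\bigr) - 2\lambda w_j. $$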
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
# scores.shape (53072L, 1L)
# indicator.shape (53072L,)
# lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
lp = np.sum((np.transpose(np.array([indicator]))-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: 1. Quiz question:
In the code above, was the intercept term regularized?
1. Answer:
No
10. computing log likelihood with L2
End of explanation
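For reference, the penalized log likelihood computed above (the intercept $w_0$ is excluded from the penalty, matching coefficients[1:] in the code):
$$ \ell\ell(\mathbf{w}) = \sum_{i=1}^N \Bigl[ \bigl(\mathbf{1}[y_i = +1] - 1\bigr)\, \mathbf{w}^\top \mathbf{x}_i - \ln\bigl(1 + e^{-\mathbf{w}^\top \mathbf{x}_i}\bigr) \Bigr] - \lambda \sum_{j \geq 1} w_j^2. $$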
# coefficients: D * 1
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = np.transpose(np.array([indicator])) - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: 2. Quiz question:
Does the term with L2 regularization increase or decrease ℓℓ(w)?
2. Answer:
decrease
11. Write a function logistic_regression_with_L2 to fit a logistic regression model under L2 regularization.
The function accepts the following parameters:
feature_matrix: 2D array of features
sentiment: 1D array of class labels
initial_coefficients: 1D array containing initial values of coefficients
step_size: a parameter controlling the size of the gradient steps
l2_penalty: the L2 penalty constant λ
max_iter: number of iterations to run gradient ascent
The function returns the last set of coefficients after performing gradient ascent.
The function carries out the following steps:
Initialize vector coefficients to initial_coefficients.
Predict the class probability P(yi=+1|xi,w) using your predict_probability function and save it to variable predictions.
Compute indicator value for (yi=+1) by comparing sentiment against +1. Save it to variable indicator.
Compute the errors as difference between indicator and predictions. Save the errors to variable errors.
For each j-th coefficient, compute the per-coefficient derivative by calling feature_derivative_L2 with the j-th column of feature_matrix. Don't forget to supply the L2 penalty. Then increment the j-th coefficient by (step_size*derivative).
Once in a while, insert code to print out the log likelihood.
Repeat steps 2-6 for max_iter times.
End of explanation
initial_coefficients = np.zeros((194,1))
step_size = 5e-6
max_iter = 501
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 0, max_iter)
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 4, max_iter)
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 10, max_iter)
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 1e2, max_iter)
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 1e3, max_iter)
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train , sentiment_train , initial_coefficients, step_size, 1e5, max_iter)
Explanation: Explore effects of L2 regularization
12. train models with different L2
Now that we have written up all the pieces needed for an L2 solver with logistic regression, let's explore the benefits of using L2 regularization while analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Let us train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation. Train 6 models with L2 penalty values 0, 4, 10, 1e2, 1e3, and 1e5. Use the following values for the other parameters:
feature_matrix = feature_matrix_train extracted in #7
sentiment = sentiment_train extracted in #7
initial_coefficients = a 194-dimensional vector filled with zeros
step_size = 5e-6
max_iter = 501
Save the 6 sets of coefficients as coefficients_0_penalty, coefficients_4_penalty, coefficients_10_penalty, coefficients_1e2_penalty, coefficients_1e3_penalty, and coefficients_1e5_penalty respectively.
End of explanation
coefficients_0_penalty_without_intercept = list(coefficients_0_penalty[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients_0_penalty_without_intercept)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
positive_words = []
for i in range(5):
positive_words.append(word_coefficient_tuples[:5][i][0])
positive_words
negative_words = []
for i in range(5):
negative_words.append(word_coefficient_tuples[-5:][i][0])
negative_words
Explanation: Compare coefficients
13. Analysis coefficient without penalty
End of explanation
table = pd.DataFrame(data=[coefficients_0_penalty.flatten(), coefficients_4_penalty.flatten(), coefficients_10_penalty.flatten(), coefficients_1e2_penalty.flatten(), coefficients_1e3_penalty.flatten(), coefficients_1e5_penalty.flatten()],
index=[0, 4, 10, 100.0, 1000.0, 100000.0],
columns=['(intercept)'] + important_words)
table.head(2)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table[positive_words]
table_negative_words = table[negative_words]
#del table_positive_words['word']
#del table_negative_words['word']
for i, value in enumerate(positive_words):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[value].as_matrix().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i, value in enumerate(negative_words):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[value].as_matrix().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: 14. observe the effect of increasing L2 penalty on the 10 words
End of explanation
# feature_matrix: N x D
# coefficients: D x 1
# predictions: N x 1
training_accuracy = []
for coefficient in [coefficients_0_penalty, coefficients_4_penalty, coefficients_10_penalty, coefficients_1e2_penalty, coefficients_1e3_penalty, coefficients_1e5_penalty]:
predictions = predict_probability(feature_matrix_train, coefficient)
correct_num = np.sum((np.transpose(predictions.flatten())> 0.5) == (np.array(sentiment_train)>0))
total_num = len(sentiment_train)
#print "correct_num: {}, total_num: {}".format(correct_num, total_num)
training_accuracy.append(correct_num * 1./ total_num)
#print accuracy
l2_penalty_list=[x for x in range(6)]
plt.plot(l2_penalty_list, training_accuracy,'ro')
plt.title('training_accuracy')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('training_accuracy')
plt.show()
[0, 4, 10, 1e2, 1e3, 1e5][training_accuracy.index(max(training_accuracy))]
training_accuracy[training_accuracy.index(max(training_accuracy))]
Explanation: 3. Quiz Question:
(True/False) All coefficients consistently get smaller in size as L2 penalty is increased.
3. Answer:
True
Measuring accuracy
15. compute the accuracy of the classifier model.
End of explanation
vali_accuracy = []
for coefficient in [coefficients_0_penalty, coefficients_4_penalty, coefficients_10_penalty, coefficients_1e2_penalty, coefficients_1e3_penalty, coefficients_1e5_penalty]:
predictions = predict_probability(feature_matrix_valid, coefficient)
correct_num = np.sum((np.transpose(predictions.flatten())> 0.5) == (np.array(sentiment_valid)>0))
total_num = len(sentiment_valid)
#print "correct_num: {}, total_num: {}".format(correct_num, total_num)
vali_accuracy.append(correct_num * 1./ total_num)
#print accuracy
l2_penalty_list=[x for x in range(6)]
plt.plot(l2_penalty_list, vali_accuracy,'ro')
plt.title('vali_accuracy')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('vali_accuracy')
plt.show()
[0, 4, 10, 1e2, 1e3, 1e5][vali_accuracy.index(max(vali_accuracy))]
vali_accuracy[vali_accuracy.index(max(vali_accuracy))]
Explanation: 4. Quiz question:
Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?
Answer:
0
End of explanation
plt.plot(l2_penalty_list, training_accuracy,'g')
plt.plot(l2_penalty_list, vali_accuracy,'r')
Explanation: 5. Quiz question:
Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?
Answer:
10
6. Quiz question:
Does the highest accuracy on the training data imply that the model is the best one?
Answer:
No. The unregularized model fits the training data best, but the validation results above show that a moderate L2 penalty (L2 = 10) generalizes better.
End of explanation
<END_TASK> |
15,719 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
Step1: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code is in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
Step2: Now, let's do it on Cloud ML Engine so we can train on GPU
Step3: Monitoring training with TensorBoard
Use this cell to launch tensorboard
Step4: Here are my results
Step5: To predict with the model, let's take one of the example images.
Step6: Send it to the prediction service | <ASSISTANT_TASK:>
Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "linear" # "linear", "dnn", "dnn_dropout", or "cnn"
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
End of explanation
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
Explanation: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code is in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
Explanation: Now, let's do it on Cloud ML Engine so we can train on GPU: --scale-tier=BASIC_GPU
Note the GPU speed up depends on the model type. You'll notice the more complex CNN model trains significantly faster on GPU, however the speed up on the simpler models is not as pronounced.
End of explanation
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/mnist/trained_{}".format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Here are my results:
Model | Accuracy | Time taken | Model description | Run time parameters
--- | :---: | --- | --- | ---
linear | 91.53 | 3 min | linear | 100 steps, LR=0.01, Batch=512
linear | 92.73 | 8 min | linear | 1000 steps, LR=0.01, Batch=512
linear | 92.29 | 18 min | linear | 10000 steps, LR=0.01, Batch=512
dnn | 98.14 | 15 min | 300-100-30 nodes fully connected | 10000 steps, LR=0.01, Batch=512
dnn | 97.99 | 48 min | 300-100-30 nodes fully connected | 100000 steps, LR=0.01, Batch=512
dnn_dropout | 97.84 | 29 min | 300-100-30-DL(0.1)- nodes | 20000 steps, LR=0.01, Batch=512
cnn | 98.97 | 35 min | maxpool(10 5x5 cnn, 2)-maxpool(20 5x5 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 98.93 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 99.17 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits only) | 20000 steps, LR=0.01, Batch=512
cnn | 99.27 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits, deep) | 10000 steps, LR=0.01, Batch=512
cnn | 99.48 | 12 hr | as-above but nfil1=20, nfil2=27, dprob=0.1, lr=0.001, batchsize=233 | (hyperparameter optimization)
Create a table to keep track of your own results as you experiment with model type and hyperparameters!
Deploying and predicting with model
Deploy the model:
End of explanation
import json, codecs
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
# Get mnist data
mnist = tf.keras.datasets.mnist
(_, _), (x_test, _) = mnist.load_data()
# Scale our features between 0 and 1
x_test = x_test / 255.0
IMGNO = 5 # CHANGE THIS to get different images
jsondata = {"image": x_test[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", 'w', encoding = "utf-8"))
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
Explanation: To predict with the model, let's take one of the example images.
End of explanation
%%bash
gcloud ml-engine predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
Explanation: Send it to the prediction service
End of explanation
<END_TASK> |
15,720 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Polynomial Interpolation
Step1: <div id='intro' />
Introduction
Hello! In this notebook we will learn how to interpolate 1D data with polynomials. A polynomial interpolation consists in finding a polynomial that fits a discrete set of known data points, allowing us to construct new data points within the range of the data. Formally, a polynomial $P(x)$ interpolate the data $(x_1,y_1),...,(x_n,y_n)$ if $P(x_i)=y_i$ for all $i$ in $1,...,n$.
Step2: <div id='vander' />
Vandermonde Matrix
First, we are going to learn the Vandermonde Matrix method. This is a $m \times m$ matrix (with $m$ being the length of the set of known data points) with the terms of a geometric progression in each row. It allows us to construct a system of linear equations with the objective of find the coefficients of the polynomial function that interpolates our data.
Example
Step3: <div id='lagrange' />
Lagrange Interpolation
With this method, we can interpolate data thanks to the Lagrange basis polynomials. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Lagrange interpolation polynomial is the following
Step4: Here you get some questions about Lagrange Interpolation
Step5: Questions about Newton's DD
Step6: <div id='runge' />
Runge's Phenomenon
Step7: <div id='cheby' />
Chebyshev Interpolation
With the objective of reducing the error of the polynomial interpolation, we need to find the values of $x_1,x_2,...,x_n$ that minimize $(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)$.
To choose these values of $-1 \leq x_1,x_2,...,x_n \leq 1$ (to use another interval we just need to do a change of variables) that minimize the error, we will use the roots of the Chebyshev polynomials, also called Chebyshev nodes (of the first kind), which are defined by
Step8: By using these points, we reduce the numerator of the interpolation error formula
Step9: Questions about Chebyshev | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from functools import reduce
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
%matplotlib inline
from ipywidgets import interact, fixed, IntSlider
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Polynomial Interpolation: Vandermonde, Lagrange, Newton, Chebyshev </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.27</h2>
</center>
Table of Contents
Introduction
Vandermonde Matrix
Lagrange Interpolation
Runge Phenomenon
Newton's Divided Difference
Interpolation Error
Chebyshev Interpolation
Python Modules and Functions
Acknowledgements
End of explanation
def Y(D, xi):
# Function that evaluates the xi's points in the polynomial
if D['M']=='Vandermonde':
P = lambda i: i**np.arange(len(D['P']))
elif D['M']=='Lagrange':
P = lambda i: [np.prod(i - np.delete(D['x'],j)) for j in range(len(D['x']))]
elif D['M']=='Newton':
P = lambda i: np.append([1],[np.prod(i-D['x'][:j]) for j in range(1,len(D['P']))])
return [np.dot(D['P'], P(i)) for i in xi]
def Interpolation_Plot(D,ylim=None):
# Function that shows the data points and the function that interpolates them.
xi = np.linspace(min(D['x']),max(D['x']),1000)
yi = Y(D,xi)
plt.figure(figsize=(8,8))
plt.plot(D['x'],D['y'],'ro',label='Interpolation points')
plt.plot(xi,yi,'b-',label='$P(x)$')
plt.xlim(min(xi)-0.5, max(xi)+0.5)
if ylim:
plt.ylim(ylim[0], ylim[1])
else:
plt.ylim(min(yi)-0.5, max(yi)+0.5)
plt.grid(True)
plt.legend(loc='best')
plt.xlabel('$x$')
#plt.ylabel('$P(x)$')
plt.show()
Explanation: <div id='intro' />
Introduction
Hello! In this notebook we will learn how to interpolate 1D data with polynomials. A polynomial interpolation consists of finding a polynomial that fits a discrete set of known data points, allowing us to construct new data points within the range of the data. Formally, a polynomial $P(x)$ interpolates the data $(x_1,y_1),...,(x_n,y_n)$ if $P(x_i)=y_i$ for all $i$ in $1,...,n$.
End of explanation
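As a minimal sketch of the interpolation condition (using numpy.polyfit with degree $n-1$, which passes a polynomial exactly through $n$ points):
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 2.0])
c = np.polyfit(xs, ys, len(xs) - 1) # degree n-1 polynomial through n points
print(np.allclose(np.polyval(c, xs), ys)) # True: P(x_i) = y_i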
def Vandermonde(x, y, show=False):
# We construct the matrix and solve the system of linear equations
A = np.array([xi**np.arange(len(x)) for xi in x])
b = y
xsol = np.linalg.solve(A,b)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
print('A = '); print(np.array_str(A, precision=2, suppress_small=True))
print("cond(A) = "+str(np.linalg.cond(A)))
print('b = '); print(np.array_str(b, precision=2, suppress_small=True))
print('x = '); print(np.array_str(xsol, precision=2, suppress_small=True))
xS = sp.Symbol('x')
F = np.dot(xS**np.arange(len(x)),xsol)
print('Interpolation Function: ')
print('F(x) = ')
print(F)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Vandermonde',
'P':xsol,
'x':x,
'y':y}
return D
def show_time_V(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,5.0,6.5])
y = np.array([2.0,5.0,4.0,6.0,2.0])
D = Vandermonde(x,y,True)
Interpolation_Plot(D,[-4,10])
interact(show_time_V,epsilon=(-1,2,0.1))
Explanation: <div id='vander' />
Vandermonde Matrix
First, we are going to learn the Vandermonde Matrix method. This is a $m \times m$ matrix (with $m$ being the length of the set of known data points) with the terms of a geometric progression in each row. It allows us to construct a system of linear equations with the objective of find the coefficients of the polynomial function that interpolates our data.
Example:
Given the set of known data points: $(x_1,y_1),(x_2,y_2),(x_3,y_3)$
Our system of linear equations will be:
$$ \begin{bmatrix}
1 & x_1 & x_1^2 \\[0.3em]
1 & x_2 & x_2^2 \\[0.3em]
1 & x_3 & x_3^2 \end{bmatrix}
\begin{bmatrix}
a_1 \\[0.3em]
a_2 \\[0.3em]
a_3 \end{bmatrix} =
\begin{bmatrix}
y_1 \\[0.3em]
y_2 \\[0.3em]
y_3 \end{bmatrix}$$
And solving it we will find the coefficients $a_1,a_2,a_3$ that we need to construct the polynomial $P(x)=a_1+a_2x+a_3x^2$ that interpolates our data.
End of explanation
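A minimal cross-check of the construction above using numpy's built-in Vandermonde helper (it builds the same matrix as the list comprehension inside Vandermonde()):
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 5.0, 4.0])
A = np.vander(x, increasing=True) # rows [1, x_i, x_i^2]
print(np.linalg.solve(A, y)) # coefficients a_1, a_2, a_3 of P(x) = a_1 + a_2*x + a_3*x^2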
def Lagrange(x, y, show=False):
# We calculate the li's
p = np.array([y[i]/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))])
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
L = np.dot(np.array([np.prod(xS - np.delete(x,i))/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))]),y)
print('Interpolation Function: ');
print(L)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Lagrange',
'P':p,
'x':x,
'y':y}
return D
def show_time_L(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,4.0,5.0,7.0,6.0])
y = np.array([2.0,5.0,4.0,6.0,7.0,3.0,8.0])
D = Lagrange(x,y,True)
Interpolation_Plot(D,[0,10])
interact(show_time_L,epsilon=(-1,1,0.1))
def show_time_Li(i=0, N=7):
x = np.arange(N+1)
y = np.zeros(N+1)
y[i]=1
D = Lagrange(x,y,True)
Interpolation_Plot(D,[-1,2])
i_widget = IntSlider(min=0, max=7, step=1, value=0)
N_widget = IntSlider(min=5, max=20, step=1, value=7)
def update_i_range(*args):
i_widget.max = N_widget.value
N_widget.observe(update_i_range, 'value')
interact(show_time_Li,i=i_widget,N=N_widget)
Explanation: <div id='lagrange' />
Lagrange Interpolation
With this method, we can interpolate data thanks to the Lagrange basis polynomials. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Lagrange interpolation polynomial is the following:
$$ P(x) = \sum^n_{i=1} y_i\,L_i(x),$$
where $L_i(x)$ are the Lagrange basis polynomials:
$$ L_i(x) = \prod^n_{j=1,j \neq i} \frac{x-x_j}{x_i-x_j} = \frac{x-x_1}{x_i-x_1} \cdot ... \cdot \frac{x-x_{i-1}}{x_i-x_{i-1}} \cdot \frac{x-x_{i+1}}{x_i-x_{i+1}} \cdot ... \cdot \frac{x-x_n}{x_i-x_n}$$
or simply $L_i(x)=\dfrac{l_i(x)}{l_i(x_i)}$, where $l_i(x)=\displaystyle{\prod^n_{j=1,j \neq i} (x-x_j)}$.
The most important property of these basis polynomials is:
$$ L_{j \neq i}(x_i) = 0 $$
$$ L_i(x_i) = 1 $$
So, we ensure that $P(x_i) = y_i$, which means $P$ indeed interpolates the data.
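A quick numerical check of this property (a hedged sketch; the nodes below are arbitrary):
python
import numpy as np

xd = np.array([0.0, 1.0, 3.0])

def L_basis(i, x, xd):
    # L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
    xj = np.delete(xd, i)
    return np.prod(x - xj) / np.prod(xd[i] - xj)

assert np.isclose(L_basis(1, xd[1], xd), 1.0)  # L_i(x_i) = 1
assert np.isclose(L_basis(1, xd[0], xd), 0.0)  # L_i(x_j) = 0 for j != i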
End of explanation
def Divided_Differences(x, y):
dd = np.array([y])
for i in range(len(x)-1):
ddi = []
for a in range(len(x)-i-1):
ddi.append((dd[i][a+1]-dd[i][a])/(x[a+i+1]-x[a]))
ddi = np.append(ddi,np.full((len(x)-len(ddi),),0.0))
dd = np.append(dd,[ddi],axis=0)
return np.array(dd)
def Newton(x, y, show=False):
# We calculate the divided differences and store them in a data structure
dd = Divided_Differences(x,y)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
N = np.dot(dd[:,0],np.append([1],[np.prod(xS-x[:i]) for i in range(1,len(dd))]))
print('Interpolation Function: ');
print(N)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Newton',
'P':dd[:,0],
'x':x,
'y':y}
return D
def show_time_N(epsilon=0):
x = np.array([0.0,2.0,3.0+epsilon,4.0,5.0,6.0])
y = np.array([1.0,3.0,0.0,6.0,8.0,4.0])
D = Newton(x,y,True)
Interpolation_Plot(D)
interact(show_time_N,epsilon=(-1,1,0.1))
Explanation: Here you get some questions about Lagrange Interpolation:
- Explain what happens to the interpolating polynomial when you add a new point to the set of points to interpolate. Answer: Every Lagrange basis polynomial changes, so the whole polynomial must be recomputed from scratch.
- Why is it not a good idea to use Lagrange interpolation for a set of points that is constantly changing? A: Because every change forces us to recompute the whole interpolation.
- What is the operation count of obtaining the interpolating polynomial using Lagrange? What happens with the error?
<div id='DDN' />
Newton's Divided Difference
In this interpolation method we will use divided differences to calculate the coefficients of our interpolation polynomial. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Newton polynomial is:
$$ P(x) = \sum^n_{i=1} (f[x_1 ... x_i] \cdot \prod^{i-1}_{j=1} (x-x_j)) ,$$
where the empty product $\prod^{0}_{j=1} (x-x_j) = 1$, and:
$$ f[x_i] = y_i $$
$$ f[x_j...x_i] = \frac{f[x_{j+1}...x_i]-f[x_j...x_{i-1}]}{x_i-x_j}$$
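As a small worked sketch of the recurrence for three arbitrary points, the second-order difference combines two first-order ones:
python
x1, x2, x3 = 0.0, 1.0, 3.0
y1, y2, y3 = 1.0, 3.0, 2.0

f12 = (y2 - y1) / (x2 - x1)        # f[x_1 x_2]
f23 = (y3 - y2) / (x3 - x2)        # f[x_2 x_3]
f123 = (f23 - f12) / (x3 - x1)     # f[x_1 x_2 x_3]

# P(x) = f[x_1] + f[x_1 x_2](x - x_1) + f[x_1 x_2 x_3](x - x_1)(x - x_2)
P = lambda x: y1 + f12*(x - x1) + f123*(x - x1)*(x - x2)
assert all(abs(P(xi) - yi) < 1e-12 for xi, yi in [(x1, y1), (x2, y2), (x3, y3)])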
End of explanation
def Error(f, n, xmin, xmax, method=Lagrange, points=np.linspace, plot_flag=True):
# This function plots f(x), the interpolating polynomial, and the associated error
# points can be np.linspace to equidistant points or Chebyshev to get Chebyshev points
x = points(xmin,xmax,n)
y = f(x)
xe = np.linspace(xmin,xmax,100)
ye = f(xe)
D = method(x,y)
yi = Y(D, xe)
if plot_flag:
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5), sharey=False)
ax1.plot(xe, ye,'r-', label='f(x)')
ax1.plot(x, y,'ro', label='Interpolation points')
ax1.plot(xe, yi,'b-', label='Interpolation')
ax1.set_xlim(xmin-0.5,xmax+0.5)
ax1.set_ylim(min(yi)-0.5,max(yi)+0.5)
ax1.set_title('Interpolation')
ax1.grid(True)
ax1.set_xlabel('$x$')
ax1.legend(loc='best')
ax2.semilogy(xe, abs(ye-yi),'b-', label='Absolute Error')
ax2.set_xlim(xmin-0.5,xmax+0.5)
ax2.set_title('Absolute Error')
ax2.set_xlabel('$x$')
ax2.grid(True)
#ax2.legend(loc='best')
plt.show()
return max(abs(ye-yi))
def test_error_Newton(n=5):
#me = Error(lambda x: np.sin(x)**3, n, 1, 7, Newton)
me = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Newton)
print("Max Error:", me)
interact(test_error_Newton,n=(5,25))
Explanation: Questions about Newton's DD:
- What is the main problem with this method (and Lagrange)? How can you fix it? A: Polynomial interpolation on equispaced data suffers from the Runge phenomenon, which can be handled by using Chebyshev points.
- What happens when a new point is added? A: This is where Newton's form shines: it is not necessary to recalculate the whole polynomial, only to compute one new divided-difference coefficient and append its term.
<div id='Error' />
Polynomial Interpolation Error
The interpolation error is given by:
$$ f(x)-P(x) = \frac{(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)}{n!} \cdot f^{(n)}(c) ,$$
where $c$ is some point within the interval spanned by the minimum and maximum values of $x$ (the nodes together with the evaluation point).
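For instance, since every derivative of $\sin(x)$ is bounded by 1, the formula yields an a-priori bound that can be evaluated numerically (a sketch; the interval and number of nodes are arbitrary):
python
import numpy as np
from math import factorial

xd = np.linspace(0, 2, 5)          # n = 5 equidistant nodes
xs = np.linspace(0, 2, 1000)

# |sin(x) - P(x)| <= max_x |prod_i (x - x_i)| / n!  because |f^(n)(c)| <= 1
bound = max(abs(np.prod(x - xd)) for x in xs) / factorial(len(xd))
print(bound)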
End of explanation
def Runge(n=9):
x = np.linspace(0,1,n)
y = np.zeros(n)
y[int((n-1.0)/2.)]=1
D = Newton(x,y,False)
Interpolation_Plot(D)
interact(Runge,n=(5,25,2))
Explanation: <div id='runge' />
Runge's Phenomenon: a problem of oscillation of high-degree interpolating polynomials at the edges of the interval.
We are interpolating data that are 0 almost everywhere and 1 at the middle point. Notice that as $n$ increases the oscillations grow, and the red dots seem to be at 0 everywhere, but that is just an artifact: there must be a 1 at the middle. The oscillations you see at the ends of the interval are the Runge phenomenon.
End of explanation
def Chebyshev(xmin,xmax,n=5):
    # This function computes the n Chebyshev points as projections onto the x-axis of equispaced points on the unit semicircle, and plots the construction
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
y = np.sin((2*ns-1)*np.pi/(2*n))
plt.figure(figsize=(10,5))
plt.ylim(-0.1,1.1)
plt.xlim(-1.1,1.1)
plt.plot(np.cos(np.linspace(0,np.pi)),np.sin(np.linspace(0,np.pi)),'k-')
plt.plot([-2,2],[0,0],'k-')
plt.plot([0,0],[-1,2],'k-')
for i in range(len(y)):
plt.plot([x[i],x[i]],[0,y[i]],'r-')
plt.plot([0,x[i]],[0,y[i]],'r-')
plt.plot(x,[0]*len(x),'bo',label='Chebyshev points')
plt.plot(x,y,'ro')
plt.xlabel('$x$')
plt.title('n = '+str(n))
plt.grid(True)
plt.legend(loc='best')
plt.show()
def Chebyshev_points(xmin,xmax,n):
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
#y = np.sin((2*ns-1)*np.pi/(2*n))
return (xmin+xmax)/2 + (xmax-xmin)*x/2
def Chebyshev_points_histogram(n=50,nbins=20):
xCheb=Chebyshev_points(-1,1,n)
plt.figure()
plt.hist(xCheb,bins=nbins,density=True)
plt.grid(True)
plt.show()
interact(Chebyshev,xmin=fixed(-1),xmax=fixed(1),n=(2,50))
interact(Chebyshev_points_histogram,n=(20,10000),nbins=(20,200))
Explanation: <div id='cheby' />
Chebyshev Interpolation
With the objective of reducing the error of the polynomial interpolation, we need to find the values of $x_1,x_2,...,x_n$ that minimize the maximum absolute value of $(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)$ over the interval.
To choose these values of $-1 \leq x_1,x_2,...,x_n \leq 1$ (to use another interval we just need to do a change of variables) that minimize the error, we will use the roots of the Chebyshev polynomials, also called Chebyshev nodes (of the first kind), which are defined by:
$$ x_i = \cos\left(\frac{(2i-1)\pi}{2n}\right), i = 1,...,n $$
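A quick check that these nodes really are the roots of $T_n(x)=\cos(n \cdot \arccos(x))$ (a sketch; any $n$ works):
python
import numpy as np

n = 7
i = np.arange(1, n + 1)
x = np.cos((2*i - 1) * np.pi / (2*n))
# T_n(x_i) = cos((2i-1)*pi/2) = 0 at every node (up to roundoff)
assert np.allclose(np.cos(n * np.arccos(x)), 0.0)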
End of explanation
def T(n,x):
# Recursive function that returns the n-th Chebyshev polynomial evaluated at x
if n == 0:
return x**0
elif n == 1:
return x
else:
return 2*x*T(n-1,x)-T(n-2,x)
def Chebyshev_Polynomials(n=2, Flag_All_Tn=False):
# This function plots the first n Chebyshev polynomials
x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,5))
plt.xlim(-1, 1)
plt.ylim(-1.1, 1.1)
if Flag_All_Tn:
for i in np.arange(n+1):
y = T(i,x)
plt.plot(x,y,label='$T_{'+str(i)+'}(x)$')
else:
y = T(n,x)
plt.plot(x,y,label='$T_{'+str(n)+'}(x)$')
# plt.title('$T_${:}$(x)$'.format(n))
plt.legend(loc='right')
plt.grid(True)
plt.xlabel('$x$')
plt.show()
interact(Chebyshev_Polynomials,n=(0,12),Flag_All_Tn=True)
n=9
xmin=1
xmax=9
mee = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange)
mec = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange, points=Chebyshev_points)
print("Max error (equidistants points):", mee)
print("Max error (Chebyshev nodes):", mec)
def test_error_chebyshev(n=5):
mee = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Lagrange)
mec = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, method=Lagrange, points=Chebyshev_points)
print("Max error (equidistants points):", mee)
print("Max error (Chebyshev nodes):", mec)
interact(test_error_chebyshev,n=(5,100,2))
Explanation: By using these points, we reduce the numerator of the interpolation error formula:
$$ (x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n) = \dfrac{1}{2^{n-1}} \cdot T_n(x), $$
where $T_n(x) = \cos (n \cdot \arccos (x))$ is the $n$-th Chebyshev polynomial.
$$ T_0(x) = 1 $$
$$ T_1(x) = x $$
$$ T_2(x) = 2x^2 -1 $$
$$...$$
$$ T_{n+1}(x) = 2 \cdot x \cdot T_n(x) - T_{n-1}(x) $$
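A numerical sanity check of the identity $(x-x_1) \cdots (x-x_n) = T_n(x)/2^{n-1}$ (a sketch; $n$ is arbitrary):
python
import numpy as np

n = 6
i = np.arange(1, n + 1)
nodes = np.cos((2*i - 1) * np.pi / (2*n))

xs = np.linspace(-1, 1, 7)
lhs = np.array([np.prod(x - nodes) for x in xs])
rhs = np.cos(n * np.arccos(xs)) / 2**(n - 1)   # T_n(x) / 2^(n-1)
assert np.allclose(lhs, rhs)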
End of explanation
n=50
shift=2
my_functions={0:lambda x: (x)**10,
1:lambda x: np.abs((x)**3),
2:lambda x: np.exp(-((x)**-2)),
3:lambda x: 1/(1+x**2),
4:lambda x: np.sin(x)**3}
labels = {0: "x^{10}",
1: "|x^3|",
2: "\exp(-x^{-2})",
3: "1/(1+x^2)",
4: "\sin^3(x)"}
n_points=np.arange(shift,n)
for k in np.arange(5):
max_error=np.zeros(n-shift)
max_error_es=np.zeros(n-shift)
for i in n_points:
max_error[i-shift] = Error(my_functions[k], i, -1, 1, Newton, Chebyshev_points, plot_flag=False)
max_error_es[i-shift] = Error(my_functions[k], i, -1, 1, Newton, points=np.linspace, plot_flag=False)
axis=plt.figure()
plt.semilogy(n_points,max_error,'kd',label='Chebyshev points')
    plt.semilogy(n_points,max_error_es,'k.',label='Equally spaced points')
plt.ylim(10**-16,10**4)
plt.grid(True)
plt.title('Interpolation Error of $f(x)='+str(labels[k])+"$")
plt.xlabel('Number of points used in the interpolation')
plt.ylabel('Max error on domain')
plt.legend(loc='best')
plt.show()
Explanation: Questions about Chebyshev:
- How can you calculate the Chebyshev points in the interval $[a,b]$ instead of $[-1,1]$? A: Using the change of variables $\tilde{x}_i = \frac{a+b}{2} + \frac{b-a}{2}\,x_i$, as implemented in Chebyshev_points above.
Convergence analysis
End of explanation
<END_TASK> |
15,721 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Analysis of evoked response using ICA and PCA reduction techniques
This example computes PCA and ICA of evoked or epochs data. Then the
PCA / ICA components, a.k.a. spatial filters, are used to transform
the channel data to new sources / virtual channels. The output is
visualized on the average of all the epochs.
Step1: Transform data with PCA computed on the average, i.e. the evoked response
Step2: Transform data with ICA computed on the raw epochs (no averaging) | <ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <[email protected]>
# Asish Panda <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data()
Explanation: Analysis of evoked response using ICA and PCA reduction techniques
This example computes PCA and ICA of evoked or epochs data. Then the
PCA / ICA components, a.k.a. spatial filters, are used to transform
the channel data to new sources / virtual channels. The output is
visualized on the average of all the epochs.
End of explanation
pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA")
Explanation: Transform data with PCA computed on the average, i.e. the evoked response
End of explanation
ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA')
plt.show()
Explanation: Transform data with ICA computed on the raw epochs (no averaging)
End of explanation
<END_TASK> |
15,722 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
High-Fidelity Generative Image Compression
This colab can be used to compress images using HiFiC. This can also be achieved
by running tfci.py, as explained in the README.
Please visit hific.github.io for more information.
Setup Colab
Step1: Enabling GPU
GPU should be enabled for this colab. If the next cell prints a warning, do the following
Step2: Imports and Definitions
Step3: Load files
Step4: Compress images
Step5: Show output
Step6: Download all compressed images.
To download all images, run the following cell.
You can also use the Files tab on the left to manually select images.
Note | <ASSISTANT_TASK:>
Python Code:
# Installs the latest version of TFC compatible with the installed TF version.
!pip install tensorflow-compression~=$(pip show tensorflow | perl -p -0777 -e 's/.*Version: (\d\.\d).*/\1.0/sg')
# Downloads the 'models' directory from Github.
![[ -e /tfc ]] || git clone https://github.com/tensorflow/compression /tfc
%cd /tfc/models
# Checks if tfci.py is available.
import tfci
Explanation: High-Fidelity Generative Image Compression
This colab can be used to compress images using HiFiC. This can also be achieved
by running tfci.py, as explained in the README.
Please visit hific.github.io for more information.
Setup Colab
End of explanation
import tensorflow as tf
if not tf.config.list_physical_devices('GPU'):
print('WARNING: No GPU found. Might be slow!')
else:
print('Found GPU.')
Explanation: Enabling GPU
GPU should be enabled for this colab. If the next cell prints a warning, do the following:
- Navigate to Edit→Notebook Settings
- select GPU from the Hardware Accelerator drop-down
End of explanation
import os
import zipfile
from google.colab import files
import collections
from PIL import Image
from IPython.display import Image as DisplayImage
from IPython.display import Javascript
from IPython.core.display import display, HTML
import tfci
import urllib.request
tf.get_logger().setLevel('WARN') # Only show Warnings
FILES_DIR = '/content/files'
OUT_DIR = '/content/out'
DEFAULT_IMAGE_URL = ('https://storage.googleapis.com/hific/clic2020/'
'images/originals/ad249bba099568403dc6b97bc37f8d74.png')
os.makedirs(FILES_DIR, exist_ok=True)
os.makedirs(OUT_DIR, exist_ok=True)
File = collections.namedtuple('File', ['full_path', 'num_bytes', 'bpp'])
def print_html(html):
display(HTML(html + '<br/>'))
def make_cell_large():
display(Javascript(
'''google.colab.output.setIframeHeight(0, true, {maxHeight: 5000})'''))
def get_default_image(output_dir):
output_path = os.path.join(output_dir, os.path.basename(DEFAULT_IMAGE_URL))
print('Downloading', DEFAULT_IMAGE_URL, '\n->', output_path)
urllib.request.urlretrieve(DEFAULT_IMAGE_URL, output_path)
Explanation: Imports and Definitions
End of explanation
#@title Setup { vertical-output: false, run: "auto", display-mode: "form" }
#@markdown #### Custom Images
#@markdown Tick the following if you want to upload your own images to compress.
#@markdown Otherwise, a default image will be used.
#@markdown
#@markdown **Note**: We support JPG and PNG (without alpha channels).
#@markdown
upload_custom_images = False #@param {type:"boolean", label:"HI"}
if upload_custom_images:
uploaded = files.upload()
for name, content in uploaded.items():
with open(os.path.join(FILES_DIR, name), 'wb') as fout:
print('Writing', name, '...')
fout.write(content)
#@markdown #### Select a model
#@markdown Different models target different bitrates.
model = 'hific-lo' #@param ["hific-lo", "hific-mi", "hific-hi"]
if 'upload_custom_images' not in locals():
print('ERROR: Please run the previous cell!')
# Setting defaults anyway.
upload_custom_images = False
model = 'hific-lo'
all_files = os.listdir(FILES_DIR)
if not upload_custom_images or not all_files:
print('Downloading default image...')
get_default_image(FILES_DIR)
print()
all_files = os.listdir(FILES_DIR)
print(f'Got the following files ({len(all_files)}):')
for file_name in all_files:
img = Image.open(os.path.join(FILES_DIR, file_name))
w, h = img.size
img = img.resize((w // 15, h // 15))
print('- ' + file_name + ':')
display(img)
Explanation: Load files
End of explanation
SUPPORTED_EXT = {'.png', '.jpg'}
all_files = os.listdir(FILES_DIR)
if not all_files:
raise ValueError("Please upload images!")
def get_bpp(image_dimensions, num_bytes):
w, h = image_dimensions
return num_bytes * 8 / (w * h)
def has_alpha(img_p):
im = Image.open(img_p)
return im.mode == 'RGBA'
all_outputs = []
for file_name in all_files:
  if os.path.isdir(os.path.join(FILES_DIR, file_name)):
continue
if not any(file_name.endswith(ext) for ext in SUPPORTED_EXT):
print('Skipping', file_name, '...')
continue
full_path = os.path.join(FILES_DIR, file_name)
if has_alpha(full_path):
print('Skipping because of alpha channel:', file_name)
continue
file_name, _ = os.path.splitext(file_name)
compressed_path = os.path.join(OUT_DIR, f'{file_name}_{model}.tfci')
output_path = os.path.join(OUT_DIR, f'{file_name}_{model}.png')
if os.path.isfile(output_path):
print('Exists already:', output_path)
num_bytes = os.path.getsize(compressed_path)
all_outputs.append(
File(output_path, num_bytes,
get_bpp(Image.open(full_path).size, num_bytes)))
continue
print('Compressing', file_name, 'with', model, '...')
tfci.compress(model, full_path, compressed_path)
num_bytes = os.path.getsize(compressed_path)
print(f'Compressed to {num_bytes} bytes.')
print('Decompressing...')
tfci.decompress(compressed_path, output_path)
all_outputs.append(
File(output_path, num_bytes,
get_bpp(Image.open(full_path).size, num_bytes)))
print('All done!')
Explanation: Compress images
End of explanation
make_cell_large() # Larger output window.
for file in all_outputs:
print_html('<hr/>')
print(f'Showing {file.full_path} | {file.num_bytes//1000}kB | {file.bpp:.4f}bpp')
display(Image.open(file.full_path))
print_html('<hr/>')
Explanation: Show output
End of explanation
ZIP = '/content/images.zip'
with zipfile.ZipFile(ZIP, 'w') as zf:
for f in all_outputs:
path_with_bpp = f.full_path.replace('.png', f'-{f.bpp:.3f}bpp.png')
zf.write(f.full_path, os.path.basename(path_with_bpp))
files.download(ZIP)
Explanation: Download all compressed images.
To download all images, run the following cell.
You can also use the Files tab on the left to manually select images.
Note: the images are saved as PNGs and thus very large. The bitrate used by HiFiC is given in the name.
End of explanation
<END_TASK> |
15,723 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This notebook is intended to demonstrate the basic features of the Python API for constructing input files and running OpenMC. In it, we will show how to create a basic reflective pin-cell model that is equivalent to modeling an infinite array of fuel pins. If you have never used OpenMC, this can serve as a good starting point to learn the Python API. We highly recommend having a copy of the Python API reference documentation open in another browser tab that you can refer to.
Step1: Defining Materials
Materials in OpenMC are defined as a set of nuclides with specified atom/weight fractions. To begin, we will create a material by making an instance of the Material class. In OpenMC, many objects, including materials, are identified by a "unique ID" that is simply just a positive integer. These IDs are used when exporting XML files that the solver reads in. They also appear in the output and can be used for identification. Since an integer ID is not very useful by itself, you can also give a material a name as well.
Step2: On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you
Step3: We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.
Step4: We see that by default it assumes we want an atom fraction.
Step5: Now we need to assign a total density to the material. We'll use the set_density for this.
Step6: You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').
With UO2 finished, let's now create materials for the clad and coolant. Note the use of add_element() for zirconium.
Step7: An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\alpha,\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name "c_H_in_H2O".
Step8: When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.
Step9: Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.
Step10: Finally, we can create the XML file with the export_to_xml() method. In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.
Step11: Element Expansion
Did you notice something really cool that happened to our Zr element? OpenMC automatically turned it into a list of nuclides when it exported it! The way this feature works is as follows
Step12: We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.
The cross_sections.xml file
The cross_sections.xml tells OpenMC where it can find nuclide cross sections and $S(\alpha,\beta)$ tables. It serves the same purpose as MCNP's xsdir file and Serpent's xsdata file. As we mentioned, this can be set either by the OPENMC_CROSS_SECTIONS environment variable or the Materials.cross_sections attribute.
Let's have a look at what's inside this file
Step13: Enrichment
Note that the add_element() method has a special argument enrichment that can be used for Uranium. For example, if we know that we want to create 3% enriched UO2, the following would work
Step14: Defining Geometry
At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g.
A plane perpendicular to the x axis
Step15: Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.
(NOTE
Step16: Now let's see if inside_sphere actually contains points inside the sphere
Step17: Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators
Step18: For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box
Step19: Now that we see how to create volumes, we can use them to create a cell.
Step20: By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.
Step21: Universes and in-line plotting
A collection of cells is known as a universe (again, this will be familiar to MCNP/Serpent users) and can be used as a repeatable unit when creating a model. Although we don't need it yet, the benefit of creating a universe is that we can visualize our geometry while we're creating it.
Step22: The Universe object has a plot method that will display our the universe as current constructed
Step23: By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.
Step24: If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.
Step25: Pin cell geometry
We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad
Step26: With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad
Step27: Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).
Step28: Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.
Step29: The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.
Step30: OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
Step31: Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.
Step32: The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the "root" universe in our geometry. The Geometry is the final object that is actually exported to XML.
Step33: Starting source and settings
The Python API has a module openmc.stats with various univariate and multivariate probability distributions. We can use these distributions to create a starting source using the openmc.Source object.
Step34: Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.
Step35: User-defined tallies
We actually have all the required files needed to run a simulation. Before we do that though, let's give a quick example of how to create tallies. We will show how one would tally the total, fission, absorption, and (n,$\gamma$) reaction rates for $^{235}$U in the cell containing fuel. Recall that filters allow us to specify where in phase-space we want events to be tallied and scores tell us what we want to tally
Step36: The what is the total, fission, absorption, and (n,$\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will gives us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.
Step37: Similar to the other files, we need to create a Tallies collection and export it to XML.
Step38: Running OpenMC
Running OpenMC from Python can be done using the openmc.run() function. This function allows you to set the number of MPI processes and OpenMP threads, if need be.
Step39: Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.
Step40: Geometry plotting
We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the Fortran codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.
Step41: With our plot created, we need to add it to a Plots collection which can be exported to XML.
Step42: Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.
Step43: OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.
Step44: We can use functionality from IPython to display the image inline in our notebook
Step45: That was a little bit cumbersome. Thankfully, OpenMC provides us with a function that does all that "boilerplate" work. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import openmc
Explanation: This notebook is intended to demonstrate the basic features of the Python API for constructing input files and running OpenMC. In it, we will show how to create a basic reflective pin-cell model that is equivalent to modeling an infinite array of fuel pins. If you have never used OpenMC, this can serve as a good starting point to learn the Python API. We highly recommend having a copy of the Python API reference documentation open in another browser tab that you can refer to.
End of explanation
uo2 = openmc.Material(1, "uo2")
print(uo2)
Explanation: Defining Materials
Materials in OpenMC are defined as a set of nuclides with specified atom/weight fractions. To begin, we will create a material by making an instance of the Material class. In OpenMC, many objects, including materials, are identified by a "unique ID" that is simply just a positive integer. These IDs are used when exporting XML files that the solver reads in. They also appear in the output and can be used for identification. Since an integer ID is not very useful by itself, you can also give a material a name as well.
End of explanation
mat = openmc.Material()
print(mat)
Explanation: On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you:
End of explanation
help(uo2.add_nuclide)
Explanation: We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.
End of explanation
# Add nuclides to uo2
uo2.add_nuclide('U235', 0.03)
uo2.add_nuclide('U238', 0.97)
uo2.add_nuclide('O16', 2.0)
Explanation: We see that by default it assumes we want an atom fraction.
End of explanation
uo2.set_density('g/cm3', 10.0)
Explanation: Now we need to assign a total density to the material. We'll use the set_density for this.
End of explanation
zirconium = openmc.Material(2, "zirconium")
zirconium.add_element('Zr', 1.0)
zirconium.set_density('g/cm3', 6.6)
water = openmc.Material(3, "h2o")
water.add_nuclide('H1', 2.0)
water.add_nuclide('O16', 1.0)
water.set_density('g/cm3', 1.0)
Explanation: You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').
With UO2 finished, let's now create materials for the clad and coolant. Note the use of add_element() for zirconium.
End of explanation
water.add_s_alpha_beta('c_H_in_H2O')
Explanation: An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\alpha,\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name "c_H_in_H2O".
End of explanation
mats = openmc.Materials([uo2, zirconium, water])
Explanation: When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.
End of explanation
mats = openmc.Materials()
mats.append(uo2)
mats += [zirconium, water]
isinstance(mats, list)
Explanation: Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.
End of explanation
mats.export_to_xml()
!cat materials.xml
Explanation: Finally, we can create the XML file with the export_to_xml() method. In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.
End of explanation
water.remove_nuclide('O16')
water.add_element('O', 1.0)
mats.export_to_xml()
!cat materials.xml
Explanation: Element Expansion
Did you notice something really cool that happened to our Zr element? OpenMC automatically turned it into a list of nuclides when it exported it! The way this feature works is as follows:
First, it checks whether Materials.cross_sections has been set, indicating the path to a cross_sections.xml file.
If Materials.cross_sections isn't set, it looks for the OPENMC_CROSS_SECTIONS environment variable.
If either of these are found, it scans the file to see what nuclides are actually available and will expand elements accordingly.
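For example, the lookup can be pointed at a specific file through the attribute mentioned above (the path here is purely hypothetical):
python
mats.cross_sections = '/opt/openmc/data/cross_sections.xml'  # hypothetical path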
Let's see what happens if we change O16 in water to elemental O.
End of explanation
!cat $OPENMC_CROSS_SECTIONS | head -n 10
print(' ...')
!cat $OPENMC_CROSS_SECTIONS | tail -n 10
Explanation: We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.
The cross_sections.xml file
The cross_sections.xml tells OpenMC where it can find nuclide cross sections and $S(\alpha,\beta)$ tables. It serves the same purpose as MCNP's xsdir file and Serpent's xsdata file. As we mentioned, this can be set either by the OPENMC_CROSS_SECTIONS environment variable or the Materials.cross_sections attribute.
Let's have a look at what's inside this file:
End of explanation
uo2_three = openmc.Material()
uo2_three.add_element('U', 1.0, enrichment=3.0)
uo2_three.add_element('O', 2.0)
uo2_three.set_density('g/cc', 10.0)
Explanation: Enrichment
Note that the add_element() method has a special argument enrichment that can be used for Uranium. For example, if we know that we want to create 3% enriched UO2, the following would work:
End of explanation
sph = openmc.Sphere(R=1.0)
Explanation: Defining Geometry
At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g.
A plane perpendicular to the x axis: $x - x_0 = 0$
A cylinder parallel to the z axis: $(x - x_0)^2 + (y - y_0)^2 - R^2 = 0$
A sphere: $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - R^2 = 0$
Between those three classes of surfaces (planes, cylinders, spheres), one can construct a wide variety of models. It is also possible to define cones and general second-order surfaces (tori are not currently supported).
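Each of these surface classes maps directly to a constructor in the Python API; a brief sketch with arbitrary parameter values:
python
px = openmc.XPlane(x0=3.0)                        # x - 3 = 0
cyl = openmc.ZCylinder(x0=0.0, y0=0.0, R=2.0)     # x^2 + y^2 - 2^2 = 0
ball = openmc.Sphere(x0=0.0, y0=0.0, z0=0.0, R=1.5)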
Note that defining a surface is not sufficient to specify a volume -- in order to define an actual volume, one must reference the half-space of a surface. A surface half-space is the region whose points satisfy a positive or negative inequality of the surface equation. For example, for a sphere of radius one centered at the origin, the surface equation is $f(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$. Thus, we say that the negative half-space of the sphere, is defined as the collection of points satisfying $f(x,y,z) < 0$, which one can reason is the inside of the sphere. Conversely, the positive half-space of the sphere would correspond to all points outside of the sphere.
Let's go ahead and create a sphere and confirm that what we've told you is true.
End of explanation
inside_sphere = -sph
outside_sphere = +sph
Explanation: Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.
(NOTE: Those unary operators are defined by special methods: __pos__ and __neg__ in this case).
End of explanation
print((0,0,0) in inside_sphere, (0,0,2) in inside_sphere)
print((0,0,0) in outside_sphere, (0,0,2) in outside_sphere)
Explanation: Now let's see if inside_sphere actually contains points inside the sphere:
End of explanation
z_plane = openmc.ZPlane(z0=0)
northern_hemisphere = -sph & +z_plane
Explanation: Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators: & (intersection), | (union), and ~ (complement). For example, let's say we want to define a region that is the top part of the sphere (all points inside the sphere that have $z > 0$).
End of explanation
northern_hemisphere.bounding_box
Explanation: For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box:
End of explanation
cell = openmc.Cell()
cell.region = northern_hemisphere
# or...
cell = openmc.Cell(region=northern_hemisphere)
Explanation: Now that we see how to create volumes, we can use them to create a cell.
End of explanation
cell.fill = water
Explanation: By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.
End of explanation
universe = openmc.Universe()
universe.add_cell(cell)
# this also works
universe = openmc.Universe(cells=[cell])
Explanation: Universes and in-line plotting
A collection of cells is known as a universe (again, this will be familiar to MCNP/Serpent users) and can be used as a repeatable unit when creating a model. Although we don't need it yet, the benefit of creating a universe is that we can visualize our geometry while we're creating it.
End of explanation
universe.plot(width=(2.0, 2.0))
Explanation: The Universe object has a plot method that will display the universe as currently constructed:
End of explanation
universe.plot(width=(2.0, 2.0), basis='xz')
Explanation: By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.
End of explanation
universe.plot(width=(2.0, 2.0), basis='xz',
colors={cell: 'fuchsia'})
Explanation: If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.
End of explanation
fuel_or = openmc.ZCylinder(R=0.39)
clad_ir = openmc.ZCylinder(R=0.40)
clad_or = openmc.ZCylinder(R=0.46)
Explanation: Pin cell geometry
We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad:
The outer surface of the fuel -- a cylinder parallel to the z axis
The inner surface of the clad -- same as above
The outer surface of the clad -- same as above
These three surfaces will all be instances of openmc.ZCylinder, each with a different radius according to the specification.
End of explanation
fuel_region = -fuel_or
gap_region = +fuel_or & -clad_ir
clad_region = +clad_ir & -clad_or
Explanation: With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad:
End of explanation
fuel = openmc.Cell(1, 'fuel')
fuel.fill = uo2
fuel.region = fuel_region
gap = openmc.Cell(2, 'air gap')
gap.region = gap_region
clad = openmc.Cell(3, 'clad')
clad.fill = zirconium
clad.region = clad_region
Explanation: Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).
End of explanation
pitch = 1.26
left = openmc.XPlane(x0=-pitch/2, boundary_type='reflective')
right = openmc.XPlane(x0=pitch/2, boundary_type='reflective')
bottom = openmc.YPlane(y0=-pitch/2, boundary_type='reflective')
top = openmc.YPlane(y0=pitch/2, boundary_type='reflective')
Explanation: Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.
End of explanation
water_region = +left & -right & +bottom & -top & +clad_or
moderator = openmc.Cell(4, 'moderator')
moderator.fill = water
moderator.region = water_region
Explanation: The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.
End of explanation
box = openmc.get_rectangular_prism(width=pitch, height=pitch,
boundary_type='reflective')
type(box)
Explanation: OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
End of explanation
water_region = box & +clad_or
Explanation: Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.
End of explanation
root = openmc.Universe(cells=(fuel, gap, clad, moderator))
geom = openmc.Geometry()
geom.root_universe = root
# or...
geom = openmc.Geometry(root)
geom.export_to_xml()
!cat geometry.xml
Explanation: The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the "root" universe in our geometry. The Geometry is the final object that is actually exported to XML.
End of explanation
point = openmc.stats.Point((0, 0, 0))
src = openmc.Source(space=point)
Explanation: Starting source and settings
The Python API has a module openmc.stats with various univariate and multivariate probability distributions. We can use these distributions to create a starting source using the openmc.Source object.
End of explanation
settings = openmc.Settings()
settings.source = src
settings.batches = 100
settings.inactive = 10
settings.particles = 1000
settings.export_to_xml()
!cat settings.xml
Explanation: Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.
End of explanation
cell_filter = openmc.CellFilter(fuel)
t = openmc.Tally(1)
t.filters = [cell_filter]
Explanation: User-defined tallies
We actually have all the required files needed to run a simulation. Before we do that though, let's give a quick example of how to create tallies. We will show how one would tally the total, fission, absorption, and (n,$\gamma$) reaction rates for $^{235}$U in the cell containing fuel. Recall that filters allow us to specify where in phase-space we want events to be tallied and scores tell us what we want to tally:
$$X = \underbrace{\int d\mathbf{r} \int d\mathbf{\Omega} \int dE}_{\text{filters}} \; \underbrace{f(\mathbf{r},\mathbf{\Omega},E)}_{\text{scores}} \psi (\mathbf{r},\mathbf{\Omega},E)$$
In this case, the where is "the fuel cell". So, we will create a cell filter specifying the fuel cell.
End of explanation
t.nuclides = ['U235']
t.scores = ['total', 'fission', 'absorption', '(n,gamma)']
Explanation: The what is the total, fission, absorption, and (n,$\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will give us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.
End of explanation
tallies = openmc.Tallies([t])
tallies.export_to_xml()
!cat tallies.xml
Explanation: Similar to the other files, we need to create a Tallies collection and export it to XML.
End of explanation
openmc.run()
Explanation: Running OpenMC
Running OpenMC from Python can be done using the openmc.run() function. This function allows you to set the number of MPI processes and OpenMP threads, if need be.
End of explanation
!cat tallies.out
Explanation: Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.
End of explanation
p = openmc.Plot()
p.filename = 'pinplot'
p.width = (pitch, pitch)
p.pixels = (200, 200)
p.color_by = 'material'
p.colors = {uo2: 'yellow', water: 'blue'}
Explanation: Geometry plotting
We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the Fortran codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.
End of explanation
plots = openmc.Plots([p])
plots.export_to_xml()
!cat plots.xml
Explanation: With our plot created, we need to add it to a Plots collection which can be exported to XML.
End of explanation
openmc.plot_geometry()
Explanation: Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.
End of explanation
!convert pinplot.ppm pinplot.png
Explanation: OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.
End of explanation
from IPython.display import Image
Image("pinplot.png")
Explanation: We can use functionality from IPython to display the image inline in our notebook:
End of explanation
openmc.plot_inline(p)
Explanation: That was a little bit cumbersome. Thankfully, OpenMC provides us with a function that does all that "boilerplate" work.
End of explanation
<END_TASK> |
15,724 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_split, target_split = source_text.split('\n'), target_text.split('\n')
source_to_int, target_to_int = [], []
for source, target in zip(source_split, target_split):
source_to_int.append([source_vocab_to_int[word] for word in source.split()])
targets = [target_vocab_to_int[word] for word in target.split()]
targets.append((target_vocab_to_int['<EOS>']))
target_to_int.append(targets)
#print(source_to_int, target_to_int)
return source_to_int, target_to_int
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
#max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text])
#max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text])
#max_source_len = np.max([max_tar_seq_len, max_sour_seq_len])
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')
    max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the <GO> id to the beginning of each batch.
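As a concrete sketch (batch_size = 1, and assuming a hypothetical <GO> id of 2): applied to [[10, 11, 12]], the function yields [[2, 10, 11]] -- the last word id is sliced off and the <GO> id is concatenated at the front.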
End of explanation
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32)
return output, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True, maximum_iterations=max_target_sequence_length)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
#embed_seq = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
return tf.contrib.rnn.LSTMCell(rnn_size)
rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input,
target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse=True):
inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
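A minimal sketch of that sharing pattern (mirroring the implementation above):
with tf.variable_scope('decode'):
    train_output = decoding_layer_train(...)
with tf.variable_scope('decode', reuse=True):
    infer_output = decoding_layer_infer(...)
Passing reuse=True makes the inference decoder read the variables created under the training scope rather than allocating new ones.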
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param max_target_sentence_length: Maximum target sequence length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
_, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 254
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
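For example, assuming a <PAD> id of 0, pad_sentence_batch([[4, 5, 6], [7]], 0) returns [[4, 5, 6], [7, 0, 0]], so every sentence in the batch shares the longest sentence's length.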
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence = sentence.lower()
sentence_to_id = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split(' ')]
return sentence_to_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
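For instance, with a hypothetical vocabulary {'he': 3, '<UNK>': 2}, sentence_to_seq('He ran', vocab_to_int) returns [3, 2] -- 'he' is matched after lowercasing, and the unknown word maps to the <UNK> id.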
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
<END_TASK> |
15,725 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways
Step1: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (or the entities for which we are trying to model their relationships) is extremely important.
Let's try to answer the question
Step2: Then, let's iterate over the stations DataFrame, and add in the node attributes.
Step3: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time
Step4: Exercise
Flex your memory muscles
Step5: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
Step6: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
Step7: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
Step8: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk | <ASSISTANT_TASK:>
Python Code:
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir('datasets/'):
print('Unzip the divvy_2013.zip file in the datasets folder.')
stations = pd.read_csv('datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], index_col='id', encoding='utf-8')
stations
trips = pd.read_csv('datasets/divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'],
index_col=['trip_id'])
trips = trips.sort_index()  # DataFrame.sort() was removed in later pandas; sort_index() preserves the original intent
trips
Explanation: Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.
Storing the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large, or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes.
The Divvy bike sharing dataset is one such example of a network data set that has been stored as such.
Loading Node Lists and Adjacency Lists
Let's use the Divvy bike sharing data set as a starting point. The Divvy data set is comprised of the following data:
Stations and metadata (like a node list with attributes saved)
Trips and metadata (like an edge list with attributes saved)
The README.txt file in the Divvy directory should help orient you around the data.
End of explanation
G = nx.DiGraph()
Explanation: At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (or the entities whose relationships we are trying to model) extremely important.
Let's try to answer the question: "What are the most popular trip paths?" In this case, the bike station is a reasonable "unit of consideration", so we will use the bike stations as the nodes.
To start, let's initialize a directed graph G.
End of explanation
for r, d in stations.iterrows(): # call the pandas DataFrame row-by-row iterator
G.add_node(r, attr_dict=d.to_dict())
Explanation: Then, let's iterate over the stations DataFrame, and add in the node attributes.
End of explanation
# # Run the following code at your own risk :)
# for r, d in trips.iterrows():
# start = d['from_station_id']
# end = d['to_station_id']
# if (start, end) not in G.edges():
# G.add_edge(start, end, count=1)
# else:
# G.edge[start][end]['count'] += 1
for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):
G.add_edge(start, stop, count=len(d))
G.edges(data=True)
len(G.edges())
len(G.nodes())
Explanation: In order to answer the question of "which stations are important", we need to specify things a bit more. Perhaps a measure such as betweenness centrality or degree centrality may be appropriate here.
The naive way would be to iterate over all the rows. Go ahead and try it at your own risk - it may take a long time :-). Alternatively, I would suggest doing a pandas groupby.
End of explanation
from collections import Counter
# Count the number of edges that have x trips recorded on them.
trip_count_distr = Counter([d['count'] for _, _, d in G.edges(data=True)])  # one possible solution
# Then plot the distribution of these
plt.scatter(list(trip_count_distr.keys()), list(trip_count_distr.values()), alpha=0.1)
plt.yscale('log')
plt.xlabel('num. of trips')
plt.ylabel('num. of edges')
Explanation: Exercise
Flex your memory muscles: can you make a scatter plot of the distribution of the number of edges that have a certain number of trips?
The key should be the number of trips between two nodes, and the value should be the number of edges that have that number of trips.
End of explanation
# Filter the edges to just those with more than 100 trips.
G_filtered = G.copy()
for u, v, d in G.edges(data=True):
    if d['count'] < 100:  # one possible solution: drop edges with fewer than 100 trips
        G_filtered.remove_edge(u, v)
len(G_filtered.edges())
Explanation: Exercise
Create a new graph, and filter out the edges such that only those with more than 100 trips taken (i.e. count >= 100) are left.
End of explanation
nx.draw(G_filtered)  # one possible solution
Explanation: Let's now try drawing the graph.
Exercise
Use nx.draw(my_graph) to draw the filtered graph to screen.
End of explanation
nodes = sorted(G.nodes(), key=lambda x: G.degree(x))  # one possible solution: order by connectivity in the unfiltered graph
edges = G_filtered.edges()  # plot only the filtered edges
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/divvy.png', dpi=300)
Explanation: Exercise
Try visualizing the graph using a CircosPlot. Order the nodes by their connectivity in the original graph, but plot only the filtered graph edges.
End of explanation
nx.write_gpickle(G, 'datasets/divvy_2013/divvy_graph.pkl')
G = nx.read_gpickle('datasets/divvy_2013/divvy_graph.pkl')
Explanation: In this visual, nodes are sorted from highest connectivity to lowest connectivity in the unfiltered graph.
Edges represent only trips that were taken >100 times between those two nodes.
Some things should be quite evident here. There are lots of trips between the highly connected nodes and other nodes, but there are local "high traffic" connections between stations of low connectivity as well (nodes in the top-right quadrant).
Saving NetworkX Graph Files
NetworkX's API offers many formats for storing graphs to disk. If you intend to work exclusively with NetworkX, then pickling the file to disk is probably the easiest way.
To write to disk:
nx.write_gpickle(G, handle)
To load from disk:
G = nx.read_gpickle(handle)
End of explanation
<END_TASK> |
15,726 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Objetivos
Step1: Linear
$$y=x$$
$$y^\prime = 1$$
Step2: Sigmoid
$$y = \frac{1}{1+e^{-x}}$$
$$y^\prime = y(1-y)$$
Step3: Tanh
$$y = \frac{e^x - e^{-x}}{e^x+e^{-x}}$$
$$y^\prime = 1 - y^2$$
Step4: Rectified Linear Unit (ReLU)
$$y = max(0, x)$$
$$\frac{\partial y}{\partial x} = \begin{cases}0 &,\ x \leq 0\1 &,\ x > 0\end{cases}$$
Obs.
Step5: Leaky ReLU
$$y = \begin{cases}\alpha x &,\ x \leq 0\x &,\ x > 0\end{cases}$$
$$\frac{\partial y}{\partial x} = \begin{cases}\alpha &,\ x \leq 0\1 &,\ x > 0\end{cases}$$
Step6: Exponential Linear Unit (eLU)
$$y = \begin{cases}\alpha(e^x -1) &,\ x \leq 0\x &,\ x > 0\end{cases}$$
$$\frac{\partial y}{\partial x} = \begin{cases}y + \alpha &,\ x \leq 0\1 &,\ x > 0\end{cases}$$ | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as wg
from ipywidgets import interactive, fixed
%matplotlib inline
def plot_interactive(w, b, func, ylim=fixed((0, 1)), show_der=False):
plt.figure(0)
x = np.linspace(-10, 10, num=1000)
z = w*x + b
y = func(z)
plt.plot(x, y, color='blue')
if show_der:
der = func(z, derivative=True)
y_der_z = der
y_der_x = w*der
plt.plot(x, y_der_z, color='red')
plt.plot(x, y_der_x, color='green')
plt.xlim(-10, 10)
plt.ylim(ylim[0], ylim[1])
plt.show()
Explanation: Objectives:
Implement the main activation functions
Intuitively understand how $w$ and $b$ influence the activation functions
Contents
Linear
Sigmoid
Tanh
Rectified Linear Unit (ReLU)
Leaky ReLU
Exponential Linear Unit (eLU)
Table of Activation Functions
References
End of explanation
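# NOTE: the interactive cells below reference activation functions (linear,
# sigmoid, tanh, relu, leaky_relu, elu) whose definitions are not shown in
# this extract. A minimal sketch of one possible implementation follows; the
# `derivative` flag matches how plot_interactive calls them, and the alpha
# defaults are assumptions rather than values taken from the original notebook.
def linear(z, derivative=False):
    return np.ones_like(z) if derivative else z
def sigmoid(z, derivative=False):
    y = 1.0 / (1.0 + np.exp(-z))
    return y * (1 - y) if derivative else y
def tanh(z, derivative=False):
    y = np.tanh(z)
    return 1 - y**2 if derivative else y
def relu(z, derivative=False):
    return np.where(z <= 0, 0.0, 1.0) if derivative else np.maximum(0, z)
def leaky_relu(z, alpha=0.1, derivative=False):
    return np.where(z <= 0, alpha, 1.0) if derivative else np.where(z <= 0, alpha * z, z)
def elu(z, alpha=1.0, derivative=False):
    y = np.where(z <= 0, alpha * (np.exp(z) - 1), z)
    return np.where(z <= 0, y + alpha, 1.0) if derivative else y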
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(linear), ylim=fixed((-10, 10)))
interactive_plot
Explanation: Linear
$$y=x$$
$$y^\prime = 1$$
End of explanation
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(sigmoid))
interactive_plot
Explanation: Sigmoid
$$y = \frac{1}{1+e^{-x}}$$
$$y^\prime = y(1-y)$$
End of explanation
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(tanh), ylim=fixed((-2, 2)))
interactive_plot
Explanation: Tanh
$$y = \frac{e^x - e^{-x}}{e^x+e^{-x}}$$
$$y^\prime = 1 - y^2$$
End of explanation
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(relu), ylim=fixed((-1, 10)))
interactive_plot
Explanation: Rectified Linear Unit (ReLU)
$$y = max(0, x)$$
$$\frac{\partial y}{\partial x} = \begin{cases}0 &,\ x \leq 0 \\ 1 &,\ x > 0\end{cases}$$
Note: Recall that the derivative of ReLU at x = 0 does not exist mathematically, but it is conventionally defined as 0.
End of explanation
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(leaky_relu), ylim=fixed((-1, 10)))
interactive_plot
Explanation: Leaky ReLU
$$y = \begin{cases}\alpha x &,\ x \leq 0 \\ x &,\ x > 0\end{cases}$$
$$\frac{\partial y}{\partial x} = \begin{cases}\alpha &,\ x \leq 0 \\ 1 &,\ x > 0\end{cases}$$
End of explanation
interactive_plot = interactive(plot_interactive, w=(-2.0, 2.0), b=(-3, 3, 0.5), func=fixed(elu), ylim=fixed((-2, 10)))
interactive_plot
Explanation: Exponential Linear Unit (eLU)
$$y = \begin{cases}\alpha(e^x -1) &,\ x \leq 0 \\ x &,\ x > 0\end{cases}$$
$$\frac{\partial y}{\partial x} = \begin{cases}y + \alpha &,\ x \leq 0 \\ 1 &,\ x > 0\end{cases}$$
End of explanation
<END_TASK> |
15,727 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data Bootcamp Final Project
Kavya Beheraj | May 13, 2016
The goal of my final project for Data Bootcamp was to develop useful data visualizations for my current internship, Student Success Network (SSN). I took two typical datasets currently used by SSN, extracted two variables for both males and females, and created two graphs comparing the network average to an individual organization's outcome. The purpose of such a visualization is to allow SSN's clients, education organizations across New York, to see how well they are doing in their students' social-emotional learning and make decisions on where to distribute resources for targeted programs or interventions.
Contents
Step1: 1 | Background
Student Success Network (SSN) is a nonprofit that helps 40 partner organizations measure seven indicators of social-emotional learning (SEL) in students using a survey
Step2: 3.1 | Slicing the Organizational Data
I noticed that the male outcomes consisted of the first 14 rows of the spreadsheet, while the female outcomes consisted of the last 14. The 14 rows of outcomes were each divided into 7 rows for the mean and 7 rows for the percent positive (called percent perfect in the dataframe).
I sliced the spreadsheet into four separate dataframes
Step3: 3.2 | Joining the Organizational Data
After creating four separate dataframes, I decided to concatenate them along the seven SEL indicators. I envisioned a problem in graphing both outcomes (mean and percent positive) within the same graph, since they had different scales, but seeing all of the data within one dataframe is easier to understand.
I created two dataframes summarizing the mean and percent positive (ppos), as well as one with both outcomes (meanppos).
Step4: 4 | Network Summary Data
I read in the male and female summary data for the entire network. I then extracted the data for males and females, avoiding the rows which had a blank for "isFemale". Unlike the organizational data, SSN network data has a separate column for percent positive, which meant that I did not have to create as many dataframes to get the same output.
I created two dataframes, one summarizing male network data (mlnet) and one for female network data (fmnet).
Step5: 5 | Slicing the Final Output
After I cleaned and sliced both the organization and network-wide data, I joined them along the SEL indicators and created separate dataframes for mean and percent positive.
Step6: 6 | Visualizing the Data
I created two visualizations of the data, one for percent positive and another for the mean. I compared male vs. female and the organization's data vs. the network-wide outcome.
6.1 | Visualizing the Percent Positive, Organization vs. Network
Step7: From the data above, we can see that this organization has a greater percentage of students who meet or exceed requirements for the 7 SEL indicators, except for Problem-Solving.
6.2 | Visualizing the Mean, Organization vs. Network | <ASSISTANT_TASK:>
Python Code:
import sys
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import seaborn as sns
import numpy as np
%matplotlib inline
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Data Bootcamp Final Project
Kavya Beheraj | May 13, 2016
The goal of my final project for Data Bootcamp was to develop useful data visualizations for my current internship, Student Success Network (SSN). I took two typical datasets currently used by SSN, extracted two variables for both males and females, and created two graphs comparing the network average to an individual organization's outcome. The purpose of such a visualization is to allow SSN's clients, education organizations across New York, to see how well they are doing in their students' social-emotional learning and make decisions on where to distribute resources for targeted programs or interventions.
Contents:
1. Background
2. About the Data
+ 2.1 | Links to My Data
+ 2.2 | Dataframes
3. Organizational Data
+ 3.1 | Slicing the Organizational Data
+ 3.2 | Joining the Organizational Data
4. Network Summary Data
5. Slicing the Final Output
6. Visualizing the Data
+ 6.1 | Visualizing the Percent Positive, Organization vs. Network
+ 6.2 | Visualizing the Mean, Organization vs. Network
End of explanation
MaleFemale = "/Users/kavyabeheraj/Desktop/Current Classes/Data Bootcamp/Male_Female_Sample_Org_Output.csv"
# Sample organizational output for males and females
df = pd.read_csv(MaleFemale)
df
Explanation: 1 | Background
Student Success Network (SSN) is a nonprofit that helps 40 partner organizations measure seven indicators of social-emotional learning (SEL) in students using a survey:
+ Academic Behaviors
+ Academic Self-Efficacy
+ Growth Mindset
+ Interpersonal Skills
+ Problem-Solving
+ Self-Advocacy
+ Belonging
Social-emotional learning has a huge impact on student outcomes later in life, often comparable to academic outcomes like test scores. SSN has developed a survey to measure social-emotional learning, which they distribute to their partner organizations.
SSN sends the survey responses to another company (the Research Alliance for NYC Schools) for the descriptive statistics, and what they receive is a large, unwieldy spreadsheet that they must translate into easy-to-understand and actionable visualizations. They also provide partner organizations with visualizations of specific subgroups like gender, race, and school, and compare the organization's results on SEL indicators to a network-wide average.
With this project, I attempted to create a uniform method of taking those spreadsheets and turning them into helpful data visualizations.
Source: SSN Website
2 | About the Data
I received my data directly from Student Success Network, and as such, it is not online. They gave me two spreadsheets in .csv format: a sample organizational output (real data from a de-identified partner organization) and a network wide average.
The most important outcomes to partner organizations are:
+ Percent positive: the percentage of students whose responses are positive for that SEL indicator
+ Mean: the average result on a scale of 1 to 5 for that SEL indicator
These two variables are the most helpful for organizations making decisions on resource allocation for student social-emotional learning.
I decided to focus on the gender subgroup for the purposes of this project, to reduce the number of different variables at play. Since all of the descriptive statistics they receive for each partner organization are within the same format, I believe SSN can easily translate this program to different subgroups.
2.1 | Links to my data:
Organizational Data - Males/Females
Network Summary Data - Males/Females
2.2 | Dataframes
These are the dataframes I created in order to extract percent positive and mean data for males and females at the organization and network-wide level. The final output is a dataframe summarizing the mean and percent positive for all variables at both levels.
Organization Data
| Label | Dataframe |
|:------|:----------|
| MaleFemale | Total Organization Data for Males and Females |
| mean | Mean for Males and Females |
| ppos | Percent Positive for Males and Females |
| male | Male Data (Mean and Percent Positive) |
| female | Female Data (Mean and Percent Positive) |
| mlmean | Mean Data for Males |
| mlppos | Percent Positive Data for Males |
| fmmean | Mean Data for Females |
| fmppos | Percent Positive Data for Females |
Network Data
| Label | Dataframe |
|:------|:----------|
| network | Total Network Data |
| mlnet | Male Network Data (Mean and Percent Positive) |
| fmnet | Female Network Data (Mean and Percent Positive) |
| output | Network and Organizational Data, Males and Females |
Total Data: Dataframes for both the organization and the network
| Label | Dataframe |
|:------|:----------|
| output | Network and Organizational Data, Males and Females |
| mean_output | Mean Data for Network and Organization, Males and Females |
| pp_output | Percent Positive Data for Network and Organization, Males and Females |
| ppos_net | Percent Positive Data for Network, Males and Females |
| mean_net | Mean Data for Network, Males and Females |
3 | Organizational Data
I first focused on cleaning up the Sample Organizational Output for Males and Females. I wanted to extract the two outcomes (percent positive and mean) for both males and females along the seven SEL indicators.
The results include data from 230 males and 334 females, or 564 students in total.
End of explanation
male = pd.read_csv(MaleFemale).head(14)
female = pd.read_csv(MaleFemale).tail(14)
mlmean = male.head(7) # Reads the first seven lines of the dataframe
mlmean = mlmean[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
mlmean = mlmean.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"SELF-ADVOCACY" : "SelfAd",
"BELONGING" : "Belong"},
columns={"Mean" : "Male Mean"})
mlmean
mlpp = male.tail(7) # Reads the last seven lines of the dataframe
mlpp = mlpp[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
mlpp = mlpp.rename(index={"Academic Behavior Percent Perfect" : "AcaBeh",
"Academic Self-efficacy Percent Perfect" : "AcaEf",
"Growth Mindset Percent Perfect" : "Growth",
"Interpersonal Skills Percent Perfect" : "Intp",
"Problem Solving Percent Perfect" : "Prob",
"SELF ADVOCACY PERCENT PERFECT" : "SelfAd",
"BELONGING PERCENT PERFECT ge 4" : "Belong"},
columns={"Mean" : "Male Percent Positive"})
mlpp
fmmean = female.head(7) # Reads the first seven lines of the dataframe
fmmean = fmmean[["Label","Mean"]].set_index("Label") # Slices only two columns and sets the index to be "Label"
fmmean = fmmean.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"SELF-ADVOCACY" : "SelfAd",
"BELONGING" : "Belong"},
columns={"Mean" : "Female Mean"})
fmmean
fmpp = female.tail(7)
fmpp = fmpp[["Label","Mean"]].set_index("Label")
fmpp = fmpp.rename(index={"Academic Behavior Percent Perfect" : "AcaBeh",
"Academic Self-efficacy Percent Perfect" : "AcaEf",
"Growth Mindset Percent Perfect" : "Growth",
"Interpersonal Skills Percent Perfect" : "Intp",
"Problem Solving Percent Perfect" : "Prob",
"SELF ADVOCACY PERCENT PERFECT" : "SelfAd",
"BELONGING PERCENT PERFECT ge 4" : "Belong"},
columns={"Mean" : "Female Percent Positive"})
fmpp
Explanation: 3.1 | Slicing the Organizational Data
I noticed that the male outcomes consisted of the first 14 rows of the spreadsheet, while the female outcomes consisted of the last 14. The 14 rows of outcomes were each divided into 7 rows for the mean and 7 rows for the percent positive (called percent perfect in the dataframe).
I sliced the spreadsheet into four separate dataframes:
+ Male Mean (mlmean)
+ Male Percent Positive (mlpp)
+ Female Mean (fmmean)
+ Female Percent Positive (fmpp)
I also set the SEL indicators as the index renamed all of them for consistency.
End of explanation
mean = pd.concat([mlmean, fmmean], axis=1)
mean
ppos = pd.concat([mlpp, fmpp], axis=1)
ppos
meanppos = pd.concat([mlpp, fmpp, mlmean, fmmean], axis=1)
meanppos
mean.plot.barh(figsize = (10,7))
Explanation: 3.2 | Joining the Organizational Data
After creating four separate dataframes, I decided to concatenate them along the seven SEL indicators. I envisioned a problem in graphing both outcomes (mean and percent positive) within the same graph, since they had different scales, but seeing all of the data within one dataframe is easier to understand.
I created two dataframes summarizing the mean and percent positive (ppos), as well as one with both outcomes (meanppos).
End of explanation
df2 = "/Users/kavyabeheraj/Desktop/Current Classes/Data Bootcamp/Network_Summary_Gender.csv"
network = pd.read_csv(df2)
network
mlnet = network.tail(7)
mlnet = mlnet[["label","mean", "percentPositive"]].set_index("label")
mlnet = mlnet.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"Self-Advocacy" : "SelfAd",
"Belonging" : "Belong"},
columns={"mean" : "Male Mean, Network",
"percentPositive" : "Male Percent Positive, Network"})
mlnet
fmnet = network[7:14]
fmnet = fmnet[["label","mean", "percentPositive"]].set_index("label")
fmnet = fmnet.rename(index={"Academic Behavior" : "AcaBeh",
"Academic Self-efficacy" : "AcaEf",
"Growth Mindset" : "Growth",
"Interpersonal Skills" : "Intp",
"Problem Solving" : "Prob",
"Self-Advocacy" : "SelfAd",
"Belonging" : "Belong"},
columns={"mean" : "Female Mean, Network", "percentPositive" : "Female Percent Positive, Network"})
fmnet
Explanation: 4 | Network Summary Data
I read in the male and female summary data for the entire network. I then extracted the data for males and females, avoiding the rows which had a blank for "isFemale". Unlike the organizational data, SSN network data has a separate column for percent positive, which meant that I did not have to create as many dataframes to get the same output.
I created two dataframes, one summarizing male network data (mlnet) and one for female network data (fmnet).
End of explanation
output = pd.concat([meanppos, fmnet, mlnet ], axis=1)
output
mean_output = output[["Male Mean",
"Female Mean",
"Male Mean, Network",
"Female Mean, Network"]]
mean_output
pp_output = output[["Male Percent Positive",
"Male Percent Positive, Network",
"Female Percent Positive",
"Female Percent Positive, Network"]]
pp_output
mean_net = output[["Male Mean, Network",
"Female Mean, Network"]]
mean_net
ppos_net = output[["Male Percent Positive, Network",
"Female Percent Positive, Network"]]
ppos_net
Explanation: 5 | Slicing the Final Output
After I cleaned and sliced both the organization and network-wide data, I joined them along the SEL indicators and created separate dataframes for mean and percent positive.
End of explanation
plt.style.use('seaborn-pastel')
ax = ppos_net.plot(linestyle='-', marker='o', colormap = "Accent")
ppos.plot(kind='bar', colormap = "Pastel2",
ax=ax,
figsize = (10,7))
ax.set_ylim(0, 0.8)
ax.set_title("Percent Positive Male and Female SEL Outcomes, Organization vs. Network")
Explanation: 6 | Visualizing the Data
I created two visualizations of the data, one for percent positive and another for the mean. I compared male vs. female and the organization's data vs. the network-wide outcome.
6.1 | Visualizing the Percent Positive, Organization vs. Network
End of explanation
plt.style.use('seaborn-pastel')
ax = mean_net.plot(linestyle='-', marker='o', colormap = "Accent")
mean.plot(kind='bar', colormap = "Pastel2",
ax=ax,
figsize = (10,7))
ax.set_ylim(0, 5)
ax.set_title("Mean Male and Female SEL Outcomes, Organization vs. Network")
Explanation: From the data above, we can see that this organization has a greater percentage of students who meet or exceed requirements for the 7 SEL indicators, except for Problem-Solving.
6.2 | Visualizing the Mean, Organization vs. Network
End of explanation
<END_TASK> |
15,728 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Week 4 - Python
Today we will cover some basic python techniques and structures that are really useful for analyzing data
Today's Agenda
Basics of Python
List Comprehension
Dictionaries
Functions
Classes
Basics of Python
The minimal Python script
Unlike many other languages, a simple Python script does not require any sort of header information in the code. So, we can look at the standard programming example, Hello World, in Python (below). Here we're simply printing to screen. If we put that single line into a blank file (called, say, HelloWorld.py) and then run that in the command line by typing 'python HelloWorld.py' the script should run with no problems. This also shows off the first Python function, print, which can be used to print strings or numbers.
Step1: There are, however, a few lines that you will usually see in a Python script. The first line often starts with #! and is called the shebang. For a Python script, an example of the shebang line would be "#!/usr/bin/env python"
Within Python, any line that starts with # is a comment, and won't be executed when running the script. The shebang, though, is there for the shell. If you run the script by calling python explicitly, then the script will be executed in Python. If, however, you want to make the script an executable (which can be run just by typing "./HelloWorld.py") then the shell won't know what language the script should be run in. This is the information included in the shebang line. You don't need it, in general, but it's a good habit to have in case you ever decide to run a script as an executable.
Another common thing at the start of a script is several lines that begin with 'import'. These lines let you import individual functions or entire modules (files that contain multiple functions). These can be those you write yourself, or things like numpy, matplotlib, etc.
Python variables
Some languages require that every variable be defined by a variable type. For example, in C++, you have to define a variable type first: a line like "int x" would define the variable x and specify that it be an integer. Python, however, uses dynamic typing. That means that variable types are entirely determined by what is stored in the variable.
In the below example, we can see a few things happening. First of all, we can see that x behaves initially as a number (specifically, an integer, which is why 42/4=10). However, we can put a string in there instead with no problems. However, we can't treat it as a number anymore and add to it.
Try commenting out the 5th line (print x+10) by adding a # to the front of that line, and we'll see that Python will still add strings to it.
Step2: Lists
The basic way for storing larger amounts of data in Python (and without using other modules like numpy) is Python's default option, lists. A list is, by its definition, one dimensional. If we'd like to store more dimensions, then we are using what are referred to as lists of lists. This is not the same thing as an array, which is what numpy will use. Let's take a look at what a list does.
We'll start off with a nice simple list below. Here the list stores integers. Printing it back, we get exactly what we expect. However, because it's being treated as a list, not an array, it gets a little bit weird when we try to do addition or multiplication. Feel free to try changing the operations that we're using and see what causes errors, and what causes unexpected results.
Step3: We can also set up a quick list if we want to using the range function. If we use just a single number, then we'll get a list of integers from 0 to 1 less than the number we gave it.
If we want a bit fancier of a list, then we can also include the number to start at (first parameter) and the step size (last parameter). All three of these have to be integers.
If we need it, we can also set up blank lists.
Step4: If we want to, we can refer to subsets of the list. For just a single term, we can just use the number corresponding to that position. An important thing with Python is that the list index starts at 0, not at 1, starting from the first term. If we're more concerned about the last number in the list, then we can use negative numbers as the index. The last item in the list is -1, the item before that is -2, and so on.
We can also select a set of numbers by using a
Step5: Modifying lists
The simplest change we can make to a list is to change it at a specific index just by redefining it, like in the second line in the code below.
There are three other handy ways to modify a list. append will add whatever we want as the next item in the list, but this means if we're adding more than a single value, it will add a list into our list, which may not always be what we want.
extend will expand the list to include the additional values, but only if it's a list, it won't work on a single integer (go ahead and try that).
Finally, insert will let us insert a value anywhere within the list. To do this, it requires a number for what spot in the list it should go, and also what we want to add into the list.
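A quick sketch of all three on a small list:
x = [1, 2, 3]
x.append(4)        # x is now [1, 2, 3, 4]
x.extend([5, 6])   # x is now [1, 2, 3, 4, 5, 6]
x.insert(0, -1)    # x is now [-1, 1, 2, 3, 4, 5, 6]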
Step6: Loops and List Comprehension
Like most languages, we can write loops in Python. One of the most standard loops is a for loop, so we'll focus on that one. Below is a 'standard' way of writing a 'for' loop. We'll do something simple, where all we want is to get the square of each number in the array.
Step7: While that loop works, even this pretty simple example can be condensed into something a bit shorter. We have to set up a blank list, and then after that, the loop itself was 3 lines, so just getting the squares of all these values took 4 lines. We can do it in one with list comprehension.
This is basically a different way of writing a for loop, and will return a list, so we don't have to set up an empty list for the results.
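For example, the squaring loop described here collapses to a single line:
squares = [value**2 for value in x]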
Step8: Dictionaries
Dictionaries are another way of storing a large amount of data in Python, except instead of being referenced by an ordered set of numbers like in a list, they are referenced by either strings/characters or numbers, referred to as keys.
Step9: These are particularly useful if you'll have a handful of values you'd like to call back to often. For an astronomy example, we can set up a dictionary that contains the absolute magnitude of the Sun in a bunch of bands (from Binney & Merrifield). We can now have a code that easily calls absolute magnitudes whenever needed using that dictionary.
We can also list out the dictionary, if needed, with AbMag.items(). There's some other tools for more advanced tricks with dictionaries, but this covers the basics.
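A small sketch of such a dictionary (the solar absolute magnitudes below are approximate; treat the exact values as illustrative):
AbMag = {'U': 5.61, 'B': 5.48, 'V': 4.83, 'R': 4.42, 'I': 4.08, 'K': 3.28}
print AbMag['V']     # 4.83
print AbMag.items()  # lists out the whole dictionary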
Step10: Functions
At a certain point you'll be writing the same bits of code over and over again. That means that if you want to update it, you'll have to update it in every single spot you did the same thing. This is.... less than optimal use of time, and it also means it's really easy to screw up by forgetting to keep one spot the same as the rest.
We can try out a function by writing a crude function for the sum of a geometric series.
$$\frac{1}{r} + \frac{1}{r^2} + \frac{1}{r^3} + \frac{1}{r^4} + \ldots $$
Conveniently, so long as r is larger than 1, there's a known solution to this series. We can use that to see that this function works.
$$ \frac{1}{r-1} $$
This means we can call the function repeatedly and not need to change anything. In this case, you can try using this GeoSum function for several different numbers (remember, r>1), and see how closely this works, by just changing TermValue
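A crude sketch of such a function, using the names from the text (the number of terms is an arbitrary cutoff, not from the original):
def GeoSum(r, nterms=100):
    total = 0.0
    for n in range(1, nterms + 1):
        total += 1.0 / r**n
    return total

TermValue = 3.0
print GeoSum(TermValue), 1.0 / (TermValue - 1.0)  # both should be ~0.5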
Step11: Classes
To steal a good line for this, "Classes can be thought of as blueprints for creating objects."
With a class, we can create an object with a whole set of properties that we can access. This can be very useful when you want to deal with many objects with the same set of parameters, rather than trying to keep track of related variables over multiple lists, or even just having a single object's properties all stored in some hard to manage list or dictionary.
Here we'll just use a class that's set up to do some basic math. Note that the class consists of several smaller functions inside of it. The first function, called init, is going to be run as soon as we create an object belonging to this class, and so that'll create two attributes to that object, value and square. The other function, powerraise, only gets called if we call it. Try adding some other subfunctions in there to try this out. They don't need to have anything new passed to them to be run. | <ASSISTANT_TASK:>
Python Code:
print "Hello World!"
Explanation: Week 4 - Python
Today we will cover some basic python techniques and structures that are really useful for analyzing data
Today's Agenda
Basics of Python
List Comprehension
Dictionaries
Functions
Classes
Basics of Python
The minimal Python script
Unlike many other languages, a simple Python script does not require any sort of header information in the code. So, we can look at the standard programming example, Hello World, in Python (below). Here we're simply printing to screen. If we put that single line into a blank file (called, say, HelloWorld.py) and then run that in the command line by typing 'python HelloWorld.py' the script should run with no problems. This also shows off the first Python function, print, which can be used to print strings or numbers.
End of explanation
x=42
print x+10
print x/4
x="42"
print x+10
print x+"10"
Explanation: There are, however, a few lines that you will usually see in a Python script. The first line often starts with #! and is called the shebang. For a Python script, an example of the shebang line would be "#!/usr/bin/env python"
Within Python, any line that starts with # is a comment, and won't be executed when running the script. The shebang, though, is there for the shell. If you run the script by calling python explicitly, then the script will be executed in Python. If, however, you want to make the script an executable (which can be run just by typing "./HelloWorld.py") then the shell won't know what language the script should be run in. This is the information included in the shebang line. You don't need it, in general, but it's a good habit to have in case you ever decide to run a script as an executable.
Another common thing at the starts of scripts is several lines that start with 'import'. These lines allow you to import individual functions or entire modules (files that contain multiple functions). These can be those you write yourself, or things like numpy, matplotlib, etc.
Python variables
Some languages require that every variable be declared with a type first. In C++, for example, a line like "int x" would define the variable x and specify that it be an integer. Python, however, uses dynamic typing. That means that variable types are entirely determined by what is stored in the variable.
In the example below, we can see a few things happening. First of all, x behaves initially as a number (specifically, an integer, which is why 42/4=10). We can then put a string in there instead with no problems; however, we can't treat it as a number anymore and add to it.
Try commenting out the 5th line (print x+10) by adding a # to the front of that line, and we'll see that Python will still add strings to it.
End of explanation
x=[1, 2, 3]
y=[4,5, 6]
print x
print x*2
print x+y
Explanation: Lists
The basic way for storing larger amounts of data in Python (and without using other modules like numpy) is Python's default option, lists. A list is, by its definition, one dimensional. If we'd like to store more dimensions, then we are using what are referred to as lists of lists. This is not the same thing as an array, which is what numpy will use. Let's take a look at what a list does.
We'll start off with a nice simple list below. Here the list stores integers. Printing it back, we get exactly what we expect. However, because it's being treated as a list, not an array, it gets a little bit weird when we try to do addition or multiplication. Feel free to try changing the operations that we're using and see what causes errors, and what causes unexpected results.
End of explanation
print range(10)
print range(20, 50, 3)
print []
Explanation: We can also set up a quick list if we want to using the range function. If we use just a single number, then we'll get a list of integers from 0 to 1 less than the number we gave it.
If we want a bit fancier of a list, then we can also include the number to start at (first parameter) and the step size (last parameter). All three of these have to be integers.
If we need it, we can also set up blank lists.
End of explanation
x=range(10)
print x
print "First value", x[0]
print "Last value", x[-1]
print "Fourth to sixth values", x[3:5]
Explanation: If we want to, we can refer to subsets of the list. For just a single term, we can just use the number corresponding to that position. An important thing with Python is that the list index starts at 0, not at 1, starting from the first term. If we're more concerned about the last number in the list, then we can use negative numbers as the index. The last item in the list is -1, the item before that is -2, and so on.
We can also select a set of numbers by using a : to separate list indices. If you use this, and don't specify first or last index, it will presume you meant the start or end of the list, respectively.
After you try running the sample examples below, try to get the following results (one possible set of answers is sketched in the snippet after this explanation):
* [6] (using two methods)
* [3,4,5,6]
* [0,1,2,3,4,5,6]
* [7,8,9]
End of explanation
x=[1,2,3,4,5]
x[2]=8
print x
print "Testing append"
x.append(6)
print x
x.append([7,8])
print x
print "testing extend"
x=[1,2,3,4,5]
#x.extend(6)
#print x
x.extend([7,8])
print x
print "testing insert"
x=[1,2,3,4,5]
x.insert(3, "in")
print x
Explanation: Modifying lists
The simplest change we can make to a list is to change it at a specific index just be redefining it, like in the second line in the code below.
There are three other handy ways to modify a list. append will add whatever we want as the next item in the list, but this means that if we're adding more than a single value, it will add a list into our list, which may not always be what we want.
extend will expand the list to include the additional values, but only if the argument is itself a list; it won't work on a single integer (go ahead and try that).
Finally, insert will let us insert a value anywhere within the list. To do this, it requires a number for what spot in the list it should go, and also what we want to add into the list.
End of explanation
x=range(1,11,1)
print x
x_2=[]
for i in x:
i_2=i*i
x_2.append(i_2)
print x_2
Explanation: Loops and List Comprehension
Like most languages, we can write loops in Python. One of the most standard loops is a for loop, so we'll focus on that one. Below is a 'standard' way of writing a 'for' loop. We'll do something simple, where all we want is to get the square of each number in the array.
End of explanation
x=range(1,11,1)
print x
x_2=[i*i for i in x]
print x_2
Explanation: While that loop works, even this pretty simple example can be condensed into something a bit shorter. We had to set up a blank list, and then the loop itself was 3 lines, so just getting the squares of all these values took 4 lines. We can do it in one line with list comprehension.
This is basically a different way of writing a for loop, and will return a list, so we don't have to set up an empty list for the results.
End of explanation
x={}
x['answer']=42
print x['answer']
Explanation: Dictionaries
Dictionaries are another way of storing a large amount of data in Python, except instead of being referenced by an ordered set of numbers like in a list, they are referenced by either strings/characters or numbers, referred to as keys.
End of explanation
AbMag={'U':5.61, 'B':5.48, 'V':4.83, 'R':4.42, 'I':4.08}
print AbMag['U']
print AbMag.items()
Explanation: These are particularly useful if you'll have a handful of values you'd like to call back to often. For an astronomy example, we can set up a dictionary that contains the absolute magnitude of the Sun in a bunch of bands (from Binney & Merrifield). We can now write code that easily looks up absolute magnitudes whenever needed using that dictionary.
We can also list out the dictionary, if needed, with AbMag.items(). There are some other tools for more advanced tricks with dictionaries (a couple are sketched below), but this covers the basics.
End of explanation
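# A couple of the other handy dictionary tools mentioned above (a quick sketch):
print AbMag.keys() # just the band names
print AbMag.get('K', 'not tabulated') # lookup with a default for missing keys
for band, mag in AbMag.items(): # looping over key/value pairs
    print band, mag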
def GeoSum(r):
powers=range(1,11,1) #set up a list for the exponents 1 to 10
terms=[(1./(r**x)) for x in powers] #calculate each term in the series
return sum(terms) #return the sum of the list
TermValue=2
print GeoSum(TermValue), (1.)/(TermValue-1)
Explanation: Functions
At a certain point you'll be writing the same bits of code over and over again. That means that if you want to update it, you'll have to update it in every single spot you did the same thing. This is.... less than optimal use of time, and it also means it's really easy to screw up by forgetting to keep one spot the same as the rest.
We can try out a function by writing a crude function for the sum of a geometric series.
$$\frac{1}{r} + \frac{1}{r^2} + \frac{1}{r^3} + \frac{1}{r^4} + \ldots $$
Conveniently, so long as r is larger than 1, there's a known solution to this series. We can use that to see that this function works.
$$ \frac{1}{r-1} $$
This means we can call the function repeatedly and not need to change anything. In this case, you can try using this GeoSum function for several different numbers (remember, r>1), and see how closely this works, by just changing TermValue
End of explanation
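# A quick sanity check over a few r values (all with r > 1), comparing the
# 10-term partial sum from GeoSum to the closed form 1/(r-1):
for r in [1.5, 2.0, 3.0, 10.0]:
    print r, GeoSum(r), (1.)/(r-1)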
class SampleClass:
def __init__(self, value): #run on initial setup of the class, provide a value
self.value = value
self.square = value**2
def powerraise(self, powerval): #only run when we call it, provide powerval
self.powerval=powerval
self.raisedpower=self.value**powerval
MyNum=SampleClass(3)
print MyNum.value
print MyNum.square
MyNum.powerraise(4)
print MyNum.powerval
print MyNum.raisedpower
print MyNum.value,'^',MyNum.powerval,'=',MyNum.raisedpower
Explanation: Classes
To steal a good line for this, "Classes can be thought of as blueprints for creating objects."
With a class, we can create an object with a whole set of properties that we can access. This can be very useful when you want to deal with many objects with the same set of parameters, rather than trying to keep track of related variables over multiple lists, or even just having a single object's properties all stored in some hard to manage list or dictionary.
Here we'll just use a class that's set up to do some basic math. Note that the class consists of several smaller functions inside of it. The first function, called __init__, is going to be run as soon as we create an object belonging to this class, and so that'll attach two attributes, value and square, to that object. The other function, powerraise, only gets called if we call it. Try adding some other subfunctions in there to try this out (one possible extra method is sketched below). They don't need to have anything new passed to them to be run.
End of explanation
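# One possible extra method, added via a small subclass so the SampleClass
# definition above stays unchanged (the names here are just for illustration):
class SampleClassWithCube(SampleClass):
    def cube(self): # nothing new needs to be passed in
        self.cubed = self.value**3
MyOtherNum = SampleClassWithCube(3)
MyOtherNum.cube()
print MyOtherNum.cubed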
<END_TASK> |
15,729 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | <ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin: Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print (accuracy_score(outcomes[:5], predictions))
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
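# A quick check of the 'Think' question above: predicting that all of the first
# five passengers survived gives an accuracy equal to the fraction of them that
# actually survived, which is just the mean of their outcomes.
print (outcomes[:5].mean())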
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print (accuracy_score(outcomes, predictions))
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append(1 if passenger['Sex'] == 'female' else 0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print (accuracy_score(outcomes, predictions))
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predicting_data = 0
predicting_data = 1 if passenger['Sex'] == 'female' else predicting_data
predicting_data = 1 if passenger['Sex'] == 'male' and passenger['Age'] < 10 else predicting_data
predictions.append(predicting_data)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print (accuracy_score(outcomes, predictions))
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predicting_data = 0
predicting_data = 1 if passenger['Sex'] == 'female' else predicting_data
predicting_data = 1 if passenger['Sex'] == 'male' and passenger['Age'] < 10 else predicting_data
predictions.append(predicting_data)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print (accuracy_score(outcomes, predictions))
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
<END_TASK> |
15,730 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
----- IMPORTANT ------
The code presented here assumes that you're running TensorFlow v1.3.0 or higher, this was not released yet so the easiet way to run this is update your TensorFlow version to TensorFlow's master.
To do that go here and then execute
Step1: 1) Simple Linear Regression with low-level TensorFlow
Generating data
This function creates a noisy dataset that's roughly linear, according to the equation y = mx + b + noise.
Notice that the expected value for m is 0.1 and for b is 0.3. This is the values we expect the model to predict.
Step2: Create training data
Step3: Plot the training data
Step4: The Model
Step5: The Loss and Optimizer
Define a loss function (here, squared error) and an optimizer (here, gradient descent).
Step6: The Training Loop and generating predictions
Step7: Visualizing predictions
Step8: What is the final weight and bias?
Step9: 2) Simple Linear Regression with a canned estimator
Input Pipeline
Step10: Describe input feature usage
Step11: Build and train the model
Step12: Generating and visualizing predictions
Step13: 3) Playing with real data
Step14: Load the data
Step15: Input pipeline
Step16: Feature description
Step17: Evaluate the model
Step18: DNN model
Update input pre-processing
Step19: Custom Input Pipeline using Datasets API
Read the data
Step20: Try the input function
Step21: 4) Building a custom estimator to classify handwritten digits (MNIST)
Image from
Step22: tf.estimator.LinearClassifier
Step23: Examine the results with TensorBoard
$> tensorboard --logdir mnnist/DNN
Step24: A Custom Model
Step25: Runs estimator
Step26: Distributed tensorflow | <ASSISTANT_TASK:>
Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
# tensorflow
import tensorflow as tf
print('Expected TensorFlow version is v1.3.0 or higher')
print('Your TensorFlow version:', tf.__version__)
# data manipulation
import numpy as np
import pandas as pd
# visualization
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = [12,8]
Explanation: ----- IMPORTANT ------
The code presented here assumes that you're running TensorFlow v1.3.0 or higher. That version had not been released yet at the time of writing, so the easiest way to run this is to update your TensorFlow installation to TensorFlow's master.
To do that go here and then execute:
pip install --ignore-installed --upgrade <URL for the right binary for your machine>.
For example, considering a Linux CPU-only running python2:
pip install --upgrade https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-1.2.1-cp27-none-linux_x86_64.whl
Here is a walk-through to help you get started with TensorFlow:
1) Simple Linear Regression with low-level TensorFlow
2) Simple Linear Regression with a canned estimator
3) Playing with real data: linear regressor and DNN
4) Building a custom estimator to classify handwritten digits (MNIST)
What's next?
Dependencies
End of explanation
def make_noisy_data(m=0.1, b=0.3, n=100):
x = np.random.randn(n)
noise = np.random.normal(scale=0.01, size=len(x))
y = m * x + b + noise
return x, y
Explanation: 1) Simple Linear Regression with low-level TensorFlow
Generating data
This function creates a noisy dataset that's roughly linear, according to the equation y = mx + b + noise.
Notice that the expected value for m is 0.1 and for b is 0.3. These are the values we expect the model to predict.
End of explanation
x_train, y_train = make_noisy_data()
Explanation: Create training data
End of explanation
plt.plot(x_train, y_train, 'b.')
Explanation: Plot the training data
End of explanation
# input and output
x = tf.placeholder(shape=[None], dtype=tf.float32, name='x')
y_label = tf.placeholder(shape=[None], dtype=tf.float32, name='y_label')
# variables
W = tf.Variable(tf.random_normal([1], name="W")) # weight
b = tf.Variable(tf.random_normal([1], name="b")) # bias
# actual model
y = W * x + b
Explanation: The Model
End of explanation
loss = tf.reduce_mean(tf.square(y - y_label))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(loss)
Explanation: The Loss and Optimizer
Define a loss function (here, squared error) and an optimizer (here, gradient descent).
End of explanation
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init) # initialize variables
for i in range(100): # train for 100 steps
sess.run(train, feed_dict={x: x_train, y_label:y_train})
x_plot = np.linspace(-3, 3, 101) # return evenly spaced numbers over a specified interval
# using the trained model to predict values for the training data
y_plot = sess.run(y, feed_dict={x: x_plot})
# saving final weight and bias
final_W = sess.run(W)
final_b = sess.run(b)
Explanation: The Training Loop and generating predictions
End of explanation
plt.scatter(x_train, y_train)
plt.plot(x_plot, y_plot, 'g')
Explanation: Visualizing predictions
End of explanation
print('W:', final_W, 'expected: 0.1')
print('b:', final_b, 'expected: 0.3')
Explanation: What is the final weight and bias?
End of explanation
x_dict = {'x': x_train}
train_input = tf.estimator.inputs.numpy_input_fn(x_dict, y_train,
shuffle=True,
num_epochs=None) # repeat forever
Explanation: 2) Simple Linear Regression with a canned estimator
Input Pipeline
End of explanation
features = [tf.feature_column.numeric_column('x')] # because x is a real number
Explanation: Describe input feature usage
End of explanation
estimator = tf.estimator.LinearRegressor(features)
estimator.train(train_input, steps = 1000)
Explanation: Build and train the model
End of explanation
x_test_dict = {'x': np.linspace(-5, 5, 11)}
data_source = tf.estimator.inputs.numpy_input_fn(x_test_dict, shuffle=False)
predictions = list(estimator.predict(data_source))
preds = [p['predictions'][0] for p in predictions]
for y in predictions:
print(y['predictions'])
plt.scatter(x_train, y_train)
plt.plot(x_test_dict['x'], preds, 'g')
Explanation: Generating and visualizing predictions
End of explanation
census_train_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
census_train_path = tf.contrib.keras.utils.get_file('census.train', census_train_url)
census_test_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test'
census_test_path = tf.contrib.keras.utils.get_file('census.test', census_test_url)
Explanation: 3) Playing with real data: linear regressor and DNN
Get the data
The Adult dataset is from the Census bureau and the task is to predict whether a given adult makes more than $50,000 a year based on attributes such as education, hours of work per week, etc.
But the code presented here can easily be applied to any CSV dataset that fits in memory.
More about the data here
End of explanation
column_names = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
'income'
]
census_train = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_test = pd.read_csv(census_train_path, index_col=False, names=column_names)
census_train_label = census_train.pop('income') == " >50K"
census_test_label = census_test.pop('income') == " >50K"
census_train.head(10)
census_train_label[:20]
Explanation: Load the data
End of explanation
train_input = tf.estimator.inputs.pandas_input_fn(
census_train,
census_train_label,
shuffle=True,
batch_size = 32, # process 32 examples at a time
num_epochs=None,
)
test_input = tf.estimator.inputs.pandas_input_fn(
census_test,
census_test_label,
shuffle=True,
num_epochs=1)
features, labels = train_input()
features
Explanation: Input pipeline
End of explanation
features = [
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('education-num'), list(range(25))),
tf.feature_column.categorical_column_with_vocabulary_list('sex', ['male','female']),
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000),
]
estimator = tf.estimator.LinearClassifier(features, model_dir='census/linear',n_classes=2)
estimator.train(train_input, steps=5000)
Explanation: Feature description
End of explanation
estimator.evaluate(test_input)
Explanation: Evaluate the model
End of explanation
features = [
tf.feature_column.numeric_column('education-num'),
tf.feature_column.numeric_column('hours-per-week'),
tf.feature_column.numeric_column('age'),
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list('sex',['male','female'])),
tf.feature_column.embedding_column( # now using embedding!
tf.feature_column.categorical_column_with_hash_bucket('native-country', 1000), 10)
]
estimator = tf.estimator.DNNClassifier(hidden_units=[20,20],
feature_columns=features,
n_classes=2,
model_dir='census/dnn')
estimator.train(train_input, steps=5000)
estimator.evaluate(test_input)
Explanation: DNN model
Update input pre-processing
End of explanation
def census_input_fn(path):
def input_fn():
dataset = (
tf.contrib.data.TextLineDataset(path)
.map(csv_decoder)
.shuffle(buffer_size=100)
.batch(32)
.repeat())
columns = dataset.make_one_shot_iterator().get_next()
income = tf.equal(columns.pop('income')," >50K")
return columns, income
return input_fn
csv_defaults = collections.OrderedDict([
('age',[0]),
('workclass',['']),
('fnlwgt',[0]),
('education',['']),
('education-num',[0]),
('marital-status',['']),
('occupation',['']),
('relationship',['']),
('race',['']),
('sex',['']),
('capital-gain',[0]),
('capital-loss',[0]),
('hours-per-week',[0]),
('native-country',['']),
('income',['']),
])
def csv_decoder(line):
parsed = tf.decode_csv(line, csv_defaults.values())
return dict(zip(csv_defaults.keys(), parsed))
Explanation: Custom Input Pipeline using Datasets API
Read the data
End of explanation
tf.reset_default_graph()
census_input = census_input_fn(census_train_path)
training_batch = census_input()
with tf.Session() as sess:
features, high_income = sess.run(training_batch)
print(features['education'])
print(features['age'])
print(high_income)
Explanation: Try the input function
End of explanation
train,test = tf.contrib.keras.datasets.mnist.load_data()
x_train,y_train = train
x_test,y_test = test
mnist_train_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_train, dtype=np.float32)},
np.array(y_train,dtype=np.int32),
shuffle=True,
num_epochs=None)
mnist_test_input = tf.estimator.inputs.numpy_input_fn({'x':np.array(x_test, dtype=np.float32)},
np.array(y_test,dtype=np.int32),
shuffle=True,
num_epochs=1)
Explanation: 4) Building a custom estimator to classify handwritten digits (MNIST)
Image from: http://rodrigob.github.io/are_we_there_yet/build/images/mnist.png?1363085077
End of explanation
estimator = tf.estimator.LinearClassifier([tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/linear")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
Explanation: tf.estimator.LinearClassifier
End of explanation
estimator = tf.estimator.DNNClassifier(hidden_units=[256],
feature_columns=[tf.feature_column.numeric_column('x',shape=784)],
n_classes=10,
model_dir="mnist/DNN")
estimator.train(mnist_train_input, steps = 10000)
estimator.evaluate(mnist_test_input)
# Parameters
BATCH_SIZE = 128
STEPS = 10000
Explanation: Examine the results with TensorBoard
$> tensorboard --logdir mnist/DNN
End of explanation
def build_cnn(input_layer, mode):
with tf.name_scope("conv1"):
conv1 = tf.layers.conv2d(inputs=input_layer,filters=32, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool1"):
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
with tf.name_scope("conv2"):
conv2 = tf.layers.conv2d(inputs=pool1,filters=64, kernel_size=[5, 5],
padding='same', activation=tf.nn.relu)
with tf.name_scope("pool2"):
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
with tf.name_scope("dense"):
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
with tf.name_scope("dropout"):
is_training_mode = mode == tf.estimator.ModeKeys.TRAIN
dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=is_training_mode)
logits = tf.layers.dense(inputs=dropout, units=10)
return logits
def model_fn(features, labels, mode):
# Describing the model
input_layer = tf.reshape(features['x'], [-1, 28, 28, 1])
tf.summary.image('mnist_input',input_layer)
logits = build_cnn(input_layer, mode)
# Generate Predictions
classes = tf.argmax(input=logits, axis=1)
predictions = {
'classes': classes,
'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
}
if mode == tf.estimator.ModeKeys.PREDICT:
# Return an EstimatorSpec object
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
with tf.name_scope('loss'):
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_sum(loss)
tf.summary.scalar('loss', loss)
with tf.name_scope('accuracy'):
accuracy = tf.cast(tf.equal(tf.cast(classes,tf.int32),labels),tf.float32)
accuracy = tf.reduce_mean(accuracy)
tf.summary.scalar('accuracy', accuracy)
# Configure the Training Op (for TRAIN mode)
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=1e-4,
optimizer='Adam')
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, train_op=train_op)
# Configure the accuracy metric for evaluation
eval_metric_ops = {
        'accuracy': tf.metrics.accuracy(
            labels=labels,
            predictions=classes)
}
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, eval_metric_ops=eval_metric_ops)
Explanation: A Custom Model
End of explanation
# create estimator
run_config = tf.contrib.learn.RunConfig(model_dir='mnist/CNN')
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
# train for 10000 steps
estimator.train(input_fn=mnist_train_input, steps=10000)
# evaluate
estimator.evaluate(input_fn=mnist_test_input)
# predict
preds = estimator.predict(input_fn=mnist_test_input)
Explanation: Runs estimator
End of explanation
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Enable TensorFlow logs
tf.logging.set_verbosity(tf.logging.INFO)
# create experiment
def experiment_fn(run_config, hparams):
# create estimator
estimator = tf.estimator.Estimator(model_fn=model_fn,
config=run_config)
return tf.contrib.learn.Experiment(
estimator,
        train_input_fn=mnist_train_input,
        eval_input_fn=mnist_test_input,
train_steps=STEPS
)
# run experiment
learn_runner.run(experiment_fn,
run_config=run_config)
Explanation: Distributed tensorflow: using experiments
End of explanation
<END_TASK> |
15,731 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Machine Intelligence II - Team MensaNord
Sheet 11
Nikolai Zaki
Alexander Moore
Johannes Rieke
Georg Hoelger
Oliver Atanaszov
Step1: Exercise 1
Load the data into a vector and normalize it such that the values are between 0 and 1.
Create two new datasets by adding Gaussian noise with zero mean and standard deviation
σ N ∈ {0.05, 0.1}.
Step2: Create a figure showing the 3 histograms (original & 2 sets of noise corrupted data – use
enough bins!). In an additional figure, show the three corresponding empirical distribution
functions in one plot.
Step4: Take a subset of P = 100 observations and estimate the probability density p̂ of intensities
with a rectangular kernel (“gliding window”) parametrized by window width h.
Plot the estimates p̂ resulting for (e.g. 10) different samples of size P
Step6: $P(\underline{x}) = \frac{1}{h^n} \frac{1}{p} \Sigma_{\alpha=1}^{p} H(\frac{\underline{x} - \underline{x}^{(\alpha)}}{h})$
Step7: Calculate the negative log-likelihood per datapoint of your estimator using 5000
samples from the data not used for the density estimation (i.e. the “test-set”). Get
the average of the negative log-likelihood over the 10 samples.
$P({\underline{x}^{(\alpha)}};\underline{w}) = - \Sigma_{\alpha=1}^{p} ln P(\underline{x}^{(\alpha)};\underline{w})$
Step8: 2) Repeat this procedure (without plotting) for a sequence of kernel widths h to get the mean
log likelihood (averaged over the different samples) resulting for each value of h.
(a) Apply this procedure to all 3 datasets (original and the two noise-corruped ones) to make
a plot showing the obtained likelihoods (y-axis) vs. kernel width h (x-axis) as one line for
each dataset.
Step9: not plotted points have value = inf because
Step11: (c) Repeat the previous steps (a & b) for the Gaussian kernel with σ^2 = h.
Step12: Exercise 2
1.1 Create dataset
Step14: 1.2 Run Expectation-Maximization algorithm
See slide 18 of the lecture for an outline of the algorithm.
Step15: 1.3 Run K-means algorithm
For simplicity, we use the sklearn version of K-means here. The detailed algorithm was already implemented in a previous exercise.
Step16: K-means clusters the data point by establishing a straight separation line. This cannot fully capture the nature of the data, e.g. the points around the lower left Gaussian, which actually belong to the upper right Gaussian.
1.4 Initialize EM algorithm with cluster parameters from K-Means
Step17: 1.5 Repeat analysis for different $\sigma_1$ values | <ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats
import numpy as np
from scipy.ndimage import imread
import sys
Explanation: Machine Intelligence II - Team MensaNord
Sheet 11
Nikolai Zaki
Alexander Moore
Johannes Rieke
Georg Hoelger
Oliver Atanaszov
End of explanation
# import image
img_orig = imread('testimg.jpg').flatten()
print("$img_orig")
print("shape: \t\t", img_orig.shape) # = vector
print("values: \t from ", img_orig.min(), " to ", img_orig.max(), "\n")
# "img" holds 3 vectors
img = np.zeros((3,img_orig.shape[0]))
print("$img")
print("shape: \t\t",img.shape)
std = [0, 0.05, 0.1]
for i in range(img.shape[1]):
# normalize => img[0]
img[0][i] = img_orig[i] / 255
# gaussian noise => img[1] img[2]
img[1][i] = img[0][i] + np.random.normal(0, std[1])
img[2][i] = img[0][i] + np.random.normal(0, std[2])
print(img[:, 0:4])
Explanation: Exercise 1
Load the data into a vector and normalize it such that the values are between 0 and 1.
Create two new datasets by adding Gaussian noise with zero mean and standard deviation
σ_N ∈ {0.05, 0.1}.
End of explanation
# histograms
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for i, ax in enumerate(axes.flatten()):
plt.sca(ax)
plt.hist(img[i], 100, normed=1, alpha=0.75)
plt.xlim(-0.1, 1.1)
plt.ylim(0, 18)
plt.xlabel("value")
plt.ylabel("probability")
plt.title('img[{}]'.format(i))
# divide probability space in 100 bins
nbins = 100
bins = np.linspace(0, 1, nbins+1)
# holds data equivalent to the histograms shown above (but cut to the range 0 to 1)
elementsPerBin = np.zeros((3,nbins))
for i in range(3):
ind = np.digitize(img[i], bins)
elementsPerBin[i] = [len(img[i][ind == j]) for j in range(nbins)]
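    # note: np.digitize maps values in [0, 1) to bin indices 1..nbins, so index 0 in the
    # comprehension above only collects values pushed below 0 by the added noise - this is
    # the clipping mentioned in the comment above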
# counts number of elements from bin '0' to bin 'j'
sumUptoBinJ = np.asarray([[0 for i in range(nbins)] for i in range(3)])
for i in range(3):
for j in range(nbins):
sumUptoBinJ[i][j] = np.sum(elementsPerBin[i][0:j+1])
# plot
plt.figure(figsize=(15, 5))
for i in range(3):
plt.plot(sumUptoBinJ[i], '.-')
plt.legend(['img[0]', 'img[1]', 'img[2]'])
plt.xlabel('bin')
plt.ylabel('empirical distribution functions');
Explanation: Create a figure showing the 3 histograms (original & 2 sets of noise corrupted data – use
enough bins!). In an additional figure, show the three corresponding empirical distribution
functions in one plot.
End of explanation
def H(vec, h):
    """(rectangular) histogram kernel function"""
vec = np.asarray(vec)
return np.asarray([1 if abs(x)<.5 else 0 for x in vec])
Explanation: Take a subset of P = 100 observations and estimate the probability density p̂ of intensities
with a rectangular kernel (“gliding window”) parametrized by window width h.
Plot the estimates p̂ resulting for (e.g. 10) different samples of size P
End of explanation
def P_est(x, h, data, kernel = H):
    """returns the probability that data contains values @ (x +- h/2)"""
    n = 1 #= data.shape[1] #number of dimensions (for multidimensional data)
p = len(data)
return 1/(h**n)/p*np.sum(kernel((data - x)/h, h))
# take 10 data sets with 100 observations (indexes 100k to 101k)
# nomenclature: data_3(3, 10, 100) holds 3 times data(10, 100)
P = 100
offset = int(100000)
data_3 = np.zeros((3, 10,P))
for j in range(3):
for i in range(10):
data_3[j][i] = img[j][offset+i*P:offset+(i+1)*P]
print(data_3.shape)
# calculate probability estimation for (center +- h/2) on the 10 data sets
h = .15
nCenters = 101
Centers = np.linspace(0,1,nCenters)
fig, ax = plt.subplots(2,5,figsize=(15,6))
ax = ax.ravel()
for i in range(10):
ax[i].plot([P_est(center,h,data_3[0][i]) for center in Centers])
Explanation: $P(\underline{x}) = \frac{1}{h^n} \frac{1}{p} \sum_{\alpha=1}^{p} H(\frac{\underline{x} - \underline{x}^{(\alpha)}}{h})$
End of explanation
testdata = img[0][50000:55000]
# calculate average negative log likelihood for
def avg_NegLL(data, h, kernel=H):
sys.stdout.write(".")
average = 0
for i in range(10):
L_prob = [np.log(P_est(x,h,data[i],kernel)) for x in testdata]
negLL = -1*np.sum(L_prob)
average += negLL
average /= 10
return average
Explanation: Calculate the negative log-likelihood per datapoint of your estimator using 5000
samples from the data not used for the density estimation (i.e. the “test-set”). Get
the average of the negative log-likelihood over the 10 samples.
$-\ln P(\{\underline{x}^{(\alpha)}\};\underline{w}) = - \sum_{\alpha=1}^{p} \ln P(\underline{x}^{(\alpha)};\underline{w})$
End of explanation
hs = np.linspace(0.001, 0.999, 20)
def plot_negLL(data_3=data_3, kernel=H):
fig = plt.figure(figsize=(12,8))
for j in range(3):
print("calc data[{}]".format(j))
LLs = [avg_NegLL(data_3[j],h,kernel=kernel) for h in hs]
plt.plot(hs,LLs)
print()
plt.legend(['img[0]', 'img[1]', 'img[2]'])
plt.show()
plot_negLL()
Explanation: 2) Repeat this procedure (without plotting) for a sequence of kernel widths h to get the mean
log likelihood (averaged over the different samples) resulting for each value of h.
(a) Apply this procedure to all 3 datasets (original and the two noise-corrupted ones) to make
a plot showing the obtained likelihoods (y-axis) vs. kernel width h (x-axis) as one line for
each dataset.
End of explanation
P = 500
data_3b = np.zeros((3, 10,P))
for j in range(3):
for i in range(10):
data_3b[j][i] = img[j][offset+i*P:offset+(i+1)*P]
plot_negLL(data_3=data_3b)
Explanation: points that are not plotted have the value inf, because:
$negLL = - \log( \Pi_\alpha P(x^\alpha,w) )$
so if even a single $P(x^\alpha,w) = 0$ occurs (x has 5000 elements),
the result is $-\log(0) = \infty$ (not defined)
this only occurs with the histogram kernel.
(b) Repeat the previous step (LL & plot) for samples of size P = 500.
End of explanation
def Gaussian(x,h):
    """gaussian kernel function"""
return np.exp(-x**2/h/2)/np.sqrt(2*np.pi*h)
fig, ax = plt.subplots(2,5,figsize=(15,6))
h = .15
ax = ax.ravel()
for i in range(10):
ax[i].plot([P_est(center,h,data_3[0][i],kernel=Gaussian) for center in Centers])
hs = np.linspace(0.001, 0.4, 20)
plot_negLL(kernel=Gaussian)
plot_negLL(data_3=data_3b, kernel=Gaussian)
Explanation: (c) Repeat the previous steps (a & b) for the Gaussian kernel with σ^2 = h.
End of explanation
M = 2
w1, w2 = [2,2], [1,1] # means
sigma2 = 0.2 # standard deviations
N = 100
P1, P2 = 2/3, 1/3
def create_data(sigma1=0.7):
X = np.zeros((N, 2))
which_gaussian = np.zeros(N)
for n in range(N):
if np.random.rand() < P1: # sample from first Gaussian
X[n] = np.random.multivariate_normal(w1, np.eye(len(w1)) * sigma1**2)
which_gaussian[n] = 0
else: # sample from second Gaussian
X[n] = np.random.multivariate_normal(w2, np.eye(len(w2)) * sigma2**2)
which_gaussian[n] = 1
return X, which_gaussian
sigma1 = 0.7
X, which_gaussian = create_data(sigma1)
def plot_data(X, which_gaussian, centers, stds):
plt.scatter(*X[which_gaussian == 0].T, c='r', label='Cluster 1')
plt.scatter(*X[which_gaussian == 1].T, c='b', label='Cluster 2')
plt.plot(centers[0][0], centers[0][1], 'k+', markersize=15, label='Centers')
plt.plot(centers[1][0], centers[1][1], 'k+', markersize=15)
plt.gca().add_artist(plt.Circle(centers[0], stds[0], ec='k', fc='none'))
plt.gca().add_artist(plt.Circle(centers[1], stds[1], ec='k', fc='none'))
plt.xlabel('x1')
plt.ylabel('x2')
plt.legend()
plot_data(X, which_gaussian, [w1, w2], [sigma1, sigma2])
plt.title('Ground truth')
Explanation: Exercise 2
1.1 Create dataset
End of explanation
from scipy.stats import multivariate_normal
def variance(X):
    """Calculate a single variance value for the vectors in X."""
mu = X.mean(axis=0)
return np.mean([np.linalg.norm(x - mu)**2 for x in X])
def run_expectation_maximization(X, w=None, sigma_squared=None, verbose=False):
# Initialization.
P_prior = np.ones(2) * 1 / M
P_likelihood = np.zeros((N, M))
P_posterior = np.zeros((M, N))
mu = X.mean(axis=0) # mean of the original data
var = variance(X) # variance of the original data
if w is None:
w = np.array([mu + np.random.rand(M) - 0.5, mu + np.random.rand(M) - 0.5])
if sigma_squared is None:
sigma_squared = np.array([var + np.random.rand() - 0.5,var + np.random.rand() - 0.5])
#sigma_squared = np.array([var, var])
if verbose:
print('Initial centers:', w)
print('Initial variances:', sigma_squared)
print()
print()
theta = 0.001
distance = np.inf
step = 0
# Optimization loop.
while distance > theta:
#for i in range(1):
step += 1
if verbose:
print('Step', step)
print('-'*50)
# Store old parameter values to calculate distance later on.
w_old = w.copy()
sigma_squared_old = sigma_squared.copy()
P_prior_old = P_prior.copy()
if verbose:
print('Distances of X[0] to proposed centers:', np.linalg.norm(X[0] - w[0]), np.linalg.norm(X[0] - w[1]))
# E-Step: Calculate likelihood for each data point.
for (alpha, q), _ in np.ndenumerate(P_likelihood):
P_likelihood[alpha, q] = multivariate_normal.pdf(X[alpha], w[q], sigma_squared[q])
if verbose:
print('Likelihoods of X[0]:', P_likelihood[0])
# E-Step: Calculate assignment probabilities (posterior) for each data point.
for (q, alpha), _ in np.ndenumerate(P_posterior):
P_posterior[q, alpha] = (P_likelihood[alpha, q] * P_prior[q]) / np.sum([P_likelihood[alpha, r] * P_prior[r] for r in range(M)])
if verbose:
print('Assignment probabilities of X[0]:', P_posterior[:, 0])
print()
distance = 0
# M-Step: Calculate new parameter values.
for q in range(M):
w[q] = np.sum([P_posterior[q, alpha] * X[alpha] for alpha in range(N)], axis=0) / np.sum(P_posterior[q])
#print(np.sum([P_posterior[q, alpha] * X[alpha] for alpha in range(N)], axis=0))
#print(np.sum(P_posterior[q]))
w_distance = np.linalg.norm(w[q] - w_old[q])
if verbose:
print('Distance of centers:', w_distance)
distance = max(distance, w_distance)
sigma_squared[q] = 1 / M * np.sum([np.linalg.norm(X[alpha] - w_old[q])**2 * P_posterior[q, alpha] for alpha in range(N)]) / np.sum(P_posterior[q])
sigma_squared_distance = np.abs(sigma_squared[q] - sigma_squared_old[q])
if verbose:
print('Distance of variances:', sigma_squared_distance)
distance = max(distance, sigma_squared_distance)
P_prior[q] = np.mean(P_posterior[q])
P_prior_distance = np.abs(P_prior[q] - P_prior_old[q])
if verbose:
print('Distance of priors:', P_prior_distance)
distance = max(distance, P_prior_distance)
if verbose:
print('Maximum distance:', distance)
print()
print('New centers:', w)
print('New variances:', sigma_squared)
print('New priors:', P_prior)
print('='*50)
print()
which_gaussian_EM = P_posterior.argmax(axis=0)
return which_gaussian_EM, w, np.sqrt(sigma_squared), step
which_gaussian_em, cluster_centers_em, cluster_stds_em, num_steps_em = run_expectation_maximization(X, verbose=True)
plot_data(X, which_gaussian_em, cluster_centers_em, cluster_stds_em)
plt.title('Predicted by Expectation-Maximization')
Explanation: 1.2 Run Expectation-Maximization algorithm
See slide 18 of the lecture for an outline of the algorithm.
End of explanation
from sklearn.cluster import KMeans
def run_k_means(X):
km = KMeans(2)
km.fit(X)
which_gaussian_km = km.predict(X)
cluster_stds = np.array([np.sqrt(variance(X[which_gaussian_km == 0])), np.sqrt(variance(X[which_gaussian_km == 1]))])
return which_gaussian_km, km.cluster_centers_, cluster_stds
which_gaussian_km, cluster_centers_km, cluster_stds_km = run_k_means(X)
plot_data(X, which_gaussian_km, cluster_centers_km, cluster_stds_km)
plt.title('Predicted by K-Means')
Explanation: 1.3 Run K-means algorithm
For simplicity, we use the sklearn version of K-means here. The detailed algorithm was already implemented in a previous exercise.
End of explanation
_, _, _, num_steps_em_km = run_expectation_maximization(X, cluster_centers_km, cluster_stds_km**2)
print('Took', num_steps_em, 'steps with random initialization')
print('Took', num_steps_em_km, 'steps with initialization from K-means')
Explanation: K-means clusters the data points by establishing a straight separation line. This cannot fully capture the nature of the data, e.g. the points around the lower left Gaussian, which actually belong to the upper right Gaussian.
1.4 Initialize EM algorithm with cluster parameters from K-Means
End of explanation
sigma1s = [0.1, 0.5, 1, 1.5]
fig, axes = plt.subplots(len(sigma1s), 3, figsize=(15, 15), sharex=True, sharey=True)
for i, (sigma1, horizontal_axes) in enumerate(zip(sigma1s, axes)):
X, which_gaussian = create_data(sigma1)
plt.sca(horizontal_axes[0])
plot_data(X, which_gaussian, [w1, w2], [sigma1, sigma2])
if i == 0:
plt.title('Ground truth')
which_gaussian_em, cluster_centers_em, cluster_stds_em, num_steps_em = run_expectation_maximization(X)
plt.sca(horizontal_axes[1])
plot_data(X, which_gaussian_em, cluster_centers_em, cluster_stds_em)
if i == 0:
plt.title('Predicted by Expectation-Maximization')
which_gaussian_km, cluster_centers_km, cluster_stds_km = run_k_means(X)
plt.sca(horizontal_axes[2])
plot_data(X, which_gaussian_km, cluster_centers_km, cluster_stds_km)
if i == 0:
plt.title('Predicted by K-Means')
Explanation: 1.5 Repeat analysis for different $\sigma_1$ values
End of explanation
<END_TASK> |
15,732 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
1. Data types and operations with them
(c) 2019, 2020 Dr. Ramil Nugmanov;
(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev
Installation instructions for the CGRtools package and the tutorial's files: see https://github.com/stsouko/CGRtools
Step1: CGRtools has a subpackage containers with data structure classes
Step2: 1.1. MoleculeContainer
Molecules are represented as undirected graphs. Molecules contain Atom objects and Bond objects.
Atom objects are stored in a dictionary with a unique number for each atom as the key.
Bond objects are stored as a sparse matrix with pairs of adjacent atoms as keys for rows and columns.
Hereafter, an atom number is a unique integer used to enumerate atoms in a molecule. Please don't confuse it with the element's number in the Periodic Table, hereafter called the element number.
Methods for molecule handling and the arguments of MoleculeContainer are described below.
Step3: Each structure has additional atom attributes
Step4: Atom objects are dataclasses which store information about
Step5: Atomic attributes are assignable.
CGRtools has integrity checks for verification of changes induced by the user
Step6: Bonds are Read-only
For bond modification, one should use the delete_bond method to break a bond and add_bond to create a new one.
Step7: The delete_atom method removes an atom from the molecule
Step8: Atom and bond objects can be converted into an integer representation that can be used to classify their types.
An atom type is represented by a 21-bit code stored in a 32-bit integer
Step9: Connected components.
Sometimes molecules have disconnected components (salts, etc.).
One can find them and split the molecule into separate components.
Step10: Union of molecules
Sometimes it is more convenient to represent salts as an ion pair. Otherwise ambiguity could be introduced, for example in a salt exchange reaction
Step11: Substructures could be extracted from molecules.
Step12: augmented_substructure is a substructure consisting of the given atoms and a given number of shells of neighboring atoms around them.
The deep argument is the number of considered shells.
It also returns a projection by default.
Step13: Atoms Ordering.
This functionality is used for canonical numbering of atoms in molecules. The Morgan algorithm is used for atom ranking. The atoms_order property returns a dictionary with atom numbers as keys and their canonicalization ranks as values. Equal ranks mean that the atoms are symmetric (they are mapped to each other by automorphisms).
Step14: Atom number can be changed by remap method.
This method is useful when the order of atoms in a molecule needs to be changed. The first argument of the remap method is a dictionary with existing atom numbers as keys and desired atom numbers as values. It is possible to change atom numbers for only a part of the atoms. Atom numbers can be non-sequential but must be unique.
If the copy argument is set to True, a new object will be created; otherwise the existing molecule will be changed. The default is False.
Step15: 1.2. ReactionContainer
ReactionContainer objects have the following properties
Step16: Reactions also have standardize, kekule, thiele, implicify_hydrogens, explicify_hydrogens, etc. methods (see part 3). These methods are applied independently to every molecule in the reaction.
1.3. CGR
CGRContainer objects are similar to MoleculeContainer, except for some methods. The following methods are not supported for CGRContainer
Step17: For CGRContainer, the attributes charge, is_radical, neighbors and hybridization refer to the atom state in the reactant part of the reaction; the attributes p_charge, p_is_radical, p_neighbors and p_hybridization can be used to extract the atom state in the product part of the reaction.
Step18: Bonds have order and p_order attributes
If the order attribute value is None, it means that the bond was formed
If p_order is None, it means that the bond was broken
order and p_order cannot both be None
Step19: CGR can be decomposed back to a reaction, i.e. reactants and products.
Notice that a CGR can lose information in the case of unbalanced reactions (where some atoms of the reactants do not have a counterpart in the products, and vice versa). Decomposition of CGRs of unbalanced reactions back to reactions may lead to strange (and erroneous) structures.
Step20: For decomposition of a CGRContainer back into a ReactionContainer, the ReactionContainer.from_cgr constructor method can be used.
Step21: You can see that the water initially absent in the products was restored.
This is a side effect of CGR decomposition that could help with reaction balancing.
But balancing using CGR decomposition works correctly only if atoms of the minor part are lost
while multiplicity and formal charge are preserved. In the next release, electronic state balancing will be added.
Step22: 1.4 Queries
CGRtools supports special objects for queries. Queries are designed for substructure isomorphism. The user can set the number of neighbors and hybridization explicitly (in molecules they can be calculated but cannot be changed).
Queries don't have a reset_query_marks method
Step23: CGRs also can be transformed into Query.
QueryCGRContainer is the analogue of the QueryContainer class for CGRs and has the same API.
QueryCGRContainer takes into account the state of atoms and bonds in the reactant and product, including neighbors and hybridization
Step24: 1.5. Molecules, CGRs, Reactions construction
CGRtools has an API for constructing objects from scratch.
CGR and Molecule have the methods add_atom and add_bond for adding atoms and bonds.
Step25: Reactions can be constructed from molecules.
Reactions are tuple-like objects. Modification is impossible.
Step26: QueryContainers can be constructed in the same way as MoleculeContainers.
Unlike other containers, QueryContainers additionally support lists of atoms, neighbors and hybridizations.
Step27: 1.6. Extending CGRtools
You can easily customize CGRtools for your tasks.
CGRtools is an OOP-oriented library with subclassing and inheritance support.
As an example, we show how special marks on atoms for ligand donor centers can be added. | <ASSISTANT_TASK:>
Python Code:
import pkg_resources
if pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:
print('WARNING. Tutorial was tested on 4.0 version of CGRtools')
else:
print('Welcome!')
# load data for tutorial
from pickle import load
from traceback import format_exc
with open('molecules.dat', 'rb') as f:
molecules = load(f) # list of MoleculeContainer objects
with open('reactions.dat', 'rb') as f:
reactions = load(f) # list of ReactionContainer objects
m1, m2, m3, m4 = molecules # molecule
m7 = m3.copy()
m11 = m3.copy()
m11.standardize()
m7.standardize()
r1 = reactions[0] # reaction
m5 = r1.reactants[0]
m8 = m7.substructure([4, 5, 6, 7, 8, 9])
m10 = r1.products[0].copy()
Explanation: 1. Data types and operations with them
(c) 2019, 2020 Dr. Ramil Nugmanov;
(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev
Installation instructions for the CGRtools package and the tutorial's files: see https://github.com/stsouko/CGRtools
NOTE: Tutorial should be performed sequentially from the start. Random cell running will lead to unexpected results.
End of explanation
from CGRtools.containers import * # import all containers
Explanation: CGRtools has a subpackage containers with data structure classes:
MoleculeContainer - for molecular structure
ReactionContainer - for chemical reaction
CGRContainer - for Condensed Graph of Reaction
QueryContainer - queries for substructure search in molecules
QueryCGRContainer - queries for substructure search in CGRs
End of explanation
m1.meta # dictionary for molecule properties storage. For example, DTYPE/DATUM fields of SDF file are read into this dictionary
m1 # MoleculeContainer supports depiction and graphic representation in Jupyter notebooks.
m1.depict() # depiction returns SVG image in format string
with open('molecule.svg', 'w') as f: # saving image to SVG file
f.write(m1.depict())
m_copy = m1.copy() # copy of molecule
m_copy
len(m1) # get number of atoms in molecule
# or
m1.atoms_count
m1.bonds_count # number of bonds
m1.atoms_numbers # list of atoms numbers
Explanation: 1.1. MoleculeContainer
Molecules are represented as undirected graphs. Molecules contain Atom objects and Bond objects.
Atom objects are stored in a dictionary with a unique number for each atom as the key.
Bond objects are stored as a sparse matrix with pairs of adjacent atoms as keys for rows and columns.
Hereafter, an atom number is a unique integer used to enumerate atoms in a molecule. Please don't confuse it with the element's number in the Periodic Table, hereafter called the element number.
Methods for molecule handling and the arguments of MoleculeContainer are described below.
End of explanation
# iterate over atoms using their numbers
list(m1.atoms()) # works the same as dict.items()
# iterate over bonds using adjacent atoms numbers
list(m1.bonds())
# access to atom by number
m1.atom(1)
try:
m1.atom(10) # raise error for absent atom numbers
except KeyError:
print(format_exc())
# access to bond using adjacent atoms numbers
m1.bond(1, 4)
try:
m1.bond(1, 3) # raise error for absent bond
except KeyError:
print(format_exc())
Explanation: Each structure has additional atom attributes: number of neighbors and hybridization.
The following notation is used for the hybridization of atoms. Values are given as numbers below (in parentheses, the symbols used in SMILES-like signatures are shown):
1 (s) - all bonds of the atom are single, i.e. sp3 hybridization
2 (d) - the atom has one double bond and the others are single, i.e. sp2 hybridization
3 (t) - the atom has one triple or two double bonds and the others are single, i.e. sp hybridization
4 (a) - the atom is in an aromatic ring
The neighbors and hybridization atom attributes are required for substructure operations and structure standardization. See below
End of explanation
a = m1.atom(1)
# access to information
a.atomic_symbol # element symbol
a.charge # formal charge
a.is_radical # atom radical state
a.isotope # atom isotope. Default isotope if not set. Default isotopes are the same as used in InChI notation
a.x # coordinates
a.y
#or
a.xy
a.neighbors # Number of neighboring atoms. It is read-only.
a.hybridization # Atoms hybridization. It is read-only.
try:
a.hybridization = 2 # Not assignable. Read-only! Thus error is raised.
except AttributeError:
print(format_exc())
Explanation: Atom objects are dataclasses which store information about:
element
isotope
charge
radical state
xy coordinates
Atoms also have methods for data integrity checks and include some internally used data.
End of explanation
a.charge = 1
m1
a.charge = 0
a.is_radical = True
m1
# bond objects also are data-like classes which store information about bond order
b = m1.bond(3, 4)
b.order
try:
b.order = 1 # order change not possible
except AttributeError:
print(format_exc())
Explanation: Atomic attributes are assignable.
CGRtools has integrity checks for verification of changes induced by the user
End of explanation
m1.delete_bond(3, 4)
m1
Explanation: Bonds are Read-only
For bond modification, one should use the delete_bond method to break a bond and add_bond to create a new one.
End of explanation
m1.delete_atom(3)
m1
m_copy # copy unchanged!
Explanation: The delete_atom method removes an atom from the molecule
End of explanation
int(a)
# 61705 == 000001111 0001000 0100 1
# 000001111 == 15 isotope
# 0001000 == 8 Oxygen
# 0100 == 4 (4 - 4 = 0) uncharged
# 1 == 1 is radical
int(b) # bonds are encoded by their order
a = m_copy.atom(1)
print(a.implicit_hydrogens) # get number of implicit hydrogens on atom 1
print(a.explicit_hydrogens) # get number of explicit hydrogens on atom 1
print(a.total_hydrogens) # get total number of hydrogens on atom 1
m1
m1.check_valence() # return list of numbers of atoms with invalid valences
m4 # molecule with valence errors
m4.check_valence()
m3
m3.sssr # applies the Smallest Set of Smallest Rings algorithm for ring
        # identification. Returns a tuple of tuples of atoms forming the smallest rings
Explanation: Atom and bond objects can be converted into an integer representation that can be used to classify their types.
An atom type is represented by a 21-bit code stored in a 32-bit integer:
9 bits are used for the isotope (511 possibilities; the highest known isotope is ~300)
7 bits stand for the element number (2 ** 7 - 1 == 127; currently 118 elements are present in the Periodic Table)
4 bits stand for the formal charge. Charges range from -4 to +4, rescaled to the range 0-8
1 bit is used for the radical state.
End of explanation
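# Illustrative decoding sketch (an addition, not from the tutorial): unpack the 21-bit
# atom code described above, assuming the layout isotope | element | charge | radical.
code = int(a) # 'a' is m_copy.atom(1) from the hydrogens example above
print('radical:', bool(code & 0b1))
print('charge:', ((code >> 1) & 0b1111) - 4) # rescaled from the 0-8 range back to -4..+4
print('element:', (code >> 5) & 0b1111111)
print('isotope:', code >> 12)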
m2 # it's a salt represented as one graph
m2.connected_components # tuple of tuples of atoms belonging to graph components
anion, cation = m2.split() # split molecule to components
anion # graph of only one salt component
cation # graph of only one salt component
Explanation: Connected components.
Sometimes molecules have disconnected components (salts, etc.).
One can find them and split the molecule into separate components.
End of explanation
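# Quick sanity check (an addition, not from the original tutorial):
# splitting should preserve the total number of atoms.
print(len(anion) + len(cation) == len(m2))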
salt = anion | cation
# or
salt = anion.union(cation)
salt # this graph has disconnected components, but it is considered a single compound now
Explanation: Union of molecules
Sometimes it is more convenient to represent salts as an ion pair. Otherwise ambiguity could be introduced, for example in a salt exchange reaction:
Ag+ + NO3- + Na+ + Br- = Ag+ + Br- + Na+ + NO3-. The reactant and product sets are the same.
In this case one can combine the anion-cation pair into a single graph. It could be a convenient way to represent other molecular mixtures.
End of explanation
sub = m3.substructure([4,5,6,7,8,9]) # substructure with passed atoms
sub
Explanation: Substructures could be extracted from molecules.
End of explanation
aug = m3.augmented_substructure([10], deep=2) # atom 10 is Nitrogen
aug
Explanation: augmented_substructure is a substructure consisting of the given atoms and a given number of shells of neighboring atoms around them.
The deep argument is the number of considered shells.
It also returns a projection by default.
End of explanation
m5.atoms_order
Explanation: Atoms Ordering.
This functionality is used for canonical numbering of atoms in molecules. The Morgan algorithm is used for atom ranking. The atoms_order property returns a dictionary with atom numbers as keys and their canonicalization ranks as values. Equal ranks mean that the atoms are symmetric (they are mapped to each other by automorphisms).
End of explanation
m5
remapped = m5.remap({4:2}, copy=True)
remapped
Explanation: Atom number can be changed by remap method.
This method is useful when the order of atoms in a molecule needs to be changed. The first argument of the remap method is a dictionary with existing atom numbers as keys and desired atom numbers as values. It is possible to change atom numbers for only a part of the atoms. Atom numbers can be non-sequential but must be unique.
If the copy argument is set to True, a new object will be created; otherwise the existing molecule will be changed. The default is False.
End of explanation
r1 # depiction supported
r1.meta
print(r1.reactants, r1.products) # Access to lists of reactant and products.
reactant1, reactant2, reactant3 = r1.reactants
product = r1.products[0]
Explanation: 1.2. ReactionContainer
ReactionContainer objects have the following properties:
reactants - list of reactant molecules
reagents - list of reagent molecules
products - list of product molecules
meta - dictionary of reaction metadata (DTYPE/DATUM block in RDF)
End of explanation
cgr1 = m7 ^ m8 # CGR from molecules
# or
cgr1 = m7.compose(m8)
print(cgr1)
cgr1
r1
cgr2 = ~r1 # CGR from reactions
# or
cgr2 = r1.compose()
print(cgr2) # signature is printed out.
cgr2.clean2d()
cgr2
a = cgr2.atom(2) # atom access is the same as for MoleculeContainer
a.atomic_symbol # element attribute
a.isotope # isotope attribute
Explanation: Reactions also have standardize, kekule, thiele, implicify_hydrogens, explicify_hydrogens, etc. methods (see part 3). These methods are applied independently to every molecule in the reaction.
1.3. CGR
CGRContainer objects are similar to MoleculeContainer, except for some methods. The following methods are not supported for CGRContainer:
standardization methods
hydrogens count methods
check_valence
CGRContainer also has some methods absent in MoleculeContainer:
centers_list
center_atoms
center_bonds
CGRContainer is an undirected graph. Atoms and bonds in a CGR have two states: reactant and product.
Composing to CGR
As mentioned above, atoms in MoleculeContainer have unique numbers. These numbers are used as the atom-to-atom mapping in CGRtools upon CGR creation. Thus, the atom order for molecules in a reaction should correspond to the atom-to-atom mapping.
A pair of molecules can be transformed into a CGR. Notice that the same atom numbers in reactants and products imply the same atoms.
A reaction can also be composed into a CGR. Atom numbers of molecules in the reaction are used as the atom-to-atom mapping of reactants to products.
End of explanation
a.charge # charge of atom in reactant
a.p_charge # charge of atom in product
a.p_is_radical # radical state of atom in product.
a.neighbors # number of neighbors of atom in reactant
a.p_neighbors # number of neighbors of atom in product
a.hybridization # hybridization of atom in reactant. 1 means only single bonds are incident to atom
a.p_hybridization # hybridization of atom in product. 1 means only single bonds are incident to atom
b = cgr1.bond(4, 10) # take bond
Explanation: For CGRContainer, the attributes charge, is_radical, neighbors and hybridization refer to the atom state in the reactant part of the reaction; the attributes p_charge, p_is_radical, p_neighbors and p_hybridization can be used to extract the atom state in the product part of the reaction.
End of explanation
b.order # bond order in reactant
b.p_order is None # bond order in product in None
Explanation: Bonds have order and p_order attributes
If the order attribute value is None, it means that the bond was formed
If p_order is None, it means that the bond was broken
order and p_order cannot both be None
End of explanation
reactant_part, product_part = ~cgr1 # CGR of unbalanced reaction is decomposed back into reaction
# or
reactant_part, product_part = cgr1.decompose()
reactant_part # reactants extracted. One can notice it is the initial molecule
product_part # extracted products. Originally benzene was the product.
Explanation: CGR can be decomposed back to a reaction, i.e. reactants and products.
Notice that a CGR can lose information in the case of unbalanced reactions (where some atoms of the reactants do not have a counterpart in the products, and vice versa). Decomposition of CGRs of unbalanced reactions back to reactions may lead to strange (and erroneous) structures.
End of explanation
decomposed = ReactionContainer.from_cgr(cgr2)
decomposed.clean2d()
decomposed
Explanation: For decomposition of a CGRContainer back into a ReactionContainer, the ReactionContainer.from_cgr constructor method can be used.
End of explanation
r1 # compare with initial reaction
Explanation: You can see that the water initially absent in the products was restored.
This is a side effect of CGR decomposition that could help with reaction balancing.
But balancing using CGR decomposition works correctly only if atoms of the minor part are lost
while multiplicity and formal charge are preserved. In the next release, electronic state balancing will be added.
End of explanation
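# Small check (an addition, not from the tutorial): compare the number of product
# molecules before and after the CGR round trip; the restored water should appear here.
print(len(r1.products), len(decomposed.products))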
from CGRtools.containers import *
m10 # ether
carb = m10.substructure([5,7,2], as_query=True) # extract of carboxyl fragment
print(carb)
carb
Explanation: 1.4 Queries
CGRtools supports special objects for queries. Queries are designed for substructure isomorphism. The user can set the number of neighbors and hybridization explicitly (in molecules they can be calculated but cannot be changed).
Queries don't have a reset_query_marks method
End of explanation
cgr_q = cgr1.substructure(cgr1, as_query=True) # transfrom CGRContainer into QueryCGRContainer
#or
cgr_q = QueryCGRContainer() | cgr1 # Union of Query container with CGR or Molecule gives QueryCGRContainer
print(cgr_q) # print out signature of query
cgr_q
Explanation: CGRs also can be transformed into Query.
QueryCGRContainer is the analogue of the QueryContainer class for CGRs and has the same API.
QueryCGRContainer takes into account the state of atoms and bonds in the reactant and product, including neighbors and hybridization
End of explanation
from CGRtools.containers import MoleculeContainer
from CGRtools.containers.bonds import Bond
from CGRtools.periodictable import Na
m = MoleculeContainer() # new empty molecule
m.add_atom('C') # add Carbon atom using element symbol
m.add_atom(6) # add Carbon atom using element number. {'element': 6} is not valid, but {'element': 'O'} is also acceptable
m.add_atom('O', charge=-1) # add negatively charged Oxygen atom. Similarly other atomic properties can be set
# add_atom has second argument for setting atom number.
# If not set, the next integer after the biggest among already created will be used.
m.add_atom(Na(23), 4, charge=1) # For isotopes, construction of an element object is required.
m.add_bond(1, 2, 1) # add bond with order = 1 between atoms 1 and 2
m.add_bond(3, 2, Bond(1)) # the other possibility to set bond order
m.clean2d() #experimental function to calculate atom coordinates. Has number of flaws yet
m
Explanation: 1.5. Molecules, CGRs, Reactions construction
CGRtools has an API for constructing objects from scratch.
CGR and Molecule have the methods add_atom and add_bond for adding atoms and bonds.
End of explanation
r = ReactionContainer(reactants=[m1], products=[m11]) # one-step way to construct reaction
# or
r = ReactionContainer([m1], [m11]) # first list of MoleculeContainers is interpreted as reactants, second one - as products
r
r.fix_positions() # this method fixes coordinates of molecules in reaction without calculation of atoms coordinates.
r
Explanation: Reactions can be constructed from molecules.
Reactions are tuple-like objects. Modification is impossible.
End of explanation
q = QueryContainer() # creation of empty container
q.add_atom('N') # add N atom, any isotope, not radical, neutral charge,
# number of neighbors and hybridization are irrelevant
q.add_atom('C', neighbors=[2, 3], hybridization=2) # add carbon atom, any isotope, not radical, neutral charge,
# has 2 or 3 explicit neighbors and sp2 hybridization
q.add_atom('O', neighbors=1)
q.add_bond(1, 2, 1) # add single bond between atom 1 and 2
q.add_bond(2, 3, 2) # add double bond between atom 1 and 2
# any amide group will fit this query
print(q) # print out signature (SMILES-like)
q.clean2d()
q
Explanation: QueryContainers can be constructed in the same way as MoleculeContainers.
Unlike other containers, QueryContainers additionally support lists of atoms, neighbors and hybridizations.
End of explanation
from CGRtools.periodictable import Core, C, O
class Marked(Core):
__slots__ = '__mark' # all new attributes should be slotted!
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.__mark = None # set default value for added attribute
@property
def mark(self): # created new property
return self.__mark
@mark.setter
def mark(self, mark):
# do some checks and calculations
self.__mark = mark
def __repr__(self):
# note: self.__isotope would be name-mangled to _Marked__isotope here and fail;
# we use the public isotope property from Core instead (may hold the default isotope)
if self.isotope:
return f'{self.__class__.__name__[6:]}({self.isotope})'
return f'{self.__class__.__name__[6:]}()'
@property
def atomic_symbol(self) -> str:
return self.__class__.__name__[6:]
class MarkedC(Marked, C):
pass
class MarkedO(Marked, O):
pass
m = MoleculeContainer() # create newly developed container MarkedMoleculeContainer
m.add_atom(MarkedC()) # add custom atom C
m.add_atom(MarkedO()) # add custom atom O
m.add_bond(1, 2, 1)
m.atom(2).mark = 1 # set mark on atom.
print(m)
m.clean2d()
m
m.atom(2).mark # one can return mark
Explanation: 1.6. Extending CGRtools
You can easily customize CGRtools for your tasks.
CGRtools is an OOP-oriented library with subclassing and inheritance support.
As an example, we show how special marks on atoms for ligand donor centers can be added.
End of explanation
<END_TASK> |
15,733 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
<END_TASK> |
15,734 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
pconv - Periodic convolution, kernel origin at array origin
Synopse
1D, 2D or 3D Periodic convolution. (kernel origin at array origin)
g = pconv(f, h)
g
Step1: Examples
Step2: Numerical Example 1D
Step3: Numerical Example 2D
Step4: Numerical Example 3D
Step5: Example with Image 2D | <ASSISTANT_TASK:>
Python Code:
def pconv(f,h):
import numpy as np
h_ind=np.nonzero(h)
f_ind=np.nonzero(f)
if len(h_ind[0])>len(f_ind[0]):
h, f = f, h
h_ind,f_ind= f_ind,h_ind
gs = np.maximum(np.array(f.shape),np.array(h.shape))
if (f.dtype == 'complex') or (h.dtype == 'complex'):
g = np.zeros(gs,dtype='complex')
else:
g = np.zeros(gs)
f1 = g.copy()
f1[f_ind]=f[f_ind]
if f.ndim == 1:
(W,) = gs
col = np.arange(W)
for cc in h_ind[0]:
g[:] += f1[(col-cc)%W] * h[cc]
elif f.ndim == 2:
H,W = gs
row,col = np.indices(gs)
for rr,cc in np.transpose(h_ind):
g[:] += f1[(row-rr)%H, (col-cc)%W] * h[rr,cc]
else:
Z,H,W = gs
d,row,col = np.indices(gs)
for dd,rr,cc in np.transpose(h_ind):
g[:] += f1[(d-dd)%Z, (row-rr)%H, (col-cc)%W] * h[dd,rr,cc]
return g
Explanation: pconv - Periodic convolution, kernel origin at array origin
Synopse
1D, 2D or 3D Periodic convolution. (kernel origin at array origin)
g = pconv(f, h)
g: Image. Output image.
f: Image. Input image.
h: Image. PSF (point spread function), or kernel. The origin is at the array origin.
Description
Perform a 1D, 2D or 3D discrete periodic convolution. The kernel origin is at the origin of image h.
Both image and kernel are periodic with the same period. Usually the kernel h is smaller than the image f,
so h is padded with zeros to the size of f. Supports complex images.
End of explanation
testing = (__name__ == '__main__')
if testing:
! jupyter nbconvert --to python pconv.ipynb
import numpy as np
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Examples
End of explanation
if testing:
f = np.array([0,0,0,1,0,0,0,0,1])
print("f:",f)
h = np.array([1,2,3])
print("h:",h)
g1 = ia.pconv(f,h)
g2 = ia.pconv(h,f)
print("g1:",g1)
print("g2:",g2)
Explanation: Numerical Example 1D
End of explanation
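# Illustrative cross-check (an addition, not from the original notebook): the 1D result
# above should match an explicit periodic sum built with np.roll.
g_ref = sum(h[cc] * np.roll(f, cc) for cc in range(len(h)))
print("g_ref:", g_ref)
print("matches pconv:", np.allclose(g_ref, g1))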
if testing:
f = np.array([[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1],
[0,0,0,0,0,0,0,0,0]])
print("Image (f):")
print(f)
h = np.array([[1,2,3],
[4,5,6]])
print("\n Image Kernel (h):")
print(h)
g1 = ia.pconv(f,h)
print("Image Output (g1=f*h):")
print(g1)
g2 = ia.pconv(h,f)
print("Image Output (g2=h*f):")
print(g2)
Explanation: Numerical Example 2D
End of explanation
if testing:
f = np.zeros((3,3,3))
#f[0,1,1] = 1
f[1,1,1] = 1
#f[2,1,1] = 1
print("\n Image Original (F): ")
print(f)
h = np.array([[[ 1, 2, 3 ],
[ 3, 4, 5 ],
[ 5, 6, 7 ]],
[[ 8, 9, 10],
[11, 12, 13],
[14, 15, 16]],
[[17, 18, 19],
[20, 21, 22],
[23, 24, 25]]])
print("\n Image Kernel (H): ")
print(h)
result = ia.pconv(f,h)
print("\n Image Output - (G): ")
print(result)
Explanation: Numerical Example 3D
End of explanation
if testing:
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, title = 'a) - Original Image')
h = np.array([[-1,-1,-1],
[ 0, 0, 0],
[ 1, 1, 1]])
g = ia.pconv(f,h)
print("\nPrewitt´s Mask")
print(h)
gn = ia.normalize(g, [0,255])
ia.adshow(gn, title = 'b) Prewitt´s Mask filtering')
ia.adshow(ia.normalize(abs(g)), title = "c) absolute of Prewitt's Mask filtering")
Explanation: Example with Image 2D
End of explanation
<END_TASK> |
15,735 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
"""Find the indices of the local maxima in a sequence."""
# YOUR CODE HERE
#raise NotImplementedError()
ind=[]
# the next two ifs check whether the end points are local maxima
if a[0]> a[1]:
ind.append(0)
if a[len(a)-1]>a[len(a)-2]:
ind.append(len(a)-1)
# find interior local maxima by comparing each element with its neighbors
for i in range(1,len(a)-1):
if a[i]>a[i-1] and a[i]>a[i+1]:
ind.append(i)
# return the indices sorted in increasing order
return sorted(ind)
find_peaks([2,0,1,0,2,0,1])
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
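# An equivalent vectorized sketch (an addition, not part of the original exercise);
# it assumes strict local maxima with no plateaus, like the loop-based version above.
b = np.concatenate(([-np.inf], np.asarray([2,0,1,0,2,0,1], dtype=float), [-np.inf]))
d = np.sign(np.diff(b)) # +1 where the sequence rises, -1 where it falls
print(np.where((d[:-1] > 0) & (d[1:] < 0))[0]) # expected: [0 2 4 6]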
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
# YOUR CODE HERE
#raise NotImplementedError()
def pimax(x):
'''uses find_peaks to find the local maxima, then computes the distances between
consecutive maxima and plots the distribution of distances between local maxima'''
pi=np.ones(10000)
for i in range(len(x)):
pi[i]=int(x[i])
m = find_peaks(pi)
dist = np.diff(m)
p = plt.hist(dist,bins=17)
plt.title('Distances Between Local Maxima in First 10000 digtis of $\pi$')
plt.xlabel('Distance Between Local Maxima')
plt.ylabel('Number of Times Occured')
plt.grid(False)
plt.xlim([1,19])
a=range(2,19)
plt.xticks(a[::2])
plt.ylim(0,1100)
plt.show()
pimax(pi_digits_str)
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consecutive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation
<END_TASK> |
15,736 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Generate data to simulate a dataset of samples $x$ in which all features/columns (3) could be collected. x has size $n$ by $features$.
Step1: Transform the generated data $x$ to a new basis.
The basis is given by the eigenvectors of $cov(x)$. In this new basis, the eigenvectors are the same as the $x$, $y$, $z$ axis vectors $(1,0,0)$, $(0,1,0)$, etc.
Step2: Generate another dataset $y$ with the same distribution as $x$ (this is a very strong assumption!)
Step3: Let's simulate the fact that for $y$ we can't measure all values. In this case, we will create y_missing, which only has 2 features
Step4: Now, let's assume that we can recover the last feature of y_missing using information from the eigendecomposition of $cov(x)$.
We perform PCA on $x$, but use only the two most important eigenvectors (those with the largest eigenvalues). pca_reconstruction allows us to perform a forward transform from $R^3$ to $R^2$ (compression) and an inverse transform from $R^2$ to $R^3$ (reconstruction). This can also be interpreted as going from the canonical basis to the eigenbasis and vice versa.
Step5: We still can't reconstruct y from y_missing with pca_reconstruction because it is encoded in the canonical basis. We first need to encode it in the eigenbasis. To do that, we will augment y_missing with a new feature with value equal to the mean of that feature (a reasonable assumption), and encode it.
Python Code:
#mean and covariance of the multivariate normal distribution used to generate samples
mu=np.array([5,0,-2])
σ=np.array([[9,1, -1],
[1, 3, -2],
[-1, -2,2],])
if not is_covariance(σ):
print("Warning: σ is not a valid covariance matrix (not symmetric or positive definite)")
n=1000 # number of samples
x=np.random.multivariate_normal(mu,σ,size=(n,))
#plot generated data
plt.close("all")
plot_data(x,"data in original space",eigen=True)
plt.show()
Explanation: Generate data to simulate a dataset of samples $x$ in which all features/columns (3) could be collected. x has size $n$ by $features$.
End of explanation
pca_exact = PCA(n_components=3) # since x has 3 features, this PCA model does not do compression
pca_exact.fit(x) # calculate eigen decomposition of cov(x)
x_transformed=pca_exact.transform(x) #encode x with the eigenvectors as basis
plot_data(x_transformed,"x in natural (eigenvectors) space",eigen=True)
#save the eigenvectors and eigenvalues
eigenvectors=pca_exact.components_
eigenvalues=pca_exact.explained_variance_
Explanation: Transform the generated data $x$ to a new basis.
The basis is given by the eigenvectors of $cov(x)$. In this new basis, the eigenvectors are the same as the $x$, $y$, $z$ axis vectors $(1,0,0)$, $(0,1,0)$, etc.
End of explanation
y=np.random.multivariate_normal(mu,σ,size=(n,))
plot_data(y,"y in original space",eigen=True)
Explanation: Generate another dataset $y$ with the same distribution as $x$ (this is a very strong assumption!)
End of explanation
y_missing=y[:,0:2]
plt.figure()
plt.scatter(y_missing[:,0],y_missing[:,1])
plt.title("y_missing in original space (2d)")
Explanation: Let's simulate the fact that for $y$ we can't measure all values. In this case, we will create y_missing, which only has 2 features
End of explanation
pca_reconstruction=PCA(n_components=2)
pca_reconstruction.fit(x)
print(pca_reconstruction.components_)
print(eigenvectors)
Explanation: Now, let's assume that we can recover the last feature of y_missing using information from the eigendecomposition of $cov(x)$.
We perform PCA on $x$, but use only the two most important eigenvectors (those with the largest eigenvalues). pca_reconstruction allows us to perform a forward transform from $R^3$ to $R^2$ (compression) and an inverse transform from $R^2$ to $R^3$ (reconstruction). This can also be interpreted as going from the canonical basis to the eigenbasis and vice versa.
End of explanation
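# Illustrative check (an addition to the notebook): round-trip x itself through the
# 2-component PCA; this error lower-bounds what we can expect when reconstructing y.
x_rec = pca_reconstruction.inverse_transform(pca_reconstruction.transform(x))
print(((x_rec - x)**2).sum() / n)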
y_augmented=np.copy(y_missing)
y3=np.zeros((n,1))+mu[2]
y_augmented=np.hstack([y_missing,y3])
y_eigen=pca_exact.transform(y_augmented)
least_eigenvalue_index=np.argmin(eigenvalues)
y_eigen_2d=y_eigen[:,np.arange(3)!=least_eigenvalue_index]
y_reconstructed=pca_reconstruction.inverse_transform(y_eigen_2d)
plot_data(y_reconstructed, "y_reconstructed",eigen=True)
mean_reconstruction_error=((y_reconstructed-y)**2).sum()/n
print(mean_reconstruction_error)
Explanation: We still can't reconstruct y from y_missing with pca_reconstruction because it is encoded in the canonical basis. We first need to encode it in the eigenbasis. To do that, we will augment y_missing with a new feature with value equal to the mean of that feature (a reasonable assumption), and encode it.
End of explanation
<END_TASK> |
15,737 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embedded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
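For example, a hypothetical author entry (illustrative name and email only) would be:
DOC.set_author("Jane Doe", "jane.doe@example.org")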
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
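Since the cardinality is 1.N, several choices may apply. Assuming one DOC.set_value call per choice, as the template's one-call-per-value comments suggest, a hypothetical hydrostatic Boussinesq model would enter:
DOC.set_value("Primitive equations")
DOC.set_value("Boussinesq")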
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
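Floats are entered unquoted. For illustration only, a model using the TEOS-10 reference value cp0 of roughly 3991.87 J/(kg K) would enter:
DOC.set_value(3991.87)  # illustrative value, not a recommendation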
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
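Booleans are entered unquoted; e.g. a hypothetical model with time-invariant bathymetry would enter:
DOC.set_value(True)  # bathymetry fixed in time (illustrative)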
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
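A hypothetical multi-valued entry, assuming one call per language as for other 0.N properties:
DOC.set_value("Fortran 90")
DOC.set_value("C")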
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
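Integers are entered unquoted; e.g. a hypothetical 1440 x 720 horizontal grid would give:
DOC.set_value(1036800)  # 1440 * 720, illustrative only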
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
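ENUM values are expected to match one of the listed choices verbatim; a hypothetical z*-coordinate model would enter:
DOC.set_value("Z*-coordinate")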
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
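For example, a hypothetical one-hour tracer time step:
DOC.set_value(3600)  # seconds, illustrative only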
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
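Free-text STRING properties take an ordinary Python string; a hypothetical entry might read:
DOC.set_value("Vertical mixing terms are integrated implicitly within each tracer time step.")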
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
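This property is optional (cardinality 0.N) and may be left unset; a hypothetical model carrying two passive tracers would enter:
DOC.set_value("CFC 11")
DOC.set_value("SF6")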
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
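For example, a hypothetical level-2.5 (Mellor-Yamada style) closure:
DOC.set_value(2.5)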
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
<END_TASK> |
15,738 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
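In miniature, the filtering pattern looks like this (a sketch; the full optimizer setup appears in the code for this step):
python
g_vars = [var for var in tf.trainable_variables() if var.name.startswith('generator')]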
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
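As an illustration, the same idea can be wrapped in a tiny helper (a sketch only; in this notebook the tf.maximum pattern is written inline inside the network functions):
python
def leaky_relu(x, alpha=0.01):
# elementwise max(alpha*x, x): passes positive values through, scales negatives by alpha
return tf.maximum(alpha * x, x)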
Tanh Output
The generator has been found to perform best with a $tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
End of explanation
# Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
_ = view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
<END_TASK> |
15,739 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I am struggling with the basic task of constructing a DataFrame of counts by value from a tuple produced by np.unique(arr, return_counts=True), such as: | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
np.random.seed(123)
birds = np.random.choice(['African Swallow', 'Dead Parrot', 'Exploding Penguin'], size=int(5e4))
someTuple = np.unique(birds, return_counts=True)
def g(someTuple):
return pd.DataFrame(np.column_stack(someTuple),columns=['birdType','birdCount'])
result = g(someTuple)
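# Quick sanity check (illustrative; the exact counts depend on the seed above):
print(result)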
<END_TASK> |
15,740 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
Step6: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
Step8: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
Step9: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
Step10: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
Step11: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment
Step12: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | <ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
End of explanation
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
Explanation: Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
End of explanation
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Explanation: Forward pass: compute loss
In the same function, implement the second part that computes the data and regularizaion loss.
End of explanation
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
End of explanation
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Explanation: Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
End of explanation
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_val = -1
for hidden_size in [200,500,700]:
for learning_rate in [5e-4,1e-3,5e-3]:
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=learning_rate, learning_rate_decay=0.95,
reg=0.5, verbose=False)
print "."
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
if best_val < val_acc:
best_val = val_acc
best_net = net
print "best till now ",best_val
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
Explanation: Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).
End of explanation
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
Explanation: Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%.
End of explanation
<END_TASK> |
15,741 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial
Step1: Installs DOcplex if needed
Step2: If either CPLEX or docplex were installed in the steps above, you will need to restart your jupyter kernel for the changes to be taken into account.
Step 2
Step3: Define the decision variables
The continuous variable desk represents the production of desk telephones.
The continuous variable cell represents the production of cell phones.
Step4: Set up the constraints
Desk and cell phone must both be greater than 100
Assembly time is limited
Painting time is limited.
Step5: Express the objective
We want to maximize the expected revenue.
Step6: A few remarks about how we formulated the mathematical model in Python using DOcplex
Step7: Graphical representation of a Linear Problem
A simple 2-dimensional LP (with 2 decision variables) can be represented graphically using an x- and y-axis.
This is often done to demonstrate optimization concepts.
To do this, follow these steps
Step8: In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.
Multiple Optimal Solutions
It is possible that an LP has multiple optimal solutions.
At least one optimal solution will be at a vertex.
By default, the CPLEX® Optimizer reports the first optimal solution found.
Example of multiple optimal solutions
<p>
<ul>
<img src = "https
Step9: Correcting infeasible models
To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.
If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.
DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints.
Relaxing constraints by changing the model
In the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.
For example, a relaxation of the assembly time constraint is as follows
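One common form (matching the code in the next steps) moves an overtime variable to the right-hand side:
$$ 0.2\ desk\_production + 0.4\ cell\_production \le 400 + overtime $$
where overtime is itself bounded above and penalized in the objective.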
Step10: Modify the assembly time constraint by changing its right-hand side by adding overtime.
Note
Step11: Last, modify the objective expression to add the penalization term.
Note that we use Python's in-place subtraction operator (-=).
Step12: And solve again using DOcplex
Step13: Unbounded Variable vs. Unbounded model
A variable is unbounded when one or both of its bounds is infinite.
A model is unbounded when its objective value can be increased or decreased without limit.
The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded.
An unbounded model is almost certainly not correctly formulated.
While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.
By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero).
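For example, a variable can be made free in both directions by passing an explicit lower bound (a sketch; the name y is hypothetical):
python
y = m.continuous_var(lb=-m.infinity, name='y')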
Step14: Default optimality criteria for CPLEX optimizer
Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs.
The default optimality tolerance is -1e-6, with optimality criteria for the simplest form of an LP then being:
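Roughly, for a minimization LP this means a basic solution is accepted as optimal when every nonbasic variable at its lower bound has reduced cost $d_j \ge -10^{-6}$ (and $d_j \le 10^{-6}$ for nonbasic variables at their upper bound).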
Step15: Degeneracy
It is possible that multiple non-optimal solutions with the same objective value exist.
As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as degeneracy.
Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.
If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa.
Setting a LP algorithm with DOcplex
Users can change the algorithm by editing the lpmethod parameter of the model.
We won't go into details here, it suffices to know this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is for primal simplex, 2 is for dual simplex, and 4 is for barrier...
For example, choosing the barrier algorithm is done by setting value 4 to this parameter. We access the parameters property of the model and from there, assign the lpmethod parameter | <ASSISTANT_TASK:>
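A sketch of what that assignment looks like (using the model m built earlier):
python
m.parameters.lpmethod = 4 # 4 selects the barrier algorithm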
Python Code:
import sys
try:
import cplex
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install cplex
else:
!pip install --user cplex
Explanation: Tutorial: Linear Programming, (CPLEX Part 1)
This notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to
- describe the characteristics of an LP in terms of the objective, decision variables and constraints,
- formulate a simple LP model on paper,
- conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate.
You should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO addon in Watson Studio Premium for the full edition
Table of contents:
Introduction to Linear Programming
Example: a production problem
CPLEX Modeling for Python
Algorithms for solving LPs
Summary
References
Introduction to Linear Programming
In this topic, you’ll learn what the basic characteristics of a linear program are.
What is Linear Programming?
Linear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions.
What is a linear expression?
A linear expression is a scalar product, for example, the expression:
$$
\sum{a_i x_i}
$$
where a_i represents constants (that is, data) and x_i represents variables or unknowns.
Such an expression can also be written in short form as a vector product:
$$^{t}A X
$$
where $A$ is the vector of constants and $X$ is the vector of variables.
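For example, with $A = (2, 3)$ and $X = (x_1, x_2)$, the product $^{t}A X$ expands to the linear expression $2 x_1 + 3 x_2$.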
Note: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions.
Terms that are not allowed in linear expressions include
- multiplication of two or more variables (such as x times y),
- quadratic and higher order terms (such as x squared or x cubed),
- exponents,
- logarithms,
- absolute values.
What is a linear constraint?
A linear constraint is expressed by an equality or inequality as follows:
- $linear_expression = linear_expression$
- $linear_expression \le linear_expression$
- $linear_expression \ge linear_expression$
Any linear constraint can be rewritten as one or two expressions of the type linear expression is less than or equal to zero.
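For example, the constraint $3x + 2y \ge 5$ can be rewritten as $5 - 3x - 2y \le 0$.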
Note that strict inequality operators (that is, $>$ and $<$) are not allowed in linear constraints.
What is a continuous variable?
A variable (or decision variable) is an unknown of the problem. Continuous variables are variables that take their values in the set of real numbers (or an interval).
Restrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed.
Symbolic representation of an LP
A typical symbolic representation of a Linear Program is as follows:
$
minimize \sum c_{i} x_{i}\\
\\
subject\ to:\\
\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \ge b_{1}\\
\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \ge b_{2}\\
...
\ a_{m1}x_{1} + a_{m2} x_{2} ... + a_{mn} x_{n} \ge b_{m}\\
x_{1}, x_{2}...x_{n} \ge 0
$
This can be written in a concise form using matrices and vectors as:
$
min\ C^{t}x\\
s.\ t.\ Ax \ge B\\
x \ge 0
$
Where $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$.
Characteristics of a linear program
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/1.png?raw=true" >
</ul>
# Example: a production problem
In this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints.
You’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP.
## Problem description: telephone production
A telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones.
Each type of phone is assembled and painted by the company. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.
There are limits in terms of the company’s production capacity, and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant.
## Writing a descriptive model
It is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.
In order to come up with a descriptive model, consider the following questions:
- What are the decision variables?
- What is the objective?
- What are the constraints?
## Telephone production: a descriptive model
A possible descriptive model of the telephone production problem is as follows:
- Decision variables:
- Number of desk phones produced (DeskProduction)
- Number of cellular phones produced (CellProduction)
- Objective: Maximize profit
- Constraints:
1. The DeskProduction should be greater than or equal to 100.
2. The CellProduction should be greater than or equal to 100.
3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours.
4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.
## Writing a mathematical model
Convert the descriptive model into a mathematical model:
- Use the two decision variables DeskProduction and CellProduction
- Use the data given in the problem description (remember to convert minutes to hours where appropriate)
- Write the objective as a mathematical expression
- Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)
- Define the domain for the decision variables
### Telephone production: a mathematical model
To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:
$
maximize:\\
\ \ 12\ desk\_production + 20\ cell\_production\\
subject\ to: \\
\ \ desk\_production >= 100 \\
\ \ cell\_production >= 100 \\
\ \ 0.2\ desk\_production + 0.4\ cell\_production <= 400 \\
\ \ 0.5\ desk\_production + 0.4\ cell\_production <= 490 \\
$
### Using DOcplex to formulate the mathematical model in Python
Use the [DOcplex](http://ibmdecisionoptimization.github.io/docplex-doc/) Python library to write the mathematical model in Python.
This is done in four steps:
- create an instance of docplex.mp.Model to hold all model objects
- create decision variables,
- create linear constraints,
- finally, define the objective.
But first, we have to import the class `Model` from the docplex module.
## Use IBM Decision Optimization CPLEX Modeling for Python
Let's use the DOcplex Python library to write the mathematical model in Python.
### Step 1: Download the library
Install `CPLEX` (Community Edition) and `docplex` if they are not installed.
In `IBM Cloud Pak for Data as a Service` notebooks, `CPLEX` and `docplex` are preinstalled.
End of explanation
import sys
try:
import docplex.mp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
Explanation: Installs DOcplex if needed
End of explanation
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
Explanation: If either CPLEX or docplex were installed in the steps above, you will need to restart your jupyter kernel for the changes to be taken into account.
Step 2: Set up the prescriptive model
Create the model
All objects of the model belong to one model instance.
End of explanation
# by default, all variables in DOcplex have a lower bound of 0 and an infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
Explanation: Define the decision variables
The continuous variable desk represents the production of desk telephones.
The continuous variable cell represents the production of cell phones.
End of explanation
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
Explanation: Set up the constraints
Desk and cell phone production must both be greater than or equal to 100
Assembly time is limited
Painting time is limited.
End of explanation
m.maximize(12 * desk + 20 * cell)
Explanation: Express the objective
We want to maximize the expected profit.
End of explanation
m.print_information()
Explanation: A few remarks about how we formulated the mathematical model in Python using DOcplex:
- all arithmetic operations (+, *, -) are done using Python operators
- comparison operators used in writing linear constraints are Python comparison operators too.
Print information about the model
We can print information about the model to see how many objects of each type it holds:
End of explanation
s = m.solve()
m.print_solution()
Explanation: Graphical representation of a Linear Problem
A simple 2-dimensional LP (with 2 decision variables) can be represented graphically using a x- and y-axis.
This is often done to demonstrate optimization concepts.
To do this, follow these steps:
- Assign one variable to the x-axis and the other to the y-axis.
- Draw each of the constraints as you would draw any line in 2 dimensions.
- Use the signs of the constraints (=, <= or >=) to determine which side of each line falls within the feasible region (allowable solutions).
- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000)
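As an illustration, here is a rough matplotlib sketch of these steps for the telephone problem (the axis range is an arbitrary display choice; the isoprofit value of 4000 matches the example above):
```python
import numpy as np
import matplotlib.pyplot as plt

d = np.linspace(0, 1200, 200)  # desk production values on the x-axis
plt.plot(d, (400 - 0.2 * d) / 0.4, label='assembly: 0.2 desk + 0.4 cell <= 400')
plt.plot(d, (490 - 0.5 * d) / 0.4, label='painting: 0.5 desk + 0.4 cell <= 490')
plt.axvline(100, color='gray', linestyle=':', label='desk >= 100')
plt.axhline(100, color='gray', linestyle='--', label='cell >= 100')
plt.plot(d, (4000 - 12 * d) / 20, 'k--', label='isoprofit: 12 desk + 20 cell = 4000')
plt.xlabel('desk production')
plt.ylabel('cell production')
plt.legend()
plt.show()
```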
Feasible set of solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/19.png?raw=true" >
</ul>
This graphic shows the feasible region for the telephone problem.
Recall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to be desk and cell instead. Look at this diagram and search intuitively for the optimal solution. That is, which combination of desk and cell phones will yield the highest profit.
#### The optimal solution
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/20.png?raw=true" >
</ul>
To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize
$$12 * desk + 20 * cell
$$
To do this, first draw a line representing the objective by substituting a value for the objective.
Next move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD).
In a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF.
The optimal solution of a linear program always belongs to an extreme point of the feasible region (that is, at a vertex or an edge).
### Solve with the model
If you're using a Community Edition of CPLEX runtimes, then depending on the size of the problem the solve stage may fail, and a paying subscription or product installation will be needed.
In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.
End of explanation
# create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new problem; we expect a result of None as the model is now infeasible
ims = im.solve()
if ims is None:
print('- model is infeasible')
Explanation: In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.
Multiple Optimal Solutions
It is possible that an LP has multiple optimal solutions.
At least one optimal solution will be at a vertex.
By default, the CPLEX® Optimizer reports the first optimal solution found.
Example of multiple optimal solutions
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/22.png?raw=true" >
</ul>
This graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they are all extreme points within the feasible region.
### Binding and nonbinding constraints
A constraint is binding if the constraint becomes an equality when the solution values are substituted.
Graphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint.
In the telephone production problem, the constraint limiting time on the assembly machine is binding:
$$
0.2 \ desk + 0.4 \ cell <= 400 \\
desk = 300,\ cell = 850 \\
0.2(300) + 0.4(850) = 400
$$
The same is true for the time limit on the painting machine:
$$
0.5 \ desk + 0.4 \ cell <= 490 \\
0.5(300) + 0.4(850) = 490
$$
On the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal:
$$
desk >= 100\\
300 \neq 100
$$
### Infeasibility
A model is infeasible when no solution exists that satisfies all the constraints. This may be because:
- The model formulation is incorrect.
- The data is incorrect.
- The model and data are correct, but represent a real-world conflict in the system being modeled.
When faced with an infeasible model, it's not always easy to identify the source of the infeasibility.
DOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible.
#### An example of infeasible problem
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/26.png?raw=true" >
</ul>
This graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions.
#### Infeasible models in DOcplex
Calling `solve()` on an infeasible model returns None. Let's experiment with this in DOcplex. First, we take a copy of our model and add an extra infeasible constraint which states that desk telephone production must be greater than or equal to 1100:
End of explanation
overtime = m.continuous_var(name='overtime', ub=40)
Explanation: Correcting infeasible models
To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.
If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.
DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints.
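As a concrete sketch of this analysis (using docplex's relaxation utility from docplex.mp.relaxer, applied to the infeasible copy `im` built above; the exact report format may differ between versions):
```python
from docplex.mp.relaxer import Relaxer

# search for a minimal relaxation that makes the infeasible model solvable
rx = Relaxer()
relaxed_solution = rx.relax(im)
rx.print_information()  # reports which constraints were relaxed, and by how much
```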
Relaxing constraints by changing the model
In the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.
For example, a relaxation of the assembly time constraint is as follows:
$$
0.2 \ desk + 0.4\ cell <= 440
$$
Here, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned.
Relaxing model by converting hard constraints to soft constraints
A soft constraint is a constraint that can be violated in some circumstances.
A hard constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.
Converting hard constraints to soft is one way to resolve infeasibilities.
The original hard constraint on assembly time is as follows:
$$
0.2 \ desk + 0.4 \ cell <= 400
$$
You can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. First add an overtime term to the right-hand side:
$$
0.2 \ desk + 0.4 \ cell <= 400 + overtime
$$
Next, add a hard limit to the amount of overtime available:
$$
overtime <= 40
$$
Finally, add an additional cost to the objective to penalize use of overtime.
Assume that in this case overtime costs an additional $2/hour, then the new objective becomes:
$$
maximize\ 12 * desk + 20 * cell - 2 * overtime
$$
Implement the soft constraint model using DOcplex
First add an extra variable for overtime, with an upper bound of 40. This suffices to express the hard limit on overtime.
End of explanation
ct_assembly.rhs = 400 + overtime
Explanation: Modify the assembly time constraint by adding overtime to its right-hand side.
Note: this operation modifies the model by performing a side-effect on the constraint object. DOcplex allows dynamic editing of model elements.
End of explanation
m.maximize(12*desk + 20 * cell - 2 * overtime)
Explanation: Last, modify the objective expression to add the penalization term.
Note that the overtime penalty is simply subtracted using the Python `-` operator.
End of explanation
s2 = m.solve()
m.print_solution()
Explanation: And solve again using DOcplex:
End of explanation
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
Explanation: Unbounded Variable vs. Unbounded model
A variable is unbounded when one or both of its bounds is infinite.
A model is unbounded when its objective value can be increased or decreased without limit.
The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded.
An unbounded model is almost certainly not correctly formulated.
While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.
By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero).
Unbounded feasible region
The telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/32.png?raw=true" >
</ul>
## Algorithms for solving LPs
The IBM® CPLEX® Optimizers to solve LP problems in CPLEX include:
- Simplex Optimizer
- Dual-simplex Optimizer
- Barrier Optimizer
### The Simplex algorithm
The Simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.
The following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. The algorithm starts somewhere along the edge of the shaded feasible region and advances vertex-by-vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagram.
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/36.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/37.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/38.png?raw=true" >
</ul>
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/39.png?raw=true" >
</ul>
### The Revised Simplex algorithm
To improve the efficiency of the Simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the Revised Simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs.
### The Dual Simplex algorithm
#### The dual of a LP
The concept of duality is important in Linear Programming (LP). Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa.
#### A primal-dual pair
<p>
<ul>
<img src = "https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/42.png?raw=true" >
</ul>
*Primal (P)*
--------------------
$max\ z=\sum_{i} c_{i}x_{i}$
*Dual (D)*
-------------------------------
$min\ w= \sum_{j}b_{j}y_{j}$
- Each constraint in the primal has an associated dual variable, $y_j$.
- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.
- In LP, the optimal objective values of D and P are equal, and this occurs where these bounds meet.
- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem.
#### Dual prices
In any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.
For each constraint in the primal problem, its associated dual price indicates how much the dual objective will change with a unit change in the right hand side of the constraint.
The dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.
The dual price of a binding constraint can help you make decisions regarding the constraint.
For example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not.
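As a sketch, docplex exposes these values through the dual_value property of linear constraints once an LP solve has succeeded (using the two capacity constraints defined earlier):
```python
# dual (shadow) prices of the assembly and painting capacity constraints
print('* assembly constraint has dual price: {0}'.format(ct_assembly.dual_value))
print('* painting constraint has dual price: {0}'.format(ct_painting.dual_value))
```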
#### The Dual Simplex algorithm
The Simplex algorithm works by finding a feasible solution and moving progressively toward optimality.
The Dual Simplex algorithm implicitly uses the dual to try and find an optimal solution to the primal as early as it can, regardless of whether the solution is feasible or not.
It then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found.
In CPLEX, the Dual-Simplex Optimizer is the first choice for most LP problems.
### Basic solutions and basic variables
You learned earlier that the Simplex algorithm travels from vertex to vertex to search for the optimal solution.
A solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the Simplex algorithm involves setting a subset of variables to zero at each iteration.
These variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next.
### Reduced Costs
The reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.
Consider the simplest form of an LP:
$
minimize\ c^{t}x\\
s.t. \\
Ax = b \\
x \ge 0
$
If $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as:
$$
c - y^{t}A
$$
Such a basic solution is optimal if:
$$
c - y^{t}A \ge 0
$$
If all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in the variable value, and therefore the solution (when minimizing) is optimal.
#### Getting reduced cost values with DOcplex
DOcplex lets you access the reduced costs of variables after a successful solve. Let's experiment with the two decision variables of our problem:
End of explanation
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve()
# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))
Explanation: Default optimality criteria for CPLEX optimizer
Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs.
The default optimality tolerance is 1e-6, so the optimality criterion for the simplest form of an LP becomes:
$$
c - y^{t}A > -10^{-6}
$$
You can adjust this optimality tolerance, for example if the algorithm takes very long to converge and has already achieved a solution sufficiently close to optimality.
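A minimal sketch of adjusting it from DOcplex (assuming the parameter path mirrors the CPLEX parameter hierarchy, as `lpmethod` does later in this notebook):
```python
# loosen the simplex optimality tolerance from its 1e-6 default
m.parameters.simplex.tolerances.optimality = 1e-5
```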
Reduced Costs and multiple optimal solutions
In the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables.
For larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value).
In order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex.
Slack values
For any solution, the difference between the left and right hand sides of a constraint is known as the slack value for that constraint.
For example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.
In the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.
The slack value for a binding constraint is always zero, that is, the constraint is met exactly.
You can determine which constraints are binding in a solution by examining the slack values with DOcplex.
This might help to better interpret the solution and help suggest which constraints may benefit from a change in bounds or a change into a soft constraint.
Accessing slack values with DOcplex
As an example, let's examine the slack values of some constraints in our problem, after we revert the change to soft constraints:
End of explanation
m.parameters.lpmethod = 4
m.solve(log_output=True)
Explanation: Degeneracy
It is possible that multiple non-optimal solutions with the same objective value exist.
As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as degeneracy.
Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.
If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa.
Setting an LP algorithm with DOcplex
Users can change the algorithm by editing the lpmethod parameter of the model.
We won't go into details here; it suffices to know this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is for primal simplex, 2 is for dual simplex, and 4 is for barrier.
For example, choosing the barrier algorithm is done by setting this parameter to 4. We access the parameters property of the model and from there assign the lpmethod parameter:
End of explanation
<END_TASK> |
15,742 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
AIA Response Functions
Step2: SSW Response Functions
Calculate the wavelength and temperature response functions using SSW.
Step3: Detailed Channel Information
Note first that we can easily read detailed information about each of the AIA channels. Here, I've used instrument files from local SSW installation.
Step4: Wavelength Response Functions
Next, we'll actually instantiate the response object and calculate the wavelength response functions. We'll go ahead and calculate the response for all 7 EUV channels.
Step5: When the Response object is instantiated, if no instrument files are supplied, it checks the the default SunPy downloads directory for the EUV and FUV instrument files. If they are not there, they are downloaded from the Goddard SSW mirror. This way, a local installation of SSW is not needed. However, to calculate the temperature response functions, you'll need CHIANTI and ChiantiPy installed.
Step6: Note that this shows the SSW AIA wavelength response functions with the time-dependent corrections so I guess we need to account for this somehow...
Temperature Response Functions
Now, calculate the temperature response functions. The first time you do this, it will build an emissivity table which may take around 30 minutes, but this only needs to be done once. Alternatively, you can also build your own emissivity table using the built-in function. This is necessary if, for example, you want to change the abundances or ionization equilibrium file, or calculate over a different temperature/density range.
Step7: Note that there is an issue with the 304 angstrom channel (even when the time-dependent correction to the wavelength response is not included). Perhaps there is some correction made in the 304 angstrom case, or ions that contribute in this wavlength range, that I've not accounted for here.
Oddly, the lack of a time-dependent correction in the SunPy case actually gives better agreement between the two in the 304 angstrom case.
You can also choose the number of ions that you'd like to include in the temperature response function calculation. So for example if wanted to see how only including Fe ions affected the response functions,
Step8: Or no Fe at all.
Step9: Calculating a Custom Emission Table
If you want to change the temperature/density/wavelength over which the emissivities are calculated or change things like the abundance, you can calculate your own emissivity table and then pass that file name in when calculating the temperature response functions.
In this short example, we'll calculate a new table using photospheric abundances, with constant density, and using only handful of ions with very high ionization states. To show the progress bar in the notebook, we need to pass in the notebook=True kwarg. Default is false and nothing will be displayed in notebook. For this to actually work though, you seem to need to have ipywidgets installed.
Step10: Detailed Information about Emissivity Tables
The EmissTableInterface object provides a convenient way to inspect the different parts of the atomic data used to calculate the response functions, including the temperature, density, and line and continuum contributions. | <ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import astropy.units as u
import ChiantiPy.tools.data as ch_data
import hissw
from sunpy.instr.aia.response import Response
from sunpy.instr.aia.response_utils import EmissTableInterface,aia_instr_properties_to_table,make_emiss_table
%matplotlib inline
Explanation: AIA Response Functions: Comparing SunPy and SSW Results
End of explanation
ssw_script = """
; wavelength response
wresp = aia_get_response(/area,/dn,/timedepend_date,/evenorm)
wavelength = wresp.wave
wresponse_94 = wresp.a94.ea
wresponse_131 = wresp.a131.ea
wresponse_171 = wresp.a171.ea
wresponse_193 = wresp.a193.ea
wresponse_211 = wresp.a211.ea
wresponse_304 = wresp.a304.ea
wresponse_335 = wresp.a335.ea
; temperature response
tresp = aia_get_response(/temp,/dn,/timedepend_date,/evenorm)
temperature = 10.^tresp.logte
tresponse_94 = tresp.a94.tresp
tresponse_131 = tresp.a131.tresp
tresponse_171 = tresp.a171.tresp
tresponse_193 = tresp.a193.tresp
tresponse_211 = tresp.a211.tresp
tresponse_304 = tresp.a304.tresp
tresponse_335 = tresp.a335.tresp
"""
save_vars = ['temperature',
'wavelength',
'tresponse_94',
'tresponse_131',
'tresponse_171',
'tresponse_193',
'tresponse_211',
'tresponse_304',
'tresponse_335',
'wresponse_94',
'wresponse_131',
'wresponse_171',
'wresponse_193',
'wresponse_211',
'wresponse_304',
'wresponse_335']
ssw_runner = hissw.ScriptMaker(ssw_pkg_list=['sdo/aia'],ssw_path_list=['aia'])
ssw_response_functions = ssw_runner.run([(ssw_script,{})],save_vars=save_vars,cleanup=True,verbose=True)
Explanation: SSW Response Functions
Calculate the wavelength and temperature response functions using SSW.
End of explanation
info_table = aia_instr_properties_to_table([94,131,171,193,211,335],
[os.path.join('/usr/local/ssw/','sdo/aia/response/aia_V6_all_fullinst.genx')])
info_table
Explanation: Detailed Channel Information
Note first that we can easily read detailed information about each of the AIA channels. Here, I've used instrument files from local SSW installation.
End of explanation
aia_response = Response()
Explanation: Wavelength Response Functions
Next, we'll actually instantiate the response object and calculate the wavelength response functions. We'll go ahead and calculate the response for all 7 EUV channels.
End of explanation
wavelength_response = aia_response.calculate_wavelength_response()
fig,axes = plt.subplots(3,3,figsize=(12,12))
for channel,ax in zip(sorted(wavelength_response.keys()),axes.flatten()):
# sunpy
ax.plot(wavelength_response[channel].wavelength,wavelength_response[channel].response,
linestyle=':',lw=3,color=sns.color_palette('deep')[0],label='SunPy')
# SSW
ax.plot(ssw_response_functions['wavelength'],ssw_response_functions['wresponse_{}'.format(channel)],
linestyle='-',color=sns.color_palette('deep')[2],label='SSW')
ax.set_xlim([channel-20,channel+20])
ax.set_xlabel(r'$\lambda$ [{:latex}]'.format(wavelength_response[channel].wavelength.unit))
ax.set_ylabel(r'$R_{{{}}}$ [{:latex}]'.format(channel,wavelength_response[channel].response.unit))
axes[0,0].legend(loc='best')
plt.tight_layout()
Explanation: When the Response object is instantiated, if no instrument files are supplied, it checks the default SunPy downloads directory for the EUV and FUV instrument files. If they are not there, they are downloaded from the Goddard SSW mirror. This way, a local installation of SSW is not needed. However, to calculate the temperature response functions, you'll need CHIANTI and ChiantiPy installed.
End of explanation
temperature_response = aia_response.calculate_temperature_response()
fig,axes = plt.subplots(3,3,figsize=(12,12),sharex=True,sharey=True)
for channel,ax in zip(sorted(wavelength_response.keys()),axes.flatten()):
# sunpy
ax.plot(temperature_response[channel].temperature,temperature_response[channel].response,
linestyle=':',lw=3,color=sns.color_palette('deep')[0],label='SunPy')
# SSW
ax.plot(ssw_response_functions['temperature'],ssw_response_functions['tresponse_{}'.format(channel)],
linestyle='-',color=sns.color_palette('deep')[2],label='SSW')
ax.set_xlabel(r'$T$ [{:latex}]'.format(temperature_response[channel].temperature.unit))
ax.set_ylabel(r'$K_{{{}}}$ [{:latex}]'.format(channel,temperature_response[channel].response.unit))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim([1e-30,2e-24])
ax.set_xlim([1e5,1e8])
axes[0,0].legend(loc='best')
plt.tight_layout()
Explanation: Note that this shows the SSW AIA wavelength response functions with the time-dependent corrections so I guess we need to account for this somehow...
Temperature Response Functions
Now, calculate the temperature response functions. The first time you do this, it will build an emissivity table which may take around 30 minutes, but this only needs to be done once. Alternatively, you can also build your own emissivity table using the built-in function. This is necessary if, for example, you want to change the abundances or ionization equilibrium file, or calculate over a different temperature/density range.
End of explanation
fe_ions = [ion for ion in ch_data.MasterList if ion.split('_')[0]=='fe']
Explanation: Note that there is an issue with the 304 angstrom channel (even when the time-dependent correction to the wavelength response is not included). Perhaps there is some correction made in the 304 angstrom case, or ions that contribute in this wavelength range, that I've not accounted for here.
Oddly, the lack of a time-dependent correction in the SunPy case actually gives better agreement between the two in the 304 angstrom case.
You can also choose the number of ions that you'd like to include in the temperature response function calculation. So for example, if you wanted to see how only including Fe ions affected the response functions,
End of explanation
no_fe_ions = [ion for ion in ch_data.MasterList if ion.split('_')[0]!='fe']
temperature_response_fe_only = aia_response.calculate_temperature_response(ion_list=fe_ions)
temperature_response_no_fe = aia_response.calculate_temperature_response(ion_list=no_fe_ions)
fig,axes = plt.subplots(3,3,figsize=(12,12),sharex=True,sharey=True)
for channel,ax in zip(sorted(wavelength_response.keys()),axes.flatten()):
# all_ions
ax.plot(temperature_response[channel].temperature,temperature_response[channel].response,
linestyle='-',color=sns.color_palette('deep')[0],label='All ions')
# Fe only
ax.plot(temperature_response_fe_only[channel].temperature,
temperature_response_fe_only[channel].response,
linestyle='-',color=sns.color_palette('deep')[2],label='Fe only')
# No Fe
ax.plot(temperature_response_no_fe[channel].temperature,
temperature_response_no_fe[channel].response,
linestyle='-',color=sns.color_palette('deep')[1],label='No Fe')
ax.set_xlabel(r'$T$ [{:latex}]'.format(temperature_response[channel].temperature.unit))
ax.set_ylabel(r'$K_{{{}}}$ [{:latex}]'.format(channel,temperature_response[channel].response.unit))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim([1e-30,2e-24])
ax.set_xlim([1e5,1e8])
axes[0,0].legend(loc='best')
plt.tight_layout()
Explanation: Or no Fe at all.
End of explanation
temperature = np.logspace(5.5,8,100)*u.K
density = 1e9*np.ones(temperature.shape)*u.cm**(-3)
abundance_file = 'sun_photospheric_1998_grevesse'
high_ionization_state_ions = [ion for ion in ch_data.MasterList if ion[-1] != 'd' and int(ion.split('_')[1]) >= 20]
emiss_table_filename = '/home/wtb2/Desktop/test_emiss_table.h5'
make_emiss_table(emiss_table_filename,ion_list=high_ionization_state_ions,temperature=temperature,density=density,
notebook=True, abundance_file=abundance_file)
custom_temperature_response = aia_response.calculate_temperature_response(ion_list=high_ionization_state_ions,
emiss_table_file='/home/wtb2/Desktop/test_emiss_table.h5')
fig,axes = plt.subplots(3,3,figsize=(12,12),sharex=True,sharey=True)
for channel,ax in zip(sorted(wavelength_response.keys()),axes.flatten()):
# all_ions
ax.plot(temperature_response[channel].temperature,temperature_response[channel].response,
linestyle='-',color=sns.color_palette('deep')[0],label='Original')
# Fe only
ax.plot(custom_temperature_response[channel].temperature,
custom_temperature_response[channel].response,
linestyle='-',color=sns.color_palette('deep')[2],label='Custom')
ax.set_xlabel(r'$T$ [{:latex}]'.format(temperature_response[channel].temperature.unit))
ax.set_ylabel(r'$K_{{{}}}$ [{:latex}]'.format(channel,temperature_response[channel].response.unit))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_ylim([1e-30,2e-24])
ax.set_xlim([1e5,1e8])
axes[0,0].legend(loc='best')
plt.tight_layout()
Explanation: Calculating a Custom Emission Table
If you want to change the temperature/density/wavelength over which the emissivities are calculated or change things like the abundance, you can calculate your own emissivity table and then pass that file name in when calculating the temperature response functions.
In this short example, we'll calculate a new table using photospheric abundances, with constant density, and using only a handful of ions with very high ionization states. To show the progress bar in the notebook, we need to pass in the notebook=True kwarg. The default is False, in which case nothing is displayed in the notebook. For this to actually work though, you seem to need to have ipywidgets installed.
End of explanation
emiss_table = EmissTableInterface('/home/wtb2/sunpy/data/aia_emiss_table.h5')
emiss_table.temperature
emiss_table.density
for i in range(emiss_table['fe_12'].contribution_function.shape[0]):
plt.plot(emiss_table.temperature,emiss_table['fe_12'].contribution_function[:,i])
plt.xlabel(r'$T$ [{:latex}]'.format(emiss_table.temperature.unit))
plt.ylabel(r'$G(T)$ [{:latex}], Fe XII'.format(emiss_table['fe_12'].contribution_function.unit))
plt.ylim([1e-30,1e-18])
plt.yscale('log')
plt.xscale('log')
Explanation: Detailed Information about Emissivity Tables
The EmissTableInterface object provides a convenient way to inspect the different parts of the atomic data used to calculate the response functions, including the temperature, density, and line and continuum contributions.
End of explanation
<END_TASK> |
15,743 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Setup/Cleaning Data
Step1: 1) What is the average synapse density per voxel? How does it compare to average and min/max synapse density per bin?
Step2: Note the 2 large spikes in the histogram.
1.1) What fraction of the samples have 0 density? How much data was thrown out due to unmasked being 0?
Step3: 2) Is joint distribution uniform?
Step4: Conclude that the joint distribution is not uniform.
3) Are the marginal distributions uniform?
Step6: Conclude that none of the marginals are uniform.
4) Regression
Step7: Overall, regressions not successful.
4.1) Marginal regressions
Step8: 5) What is the optimal number of clusters on the 'cleaned' data?
Step9: Observe local maximums at 4 clusters and 12 clusters.
6) Compute mean, covariance on each cluster seperately
Step10: Observations
Step11: 7) Are distribution within clusters uniform?
Step12: 8) Graph clusters in Euclidean 3D Space
Step13: 9) Can we estimate the gradient of density?
Step14: 9.1) What's the average gradient? What's the standard dev of each component of the gradient?
Step15: 10) PCA
Step16: Observe fairly well defined boundary between clusters. Lets plot the 2D PCA when there are 12 clusters. | <ASSISTANT_TASK:>
Python Code:
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import urllib2
import scipy.stats as stats
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
Explanation: Setup/Cleaning Data
End of explanation
total_unmasked = np.sum(data_thresholded[:, 3])
total_syn = np.sum(data_thresholded[:, 4])
print "avg density per voxel: ", total_syn/total_unmasked
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
print "average per bin: ", np.average(a), ", std dev: ", np.std(a)
print "max/min bin density: ", np.max(a), ", ", np.min(a)
print np.sum(a)
hist_n, bins, _ = plt.hist(a, 5000)
plt.xlim(-.0001, .0035)
plt.show()
Explanation: 1) What is the average synapse density per voxel? How does it compare to average and min/max synapse density per bin?
End of explanation
print "fraction: ", hist_n[0]/len(a)
ux = np.unique(data_thresholded[:, 0]).shape[0]
uy = np.unique(data_thresholded[:, 1]).shape[0]
uz = np.unique(data_thresholded[:, 2]).shape[0]
exp = ux*uy*uz
actual = data_thresholded.shape[0]
print "actual: %d, expected: %d, difference: %d" % (actual, exp, exp-actual)
Explanation: Note the 2 large spikes in the histogram.
1.1) What fraction of the samples have 0 density? How much data was thrown out due to unmasked being 0?
End of explanation
# chi squared test on all bins
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
# divide synapses/unmasked by std_dev
# syn_normalized = np.apply_along_axis(normalize_syn, 1, syn_unmasked,
# np.mean(syn_unmasked[:,-1]), np.std(syn_unmasked[:,-1]))
syn_normalized = syn_unmasked
sum_syn_norm = np.sum(syn_normalized[:, 3])
avg_syn_norm = (sum_syn_norm/n)*np.ones((n))
syn_norm_1col = syn_normalized[:, -1]
print avg_syn_norm
print syn_norm_1col
print stats.chisquare(syn_norm_1col, avg_syn_norm)
Explanation: 2) Is joint distribution uniform?
End of explanation
def marginalize_along_axis(axis):
unique = np.unique(syn_normalized[:, axis])
idx_dict = dict(zip(unique, range(len(unique))))
syn_per_unique = np.zeros(len(unique))
for row in syn_normalized[:,]:
syn_per_unique[idx_dict[row[axis]]] += row[-1]
return syn_per_unique
for axis in range(3):
marginalized_data = marginalize_along_axis(axis)
n = len(np.unique(syn_normalized[:, axis]))
print "n=", n
avg = sum_syn_norm/n
avg_vec = np.ones((n))*avg
print "axis = ", axis, "chisquare test: ", stats.chisquare(marginalized_data, avg_vec)
Explanation: Conclude that the joint distribution is not uniform.
3) Are the marginal distributions uniform?
End of explanation
# load our regressions
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.neighbors import KNeighborsRegressor as KNN
from sklearn.ensemble import RandomForestRegressor as RF
from sklearn.preprocessing import PolynomialFeatures as PF
from sklearn.pipeline import Pipeline
from sklearn import cross_validation
np.random.seed(1)
names = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']
regressions = [LinearRegression(),
LinearSVR(C=1.0),
KNN(n_neighbors=10, algorithm='auto'),
RF(max_depth=5, max_features=1),
Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]
k_fold = 10
def normalize_syn(row, pos, scale):
row[-1] -= pos
row[-1] *= 1.0/scale
return row
syn_normalized = np.apply_along_axis(normalize_syn, 1, syn_unmasked,
np.mean(syn_unmasked[:,-1]), np.std(syn_unmasked[:,-1]))
syn_normalized = np.apply_along_axis(normalize_syn, 1, syn_unmasked,
np.min(syn_unmasked[:,-1]),
np.max(syn_unmasked[:,-1])-np.min(syn_unmasked[:,-1]))
syn_normalized = syn_unmasked
X = syn_normalized[:, [0, 1, 2]]
Y = syn_normalized[:, -1]
for name, reg in zip(names, regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (name, scores.mean(), scores.std() * 2))
Explanation: Conclude that none of the marginals are uniform.
4) Regression: predicting synapses/unmasked conditioned on x, y, z
End of explanation
for i in xrange(3):
X = syn_normalized[:, i].reshape(-1, 1)
Y = syn_normalized[:, -1]
print i
for name, reg in zip(names, regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (name, scores.mean(), scores.std() * 2))
print
Explanation: Overall, regressions not successful.
4.1) Marginal regressions
End of explanation
import sklearn.mixture as mixture
max_clusters = 15
bic = np.array([])
i = np.array(range(1, max_clusters))
for idx in range(1, max_clusters):
gmm = mixture.GMM(n_components=idx, n_iter=1000, covariance_type='diag', random_state=1)
gmm.fit(syn_normalized)
bic = np.append(bic, gmm.bic(syn_normalized))
print bic
plt.figure(figsize=(7,7))
plt.plot(i, 1.0/bic)
plt.title('BIC')
plt.ylabel('score')
plt.xlabel('number of clusters')
plt.show()
Explanation: 5) What is the optimal number of clusters on the 'cleaned' data?
End of explanation
# to keep model as simple as possible, let's assume 4 clusters
n_clusters = 4
gmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag', random_state=1)
clusters = [[] for i in xrange(n_clusters)]
predicted = gmm.fit_predict(syn_normalized)
for label, row in zip(predicted, syn_normalized[:,]):
clusters[label].append(row)
for i in xrange(n_clusters):
clusters[i] = np.array(clusters[i])
print "# of samples in cluster %d: %d" % (i+1, len(clusters[i]))
print "centroid: ", np.average(clusters[i], axis=0)
print "cluster covariance: "
covar = np.cov(clusters[i].T)
print covar
print "determinant of covariance matrix: ", np.linalg.det(covar)
print
Explanation: Observe local maximums at 4 clusters and 12 clusters.
6) Compute mean, covariance on each cluster separately
End of explanation
# compare the diagonal covariances computed above to what's returned by the GMM
print gmm.covars_
Explanation: Observations:
- 2 larger clusters of approximately equal size, and 2 smaller clusters of approximately equal size
- determinant of covariance matrix correlates with number of samples in cluster
- note on-diagonal covariances are generally higher
- fairly large deviation between the average synapses across clusters
End of explanation
# check if uniform distribution within cluster
for cluster in clusters:
sum_syn = np.sum(cluster[:, -1])
avg_syn_vec = (sum_syn/cluster.shape[0])*np.ones((cluster.shape[0]))
print stats.chisquare(cluster[:, -1], avg_syn_vec)
Explanation: 7) Are distributions within clusters uniform?
End of explanation
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
ax.view_init()
ax.dist = 10 # distance
ax.scatter(
syn_normalized[:, 0], syn_normalized[:, 1], syn_normalized[:, 2], # data
c=predicted, # marker colour
alpha=.5
)
plt.show()
#TODO: graph the centroids and diagonal covariances as ellipsoids
Explanation: 8) Graph clusters in Euclidean 3D Space
End of explanation
syn_normalized = syn_unmasked
uniques = [np.unique(syn_normalized[:, i]) for i in xrange(3)]
coord_mapping = {}
for xi, x in enumerate(uniques[0]):
for yi, y in enumerate(uniques[1]):
for zi, z in enumerate(uniques[2]):
coord_mapping[(x, y, z)] = (xi, yi, zi)
gridded_data = np.empty((len(uniques[0]), len(uniques[1]), len(uniques[2])))
for row in syn_normalized[:, ]:
coord = coord_mapping[tuple(row[:3])]
gridded_data[coord[0], coord[1], coord[2]] = row[-1]
dx = uniques[0][1]-uniques[0][0]
dy = uniques[1][1]-uniques[1][0]
dz = uniques[2][1]-uniques[2][0]
grad = np.gradient(gridded_data, dx, dy, dz)
def get_gradient_components(x, y, z):
u = grad[0][x, y, z]
v = grad[1][x, y, z]
w = grad[2][x, y, z]
return (u, v, w)
x, y, z = np.meshgrid(np.arange(1, 41, 4), np.arange(1, 81, 8), np.arange(1, 11))
fig = plt.figure(figsize=(10, 10))
ax = fig.gca(projection='3d')
ax.quiver(x, y, z, grad[0][1::8, 1::4, 1:], grad[1][1::8, 1::4, 1:], grad[2][1::8, 1::4, 1:], length=2, color="Tomato", alpha=.8,
arrow_length_ratio=.4)
ax.view_init(elev=18, azim=30)
ax.dist = 8
plt.show()
Explanation: 9) Can we estimate the gradient of density?
End of explanation
print "Avg. gradient: (%.4f, %.4f, %.4f)" % (np.average(grad[0]), np.average(grad[1]), np.average(grad[2]))
print "Std. dev per element: (%.4f, %.4f, %.4f)" % (np.std(grad[0]), np.std(grad[1]), np.std(grad[2]))
print grad[0]
Explanation: 9.1) What's the average gradient? What's the standard dev of each component of the gradient?
End of explanation
from sklearn.decomposition import PCA
# center each variable and give it unit variance for PCA
def center(row, means, std_devs):
for idx, mean, std_dev in zip(range(4), means, std_devs):
row[idx] -= mean
row[idx] *= 1.0/std_dev
return row
syn_centered = np.apply_along_axis(center, 1, syn_normalized,
*zip(*[(np.average(syn_normalized[:, i]),
np.std(syn_normalized[:, i])) for i in range(4)]))
pca = PCA(n_components = 4)
transform = pca.fit_transform(syn_centered)
print pca.components_
print pca.explained_variance_ratio_
print transform.shape
# plot the clusters along the first 2 principal components
n_clusters = 4
gmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag', random_state=1)
predicted = gmm.fit_predict(syn_normalized)
plt.scatter(transform[:, 0], transform[:, 1], c=predicted, alpha=.3)
plt.show()
Explanation: 10) PCA
End of explanation
n_clusters = 12
gmm = mixture.GMM(n_components=n_clusters, n_iter=1000, covariance_type='diag', random_state=1)
predicted = gmm.fit_predict(syn_normalized)
plt.scatter(transform[:, 0], transform[:, 1], c=predicted, alpha=.3)
plt.show()
Explanation: Observe the fairly well defined boundary between clusters. Let's plot the 2D PCA when there are 12 clusters.
End of explanation
<END_TASK> |
15,744 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis
Step1: Loading data
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of publicly available datasets <datasets>,
which MNE-Python can download and manage for you.
We'll start this tutorial by loading one of the example datasets (called
"sample-dataset"), which contains EEG and MEG data from one subject
performing an audiovisual experiment, along with structural MRI scans for
that subject. The
Step2: By default,
Step3:
Step4: Preprocessing
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
Step5: Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
Step6: Detecting experimental events
The sample dataset includes several
Step7: The resulting events array is an ordinary 3-column
Step8: Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs with Event IDs 1 and 3). An
example of this is shown in the next section. There is also a convenient
Step9: For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using
Step10: We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default
Step11: Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the
left or right, we'll use
Step12: Like
Step13: <div class="alert alert-info"><h4>Note</h4><p>Both
Step14: Estimating evoked responses
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the
Step15: We can also get a more detailed view of each
Step16: Evoked objects can also be combined to show contrasts between conditions,
using the mne.combine_evoked function. A simple difference can be
generated by passing weights=[1, -1]. We'll then plot the difference wave
at each sensor using ~mne.Evoked.plot_topo
Step17: Inverse modeling
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's
Step18: Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir) | <ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import mne
Explanation: Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis:
loading data, epoching, averaging, plotting, and estimating cortical activity
from sensor data. It introduces the core MNE-Python data structures
:class:~mne.io.Raw, :class:~mne.Epochs, :class:~mne.Evoked, and
:class:~mne.SourceEstimate, and covers a lot of ground fairly quickly (at the
expense of depth). Subsequent tutorials address each of these topics in greater
detail.
We begin by importing the necessary Python modules:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Loading data
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of publicly available datasets <datasets>,
which MNE-Python can download and manage for you.
We'll start this tutorial by loading one of the example datasets (called
"sample-dataset"), which contains EEG and MEG data from one subject
performing an audiovisual experiment, along with structural MRI scans for
that subject. The :func:mne.datasets.sample.data_path function will
automatically download the dataset if it isn't found in one of the expected
locations, then return the directory path to the dataset (see the
documentation of :func:~mne.datasets.sample.data_path for a list of places
it checks before downloading). Note also that for this tutorial to run
smoothly on our servers, we're using a filtered and downsampled version of
the data (:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version
(:file:sample_audvis_raw.fif) is also included in the sample dataset and
could be substituted here when running the tutorial locally.
End of explanation
print(raw)
print(raw.info)
Explanation: By default, :func:~mne.io.read_raw_fif displays some information about the
file it's loading; for example, here it tells us that there are four
"projection items" in the file along with the recorded data; those are
:term:SSP projectors <projector> calculated to remove environmental noise
from the MEG signals, plus a projector to mean-reference the EEG channels;
these are discussed in the tutorial tut-projectors-background.
In addition to the information displayed during loading,
you can get a glimpse of the basic details of a :class:~mne.io.Raw object
by printing it; even more is available by printing its info attribute
(a :class:dictionary-like object <mne.Info> that is preserved across
:class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked
objects). The info data structure keeps track of channel locations,
applied filters, projectors, etc. Notice especially the chs entry,
showing that MNE-Python detects different sensor types and handles each
appropriately. See tut-info-class for more on the :class:~mne.Info
class.
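As a quick illustration (a minimal sketch using standard Info keys):
```python
# a few commonly inspected entries of raw.info
print(raw.info['sfreq'])    # sampling frequency in Hz
print(raw.info['bads'])     # channels marked as bad, if any
print(raw.info['chs'][0])   # full detail dictionary for the first channel
```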
End of explanation
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
Explanation: :class:~mne.io.Raw objects also have several built-in plotting methods;
here we show the power spectral density (PSD) for each sensor type with
:meth:~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with
:meth:~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below
50 Hz (since our data are low-pass filtered at 40 Hz). In interactive Python
sessions, :meth:~mne.io.Raw.plot is interactive and allows scrolling,
scaling, bad channel marking, annotation, projector toggling, etc.
End of explanation
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
Explanation: Preprocessing
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean
up our data by performing independent components analysis
(:class:~mne.preprocessing.ICA); for brevity we'll skip the steps that
helped us determine which components best capture the artifacts (see
tut-artifact-ica for a detailed walk-through of that process).
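The kind of inspection skipped here typically looks like the following sketch (both methods belong to mne.preprocessing.ICA; see the linked tutorial for the full procedure):
```python
# browse component time courses and scalp topographies to spot artifacts
ica.plot_sources(raw)
ica.plot_components()
```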
End of explanation
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
Explanation: Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
:meth:~mne.preprocessing.ICA.apply method requires the raw data to be
loaded into memory (by default it's only read from disk as-needed), so we'll
use :meth:~mne.io.Raw.load_data first. We'll also make a copy of the
:class:~mne.io.Raw object so we can compare the signal before and after
artifact removal side-by-side:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
Explanation: Detecting experimental events
The sample dataset includes several :term:"STIM" channels <stim channel>
that recorded electrical
signals sent from the stimulus delivery computer (as brief DC shifts /
squarewave pulses). These pulses (often called "triggers") are used in this
dataset to mark experimental events: stimulus onset, stimulus type, and
participant response (button press). The individual STIM channels are
combined onto a single channel, in such a way that voltage
levels on that channel can be unambiguously decoded as a particular event
type. On older Neuromag systems (such as that used to record the sample data)
this summation channel was called STI 014, so we can pass that channel
name to the :func:mne.find_events function to recover the timing and
identity of the stimulus events.
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
Explanation: The resulting events array is an ordinary 3-column :class:NumPy array
<numpy.ndarray>, with sample number in the first column and integer event ID
in the last column; the middle column is usually ignored. Rather than keeping
track of integer event IDs, we can provide an event dictionary that maps
the integer IDs to experimental conditions or events. In this dataset, the
mapping looks like this:
+----------+----------------------------------------------------------+
| Event ID | Condition |
+==========+==========================================================+
| 1 | auditory stimulus (tone) to the left ear |
+----------+----------------------------------------------------------+
| 2 | auditory stimulus (tone) to the right ear |
+----------+----------------------------------------------------------+
| 3 | visual stimulus (checkerboard) to the left visual field |
+----------+----------------------------------------------------------+
| 4 | visual stimulus (checkerboard) to the right visual field |
+----------+----------------------------------------------------------+
| 5 | smiley face (catch trial) |
+----------+----------------------------------------------------------+
| 32 | subject button press |
+----------+----------------------------------------------------------+
End of explanation
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp)
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs with Event IDs 1 and 3). An
example of this is shown in the next section. There is also a convenient
:func:~mne.viz.plot_events function for visualizing the distribution of
events across the duration of the recording (to make sure event detection
worked as expected). Here we'll also make use of the :class:~mne.Info
attribute to get the sampling frequency of the recording (so our x-axis will
be in seconds instead of in samples).
End of explanation
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
Explanation: For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using :func:mne.make_fixed_length_events and then
proceeding with epoching as described in the next section.
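A minimal sketch of that approach (the duration value here is illustrative, not
prescribed by this tutorial):
fixed_events = mne.make_fixed_length_events(raw, duration=2.)
fixed_epochs = mne.Epochs(raw, fixed_events, tmin=0., tmax=2., baseline=None)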
Epoching continuous data
The :class:~mne.io.Raw object and the events array are the bare minimum
needed to create an :class:~mne.Epochs object, which we create with the
:class:~mne.Epochs class constructor. Here we'll also specify some data
quality constraints: we'll reject any epoch where peak-to-peak signal
amplitude is beyond reasonable limits for that channel type. This is done
with a rejection dictionary; you may include or omit thresholds for any of
the channel types present in your data. The values given here are reasonable
for this particular dataset, but may need to be adapted for different
hardware or recording conditions. For a more automated approach, consider
using the autoreject package_.
End of explanation
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
Explanation: We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default
:class:~mne.io.Raw and :class:~mne.Epochs data aren't loaded into memory
(they're accessed from disk only when needed), but here we'll force loading
into memory using the preload=True parameter so that we can see the
results of the rejection criteria being applied:
End of explanation
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
Explanation: Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the
left or right, we'll use :meth:~mne.Epochs.equalize_event_counts first to
randomly sample epochs from each condition to match the number of epochs
present in the condition with the fewest good epochs.
End of explanation
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
Explanation: Like :class:~mne.io.Raw objects, :class:~mne.Epochs objects also have a
number of built-in plotting methods. One is :meth:~mne.Epochs.plot_image,
which shows each epoch as one row of an image map, with color representing
signal magnitude; the average evoked response and the sensor location are
shown below the image:
End of explanation
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Both :class:`~mne.io.Raw` and :class:`~mne.Epochs` objects have
:meth:`~mne.Epochs.get_data` methods that return the underlying data
as a :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``
parameter for subselecting which channel(s) to return; ``raw.get_data()``
has additional parameters for restricting the time domain. The resulting
matrices have dimension ``(n_channels, n_times)`` for
:class:`~mne.io.Raw` and ``(n_epochs, n_channels, n_times)`` for
:class:`~mne.Epochs`.</p></div>
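A quick sketch of that (not one of the tutorial's own cells):
aud_data = aud_epochs.get_data()  # array of shape (n_epochs, n_channels, n_times)
print(aud_data.shape)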
Time-frequency analysis
The :mod:mne.time_frequency submodule provides implementations of several
algorithms to compute time-frequency representations, power spectral density,
and cross-spectral density. Here, for example, we'll compute for the auditory
epochs the induced power at different frequencies and times, using Morlet
wavelets. On this dataset the result is not especially informative (it just
shows the evoked "auditory N100" response); see here
<inter-trial-coherence> for a more extended example on a dataset with richer
frequency content.
End of explanation
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
Explanation: Estimating evoked responses
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the :meth:~mne.Epochs.average method on the :class:~mne.Epochs object,
and then using a function from the :mod:mne.viz module to compare the
global field power for each sensor type of the two :class:~mne.Evoked
objects:
End of explanation
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
Explanation: We can also get a more detailed view of each :class:~mne.Evoked object
using other plotting methods such as :meth:~mne.Evoked.plot_joint or
:meth:~mne.Evoked.plot_topomap. Here we'll examine just the EEG channels,
and see the classic auditory evoked N100-P200 pattern over dorso-frontal
electrodes, then plot scalp topographies at some additional arbitrary times:
End of explanation
evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])
evoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)
Explanation: Evoked objects can also be combined to show contrasts between conditions,
using the mne.combine_evoked function. A simple difference can be
generated by passing weights=[1, -1]. We'll then plot the difference wave
at each sensor using ~mne.Evoked.plot_topo:
End of explanation
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
Explanation: Inverse modeling
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's :term:source space (a set of points either
on the cortical surface or within the cortical volume of that subject, as
estimated by structural MRI scans). MNE-Python supports lots of ways of doing
this (dynamic statistical parametric mapping, dipole fitting, beamformers,
etc.); here we'll use minimum-norm estimation (MNE) to generate a continuous
map of activation constrained to the cortical surface. MNE uses a linear
:term:inverse operator to project EEG+MEG sensor measurements into the
source space. The inverse operator is computed from the
:term:forward solution for this subject and an estimate of the
covariance of sensor measurements <tut_compute_covariance>. For this
tutorial we'll skip those computational steps and load a pre-computed inverse
operator from disk (it's included with the sample data
<sample-dataset>). Because this "inverse problem" is underdetermined (there
is no unique solution), here we further constrain the solution by providing a
regularization parameter specifying the relative smoothness of the current
estimates in terms of a signal-to-noise ratio (where "noise" here is akin to
baseline activity level across all of cortex).
End of explanation
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
Explanation: Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir):
End of explanation
<END_TASK> |
15,745 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
cf. pp. 60 McDowell, 6th Ed. What you Need to Know, Core data Structures, Algorithms, and Concepts
| Data Structures | Algorithms | Concepts |
|
Step1: cf. 3 Stacks and Queues, Cracking the Coding Interview, 6th Ed., McDowell, stack uses LIFO - as in a stack of dinner plates, the most recent item added to the stack is the 1st item to be removed.
Step2: cf. 3.6. Simple Balanced Parentheses
Step3: Linked List
Advantages over arrays, 1. dynamic size 2. ease of insertion/deletion
Drawbacks; 1) random access not allowed, access elements sequentially, 2) extra memory space for pointer | <ASSISTANT_TASK:>
Python Code:
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
s=Stack()
print(s.isEmpty())
s.push(4)
s.push('dog')
print(s.peek())
s.push(True)
print(s.size())
print(s.isEmpty())
s.push(8.4)
print(s.items)
print(s.pop())
print(s.pop())
print(s.size())
Explanation: cf. pp. 60 McDowell, 6th Ed. What you Need to Know, Core data Structures, Algorithms, and Concepts
| Data Structures | Algorithms | Concepts |
| :-- | -- | :-- |
| Linked Lists | Breadth-First Search | Bit Manipulation |
| Trees, Tries, & Graphs | Depth-First Search | Memory (Stack vs. Heap) |
| Stacks & Queues | Binary Search | Recursion |
| Heaps | Merge Sort | Dynamic Programming |
| Vectors/ArrayLists | QuickSort | Big O Time & Space |
| Hash Tables | | |
http://interactivepython.org/runestone/static/pythonds/BasicDS/ImplementingaStackinPython.html
End of explanation
class Queue:
def __init__(self):
self.items = []
def add(self,item):
self.items.append( item )
    def remove(self):
        # FIFO: pop and return the item that was added first
        return self.items.pop(0)
def peek(self):
return self.items[0]
def isEmpty(self):
return self.items == []
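# quick sanity check of FIFO behaviour (not in the original notebook):
# the oldest item should be served first
q = Queue()
q.add('first')
q.add('second')
print(q.peek())   # 'first'
q.remove()
print(q.peek())   # 'second'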
Explanation: cf. 3 Stacks and Queues, Cracking the Coding Interview, 6th Ed., McDowell, stack uses LIFO - as in a stack of dinner plates, the most recent item added to the stack is the 1st item to be removed.
End of explanation
def parChecker(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol == "(":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
s.pop()
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
print(parChecker('((()))'))
print(parChecker('(()'))
def parChecker(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol in "([{":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
top = s.pop()
if not matches(top,symbol):
balanced = False
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
def matches(opening, closing):
    # parameters renamed to avoid shadowing the built-ins open/close
    opens = "([{"
    closers = ")]}"
    return opens.index(opening) == closers.index(closing)
print(parChecker('{{([][])}()}') )
print(parChecker('[{()]'))
def divideBy2(decNumber):
remstack = Stack()
while decNumber >0:
rem = decNumber % 2
remstack.push(rem)
decNumber = decNumber // 2
binString = ""
while not remstack.isEmpty():
binString = binString + str(remstack.pop())
return binString
print(divideBy2(42))
divideBy2(233)
def baseConverter(decNumber,base):
digits = "0123456789ABCDEF"
remstack=Stack()
while decNumber >0:
rem = decNumber % base
remstack.push(rem)
decNumber = decNumber // base
newString = ""
while not remstack.isEmpty():
newString = newString + digits[remstack.pop()]
return newString
print(baseConverter(25,2))
print(baseConverter(25,16))
print(baseConverter(25,8))
print(baseConverter(256,16))
print(baseConverter(26,26))
Explanation: cf. 3.6. Simple Balanced Parentheses
End of explanation
# Node class
class Node:
# Function to initialize the node object
def __init__(self,data):
self.data = data # Assign data
self.next= None # Initialize
# next as null
# Linked List class
class LinkedList:
# Function to initialize the Linked
# List object
def __init__(self):
self.head = None
# This function prints contents of linked list
# starting from head
# traversal of a linked list
def printList(self):
temp = self.head
while (temp):
            print(temp.data)
temp = temp.next
# Start with the empty list
llist = LinkedList()
llist.head = Node(1)
second = Node(2)
third = Node(3)
llist.head.next = second; # Link 1st node with second
second.next = third
llist.printList()
class Node:
def __init__(self,val):
self.val = val
self.next = None
class LinkedList:
def __init__(self,val=None):
if val is None:
self.head = None
else:
self.head = Node(val)
    def insertEnd(self,val):
        if self.head is None:   # guard: the list may have been created empty
            self.head = Node(val)
            return
        temp = self.head
        while(temp.next): # check if temp has a next
            temp = temp.next # keep traversing
        temp.next = Node(val)
def printList(self):
temp = self.head
while (temp): # stop when temp is a None, which could happen with next in Node
            print(temp.val)
temp = temp.next
llist = LinkedList(1)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(4)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(6)
llist.printList()
llist.insertEnd(7)
llist.printList()
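# sketch (not in the original notebook): insertion at the head is O(1), one of
# the "ease of insertion" advantages of linked lists (insertEnd above is O(n))
def insertFront(lst, val):
    node = Node(val)
    node.next = lst.head
    lst.head = node

insertFront(llist, 0)
llist.printList()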
Explanation: Linked List
Advantages over arrays: 1. dynamic size, 2. ease of insertion/deletion
Drawbacks: 1) random access is not allowed; elements must be accessed sequentially, 2) extra memory space is needed for the pointer in each node
End of explanation
<END_TASK> |
15,746 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Credit Risk - Computing the speed of repayment of loans
1 - Introduction
Credit Risk refers to the likelihood that a borrower will not be able to repay a loan contracted by a lender. Thus throughout the years, financial institutions have developed various ways to quantify that risk so as to limit their exposure.
Here, instead of simply modeling whether a borrower will repay, by using Survival Analysis, it becomes possible to determine when this will happen. Indeed, it is easy to consider that fully repaying a loan is an explicit event, and therefore not having paid back the loan yet can be defined as the censored situation.
By using this configuration, one can predict the speed of repayment of a loan.
In this tutorial, we will be using the German Credit dataset, which was originally provided by Professor Dr. Hans Hofmann of the University of Hamburg and available on the UCI Machine Learning Repository. The current version was adapted to be directly usable with a minimum amount of feature transformation.
2 - Dataset
2.1 - Description and Overview
Step1: 2.2 - From categorical to numerical
There are several categorical features that need to be encoded into one-hot vectors.
Step2: 3 - Exploratory Data Analysis
As this tutorial is mainly designed to provide an example of how to use Pysurvival, we will not perform a thorough exploratory data analysis but we greatly encourage the reader to do so by taking a look at the predictive maintenance tutorial that provides a very detailed study.
Here, we will just check if the dataset contains Null values or duplicated rows, and have a look at feature correlations.
3.1 - Null values and duplicates
The first thing to do is checking if the raw_dataset contains Null values and has duplicated rows.
Step3: As it turns out the raw_dataset doesn't have any Null values or duplicates.
3.2 - Correlations
Let's compute and visualize the correlation between the features
Step4: Based on the correlations chart, we should remove the following features
* credit_history_existing_credit_paid
* housing_own
Step5: 4 - Modeling
So as to perform cross-validation later on and assess the performances of the model, let's split the dataset into training and testing sets.
Step6: Let's now fit a Neural MTLR model to the training set.
Note
Step7: We can take a look at the values of N-MTLR loss function to ensure that the fitting isn't incomplete
Step8: 5 - Cross Validation
In order to assess the model performance, we previously split the original dataset into training and testing sets, so that we can now compute its performance metrics on the testing set
Step9: 5.2 - Brier Score
The Brier score measures the average discrepancies between the status and the estimated probabilities at a given time. Thus, the lower the score (usually below 0.25), the better the predictive performance. To assess the overall error measure across multiple time points, the Integrated Brier Score (IBS) is usually computed as well.
Step10: The IBS is equal to 0.07 on the entire model time axis. This indicates that the model will have good predictive abilities.
6 - Predictions
6.1 - Overall predictions
Now that we have built a model that seems to provide great performances, let's compare the following
Step11: the time series of the actual and predicted number of loans that were still active, for each time t.
Step12: Both comparisons show that the model does a great job predicting the number of loans that were fully repaid (average absolute error of 4.5 loans) or that were still active (average absolute error of 18.4 loans) for all times t of the 70+ months time window.
6.2 - Individual predictions
Now that we know that we can provide reliable predictions for an entire cohort, let's compute the speed of repayment at the individual level. The speed of repayment is given by $\text{Speed}(t) = 1 - \text{Survival}(t)$
First, we can construct the risk groups based on risk scores distribution. The helper function create_risk_groups, which can be found in pysurvival.utils.display, will help us do that
Step13: Here, it is possible to distinguish 2 main groups, low and high risk groups. Because the C-index is high, the model will be able to perfectly rank the survival times of a random unit of each group, such that $t_{high} \leq t_{low}$.
Let's randomly select individual unit in each group and compare their speed of repayment functions. To demonstrate our point, we will purposely select units which experienced an event to visualize the actual time of event.
Step14: 7 - Conclusion
We can now save our model so as to put it in production and score future borrowers.
Thanks to Survival Analysis, we can see that it is indeed possible to predict the speed of repayment of loans and forecast the number of loans that will be fully repaid throughout time, which is a great advantage over classification modeling. | <ASSISTANT_TASK:>
Python Code:
# Importing modules
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from pysurvival.datasets import Dataset
%pylab inline
# Reading the dataset
raw_dataset = Dataset('credit_risk').load()
print("The raw_dataset has the following shape: {}.".format(raw_dataset.shape))
raw_dataset.head(3)
Explanation: Credit Risk - Computing the speed of repayment of loans
1 - Introduction
Credit Risk refers to the likelihood that a borrower will not be able to repay a loan contracted by a lender. Thus throughout the years, financial institutions have developed various ways to quantify that risk so as to limit their exposure.
Here, instead of simply modeling whether a borrower will repay, by using Survival Analysis, it becomes possible to determine when this will happen. Indeed, it is easy to consider that fully repaying a loan is an explicit event, and therefore not having paid back the loan yet can be defined as the censored situation.
By using this configuration, one can predict the speed of repayment of a loan.
In this tutorial, we will be using the German Credit dataset, which was originally provided by Professor Dr. Hans Hofmann of the University of Hamburg and available on the UCI Machine Learning Repository. The current version was adapted to be directly usable with a minimum amount of feature transformation.
2 - Dataset
2.1 - Description and Overview
End of explanation
# From category to numerical
category_columns = [
'checking_account_status', 'credit_history', 'purpose',
'savings_account_status', 'employment_years', 'personal_status',
'other_debtors', 'property', 'other_installment_plans', 'housing', 'job']
dataset = pd.get_dummies(raw_dataset, columns=category_columns, drop_first=True)
# Creating the time and event columns
time_column = 'duration'
event_column = 'full_repaid'
# Creating the features
features = np.setdiff1d(dataset.columns, [time_column, event_column] ).tolist()
Explanation: 2.2 - From categorical to numerical
There are several categorical features that need to be encoded into one-hot vectors.
End of explanation
# Checking for null values
N_null = sum(dataset[features].isnull().sum())
print("The raw_dataset contains {} null values".format(N_null)) #0 null values
# Removing duplicates if there exist
N_dupli = sum(dataset.duplicated(keep='first'))
dataset = dataset.drop_duplicates(keep='first').reset_index(drop=True)
print("The raw_dataset contains {} duplicates".format(N_dupli))
# Number of samples in the dataset
N = dataset.shape[0]
Explanation: 3 - Exploratory Data Analysis
As this tutorial is mainly designed to provide an example of how to use Pysurvival, we will not perform a thorough exploratory data analysis but we greatly encourage the reader to do so by taking a look at the predictive maintenance tutorial that provides a very detailed study.
Here, we will just check if the dataset contains Null values or duplicated rows, and have a look at feature correlations.
3.1 - Null values and duplicates
The first thing to do is checking if the raw_dataset contains Null values and has duplicated rows.
End of explanation
from pysurvival.utils.display import correlation_matrix
correlation_matrix(dataset[features], figure_size=(40,15), text_fontsize=8)
Explanation: As it turns out the raw_dataset doesn't have any Null values or duplicates.
3.2 - Correlations
Let's compute and visualize the correlation between the features
End of explanation
to_remove = ['credit_history_existing_credit_paid', 'housing_own']
features = np.setdiff1d(features, to_remove).tolist()
Explanation: Based on the correlations chart, we should remove the following features
* credit_history_existing_credit_paid
* housing_own
End of explanation
# Building training and testing sets
from sklearn.model_selection import train_test_split
index_train, index_test = train_test_split( range(N), test_size = 0.4)
data_train = dataset.loc[index_train].reset_index( drop = True )
data_test = dataset.loc[index_test].reset_index( drop = True )
# Creating the X, T and E inputs
X_train, X_test = data_train[features], data_test[features]
T_train, T_test = data_train[time_column], data_test[time_column]
E_train, E_test = data_train[event_column], data_test[event_column]
Explanation: 4 - Modeling
So as to perform cross-validation later on and assess the performances of the model, let's split the dataset into training and testing sets.
End of explanation
from pysurvival.models.multi_task import NeuralMultiTaskModel
# Initializing the Neural MTLR with a time axis split into 100 intervals
structure = [
{'activation': 'ReLU', 'num_units': 70},
{'activation': 'Swish', 'num_units': 70},
]
neural_mtlr = NeuralMultiTaskModel(bins=100, structure=structure)
# Fitting the model
neural_mtlr.fit(X_train, T_train, E_train,
init_method = 'orthogonal', optimizer ='rprop', lr = 1e-3,
l2_reg = 1e-1, l2_smooth = 1e-1,
batch_normalization = True, bn_and_dropout = True,
dropout=0.6, num_epochs = 500)
Explanation: Let's now fit a Neural MTLR model to the training set.
Note: The choice of the structure of the neural network was obtained using grid-search hyperparameter selection, not displayed in this tutorial.
End of explanation
from pysurvival.utils.display import display_loss_values
display_loss_values(neural_mtlr)
Explanation: We can take a look at the values of N-MTLR loss function to ensure that the fitting isn't incomplete
End of explanation
from pysurvival.utils.metrics import concordance_index
c_index = concordance_index(neural_mtlr, X_test, T_test, E_test)
print('C-index: {:.2f}'.format(c_index)) #0.70
Explanation: 5 - Cross Validation
In order to assess the model performance, we previously split the original dataset into training and testing sets, so that we can now compute its performance metrics on the testing set:
5.1 - C-index
The C-index represents the global assessment of the model discrimination power: this is the model’s ability to correctly provide a reliable ranking of the survival times based on the individual risk scores. In general, when the C-index is close to 1, the model has an almost perfect discriminatory power; but if it is close to 0.5, it has no ability to discriminate between low and high risk subjects.
End of explanation
from pysurvival.utils.display import integrated_brier_score
ibs = integrated_brier_score(neural_mtlr, X_test, T_test, E_test, t_max=100,
figure_size=(20, 6.5) )
print('IBS: {:.2f}'.format(ibs)) #0.07
Explanation: 5.2 - Brier Score
The Brier score measures the average discrepancies between the status and the estimated probabilities at a given time. Thus, the lower the score (usually below 0.25), the better the predictive performance. To assess the overall error measure across multiple time points, the Integrated Brier Score (IBS) is usually computed as well.
End of explanation
from pysurvival.utils.display import compare_to_actual
results = compare_to_actual(neural_mtlr, X_test, T_test, E_test,
is_at_risk = False, figure_size=(16, 6),
metrics = ['rmse', 'mean', 'median'])
Explanation: The IBS is equal to 0.07 on the entire model time axis. This indicates that the model will have good predictive abilities.
6 - Predictions
6.1 - Overall predictions
Now that we have built a model that seems to provide great performances, let's compare the following:
the time series of the actual and predicted number of loans that were fully repaid, for each time t.
End of explanation
results = compare_to_actual(neural_mtlr, X_test, T_test, E_test,
is_at_risk = True, figure_size=(16, 6),
metrics = ['rmse', 'mean', 'median'])
Explanation: the time series of the actual and predicted number of loans that were still active, for each time t.
End of explanation
from pysurvival.utils.display import create_risk_groups
risk_groups = create_risk_groups(model=neural_mtlr, X=X_test,
use_log = False, num_bins=30, figure_size=(20, 4),
low= {'lower_bound':0, 'upper_bound':430, 'color':'red'},
high= {'lower_bound':430, 'upper_bound':800, 'color':'blue'}
)
Explanation: Both comparisons show that the model does a great job predicting the number of loans that were fully repaid (average absolute error of 4.5 loans) or that were still active (average absolute error of 18.4 loans) for all times t of the 70+ months time window.
6.2 - Individual predictions
Now that we know that we can provide reliable predictions for an entire cohort, let's compute the speed of repayment at the individual level. The speed of repayment is given by $\text{Speed}(t) = 1 - \text{Survival}(t)$
First, we can construct the risk groups based on risk scores distribution. The helper function create_risk_groups, which can be found in pysurvival.utils.display, will help us do that:
End of explanation
# Initializing the figure
fig, ax = plt.subplots(figsize=(15, 5))
# Selecting a random individual that experienced an event from each group
groups = []
for i, (label, (color, indexes)) in enumerate(risk_groups.items()) :
# Selecting the individuals that belong to this group
if len(indexes) == 0 :
continue
X = X_test.values[indexes, :]
T = T_test.values[indexes]
E = E_test.values[indexes]
# Randomly extracting an individual that experienced an event
choices = np.argwhere((E==1.)).flatten()
if len(choices) == 0 :
continue
k = np.random.choice( choices, 1)[0]
# Saving the time of event
t = T[k]
# Computing the Survival function for all times t
cdf = 1. - neural_mtlr.predict_survival(X[k, :]).flatten()
# Displaying the functions
label_ = '{} risk'.format(label)
plt.plot(neural_mtlr.times, cdf, color = color, label=label_, lw=2)
groups.append(label)
# Actual time
plt.axvline(x=t, color=color, ls ='--')
ax.annotate('T={:.1f}'.format(t), xy=(t, 0.5*(1.+0.2*i)),
xytext=(t, 0.5*(1.+0.2*i)), fontsize=12)
# Show everything
groups_str = ', '.join(groups)
title = "Comparing Cumulative Density functions between {} risk grades".format(groups_str)
plt.legend(fontsize=12)
plt.title(title, fontsize=15)
plt.xlim(0, 65)
plt.ylim(0, 1.05)
plt.show()
Explanation: Here, it is possible to distinguish 2 main groups, low and high risk groups. Because the C-index is high, the model will be able to perfectly rank the survival times of a random unit of each group, such that $t_{high} \leq t_{low}$.
Let's randomly select an individual unit from each group and compare their speed of repayment functions. To demonstrate our point, we will purposely select units which experienced an event to visualize the actual time of event.
End of explanation
# Let's now save our model
from pysurvival.utils import save_model
save_model(neural_mtlr, '/Users/xxx/Desktop/credit_neural_mtlr.zip')
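# later, in production (sketch): reload the saved model and score new borrowers;
# load_model is the counterpart of save_model in pysurvival.utils, and
# new_borrowers_X below is a hypothetical feature matrix of new applicants
# from pysurvival.utils import load_model
# model = load_model('/Users/xxx/Desktop/credit_neural_mtlr.zip')
# risk_scores = model.predict_risk(new_borrowers_X)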
Explanation: 7 - Conclusion
We can now save our model so as to put it in production and score future borrowers.
Thanks to Survival Analysis, we can see that it is indeed possible to predict the speed of repayment of loans and forecast the number of loans that will be fully repaid throughout time, which is a great advantage over classification modeling.
End of explanation
<END_TASK> |
15,747 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Getting data into Python (basic python i/o)
Step1: Basic python string manipulation
Step2: CHALLENGE
Step3: Introducing TextBlob
Like a supercharged string, with lots of NLP niceties
Step4: Summarizing/keywording text
How might we find representative words or phrases of a document?
A place to start
Step5: Challenge
Step6: We could continue to add on stopwords as we try to make these keywords better. But it's kind of like playing whack-a-mole
An additional solution to The Problem
Step7: For each word $ w $ in a given document $ D $, we can multiply the term frequency
$$\frac { { D }_{ w } }{ { W }_{ D } } $$
where $ { D }_{ w } $ is the number of occurrences of word $ w $ in document $ D $
and $ { W }_{ D } $ is the total number of words in document $ D $
with the word's IDF that we just calculated to get TF-IDF scores, the highest ones being words that likely to be good representatives of that document.
Step8: Note that TF-IDF can be tweaked in lots of other ways if you aren't getting good results.
It can also be done with "n-grams"— phrases that are n words long to capture multi word phrases like "gay rights" or "hillary clinton"
Additional demonstrations
Boiling down words
Step9: Seeing words in context
Step10: Sentiment Analysis | <ASSISTANT_TASK:>
Python Code:
import csv

# completed sketch of this workshop blank -- the filename 'comments.csv' is a
# stand-in for whatever CSV file you are working with
csvfile = open('comments.csv', 'rb')
reader = csv.reader(csvfile)
data = []
for line in reader:
    line[3] = line[3].decode('utf-8')
    data.append(line)   # the original blank never stored the rows
# getting the number of rows
print(len(data))
#taking a look at the first row
print(data[0])
comment_text = data[0][-1]   # the comment text lives in the last column
Explanation: Getting data into Python (basic python i/o)
End of explanation
comment_text
# strings are like lists of characters
comment_text[0]
# use a colon for start:end indexes
comment_text[0:20]
# they can be stuck together easily
comment_text + '!!!'
# and split apart (one suggested completion of this blank)
split_on_questions = comment_text.split('?')
# it's easy to strip whitespace off of them
for string in split_on_questions:
    print(string.strip())
# and cast them to one case
cleaned = comment_text.lower()
cleaned
# join them back together
'?'.join(split_on_questions)
# and look for substring inside them
'the' in comment_text
Explanation: Basic python string manipulation
End of explanation
count = 0
for row in data:
    comment_text = row[-1]
    count += comment_text.count('Hilary') + comment_text.count('Clinton')
print(count)
Explanation: CHALLENGE: count the number of times the words "Hilary" or "Clinton" appear in the dataset
End of explanation
from textblob import TextBlob

blob = TextBlob(data[80][-1])
blob
# we can get lists of sentences
blob.sentences
# lists of words
blob.words
# lists of "tokens" (punctuation included)
blob.tokens
# even parts of speech and noun phrases
blob.tags
blob.noun_phrases
Explanation: Introducing TextBlob
Like a supercharged string, with lots of NLP niceties
End of explanation
import collections

word_count = collections.Counter(blob.words)
word_count
Explanation: Summarizing/keywording text
How might we find representative words or phrases of a document?
A place to start: which words appear at the highest frequency in this document?
End of explanation
import nltk
nltk.download()   # fetch the stopwords corpus on first use
stopwords = nltk.corpus.stopwords.words('english')
for key in list(word_count.keys()):
    if key in stopwords:
        del word_count[key]
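# one possible solution to the challenge (overall counts across every comment),
# using Counter's update method described in the notes that follow
overall_counts = collections.Counter()
for row in data[1:]:
    overall_counts.update(TextBlob(row[-1].lower()).words)
for key in list(overall_counts.keys()):
    if key in stopwords:
        del overall_counts[key]
overall_counts.most_common(10)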
Explanation: Challenge: get overall word counts for all comments combined
potential approaches:
* glue together all comments into one big blob
* get word counts for each comment individually and use Counter's update function
The Problem: words we use frequently don't make good unique identifiers.
One solution: use a list of words we don't want to include
"Stop Words"
End of explanation
import math

N_documents = float(len(data))
word_document_counts = collections.Counter()
word_idf = {}
for row in data[1:]:
    blob = TextBlob(row[-1].lower())
    word_document_counts.update(set(blob.words))   # count each word once per document
# calculate IDFs
for word, docs_with_word in word_document_counts.items():
    word_idf[word] = math.log(N_documents / docs_with_word)
Explanation: We could continue to add on stopwords as we try to make these keywords better. But it's kind of like playing whack-a-mole
An additional solution to The Problem: add a new term to our "representative-ness" measure that accounts for the overall rarity of the word
$$\frac { { n }_{ w } }{ N } $$
where ${ n }_{ w }$ is the number of documents containing word $ w $, and $ N $ is the total number of documents.
But we want a potential keyword to have a lower score if it is common in the corpus and a higher score if it is rarer, so we flip it:
$$\frac { N }{ { n }_{ w } } $$
It's also common to take the log of this to reduce the amount of disparity between extremely common and extremely uncommon terms.
$$ \log \frac { N }{ { n }_{ w } } $$
This is called IDF, or Inverse Document Frequency. Let's calculate it for all the words in our comment dataset!
End of explanation
comment = data[80][-1]
blob = TextBlob(comment.lower())
num_words_in_comment = len(blob.words)
word_count = blob.word_counts
tf_scores = {}
for word, count in word_count.iteritems():
if word not in stopwords and len(word) > 2:
        tf_scores[word] = count / float(num_words_in_comment)
tf_idf = {}
for word, tf in tf_scores.iteritems():
    tf_idf[word] = tf * word_idf.get(word, 0.)
sorted(tf_idf.iteritems(), key=lambda k: k[1], reverse=True)[:5]
Explanation: For each word $ w $ in a given document $ D $, we can multiply the term frequency
$$\frac { { D }_{ w } }{ { W }_{ D } } $$
where $ { D }_{ w } $ is the number of occurrences of word $ w $ in document $ D $
and $ { W }_{ D } $ is the total number of words in document $ D $
with the word's IDF that we just calculated to get TF-IDF scores, the highest ones being words that are likely to be good representatives of that document.
End of explanation
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
print stemmer.stem('political')
print stemmer.stem('politics')
print stemmer.stem('politician')
Explanation: Note that TF-IDF can be tweaked in lots of other ways if you aren't getting good results.
It can also be done with "n-grams"— phrases that are n words long to capture multi word phrases like "gay rights" or "hillary clinton"
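For instance, a rough sketch using TextBlob's built-in ngrams helper:
bigrams = TextBlob(data[80][-1].lower()).ngrams(n=2)
bigram_counts = collections.Counter(' '.join(bg) for bg in bigrams)
bigram_counts.most_common(5)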
Additional demonstrations
Boiling down words: stemming
End of explanation
from nltk.text import Text
tokens = TextBlob(data[80][-1]).tokens
text_object = Text(tokens)
text_object.concordance('Hilary')
Explanation: Seeing words in context: concordance
End of explanation
blob = TextBlob(data[41][-1])
blob
blob.sentiment
blob.sentences[1].sentiment
Explanation: Sentiment Analysis
End of explanation
<END_TASK> |
15,748 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lesson 10 - Text Learning
When learning from text the biggest problem is that different text have different length. A smaller email would require lesser features while longer email would require more features.
Bag of words
Make a dictonary of counts of all the words that we care about.
word order does not matter
long phrases give different vectors
complex phrases cannot be handled like "chicago bulls"
Step1: Not all words are equal
like the, hi etc.
stopwords
occur very frequently, low information and should be removed
Step2: Not all unique words different
unresponsive
response
responsivity
responsiveness
respond
All of them can be passed through to get a root/stem - respon
We don't need all of them as their meaning is only slightly different and we don't get information. | <ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
string1 = "hi aseem the car will be late regards company"
string2 = "hi company why will it be late I paid in advance regards aseem"
string3 = "hi aseem we don't know why will it be late regards company of company"
email_list = [string1, string2, string3]
vectorizer.fit(email_list)
bag_of_words = vectorizer.transform(email_list)
print vectorizer.vocabulary_
print bag_of_words
Explanation: Lesson 10 - Text Learning
When learning from text, the biggest problem is that different texts have different lengths. A short email would need fewer features, while a long email would need more.
Bag of words
Make a dictionary of counts of all the words that we care about.
word order does not matter
long phrases give different vectors
complex phrases cannot be handled like "chicago bulls"
End of explanation
import nltk
nltk.download()
from nltk.corpus import stopwords
sw = stopwords.words("english")
len(sw)
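# a small sketch of applying them: CountVectorizer can drop English stopwords itself
vectorizer_sw = CountVectorizer(stop_words="english")
bag_sw = vectorizer_sw.fit_transform(email_list)
print(vectorizer_sw.vocabulary_)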
Explanation: Not all words are equal
like the, hi etc.
stopwords
occur very frequently, carry little information, and should be removed
End of explanation
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english")
print stemmer.stem("responsiveness")
print stemmer.stem("responsivity")
print stemmer.stem("unresponsive")
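# sketch: folding the stemmer into CountVectorizer with a custom analyzer,
# so that variants like 'responsive'/'responsiveness' collapse into one feature
analyzer = CountVectorizer().build_analyzer()
def stemmed_words(doc):
    return (stemmer.stem(w) for w in analyzer(doc))
stem_vectorizer = CountVectorizer(analyzer=stemmed_words)
print(stem_vectorizer.fit_transform(email_list).toarray())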
Explanation: Not all unique words different
unresponsive
response
responsivity
responsiveness
respond
All of them can be passed through a stemmer to get a common root/stem: respon
We don't need all of them separately, as their meanings differ only slightly and keeping them apart adds little information.
End of explanation
<END_TASK> |
15,749 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Student-Performance-Evaluation using Classification-Regression
Here we would try to predict student performance in secondary education (high school).
We would perform data analysis for 3 cases
Step1: CASE 1
Step2: Feature importances with forests of trees
This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
Step3: Decision Tree accuracy and time elapsed caculation
Step4: cross validation for DT
Step5: Tuning our hyperparameters using GridSearch
Step6: Random Forest accuracy and time elapsed caculation
Step7: cross validation for RF
Step8: Receiver Operating Characteristic (ROC) curve
Step9: Tuning Models using GridSearch
Step10: Naive Bayes accuracy and time elapsed caculation
Step11: cross-validation for NB
Step12: KNN accuracy and time elapsed caculation
Step13: cross validation for KNN
Step14: SVM accuracy and time elapsed caculation
Step15: cross validation for SVM
Step16:
Step17: CASE 2
Step18: Feature importances with forests of trees
This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
Step19: Decision Tree accuracy and time elapsed caculation
Step20: cross validation for DT
Step21: Tuning our hyperparameters using GridSearch
Step22: Random Forest accuracy and time elapsed caculation
Step23: cross validation for RF
Step24: Tuning Models using GridSearch
Step25: Naive Bayes accuracy and time elapsed caculation
Step26: cross-validation for NB
Step27: KNN accuracy and time elapsed caculation
Step28: cross validation for KNN
Step29: SVM accuracy and time elapsed caculation
Step30: cross validation for SVM
Step31: Case 3
Step32: Using regularized methods for regression
A Lasso Regression model can be initialized as follows
Step33: Similarly, Ridge regression can be used
Step34: Lastly, the ElasticNet implementation allows us to vary the L1 to L2 ratio
Step35: For example, if we set l1_ratio to 1.0, the ElasticNet regressor would be
equal to LASSO regression.
Decision tree regression
Step36: Random forest regression
Step37: Linear Regression
Step38: Cross Validation
Step39: Fitting models with gradient descent
SGDRegressor
Step40: Support Vector Machines for regression
The regression version of SVM can be used instead to find the hyperplane (note how easy is to change the classification method in scikit-learn!). We will try a linear kernel, a polynomial kernel, and finally, a rbf kernel. For more information on kernels, see http
Step41: Random Forests for Regression
Finally, let's try again Random Forests, in their Extra Trees, and Regression version
Step42: An interesting side effect of random forest classification, is that you can measure how 'important' each feature is when predicting the final result
Step43: Finally, evaluate our classifiers on the testing set | <ASSISTANT_TASK:>
Python Code:
import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from time import time
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
# read .csv from provided dataset
csv_filename="student/student-mat.csv"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename, sep=";")
df.head()
df.describe()
Explanation: Student-Performance-Evaluation using Classification-Regression
Here we will try to predict student performance in secondary education (high school).
We will perform data analysis for 3 cases:
Case 1: Binary-Classification :-G3>10:-1-else-0
Case 2: Multi-Class-Classification
Case 3: Regression
Data Set Information:
This data approaches student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school related features) and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued at the 3rd period), while G1 and G2 correspond to the 1st and 2nd period grades. It is more difficult to predict G3 without G2 and G1, but such prediction is much more useful.
Dataset : http://archive.ics.uci.edu/ml/datasets/Student+Performance
Attributes for both student-mat.csv (Math course) and student-por.csv (Portuguese language course) datasets:
school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira)
sex - student's sex (binary: 'F' - female or 'M' - male)
age - student's age (numeric: from 15 to 22)
address - student's home address type (binary: 'U' - urban or 'R' - rural)
famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3)
Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart)
Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 - 5th to 9th grade, 3 - secondary education or 4 - higher education)
Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other')
reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other')
guardian - student's guardian (nominal: 'mother', 'father' or 'other')
traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour)
studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
failures - number of past class failures (numeric: n if 1<=n\<3, else 4)
schoolsup - extra educational support (binary: yes or no)
famsup - family educational support (binary: yes or no)
paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no)
activities - extra-curricular activities (binary: yes or no)
nursery - attended nursery school (binary: yes or no)
higher - wants to take higher education (binary: yes or no)
internet - Internet access at home (binary: yes or no)
romantic - with a romantic relationship (binary: yes or no)
famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent)
freetime - free time after school (numeric: from 1 - very low to 5 - very high)
goout - going out with friends (numeric: from 1 - very low to 5 - very high)
Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high)
Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high)
health - current health status (numeric: from 1 - very bad to 5 - very good)
absences - number of school absences (numeric: from 0 to 93)
G1 - first period grade (numeric: from 0 to 20)
G2 - second period grade (numeric: from 0 to 20)
G3 - final grade (numeric: from 0 to 20, output target)
these grades are related with the course subject, Math or Portuguese:
End of explanation
df.G3.describe()
# handle the G3 attribute as binary
high = df.G3 >= 10
low = df.G3 < 10
df.loc[high,'G3'] = 1
df.loc[low,'G3'] = 0
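# equivalent vectorised one-liner (sketch): df['G3'] = (df.G3 >= 10).astype(int)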
df.head()
df.G3.describe()
cols = list(df.columns)
categorical_features = []
for f in cols:
if df[f].dtype != 'int64':
categorical_features.append(f)
categorical_features
for f in categorical_features:
    #Get binarized columns
    # note: assigning a multi-column dummy frame back to a single column keeps
    # only the first dummy -- fine for binary features, lossy for multi-category ones
    df[f] = pd.get_dummies(df[f])
df.head()
features=list(df.columns[:-1])
X = df[features]
y = df['G3']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y, test_size=0.4, random_state=0)
print (X_train.shape, y_train.shape)
Explanation: CASE 1: Binary Classification : G3>10: 1 else 0
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
Explanation: Feature importances with forests of trees
This example shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
End of explanation
t0=time()
print ("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20,random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print ("Accuracy: ", clf_dt.score(X_test,y_test))
t1=time()
print ("time elapsed: ", t1-t0)
Explanation: Decision Tree accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(dt, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for DT
End of explanation
from sklearn.metrics import classification_report
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print('Best score: %0.3f' % grid_search.best_score_)
print('Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print(classification_report(y_test, predictions))
Explanation: Tuning our hyperparameters using GridSearch
End of explanation
t2=time()
print ("RandomForest")
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf_rf = rf.fit(X_train,y_train)
print ("Accuracy: ", clf_rf.score(X_test,y_test))
t3=time()
print ("time elapsed: ", t3-t2)
Explanation: Random Forest accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(rf, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for RF
End of explanation
roc_auc_score(y_test,rf.predict(X_test))
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
predictions = rf.predict_proba(X_test)
false_positive_rate, recall, thresholds = roc_curve(y_test, predictions[:, 1])
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out')
plt.show()
Explanation: Receiver Operating Characteristic (ROC) curve
End of explanation
pipeline2 = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (5, 25, 50, 100),
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)
grid_search.fit(X_train, y_train)
print('Best score: %0.3f' % grid_search.best_score_)
print('Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print('Accuracy:', accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
Explanation: Tuning Models using GridSearch
End of explanation
t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Accuracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4)
Explanation: Naive Bayes accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross-validation for NB
End of explanation
t6=time()
print ("KNN")
# knn = KNeighborsClassifier(n_neighbors=3)
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print ("Acurracy: ", clf_knn.score(X_test,y_test) )
t7=time()
print ("time elapsed: ", t7-t6)
Explanation: KNN accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(knn, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for KNN
End of explanation
t7=time()
print ("SVM")
svc = SVC()
clf_svc=svc.fit(X_train, y_train)
print ("Acurracy: ", clf_svc.score(X_test,y_test) )
t8=time()
print ("time elapsed: ", t8-t7)
Explanation: SVM accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(svc, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import grid_search
svc = SVC()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
grid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print("Best score: %0.3f" % grid.best_score_)
print("Best parameters set:")
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid.predict(X_test)
print(classification_report(y_test, predictions))
pipeline = Pipeline([
('clf', SVC(kernel='rbf', gamma=0.01, C=100))
])
parameters = {
'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
'clf__C': (0.1, 0.3, 1, 3, 10, 30),
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid_search.fit(X_train, y_train)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print(classification_report(y_test, predictions))
Explanation: cross validation for SVM
End of explanation
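An aside worth noting (a hedged sketch, assuming the X_train/X_test split above): SVMs with an RBF kernel are sensitive to feature scale, so standardizing the inputs before the grid search often shifts the selected C and gamma and can improve accuracy:
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
scaler = StandardScaler().fit(X_train)             # fit the scaler on training data only
svc_scaled = SVC().fit(scaler.transform(X_train), y_train)
print(svc_scaled.score(scaler.transform(X_test), y_test))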
# read .csv from provided dataset
csv_filename="student/student-mat.csv"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename, sep=";")
df.head()
df.describe()
Explanation: Read the provided student performance dataset (student-mat.csv, semicolon-separated) and take a first look at it.
End of explanation
df.G3.describe()
for i in range(len(df.G3)):
    # write via df.loc[i, 'G3'] to avoid chained-assignment pitfalls
    if df.G3.loc[i] < 10:
        df.loc[i, 'G3'] = 5
    elif df.G3.loc[i] < 12:
        df.loc[i, 'G3'] = 4
    elif df.G3.loc[i] < 14:
        df.loc[i, 'G3'] = 3
    elif df.G3.loc[i] < 16:
        df.loc[i, 'G3'] = 2
    elif df.G3.loc[i] < 21:
        df.loc[i, 'G3'] = 1
df.G3.unique()
df.head()
df.G3.describe()
cols = list(df.columns)
categorical_features = []
for f in cols:
if df[f].dtype != 'int64':
categorical_features.append(f)
categorical_features
for f in categorical_features:
    # Get binarized columns (note: assigning the get_dummies frame to one column keeps only its first indicator column)
    df[f] = pd.get_dummies(df[f])
df.head()
features=list(df.columns[:-1])
X = df[features]
y = df['G3']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y, test_size=0.4, random_state=0)
print (X_train.shape, y_train.shape)
Explanation: CASE 2: Multi Class Classification :
<table>
<tr>
<th>Class</th><th>G3</th><th>Label</th>
</tr>
<tr>
<td>I (excellent/very good)</td><td>16-20</td><td>A</td>
</tr>
<tr>
<td>II (good)</td><td>14-15</td><td>B</td>
</tr>
<tr>
<td>III (satisfactory)</td><td>12-13</td><td>C</td>
</tr>
<tr>
<td>IV (sufficient)</td><td>10-11</td><td>D</td>
</tr>
<tr>
<td>V (fail)</td><td>0-9</td><td>E</td>
</tr>
End of explanation
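As a compact alternative sketch (not in the original notebook), the same five-class binning of G3 can be written with pandas.cut; the bin edges mirror the class table above:
import pandas as pd
df2 = pd.read_csv("student/student-mat.csv", sep=";")   # re-read, since df was already binned above
# 0-9 -> 5 (fail), 10-11 -> 4, 12-13 -> 3, 14-15 -> 2, 16-20 -> 1
df2['G3'] = pd.cut(df2['G3'], bins=[-1, 9, 11, 13, 15, 20], labels=[5, 4, 3, 2, 1]).astype(int)
print(df2['G3'].value_counts())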
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
Explanation: Feature importances with forests of trees
This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
End of explanation
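A small aside (assuming the importances and indices arrays computed above): scikit-learn normalizes the Gini importances of a fitted forest to sum to one, so they can be read as fractional contributions:
import numpy as np
print(importances.sum())                    # ~1.0 by construction (when the trees make at least one split)
print(np.cumsum(importances[indices])[:5])  # cumulative share captured by the top 5 features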
t0=time()
print ("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20,random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print ("Acurracy: ", clf_dt.score(X_test,y_test))
t1=time()
print ("time elapsed: ", t1-t0)
Explanation: Decision Tree accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(dt, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for DT
End of explanation
from sklearn.metrics import classification_report
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print(classification_report(y_test, predictions))
Explanation: Tuning our hyperparameters using GridSearch
End of explanation
t2=time()
print ("RandomForest")
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf_rf = rf.fit(X_train,y_train)
print ("Acurracy: ", clf_rf.score(X_test,y_test))
t3=time()
print ("time elapsed: ", t3-t2)
Explanation: Random Forest accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(rf, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for RF
End of explanation
pipeline2 = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (5, 25, 50, 100),
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)
grid_search.fit(X_train, y_train)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
Explanation: Tuning Models using GridSearch
End of explanation
t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Acurracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4)
Explanation: Naive Bayes accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross-validation for NB
End of explanation
t6=time()
print ("KNN")
# knn = KNeighborsClassifier(n_neighbors=3)
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print ("Acurracy: ", clf_knn.score(X_test,y_test) )
t7=time()
print ("time elapsed: ", t7-t6)
Explanation: KNN accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(knn, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
Explanation: cross validation for KNN
End of explanation
t7=time()
print ("SVM")
svc = SVC()
clf_svc=svc.fit(X_train, y_train)
print ("Acurracy: ", clf_svc.score(X_test,y_test) )
t8=time()
print ("time elapsed: ", t8-t7)
Explanation: SVM accuracy and time elapsed calculation
End of explanation
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(svc, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import grid_search
svc = SVC()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
grid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print("Best score: %0.3f" % grid.best_score_)
print("Best parameters set:")
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid.predict(X_test)
print(classification_report(y_test, predictions))
pipeline = Pipeline([
('clf', SVC(kernel='rbf', gamma=0.01, C=100))
])
parameters = {
'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
'clf__C': (0.1, 0.3, 1, 3, 10, 30),
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid_search.fit(X_train, y_train)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters set:")
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print("\t%s: %r" % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print(classification_report(y_test, predictions))
Explanation: cross validation for SVM
End of explanation
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.feature_selection import *
from sklearn import metrics
# read .csv from provided dataset
csv_filename="student/student-mat.csv"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename,sep=";")
df.head()
cols = list(df.columns)
categorical_features = []
for f in cols:
if df[f].dtype != 'int64':
categorical_features.append(f)
categorical_features
for f in categorical_features:
#Get binarized columns
df[f] = pd.get_dummies(df[f])
df.head()
features=list(df.columns[:-1])
X = df[features]
y = df['G3']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn.feature_selection import *
fs=SelectKBest(score_func=f_regression,k=5)
X_new=fs.fit_transform(X_train,y_train)
z = list(zip(fs.get_support(), features))
print(z)
x_min, x_max = X_new[:,0].min() - .5, X_new[:, 0].max() + .5
y_min, y_max = y_train.min() - .5, y_train.max() + .5
#fig=plt.figure()
#fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# Two subplots, unpack the axes array immediately
fig, axes = plt.subplots(1,5)
fig.set_size_inches(12,12)
for i in range(5):
axes[i].set_aspect('equal')
axes[i].set_title('Feature {}'.format(i))
axes[i].set_xlabel('Feature')
axes[i].set_ylabel('Grades')
axes[i].set_xlim(x_min, x_max)
axes[i].set_ylim(y_min, y_max)
plt.sca(axes[i])
plt.scatter(X_new[:,i],y_train)
best_features = []
for selected, feature in z:   # avoid shadowing the built-in name `bool`
    if selected:
        best_features.append(feature)
correlated = best_features + ['G3']
correlated
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
sns.pairplot(df[correlated], size=2.0);
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
import numpy as np
cm = np.corrcoef(df[correlated].values.T)
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=correlated,
xticklabels=correlated)
plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(df['failures'], df['G3'])
plt.xlabel('Failures')
plt.ylabel('G3')
plt.title('Failures Against G3')
plt.show()
from sklearn.cross_validation import train_test_split
X = df[features].values
y = df['G3'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
plt.scatter(y_train_pred, y_train_pred - y_train, c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test, c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=0, xmax=20, lw=2, color='red')
plt.xlim([0, 20])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
Explanation: Case 3 : Regression
End of explanation
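For context, a quick reference sketch: R^2 measures improvement over predicting a constant, so a predictor that always outputs the training mean scores near zero (typically slightly below on the test set):
import numpy as np
from sklearn.metrics import r2_score
baseline = np.full_like(y_test, y_train.mean(), dtype=float)
print(r2_score(y_test, baseline))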
from sklearn.linear_model import Lasso
lasso = Lasso(alpha=0.1)
lasso.fit(X_train, y_train)
y_train_pred = lasso.predict(X_train)
y_test_pred = lasso.predict(X_test)
print(lasso.coef_)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
Explanation: Using regularized methods for regression
A Lasso Regression model can be initialized as follows:
End of explanation
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=1.0)
ridge.fit(X_train, y_train)
y_train_pred = ridge.predict(X_train)
y_test_pred = ridge.predict(X_test)
print(ridge.coef_)
Explanation: Similarly, Ridge regression can be used:
End of explanation
from sklearn.linear_model import ElasticNet
en = ElasticNet(alpha=1.0, l1_ratio=0.5)
en.fit(X_train, y_train)
y_train_pred = en.predict(X_train)
y_test_pred = en.predict(X_test)
print(en.coef_)
Explanation: Lastly, the ElasticNet implementation allows us to vary the L1 to L2 ratio:
End of explanation
X = df[features].values
y = df['G3'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
Explanation: For example, if we set l1_ratio to 1.0, the ElasticNet regressor would be
equal to LASSO regression.
Decision tree regression
End of explanation
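A quick numerical check of that claim (a sketch using the X_train/y_train split above): ElasticNet with l1_ratio=1.0 should reproduce Lasso's coefficients up to solver tolerance:
from sklearn.linear_model import ElasticNet, Lasso
import numpy as np
en_l1 = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X_train, y_train)
lasso_chk = Lasso(alpha=0.1).fit(X_train, y_train)
print(np.allclose(en_l1.coef_, lasso_chk.coef_, atol=1e-6))   # expected: True (or very close)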
X = df[features].values
y = df['G3'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
criterion='mse',
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=0, xmax=22, lw=2, color='red')
plt.xlim([0, 22])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
Explanation: Random forest regression
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_predictions = regressor.predict(X_test)
print('R-squared:', regressor.score(X_test, y_test))
Explanation: Linear Regression
End of explanation
scores = cross_val_score(regressor, X, y, cv=5)
print "Average of scores: ", scores.mean()
print "Cross validation scores: ", scores
plt.scatter(y_test,y_predictions)
plt.xlabel('True Quality')
plt.ylabel('Predicted Quality')
plt.title('Predicted Quality Against True Quality')
plt.show()
Explanation: Cross Validation
End of explanation
# Scaling the features using StandardScaler:
X_scaler = StandardScaler()
y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
y_train = y_scaler.fit_transform(y_train)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test)
regressor = SGDRegressor(loss='squared_loss')
scores = cross_val_score(regressor, X_train, y_train, cv=5)
print('Cross validation r-squared scores:', scores)
print('Average cross validation r-squared score:', np.mean(scores))
regressor.fit(X_train, y_train)   # fit() — SGDRegressor has no fit_transform method
print('Test set r-squared score', regressor.score(X_test, y_test))
from sklearn.cross_validation import *
def train_and_evaluate(clf, X_train, y_train):
    clf.fit(X_train, y_train)
    print("Coefficient of determination on training set:", clf.score(X_train, y_train))
    # create a k-fold cross-validation iterator of k=5 folds
    cv = KFold(X_train.shape[0], 5, shuffle=True, random_state=33)
    scores = cross_val_score(clf, X_train, y_train, cv=cv)
    print("Average coefficient of determination using 5-fold cross-validation:", np.mean(scores))
Explanation: Fitting models with gradient descent
SGDRegressor
End of explanation
from sklearn import svm
clf_svr= svm.SVR(kernel='linear')
train_and_evaluate(clf_svr,X_train,y_train)
clf_svr_poly= svm.SVR(kernel='poly')
train_and_evaluate(clf_svr_poly,X_train,y_train)
clf_svr_rbf= svm.SVR(kernel='rbf')
train_and_evaluate(clf_svr_rbf,X_train,y_train)
clf_svr_poly2= svm.SVR(kernel='poly',degree=2)
train_and_evaluate(clf_svr_poly2,X_train,y_train)
Explanation: Support Vector Machines for regression
The regression version of SVM can be used instead to find the hyperplane (note how easy is to change the classification method in scikit-learn!). We will try a linear kernel, a polynomial kernel, and finally, a rbf kernel. For more information on kernels, see http://scikit-learn.org/stable/modules/svm.html#svm-kernels
End of explanation
from sklearn import ensemble
clf_et=ensemble.ExtraTreesRegressor(n_estimators=10,random_state=42)
train_and_evaluate(clf_et,X_train,y_train)
Explanation: Random Forests for Regression
Finally, let's try again Random Forests, in their Extra Trees, and Regression version
End of explanation
print(sorted(zip(clf_et.feature_importances_, features)))
Explanation: An interesting side effect of random forest classification, is that you can measure how 'important' each feature is when predicting the final result
End of explanation
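For readability, the same ranking can also be printed largest-first (a small sketch reusing clf_et and features from above):
for imp, name in sorted(zip(clf_et.feature_importances_, features), reverse=True)[:5]:
    print("%-12s %.4f" % (name, imp))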
from sklearn import metrics
def measure_performance(X, y, clf, show_accuracy=True,
                        show_classification_report=True,
                        show_confusion_matrix=True,
                        show_r2_score=False):
    y_pred = clf.predict(X)
    if show_accuracy:
        print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)), "\n")
    if show_classification_report:
        print("Classification report")
        print(metrics.classification_report(y, y_pred), "\n")
    if show_confusion_matrix:
        print("Confusion matrix")
        print(metrics.confusion_matrix(y, y_pred), "\n")
    if show_r2_score:
        print("Coefficient of determination:{0:.3f}".format(metrics.r2_score(y, y_pred)), "\n")
measure_performance(X_test,y_test,clf_et,
show_accuracy=False,
show_classification_report=False,
show_confusion_matrix=False,
show_r2_score=True)
Explanation: Finally, evaluate our classifiers on the testing set
End of explanation
<END_TASK> |
15,750 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
>>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.arange(12).reshape(3, 4)
a = np.delete(a, 2, axis = 0)
<END_TASK> |
15,751 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Visualize and Explore
The Dataset - Fisher's Irises
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be n_samples x n_features.
n_samples
Step1: Let's Dive In!
Step2: Features (aka columns in data)
Step3: Targets (aka labels)
Step4: sklearn TIP
Step5: <b>Sneak a peek at data (a reminder of your pandas dataframe methods)<b>
Step6: <b>Describe the dataset with some summary statitsics<b>
Step7: We don't have to do much with the iris dataset. It has no missing values. It's already in numpy arrays and has the correct shape for sklearn. However we could try <b>standardization</b> and/or <b>normalization</b>. (later, in the transforms section, we will show one hot encoding, a preprocessing step)
Preprocessing (Bonus Material)
<p>What you might have to do before using a learner in `sklearn`
Step8: PRO TIP | <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
print(type(iris.data))
print(type(iris.target))
Explanation: Visualize and Explore
The Dataset - Fisher's Irises
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be n_samples x n_features.
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.<br><br>
<font color = "lightgrey">
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
</font><br><br>
If there are labels or targets, they need to be stored in one-dimensional arrays or lists.
Today we are going to use the <b>iris</b> dataset which comes with sklearn. It's fairly small as we'll see shortly.
<b>Remember our ML TIP: Ask sharp questions.</b><br>e.g. What type of flower is this (pictured below) closest to of the three given classes?
(This links out to source)
<a href="http://www.madlantern.com/photography/wild-iris/"><img border="0" alt="iris species" src="imgs/iris-setosa.jpg" width="400" height="400"></a>
Labels (species names/classes):
(This links out to source)
<a href="http://articles.concreteinteractive.com/machine-learning-a-new-tool-for-humanity/"><img border="0" alt="iris species" src="imgs/irises.png" width="500" height="500"></a>
<font color = "lightgrey"> TIP: Commonly, machine learning algorithms will require your data to be standardized, normalized or even reguarlized and preprocessed. In sklearn the data must also take on a certain structure as discussed above.</font>
QUICK QUESTION:
1. What do you expect this data set to be if you are trying to recognize an iris species?
* For our [n_samples x n_features] data array, what do you think
* the samples are?
* the features are?
End of explanation
import pandas as pd
import numpy as np
%matplotlib inline
Explanation: Let's Dive In!
End of explanation
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
pd.DataFrame({'feature name': iris.feature_names})
Explanation: Features (aka columns in data)
End of explanation
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
pd.DataFrame({'target name': iris.target_names})
Explanation: Targets (aka labels)
End of explanation
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
# How many data points (rows) x how many features (columns)
print(iris.data.shape)
print(iris.target.shape)
Explanation: sklearn TIP: all included datasets for have at least feature_names and sometimes target_names
Get to know the data - visualize and explore
Features (columns/measurements) come from this diagram (links out to source on kaggle):
<a href="http://blog.kaggle.com/2015/04/22/scikit-learn-video-3-machine-learning-first-steps-with-the-iris-dataset/"><img border="0" alt="iris data features" src="imgs/iris_petal_sepal.png" width="200" height="200"></a>
Shape
Peek at data
Summaries
<b>Shape and representation<b>
End of explanation
# convert to pandas df (adding real column names)
iris.df = pd.DataFrame(iris.data,
columns = iris.feature_names)
# first few rows
iris.df.head()
Explanation: <b>Sneak a peek at data (a reminder of your pandas dataframe methods)</b>
End of explanation
# summary stats
iris.df.describe()
Explanation: <b>Describe the dataset with some summary statistics</b>
End of explanation
# Standardization aka scaling
from sklearn import preprocessing, datasets
# make sure we have iris loaded
iris = datasets.load_iris()
X, y = iris.data, iris.target
# scale it to a gaussian distribution
X_scaled = preprocessing.scale(X)
# how does it look now
pd.DataFrame(X_scaled).head()
# let's just confirm our standardization worked (mean is 0 w/ unit variance)
pd.DataFrame(X_scaled).describe()
# also could:
#print(X_scaled.mean(axis = 0))
#print(X_scaled.std(axis = 0))
Explanation: We don't have to do much with the iris dataset. It has no missing values. It's already in numpy arrays and has the correct shape for sklearn. However we could try <b>standardization</b> and/or <b>normalization</b>. (later, in the transforms section, we will show one hot encoding, a preprocessing step)
Preprocessing (Bonus Material)
<p>What you might have to do before using a learner in `sklearn`:</p>
Non-numerics transformed to numeric (tip: use applymap() method from pandas)
Fill in missing values
Standardization
Normalization
Encoding categorical features (e.g. one-hot encoding or dummy variables)
<b>Features should end up in a numpy.ndarray (hence numeric) and labels in a list.</b>
Data options:
* Use pre-processed datasets from scikit-learn
* Create your own
* Read from a file
If you use your own data or "real-world" data you will likely have to do some data wrangling and need to leverage pandas for some data manipulation.
Standardization - make our data look like a standard Gaussian distribution (commonly needed for sklearn learners)
FYI: you'll commonly see the data or feature set (ML word for data without it's labels) represented as a capital <b>X</b> and the targets or labels (if we have them) represented as a lowercase <b>y</b>. This is because the data is a 2D array or list of lists and the targets are a 1D array or simple list.
End of explanation
# Standardization aka scaling
from sklearn import preprocessing, datasets
# make sure we have iris loaded
iris = datasets.load_iris()
X, y = iris.data, iris.target
# scale it to a gaussian distribution
X_norm = preprocessing.normalize(X, norm='l1')
# how does it look now
pd.DataFrame(X_norm).tail()
# let's just confirm our standardization worked (mean is 0 w/ unit variance)
pd.DataFrame(X_norm).describe()
# cumulative sum of normalized and original data:
#print(pd.DataFrame(X_norm.cumsum().reshape(X.shape)).tail())
#print(pd.DataFrame(X).cumsum().tail())
# unit norm (convert to unit vectors) - all row sums should be 1 now
X_norm.sum(axis = 1)
Explanation: PRO TIP: To save our standardization and reapply later (say to the test set or some new data), create a transformer object like so:
```python
scaler = preprocessing.StandardScaler().fit(X_train)
apply to a new dataset (e.g. test set):
scaler.transform(X_test)
```
Normalization - scaling samples <i>individually</i> to have unit norm
This type of scaling is really important if doing some downstream transformations and learning (see sklearn docs here for more) where similarity of pairs of samples is examined
A basic intro to normalization and the unit vector can be found here
End of explanation
<END_TASK> |
15,752 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
DM_Halos and DM_IGM
Splitting $\langle DM_{cosmic}\rangle$ into its constituents.
Step1: $\langle \rho_{diffuse, cosmic}\rangle$
Use f_diffuse to calculate the average mass fraction of diffuse gas and diffuse gas density (physical). Math described in DM_cosmic.ipynb.
Step2: $\langle n_{e,cosmic}\rangle$
Step3: $\langle DM_{cosmic}\rangle$
See DM_cosmic.ipynb for details regarding its computation.
Step4: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
The fraction of free electrons present in halos should be equal to the fraction of diffuse gas in halos assuming the ionization state of the individual species is only dependent on redshift (and not gas density as well).
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{diffuse,halos}}{\rho_{diffuse,cosmic}}\
& = \frac{\rho_{b, halos}f_{hot}}{\rho_{b, cosmic}f_{diffuse, cosmic}}\
\end{aligned}
$$
Here $\rho_b$ refers to baryon density. $f_{hot}$ refers to the fraction of baryons in halos that is in the hot phase ($\sim10^7$ K). The remaining baryons are either in the neutral phase or in dense objects like stars. Assuming halos have the same baryon mass fraction as the universal average ($\Omega_b/\Omega_M$)
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{m, halos}f_{hot}}{\rho_{m, cosmic}f_{diffuse, cosmic}}\
& = \frac{f_{halos} f_{hot}}{f_{diffuse, cosmic}}\
\end{aligned}
$$
$f_{halos}$ can be computed as a function of redshift by integrating the halo mass function (HMF) times mass over some mass range and dividing it by the density of matter in the universe. This allows us to compute a line of sight integral of $\langle n_{e, halos} \rangle$ to get $\langle DM_{halos}\rangle$. $\langle DM_{IGM}\rangle$ is just obtained by subtracting this from $\langle DM_{cosmic}\rangle$.
Apart from $f_{hot}$ being an obvious free parameter, we also allow variation in the radial extent of halos. This is encoded in the parameter $r_{max}$ which is the radial extent of halos in units of $r_{200}$. Setting $r_{max}>1$ (for all halos; currently it is mass independent) smoothly extends the NFW profile and the modifid profile of the encased diffuse baryons. | <ASSISTANT_TASK:>
Python Code:
# imports
from importlib import reload
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline as IUS
from astropy import units as u
from frb.halos.models import ModifiedNFW
from frb.halos import models as frb_halos
from frb.halos import hmf as frb_hmf
from frb.dm import igm as frb_igm
from frb.figures import utils as ff_utils
from matplotlib import pyplot as plt
plt.rcParams['font.size'] = 17
Explanation: DM_Halos and DM_IGM
Splitting $\langle DM_{cosmic}\rangle$ into its constituents.
End of explanation
help(frb_igm.f_diffuse)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
f_diffuse, rho_diffuse = frb_igm.f_diffuse(zvals, return_rho = True)
# Plot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
ax1.plot(zvals, f_diffuse, lw=2)
ax1.set_ylabel(r'$\langle f_{diffuse, cosmic}\rangle$')
ax2 = axs[1]
ax2.plot(zvals, rho_diffuse.to('Msun*Mpc**-3'), lw=2)
ax2.set_yscale("log")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle \rho_{diffuse, cosmic}\rangle$ $M_\odot~Mpc^{-3}$')
plt.show()
Explanation: $\langle \rho_{diffuse, cosmic}\rangle$
Use f_diffuse to calculate the average mass fraction of diffuse gas and diffuse gas density (physical). Math described in DM_cosmic.ipynb.
End of explanation
help(frb_igm.ne_cosmic)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
avg_ne = frb_igm.ne_cosmic(zvals)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, avg_ne, label=r'$\langle n_{e, cosmic}\rangle$', lw=2)
plt.yscale("log")
plt.legend(loc = "upper left")
plt.xlabel('z')
plt.ylabel(r'$\langle n_{e, cosmic}\rangle$ [$cm^{-3}$]')
plt.show()
Explanation: $\langle n_{e,cosmic}\rangle$
End of explanation
help(frb_igm.average_DM)
DM_cosmic, zvals = frb_igm.average_DM(8, cumul=True)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, DM_cosmic, lw=2)
plt.xlabel('z')
plt.ylabel(r'$\langle DM_{cosmic}\rangle$ $pc~cm^{-3}$')
plt.show()
Explanation: $\langle DM_{cosmic}\rangle$
See DM_cosmic.ipynb for details regarding its computation.
End of explanation
help(frb_igm.average_DMhalos)
# evaluation
frb_igm.average_DMhalos(0.1)
# get cumulative DM_halos
dm, zvals = frb_igm.average_DMhalos(0.1, cumul = True)
dm
zvals
fhot_array = [0.2, 0.5, 0.75]
rmax_array = [0.5, 1.0 , 2.0]
# <DM_halos> for different f_hot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
for f_hot in fhot_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, f_hot = f_hot, cumul=True)
ax1.plot(zeval, DM_halos, label="{:0.1f}".format(f_hot))
ax1.legend(title="f_hot")
ax1.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
# <DM_halos> for different rmax
ax2 = axs[1]
for rmax in rmax_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, rmax = rmax, cumul = True)
ax2.plot(zeval, DM_halos, label="{:0.1f}".format(rmax))
ax2.legend(title="rmax")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
plt.show()
# Limits of calculation
frb_igm.average_DMhalos(3.1)
# Failure above redshift 5
frb_igm.average_DMhalos(5.1)
help(frb_igm.average_DMIGM)
# Sanity check. <DM_cosmic> - (<DM_halos> + <DM_IGM) = 0
dm, zvals = frb_igm.average_DM(0.1, cumul= True)
dm_halos, _ = frb_igm.average_DMhalos(0.1, cumul = True)
dm_igm, _ = frb_igm.average_DMIGM(0.1, cumul = True)
plt.plot(zvals, dm - dm_halos - dm_igm)
plt.ylabel(r"DM $pc~cm^{-3}$")
plt.xlabel("z")
plt.show()
Explanation: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
The fraction of free electrons present in halos should be equal to the fraction of diffuse gas in halos assuming the ionization state of the individual species is only dependent on redshift (and not gas density as well).
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{diffuse,halos}}{\rho_{diffuse,cosmic}}\
& = \frac{\rho_{b, halos}f_{hot}}{\rho_{b, cosmic}f_{diffuse, cosmic}}\
\end{aligned}
$$
Here $\rho_b$ refers to baryon density. $f_{hot}$ refers to the fraction of baryons in halos that is in the hot phase ($\sim10^7$ K). The remaining baryons are either in the neutral phase or in dense objects like stars. Assuming halos have the same baryon mass fraction as the universal average ($\Omega_b/\Omega_M$)
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{m, halos}f_{hot}}{\rho_{m, cosmic}f_{diffuse, cosmic}}\
& = \frac{f_{halos} f_{hot}}{f_{diffuse, cosmic}}\
\end{aligned}
$$
$f_{halos}$ can be computed as a function of redshift by integrating the halo mass function (HMF) times mass over some mass range and dividing it by the density of matter in the universe. This allows us to compute a line of sight integral of $\langle n_{e, halos} \rangle$ to get $\langle DM_{halos}\rangle$. $\langle DM_{IGM}\rangle$ is just obtained by subtracting this from $\langle DM_{cosmic}\rangle$.
Apart from $f_{hot}$ being an obvious free parameter, we also allow variation in the radial extent of halos. This is encoded in the parameter $r_{max}$ which is the radial extent of halos in units of $r_{200}$. Setting $r_{max}>1$ (for all halos; currently it is mass independent) smoothly extends the NFW profile and the modifid profile of the encased diffuse baryons.
End of explanation
<END_TASK> |
15,753 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License
Step5: World Cup problem, part two
In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. How much evidence does this victory provide that Germany had the better team? What is the probability that Germany would win a rematch?
Scoring in games like soccer and hockey can be modeled by a Poisson process, which assumes that each team, against a given opponent, will score goals at some goal-scoring rate, $\lambda$, and that this rate does not vary; in other words, the probability of scoring a goal is about the same at any point during the game.
Based on this modeling decision, we can answer the questions by
Defining a prior distribution for each team's goal-scoring rate against the other,
Updating the prior based on the outcome of the game,
Using the posterior distributions to compute the probability that Germany's goal-scoring rate is higher.
Generating a predictive distribution for the number of goals each team would score in a rematch.
I'll start with Step 2.
Step 2
Step6: Likelihood computes the likelihood of data given hypo, where data is an observed number of goals, and hypo is a hypothetical goal-scoring rate in goals per game. We can compute the likelihood of the data by evaluating the Poisson probability mass function (PMF).
Now we can get back to Step 1.
Step 1
Step7: According to this prior, the goal-scoring rates are always greater than zero, with the most likely value (a priori) near 0.5. Goal scoring rates greater than 5 are considered unlikely.
Step 3
Step8: Update invokes the likelihood function for each hypothetical value of $\lambda$ and updates the distribution accordingly.
Since both teams scored fewer goals than the prior mean (1.4), we expect both posterior means to be lower.
Here are the posteriors
Step9: To answer the first question, "How much evidence does this victory provide that Germany had the better team?", we can compute the posterior probability that Germany had a higher goal-scoring rate
Step10: Based on the prior distributions, we would have said that Germany had a 50% chance of having the better team, or 1
Step11: The Bayes factor is about 2.3, which is generally considered weak evidence.
Now on to Step 4.
Step 4
Step13: Instead of running simulations, you could compute the posterior predictive distributions explicitly.
Write a function called PredictiveDist that takes the posterior distribution of $\lambda$ and a duration (in units of games).
It should loop through the hypotheses in suite, compute the predictive distribution of goals for each hypothesis, and assemble a "meta-Pmf" which is a Pmf that maps from each predictive distribution to its probability.
Finally, it should use MakeMixture to compute the mixture of the predictive distributions.
Step14: Using the predictive distributions, we can compute probabilities for the outcomes of a rematch. | <ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite
import thinkbayes2
import thinkplot
import numpy as np
from scipy.special import gamma
Explanation: Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class Soccer2(thinkbayes2.Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: goal rate in goals per game
data: goals scored in a game
# FILL THIS IN!
return 1
# Solution
from scipy.stats import poisson
class Soccer2(thinkbayes2.Suite):
Represents hypotheses about goal-scoring rates.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: goal rate in goals per game
data: goals scored in a game
return poisson.pmf(data, hypo)
Explanation: World Cup problem, part two
In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. How much evidence does this victory provide that Germany had the better team? What is the probability that Germany would win a rematch?
Scoring in games like soccer and hockey can be modeled by a Poisson process, which assumes that each team, against a given opponent, will score goals at some goal-scoring rate, $\lambda$, and that this rate does not vary; in other words, the probability of scoring a goal is about the same at any point during the game.
Based on this modeling decision, we can answer the questions by
Defining a prior distribution for each team's goal-scoring rate against the other,
Updating the prior based on the outcome of the game,
Using the posterior distributions to compute the probability that Germany's goal-scoring rate is higher.
Generating a predictive distribution for the number of goals each team would score in a rematch.
I'll start with Step 2.
Step 2: Updating
If goal-scoring is a Poisson process, the distribution of goals per game is Poisson with parameter $\lambda$. To compute the distribution of $\lambda$ we can define a new class that inherits from thinkbayes2.Suite and provides an appropriate Likelihood function:
End of explanation
from thinkbayes2 import MakeGammaPmf
xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.decorate(xlabel='Goal-scoring rate (λ)',
ylabel='PMF')
pmf.Mean()
suite = Soccer2(pmf);
germany = suite.Copy(label='Germany')
argentina = suite.Copy(label='Argentina')
thinkplot.Pdf(germany)
thinkplot.Pdf(argentina)
thinkplot.decorate(xlabel='Goal-scoring rate (λ)',
ylabel='PMF')
pmf.Mean()
Explanation: Likelihood computes the likelihood of data given hypo, where data is an observed number of goals, and hypo is a hypothetical goal-scoring rate in goals per game. We can compute the likelihood of the data by evaluating the Poisson probability mass function (PMF).
Now we can get back to Step 1.
Step 1: Constructing the prior
Before the game starts, what should we believe about each team's goal scoring rate against each other? We could use previous tournament results to construct the priors, but to keep things simple, I'll just use the average goal-scoring rate from all matches in the tournament, which was 2.67 goals per game (total for both teams).
To construct the prior, I use a gamma distribution with a mean of 1.34 goals per game.
End of explanation
germany = suite.Copy(label='Germany')
argentina = suite.Copy(label='Argentina')
germany.Update(1)
argentina.Update(0)
print('posterior mean Germany', germany.Mean())
print('posterior mean Argentina', argentina.Mean())
Explanation: According to this prior, the goal-scoring rates are always greater than zero, with the most likely value (a priori) near 0.5. Goal scoring rates greater than 5 are considered unlikely.
Step 3: Comparing posteriors
The next step is to compute the posteriors for the two teams:
End of explanation
thinkplot.Pdf(germany)
thinkplot.Pdf(argentina)
thinkplot.decorate(xlabel='Goal-scoring rate (λ)',
ylabel='PMF')
Explanation: Update invokes the likelihood function for each hypothetical value of $\lambda$ and updates the distribution accordingly.
Since both teams scored fewer goals than the prior mean (1.4), we expect both posterior means to be lower.
Here are the posteriors:
End of explanation
post_prob = germany.ProbGreater(argentina)
print('posterior prob Germany > Argentina', post_prob)
Explanation: To answer the first question, "How much evidence does this victory provide that Germany had the better team?", we can compute the posterior probability that Germany had a higher goal-scoring rate:
End of explanation
prior_odds = 1
post_odds = post_prob / (1 - post_prob)
print('posterior odds Germany > Argentina', post_odds)
k = post_odds / prior_odds
print('Bayes factor', k)
Explanation: Based on the prior distributions, we would have said that Germany had a 50% chance of having the better team, or 1:1 odds. Based on the posteriors, we would say that Germany has a 70% chance. We can use the ratio of the prior and posterior odds to compute the Bayes factor, which measures the strength of the evidence.
End of explanation
# Solution
gdr_goals = poisson.rvs(germany.Sample(1000))
arg_goals = poisson.rvs(argentina.Sample(1000))
np.mean(gdr_goals > arg_goals)
# Solution
np.mean(gdr_goals == arg_goals)
# Solution
np.mean(gdr_goals < arg_goals)
Explanation: The Bayes factor is about 2.3, which is generally considered weak evidence.
Now on to Step 4.
Step 4: Comparing posterior distributions
Exercise: Write a few lines of code to
Choose a random value of lam from the posterior distribution of each team.
Choose a random number of goals for each team, conditioned on the value of lam you chose.
Run that "simulation" many times and accumulate the distribution of wins, losses, and ties.
Use the results to estimate the probability that Germany would win a rematch.
End of explanation
# Solution
def PredictiveDist(suite, duration=1, label='pred'):
Computes the distribution of goals scored in a game.
returns: new Pmf (mixture of Poissons)
metapmf = thinkbayes2.Pmf()
for lam, prob in suite.Items():
pred = thinkbayes2.MakePoissonPmf(lam * duration, 10)
metapmf[pred] = prob
mix = thinkbayes2.MakeMixture(metapmf, label=label)
return mix
germany_pred = PredictiveDist(germany, label='germany')
argentina_pred = PredictiveDist(argentina, label='argentina');
thinkplot.Hist(germany_pred, width=0.45, align='right')
thinkplot.Hist(argentina_pred, width=0.45, align='left')
thinkplot.decorate(xlabel='Predicted # goals',
ylabel='Pmf')
Explanation: Instead of running simulations, you could compute the posterior predictive distributions explicitly.
Write a function called PredictiveDist that takes the posterior distribution of $\lambda$ and a duration (in units of games).
It should loop through the hypotheses in suite, compute the predictive distribution of goals for each hypothesis, and assemble a "meta-Pmf" which is a Pmf that maps from each predictive distribution to its probability.
Finally, it should use MakeMixture to compute the mixture of the predictive distributions.
End of explanation
win = germany_pred.ProbGreater(argentina_pred)
lose = germany_pred.ProbLess(argentina_pred)
tie = 1 - (win + lose)
print('Posterior prob Germany wins rematch', win)
print('Posterior prob tie', tie)
print('Posterior prob Argentina wins rematch', lose)
Explanation: Using the predictive distributions, we can compute probabilities for the outcomes of a rematch.
End of explanation
<END_TASK> |
15,754 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
# Scipy and Numpy
Scipy builds on Numpy, and for all basic array handling needs you can use Numpy functions
Step1: If you need to using only some function, you may use the from ... import command.
Step2: This whole stuff with numpy is required a lot of practice since it's a big library itself. In this session we only cover some of it.
Array
You can make an array right of the blue with np.array.
Step3: As usual, you could access the coordinates with the brackets. Remember that Python start it's counting at 0. For better or worse, Numpy have a different data type than pure Python. We can access the dimension of the matrix with a.shape if a is the name of our matriks. It could also reshape with a.reshape(m,n) where (m,n) is the new dimension (NumPy can have more than 2 dimensional being). There's also arange to create an array of evenly spaced numbers. You also could transpose the matrix simply by transpose.
Step4: Numpy has an ability to broadcast matrix, that is to treat arrays of different shapes during arithmetic operations. If the arrays are in the same shape, we did the binary operation coordinat-by-coordinate. But operations on arrays of non-similar shapes is still possible in NumPy.
Step5: Iteration on Array
You can also iterating in array using nditer. We can compare it if we reshape it first, then iterating as iteration on list, or we iterating directly with nditer.
Step6: Arithmetic Operations
NumPy is not failed our expectation with linear algebra. Beside binary operation, NumPy also support the "usual" arithmetic operation on matrix, namely
Step7: Ploting with Matplotlib
Matplotlib is one of the most favorite library to ploting data. Lots of capability with great support and easy to use nature make matplotlib one of the choice. Another libraries worth mentioned are Cairoplot, plotly, and MathGL. You'll see more that just that if you dig a little bit deeper.
Step8: Example
Given function $f$ such that $$f
Step9: Attempt to solve the problem
For a glance, we didn't see any anomaly. We will try approach with two ways, first one is to bruto force, and the second one is to find it's derivatives to get a clear understanding of it's nature.
With the first approach, we need to see the graph closer. We may need to zooming it a little bit.
Step10: Hence, we knew that it cutted by a horizontal lines three times, so he is not injective.
The second approach is more elegant on many ways, but it's not imaginable with graphics. We knew that an injective function would be a strictly increasing (or decreasing) function) with finitely many stasionary point. We can come up with it's derivatives to get to know whether it's alwas positive or always negative or not. The given function's derivatives are
Step11: And we got the result that the peak value is negative. So this graph at some point is going down, so it is not injective.
Example
Find all solution of the equation $$ 8 \sin x = x$$ for all reals $x$.
Attempt to solve the problem
Before we going anywhere, we should plot it (since we have the ability to do so). First thing first, remember that $\sin x$ are bounded on $[-1,1]$, so we can bound our graphics to be around $[-8,8]$.
Step12: Little bit easy huh? Since both $x$ and $\sin x$ are odd, we can consider on one sides positive or negative, and we got one solution, namely $x=0$.
Step13: Noting that $\sin$ is always up and down, it would only intersect the lines once every $\pi/2$ interval. Hence, we could brute force our way in in the intevals. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
Explanation: # Scipy and Numpy
Scipy builds on Numpy, and for all basic array handling needs you can use Numpy functions:
End of explanation
from scipy import some_module
some_module.some_function()
Explanation: If you only need some of the functions, you may use the from ... import form of the command.
End of explanation
import numpy as np
a = np.array([[1,2,3],[2,3,4]])
print(a)
print(a[0,1])
Explanation: NumPy is a big library in its own right and takes a lot of practice; in this session we only cover part of it.
Array
You can create an array right out of the box with np.array.
End of explanation
import numpy as np
a = np.array([[1,2,3],[2,3,4]])
b = np.arange(1,7)
c = np.arange(6)
b = b+c
b = b.reshape(2,3)
print(a+b)
Explanation: As usual, you can access the entries with brackets. Remember that Python starts counting at 0. For better or worse, NumPy arrays are a different data type than pure Python lists. We can read the dimensions of a matrix with a.shape if a is the name of our matrix, and reshape it with a.reshape(m,n), where (m,n) is the new shape (NumPy arrays can have more than 2 dimensions). There's also arange to create an array of evenly spaced numbers. You can also transpose a matrix simply with transpose() (or the shorthand .T).
End of explanation
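A short follow-up sketch of the attributes just mentioned (shape, reshape, and the transpose):
import numpy as np
a = np.array([[1, 2, 3], [2, 3, 4]])
print(a.shape)          # (2, 3)
print(a.reshape(3, 2))  # same data, new shape
print(a.T)              # transpose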
a = np.array([1,2,3])
b = np.array([[10,20,30],[20,30,40],[30,40,50]])
a+b
Explanation: NumPy has the ability to broadcast matrices, that is, to handle arrays of different shapes during arithmetic operations. If the arrays have the same shape, the binary operation is applied coordinate-by-coordinate, but operations on arrays of non-matching shapes are still possible in NumPy.
End of explanation
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print('Original array is:')
print(a)
print('\n')
print('Transpose of the original array is:')
b = a.T
print(b)
print('\n')
print('Sorted in C-style order:')
c = b.copy(order = 'C')
print(c)
for x in np.nditer(c):
    print(x, end=' ')
print('\n')
print('Sorted in F-style order:')
c = b.copy(order = 'F')
print(c)
for x in np.nditer(c):
    print(x, end=' ')
Explanation: Iteration on Array
You can also iterate over an array using nditer. We can compare reshaping the array first and iterating over it like a list against iterating over it directly with nditer.
End of explanation
import numpy as np
a = np.arange(9, dtype = np.float_).reshape(3,3)
b = np.arange(9, dtype = np.float_).reshape(3,3)
print(a)
print(np.add(a,b))
print(np.subtract(a,b))
print(np.multiply(a,b))
Explanation: Arithmetic Operations
NumPy does not fail our expectations with linear algebra either. Besides the binary operators, NumPy also supports the "usual" arithmetic operations on matrices as explicit functions, namely np.add, np.subtract, and np.multiply.
End of explanation
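One distinction worth a small sketch here: np.multiply is element-wise, while the matrix product needs np.dot (or the @ operator):
import numpy as np
a = np.arange(4.0).reshape(2, 2)
b = np.arange(4.0, 8.0).reshape(2, 2)
print(np.multiply(a, b))  # element-wise product
print(np.dot(a, b))       # matrix product (equivalently a @ b)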
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(1,11)
y = 1 + 2 * x + x*x
plt.title("Matplotlib demo")
plt.xlabel("x axis caption")
plt.ylabel("y axis caption")
plt.plot(x,y)
plt.show()
Explanation: Plotting with Matplotlib
Matplotlib is one of the most popular libraries for plotting data. Its broad capabilities, strong support, and easy-to-use nature make matplotlib a common choice. Other libraries worth mentioning are Cairoplot, plotly, and MathGL; you'll find more than just these if you dig a little deeper.
End of explanation
import numpy as np
from numpy import poly1d
import matplotlib.pyplot as plt
x = np.linspace(-10,10)
y = x*x*x + 2*x*x + 1.3333*x + 1
plt.title("Computational Mathematics, Problem Final Test ")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.plot(x,y)
plt.show()
Explanation: Example
Given the function $f$ defined by $$f:x \mapsto x^3 + 2x^2 + 1.3333x + 1,$$
plot the graph of $y=f(x)$ and prove that the function is not injective.
End of explanation
import numpy as np
from numpy import poly1d
import matplotlib.pyplot as plt
x = np.linspace(-.675,-.658,100)
y = x*x*x + 2*x*x + 1.3333*x + 1
z = x * 0 + 0.7037259
plt.title("Computational Mathematics, Problem Final Test ")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.plot(x,y,x,z)
plt.show()
Explanation: Attempt to solve the problem
At a glance we don't see any anomaly. We will try two approaches: the first is brute force, and the second is to study the derivative to get a clear understanding of the function's behaviour.
With the first approach, we need to look at the graph more closely, so we zoom in a little.
End of explanation
a, b, c = 3,4,1.33333
D1 = (-b + np.sqrt(b*b - 4*a*c))/(2*a)
D2 = (-b - np.sqrt(b*b - 4*a*c))/(2*a)
x = (D1+D2)/2
0 < 3*x*x + 4*x + 1.33333
Explanation: Hence, we see that a horizontal line cuts the graph three times, so the function is not injective.
The second approach is more elegant in many ways, but it is harder to picture graphically. We know that an injective function would be a strictly increasing (or decreasing) function with finitely many stationary points. We can compute the derivative to find out whether it is always positive, always negative, or neither. The given function's derivative is: $$3x^2 + 4x + 1.33333.$$
Since it is a quadratic, we can find its roots, check its vertex, and ask whether its sign there differs from the rest of the graph. But since it is concave up, we only need to check whether the vertex value is negative.
End of explanation
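As a side note, poly1d is imported in the cells above but never used; here is a small sketch of the same derivative check written with it (the coefficients assume c = 1.33333, as above).
from numpy import poly1d
dfdx = poly1d([3, 4, 1.33333])   # 3x^2 + 4x + 1.33333
print(dfdx.r)                    # two real roots, both near x = -2/3
print(dfdx(-2.0/3.0))            # value at the vertex: barely negative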
import numpy as np
from numpy import poly1d
import matplotlib.pyplot as plt
x = np.linspace(-9,9,10000)
y = 8*np.sin(x)
z = x
plt.title("Computational Mathematics, Problem Final Test ")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.plot(x,y,x,z)
plt.show()
Explanation: And we find that the vertex value of the derivative is negative. So the derivative changes sign, the graph goes down at some point, and therefore the function is not injective.
Example
Find all solutions of the equation $$ 8 \sin x = x$$ over the reals $x$.
Attempt to solve the problem
Before going anywhere, we should plot it (since we have the ability to do so). First things first, remember that $\sin x$ is bounded on $[-1,1]$, so $8\sin x$ is bounded on $[-8,8]$ and any solution must lie roughly in $[-8,8]$.
End of explanation
import numpy as np
from numpy import poly1d
import matplotlib.pyplot as plt
x = np.linspace(0,9,1000)
y = 8*np.sin(x)
z = x
plt.title("Computational Mathematics, Problem Final Test ")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.plot(x,y,x,z)
plt.show()
Explanation: A little easier, huh? Since both $x$ and $\sin x$ are odd, we can consider just one side (positive or negative) and mirror the solutions, and we immediately get one solution, namely $x=0$.
End of explanation
from scipy import optimize
import numpy as np
def f(x):
return 8*np.sin(x) - x
a = (np.pi)/2
root1 = optimize.bisect(f, 2, 4)
root2 = optimize.bisect(f, 4, 5*a)
root3 = optimize.bisect(f, 5*a, 8)
print(root1)
print(root2)
print(root3)
Explanation: Noting that $\sin$ keeps going up and down, the curve only intersects the line once in each interval of length about $\pi/2$. Hence, we can brute-force our way through those intervals with bisection.
End of explanation
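By the odd-symmetry argument above, each positive root has a mirror-image negative root; a short sketch assembling the full solution set from the three bisection results:
all_roots = sorted([-root3, -root2, -root1, 0.0, root1, root2, root3])
print(all_roots)   # seven solutions in total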
<END_TASK> |
15,755 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Calculating radial distribution functions
Radial distribution functions can be calculated from one or more pymatgen Structure objects by using the vasppy.rdf.RadialDistributionFunction class.
Step1: The default required arguments for creating a RadialDistributionFunction object are a list of pymatgen Structure objects, and the numerical indices of the atoms (or Site objects) that we want to compute the rdf between.
Step2: To compute a rdf between different species, we need to pass both indices_i and indices_j.
Step3: The Na and Cl sublattices are equivalent, so the Na–Na and Cl–Cl rdfs sit on top of each other.
A smeared rdf can be produced using the smeared_rdf() method, which applies a Gaussian kernel to the raw rdf data. The smeared_rdf() method takes an optional argument sigma, which can be used to set the width of the Gaussian (default = 0.1)
Step4: Selecting atoms by their species strings
Atom indices can also be selected by species string with the RadialDistributionFunction.from_species_strings() method
Step5: Calculating a RDF from a VASP XDATCAR
Step6: Weighted RDF calculations
For calculating RDFs from Monte Carlo simulation trajectories RadialDistributionFunction can be passed an optional weights argument, which takes a list of numerical weights for each structure. | <ASSISTANT_TASK:>
Python Code:
# Create a pymatgen Structure for NaCl
from pymatgen import Structure, Lattice
a = 5.6402 # NaCl lattice parameter
lattice = Lattice.from_parameters(a, a, a, 90.0, 90.0, 90.0)
lattice
structure = Structure.from_spacegroup(sg='Fm-3m', lattice=lattice,
species=['Na', 'Cl'],
coords=[[0,0,0], [0.5, 0, 0]])
structure
from vasppy.rdf import RadialDistributionFunction
Explanation: Calculating radial distribution functions
Radial distribution functions can be calculated from one or more pymatgen Structure objects by using the vasppy.rdf.RadialDistributionFunction class.
End of explanation
indices_na = [i for i, site in enumerate(structure) if site.species_string == 'Na']
indices_cl = [i for i, site in enumerate(structure) if site.species_string == 'Cl']
print(indices_na)
print(indices_cl)
rdf_nana = RadialDistributionFunction(structures=[structure],
indices_i=indices_na)
rdf_clcl = RadialDistributionFunction(structures=[structure],
indices_i=indices_cl)
Explanation: The default required arguments for creating a RadialDistributionFunction object are a list of pymatgen Structure objects, and the numerical indices of the atoms (or Site objects) that we want to compute the rdf between.
End of explanation
rdf_nacl = RadialDistributionFunction(structures=[structure],
indices_i=indices_na, indices_j=indices_cl)
import matplotlib.pyplot as plt
plt.plot(rdf_nana.r, rdf_nana.rdf, label='Na-Na')
plt.plot(rdf_clcl.r, rdf_clcl.rdf, label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.rdf, label='Na-Cl')
plt.legend()
plt.show()
Explanation: To compute a rdf between different species, we need to pass both indices_i and indices_j.
End of explanation
plt.plot(rdf_nana.r, rdf_nana.smeared_rdf(), label='Na-Na') # default smearing of 0.1
plt.plot(rdf_clcl.r, rdf_clcl.smeared_rdf(sigma=0.050), label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.smeared_rdf(sigma=0.2), label='Na-Cl')
plt.legend()
plt.show()
Explanation: The Na and Cl sublattices are equivalent, so the Na–Na and Cl–Cl rdfs sit on top of each other.
A smeared rdf can be produced using the smeared_rdf() method, which applies a Gaussian kernel to the raw rdf data. The smeared_rdf() method takes an optional argument sigma, which can be used to set the width of the Gaussian (default = 0.1)
End of explanation
rdf_nana = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Na')
rdf_clcl = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Cl')
rdf_nacl = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Na', species_j='Cl')
plt.plot(rdf_nana.r, rdf_nana.smeared_rdf(), label='Na-Na')
plt.plot(rdf_clcl.r, rdf_clcl.smeared_rdf(), label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.smeared_rdf(), label='Na-Cl')
plt.legend()
plt.show()
Explanation: Selecting atoms by their species strings
Atom indices can also be selected by species string with the RadialDistributionFunction.from_species_strings() method:
End of explanation
from pymatgen.io.vasp import Xdatcar
xd = Xdatcar('data/NaCl_800K_MD_XDATCAR')
rdf_nana_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Na')
rdf_clcl_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Cl')
rdf_nacl_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Na', species_j='Cl')
plt.plot(rdf_nana_800K.r, rdf_nana_800K.smeared_rdf(), label='Na-Na')
plt.plot(rdf_clcl_800K.r, rdf_clcl_800K.smeared_rdf(), label='Cl-Cl')
plt.plot(rdf_nacl_800K.r, rdf_nacl_800K.smeared_rdf(), label='Na-Cl')
plt.legend()
plt.show()
Explanation: Calculating a RDF from a VASP XDATCAR
End of explanation
struct_1 = struct_2 = struct_3 = structure
rdf_nacl_mc = RadialDistributionFunction(structures=[struct_1, struct_2, struct_3],
indices_i=indices_na, indices_j=indices_cl,
weights=[34, 27, 146])
# structures and weights lists must be equal lengths
Explanation: Weighted RDF calculations
For calculating RDFs from Monte Carlo simulation trajectories RadialDistributionFunction can be passed an optional weights argument, which takes a list of numerical weights for each structure.
End of explanation
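A short sketch of inspecting the weighted result, assuming the weighted object exposes the same r / smeared_rdf interface as the unweighted objects above:
plt.plot(rdf_nacl_mc.r, rdf_nacl_mc.smeared_rdf(), label='Na-Cl (weighted)')
plt.legend()
plt.show()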
<END_TASK> |
15,756 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Creating a SequenceVariant from scratch
0. Overview
A SequenceVariant consists of an accession (a string), a sequence type (a string), and a PosEdit, like this
Step1: 2. Make an edit object
Step2: 3. Make the variant
Step3: Important | <ASSISTANT_TASK:>
Python Code:
import hgvs.location
import hgvs.posedit
start = hgvs.location.BaseOffsetPosition(base=200,offset=-6,datum=hgvs.location.Datum.CDS_START)
start, str(start)
end = hgvs.location.BaseOffsetPosition(base=22,datum=hgvs.location.Datum.CDS_END)
end, str(end)
iv = hgvs.location.Interval(start=start,end=end)
iv, str(iv)
Explanation: Creating a SequenceVariant from scratch
0. Overview
A SequenceVariant consists of an accession (a string), a sequence type (a string), and a PosEdit, like this:
var = hgvs.sequencevariant.SequenceVariant(ac='NM_01234.5', type='c', posedit=...)
Unsurprisingly, a PosEdit consists of separate position and Edit objects.
A position is generally an Interval, which in turn is comprised of SimplePosition or BaseOffsetPosition objects. An edit is a subclass of Edit, which includes classes like NARefAlt (for substitutions, deletions, and insertions) and Dup (for duplications).
Importantly, each of the objects we're building has a rule in the parser, which means that you have the tools to serialize and deserialize (parse) each of the components that we're about to construct.
1. Make an Interval to defined a position of the edit
End of explanation
import hgvs.edit, hgvs.posedit
edit = hgvs.edit.NARefAlt(ref='A',alt='T')
edit, str(edit)
posedit = hgvs.posedit.PosEdit(pos=iv,edit=edit)
posedit, str(posedit)
Explanation: 2. Make an edit object
End of explanation
import hgvs.sequencevariant
var = hgvs.sequencevariant.SequenceVariant(ac='NM_01234.5', type='c', posedit=posedit)
var, str(var)
Explanation: 3. Make the variant
End of explanation
import copy
var2 = copy.deepcopy(var)
var2.posedit.pos.start.base=456
str(var2)
var2 = copy.deepcopy(var)
var2.posedit.edit.alt='CT'
str(var2)
var2 = copy.deepcopy(var)
str(var2)
Explanation: Important: It is possible to build bogus variants with the hgvs package. For example, the above interval is incompatible with a SNV. See hgvs.validator.Validator for validation options.
4. Update your variant
The stringification happens on-the-fly. That means that you can update components of the variant and see the effects immediately.
End of explanation
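Since every component has a rule in the parser (as noted at the start), the stringified variant can be parsed back; a sketch of the round trip, assuming hgvs.parser.Parser is available in this installation:
import hgvs.parser
hp = hgvs.parser.Parser()
var_rt = hp.parse_hgvs_variant(str(var))
str(var_rt) == str(var)   # True if the round trip is lossless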
<END_TASK> |
15,757 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Reinventing Fast Inverse Roots Using $8^{th}$ Grade Math.
Algebra, Straight Lines, and Logarithms Oh My!
By
Step1: What's the Fast Inverse Root Method you ask?
This is actual code used in Quake and other early 3D games to help us calculate $\frac{1}{\sqrt{x}}$, seriously
c
float rsqrt(float x) {
long i = * ( long * ) &x; // floating point bit hack
i = 0x5f3759df - ( i >> 1 ); // what the fuck?
return * ( float * ) &i;
}
Supposedly,
$$
\textsf{rsqrt}(x) \approx \frac{1}{\sqrt{x}}
$$
and the secret is unlocked behind this impenetrable magical constant 0x5f3759df.
Does this actually work?
Step2: Close Enough!
So what's up with the 0x5f3759df
Where did this thing come from?
Is this some alien technology long lost to our generation?
Is it super complicated and does it require advanced mathematical machinery?
Is it just plain magic that only wizards understand?
Should we open up Pandora's Box and peer inside? We are but mere mortals after all.
As it turns out, yes we can
In fact, I believe that the idea behind this technique is so easy that an $8^{th}$ grader can figure it all out.
Seriously? When I was an $8^{th}$ grader, I could barely use a calculator...
Hey, $8^{th}$ graders are pretty smart nowadays, plus, this isn't really as much of a mindfuck as you think it is.
So easy, even a caveman can do it!
<p style="text-align
Step3: Here, $$\begin{align}
\textsf{f2l}(x) &= (\textsf{long}) \textsf{&} x \
\textsf{l2f}(y) &= (\textrm{float}) \textsf{&} y
\end{align}
$$
Don't worry too much about the implementation details.
I can't believe it's not Casting!
What do we know about $\textsf{f2l}$ and $\textsf{l2f}$?
Inverse Laws
Step4: A Tale of two Variables
Let's forget about the magic constant 0x5f3759df for a moment.
There are actually two magical constants in rsqrt, the other constant is the $-\frac{1}{2}$.
Hey, $\frac{1}{\sqrt{x}}$ is just $x^{-\frac{1}{2}}$, maybe they are related!
Let's look at what happens when we change these two constants.
Since we don't really know what it's doing, let's just name it as if we don't know what we're doing
Step5: Recall that the log-log plot for $y = M \cdot x^C$ is linear because
$$\begin{align}
\log(y) &= \log\left(M x^C\right) \
&= \log(M) + C \cdot \log(x)
\end{align}$$
where $\log(y)$ varies linearly with $\log(x)$ with slope $C$.
What does foobar looks like under the log-log scale?
Step6: Notice how as $C$ gets closer to $-\frac{1}{2}$, the 0x5f3759df line also gets closer to $x^C$.
What do you see?
All the slopes are parallel to each other.
...
Honestly, it's hard to grok anything off of a series of pictures, so let's invoke the ancient wisdom of Python and look at a video of this instead.
Step7: Graphing Calculator Woes
This clearly suggests that all of the lines have the same slopes since they are all parallel.
We only varied the $C$ with time, which changes the slopes of the lines.
We can also definitely see the lines shift vertically over time as well, with the exception of $x^C$.
Also, the distances between the solid lines do not change even though their slopes do.
Throwback to the $8^{th}$ Grade
If you think back to Algebra 1, you should remember that the equation for a line is just
$$
w = a + b \cdot z
$$
Here, we can substitute $w = \log(y)$, $z = \log(x)$, and $b = C$ to get it log-log equation
Step8: Hmm, weren't we expecting 0x5f3759df instead of 0x5f400000?
* It turns out that magic constants that are close to each other have nearly identical behavior. Here, 0x5f3759df is only 0.035% away from 0x5f400000! Close enough.
Behold! The Fast Arbitrary Power Method!
Step9: ```c
#define MAGIC 0x3f800000
float qpow(float x, float exponent) {
long i = * ( long * ) &x; // floating point bit hack
i = (1 - exponent) * MAGIC + exponent * i; // what the fuck?
return * ( float * ) &i;
}```
Step10: Hey, that actually looks pretty good! But what about the errors?
Step11: An error of around $10\%$? That's like nothing!
Conclusion
The idea behind Fast Inverse Sqrt isn't hard.
In fact, you can write a "Fast Exponentiation" method for any arbitrary exponent!
We're all wizards.
Next Steps
We didn't really talk about how people went from 0x5f400000 to 0x5f3759df. There's a wealth of resources for this, but as a shameless plug, check out my Fast Inverse Sqrt the Easy(ish) Way.
We can generate arbitrary fast power approximations, but it would be cool to generate something along the style of the original inverse-sqrt program. | <ASSISTANT_TASK:>
Python Code:
setup_html = r'''
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.0/gh-fork-ribbon.min.css" />
<!--[if lt IE 9]>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.0/gh-fork-ribbon.ie.min.css" />
<![endif]-->
<style>
.github-fork-ribbon::after {line-height: initial !important;padding: initial !important;}
.github-fork-ribbon {font-size: 14px;}
.navigate-up, .navigate-down {
display: none !important;
}
</style>
<script>
$(document).ready(function() {
$("body").append('<a class="github-fork-ribbon" href="http://www.bullshitmath.lol" title="Bullshit Math">Bullshit Math</a>')
});
</script>
'''
# IPython.display.display_html(setup_html, raw=True)
hide_code_in_slideshow()
Explanation: Reinventing Fast Inverse Roots Using $8^{th}$ Grade Math.
Algebra, Straight Lines, and Logarithms Oh My!
By: Lee
Notebook View
<small><em>Press the Right key to navigate these slides.</em></small>
End of explanation
%matplotlib inline
from struct import pack, unpack
import numpy as np
import matplotlib.pyplot as plt
@np.vectorize
def sharp(x):
return unpack('I', pack('f', x))[0]
@np.vectorize
def flat(y):
return unpack('f', pack('I', int(y) & 0xffffffff))[0]
star_long_star_amp = sharp;
star_float_star_amp = flat;
hide_code_in_slideshow();
@np.vectorize
def rsqrt(x): # float rsqrt(float x) {
i = star_long_star_amp(x); # long i = * ( long * ) &x;
i = 0x5f3759df - ( i >> 1 ); # i = 0x5f3759df - ( i >> 1 );
return star_float_star_amp(i); # return * ( float * ) &i;
# }
# Construct a plot
fig = plt.figure(figsize=(16,8));
ax = plt.axes();
# Plot the approximation and the actual inverse sqrt function
x = np.linspace(1, 50, 5000);
approximation, = ax.plot(x, rsqrt(x))
actual, = ax.plot(x, 1/np.sqrt(x))
fig.legend(handles=[approximation, actual], labels=[r'qsqrt(x)', r'$\frac{1}{\sqrt{x}}$'], fontsize=20);
fig.suptitle(r"$\frac{1}{\sqrt{x}}$ versus qsqrt(x)", fontsize=26);
hide_code_in_slideshow()
Explanation: What's the Fast Inverse Root Method you ask?
This is actual code used in Quake and other early 3D games to help us calculate $\frac{1}{\sqrt{x}}$, seriously
c
float rsqrt(float x) {
long i = * ( long * ) &x; // floating point bit hack
i = 0x5f3759df - ( i >> 1 ); // what the fuck?
return * ( float * ) &i;
}
Supposedly,
$$
\textsf{rsqrt}(x) \approx \frac{1}{\sqrt{x}}
$$
and the secret is unlocked behind this impenetrable magical constant 0x5f3759df.
Does this actually work?
End of explanation
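A quick numeric check of just how close (a sketch added for illustration, using the rsqrt defined above):
x = np.linspace(1, 50, 5000)
rel_err = np.abs(rsqrt(x) - 1/np.sqrt(x)) * np.sqrt(x)
print(rel_err.max())   # the worst relative error stays in the low percent range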
from struct import pack, unpack
to_long = lambda hole: unpack('i', hole)[0] # y = (long*) x
to_float = lambda hole: unpack('f', hole)[0] # y = (float*) x
from_long = lambda hole: pack('i', int(hole) % 0x80000000) # long* y = &x
from_float = lambda hole: pack('f', float(hole)) # float* y = &x
hide_code_in_slideshow()
@np.vectorize
def f2l(x):
return to_long(from_float(x))
@np.vectorize
def l2f(y):
return to_float(from_long(y))
Explanation: Close Enough!
So what's up with the 0x5f3759df
Where did this thing come from?
Is this some alien technology long lost to our generation?
Is it super complicated and does it require advanced mathematical machinery?
Is it just plain magic that only wizards understand?
Should we open up Pandora's Box and peer inside? We are but mere mortals after all.
As it turns out, yes we can
In fact, I believe that the idea behind this technique is so easy that an $8^{th}$ grader can figure it all out.
Seriously? When I was an $8^{th}$ grader, I could barely use a calculator...
Hey, $8^{th}$ graders are pretty smart nowadays, plus, this isn't really as much of a mindfuck as you think it is.
So easy, even a caveman can do it!
<p style="text-align: center;">
<img src="http://i.imgur.com/MxWzdDS.jpg" style="margin-left:auto; margin-right:auto;" />
</p>
Notation
Recall that the fast inverse square root method looks like
c
long i = * ( long * ) &x;
i = 0x5f3759df - ( i >> 1 );
return * ( float * ) &i;
Let's look at those two casts:
* *(long*)& takes a float and outputs a long
* *(float*)& takes a long and outputs a float
Let's name these two operations the float-to-long ($\textsf{f2l}: \textsf{float} \to \textsf{long}$) and the long-to-float ($\textsf{l2f}: \textsf{long} \to \textsf{float}$) operators.
$$
\textsf{rsqrt}(x) = \textsf{l2f}\left(\textsf{0x5f3759df} - \frac{1}{2} \textsf{f2l}\left(x\right)\right)
$$
<p><img src="http://i.imgur.com/KgLwu8O.png" style="margin-left:auto;margin-right:auto;"/></p>
<p id="conv123"><img src="http://i.imgur.com/cfDeVpO.png?1" style="margin-left:auto;margin-right:auto;"/></p>
<script>
var show_conv = function() {
var p = $("#conv123");
if (p.length==0) return;
while (!p.is("section")) {
p=p.parent();
if (p.prop("tagName") =="body") return;
}
var section = p;
section.attr('data-transition', 'fade-in slide-out');
}
show_conv();
</script>
Do Snakes Dream of IEEE 754?
What would this look like in Python?
End of explanation
int( l2f(f2l(1) + f2l(1)) ) # 1 + 1 is ...
Explanation: Here, $$\begin{align}
\textsf{f2l}(x) &= (\textsf{long}) \textsf{&} x \
\textsf{l2f}(y) &= (\textrm{float}) \textsf{&} y
\end{align}
$$
Don't worry too much about the implementation details.
I can't believe it's not Casting!
What do we know about $\textsf{f2l}$ and $\textsf{l2f}$?
Inverse Laws:
$$
\textsf{f2l}(\textsf{l2f}(x)) = x = \textsf{l2f}(\textsf{f2l}(x))
$$
<sub><small><em>e.g. If you convert a float to a long and then convert that long back to a float, you get the same float back.</em></small></sub>
Anihilator Laws:
$$
\textsf{f2l}(0) = 0 = \textsf{l2f}(0)
$$
<sub><small><em>e.g. If you convert 0 from a float to a long, it's still 0.</em></small></sub>
There are a few other algebraic properties of f2l and l2f, but unfortunately, they're pretty unstructured. In particular, $x + y \ne \textsf{l2f}(\textsf{f2l}(x) + \textsf{f2l}(y))$:
End of explanation
def foobar(M, C):
return np.vectorize(lambda x: l2f(M + C * f2l(x)))
# rsqrt(x) is instantiated with M = 0x5f3759df and C = -1/2
rsqrt = foobar(0x5f3759df, -1.0/2.0)
Explanation: A Tale of two Variables
Let's forget about the magic constant 0x5f3759df for a moment.
There are actually two magical constants in rsqrt, the other constant is the $-\frac{1}{2}$.
Hey, $\frac{1}{\sqrt{x}}$ is just $x^{-\frac{1}{2}}$, maybe they are related!
Let's look at what happens when we change these two constants.
Since we don't really know what it's doing, let's just name it as if we don't know what we're doing:
$$
\textsf{foobar}_{M,C}(x) = \textsf{l2f}\left(M + C \cdot \textsf{f2l}(x)\right)
$$
Foo, meet Bar
$$
\textsf{foobar}_{M,C}(x) = \textsf{l2f}\left(M + C \cdot \textsf{f2l}(x)\right)
$$
End of explanation
import matplotlib
matplotlib.rcParams['text.usetex'] = False
matplotlib.rcParams['text.latex.unicode'] = False
x = np.linspace(1, 1000, 5000)
allM = (1 << 26, 1 << 28, 0x5f3759df)
properties = {
(0, 0): {'M': allM, 'C': -2},
(1, 0): {'M': allM, 'C': 8},
(0, 1): {'M': allM, 'C': 0.3},
(1, 1): {'M': allM, 'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
C = property['C']
axarr[key].set_ylim(1e-39, 1e41)
handle, = axarr[key].loglog(x, x ** C, linestyle='dotted');
handles = [handle]
for M in property['M']:
baz = foobar(M, C)
kwargs = {'ls' : 'dashed'} if M == 0x5f3759df else {}
handle, = axarr[key].loglog(x, np.abs(baz(x)), **kwargs)
handles.append(handle)
axarr[key].set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
axarr[key].legend(
handles,
[
r'$x^{%s}$' % C,
r'$M = 2^{26}$',
r'$M = 2^{28}$',
r'$M = {\rm 0x5f3759df}$'
], loc=4)
hide_code_in_slideshow()
Explanation: Recall that the log-log plot for $y = M \cdot x^C$ is linear because
$$\begin{align}
\log(y) &= \log\left(M x^C\right) \
&= \log(M) + C \cdot \log(x)
\end{align}$$
where $\log(y)$ varies linearly with $\log(x)$ with slope $C$.
What does foobar looks like under the log-log scale?
End of explanation
from IPython.display import HTML
from matplotlib import animation
animation.Animation._repr_html_ = lambda anim: anim.to_html5_video()
x = np.linspace(1, 1000, 5000)
allM = (1 << 26, 1 << 28, 0x5f3759df)
fig = plt.figure(figsize=(14,8))
ax = plt.axes(ylim=(1e-39, 1e41))
def plotSomeMagic(C, fig, ax, handles=None):
if not handles:
handle, = ax.loglog(x, x ** C, linestyle='dotted');
handles = [handle]
for M in allM:
baz = foobar(M, C)
kwargs = {'ls' : 'dashed'} if M == 0x5f3759df else {}
handle, = ax.loglog(x, np.abs(baz(x)), **kwargs)
handles.append(handle)
else:
handles[0].set_data(x, x ** C)
baz = foobar(allM[0], C)
handles[1].set_data(x, np.abs(baz(x)))
baz = foobar(allM[1], C)
handles[2].set_data(x, np.abs(baz(x)))
baz = foobar(allM[2], C)
handles[3].set_data(x, np.abs(baz(x)))
ax.set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
ax.legend(
handles,
[
r'$x^{%s}$' % C,
r'$M = 2^{26}$',
r'$M = 2^{28}$',
r'$M = {\rm 0x5f3759df}$'
], loc=4)
return tuple(handles)
handles = plotSomeMagic(0, fig, ax)
# initialization function: plot the background of each frame
def init():
return plotSomeMagic(1, fig, ax, handles)
# animation function. This is called sequentially
def animate(i):
return plotSomeMagic(i, fig, ax, handles)
hide_code_in_slideshow()
video = animation.FuncAnimation(fig, animate, init_func=init, frames=np.arange(-2,8,0.10), interval=100, blit=True)
plt.close();
video
Explanation: Notice how as $C$ gets closer to $-\frac{1}{2}$, the 0x5f3759df line also gets closer to $x^C$.
What do you see?
All the slopes are parallel to each other.
...
Honestly, it's hard to grok anything off of a series of pictures, so let's invoke the ancient wisdom of Python and look at a video of this instead.
End of explanation
from IPython.display import display, Latex
# What is f2l(1) (a.k.a. sharp(1))?
display(Latex(r'Just $\textsf{f2l}(1) = \textsf{%s}$.' % hex(f2l(1))))
# What about inverse square-root?
display(Latex(r'For the inverse square-root, its magical constant should be \
$$\left(1 - \frac{-1}{2}\right)\textsf{f2l}(1) = \textsf{%s}$$'
% hex(3 * f2l(1) // 2)))
hide_code_in_slideshow()
Explanation: Graphing Calculator Woes
This clearly suggests that all of the lines have the same slopes since they are all parallel.
We only varied the $C$ with time, which changes the slopes of the lines.
We can also definitely see the lines shift vertically over time as well, with the exception of $x^C$
Also, the distances between the solid lines do not change even though their slopes do.
Throwback to the $8^{th}$ Grade
If you think back to Algebra 1, you should remember that the equation for a line is just
$$
w = a + b \cdot z
$$
Here, we can substitute $w = \log(y)$, $z = \log(x)$, and $b = C$ to get it log-log equation:
$$
\log(y) = a + C \cdot \log(x)
$$
In the case of $y = 1 \cdot x^C$, the corresponding log-log equation is
$$
\begin{align}
\log(y) &= \log\left(1 \cdot x^C\right) \
&= \log(1) + C \cdot \log(x).
\end{align}
$$
Since all of the other lines are parallel to the $x^C$ line, we also know, for some unknown constant $\alpha$, that
$$
\log\left(\textsf{foobar}_{M,C}(x)\right) = \alpha + C \cdot \log(x)\
$$
This confirms our earlier suspicion that the slope $C$ is related to the exponent.
In fact, since the distances between the lines do not change as we vary the slope $C$ in the video, it's clear that the unknown constant $\alpha$ will only depend on the value of our magical constant $M$:
$$
\alpha = \alpha_M
$$
Eureka!
It means that for every exponent $C$, there exists some constant $M$ such that $\textsf{foobar}_{M,C}(x) \approx x^C$.
This is all we need to find meaningful "magic" constants!
In fact, you can find a magic constant for every exponent $x^C$.
Fast Exponentiation [1]
We know that for each $C$, there's some magic $M^$ such that
$$
\textsf{foobar}_{M^,C}(x) = \textsf{l2f}\left(M^* + C \cdot \textsf{f2l}(x)\right) \approx x^C
$$
How do we find $M^$? We know that for inverse square-roots, $M^$ is around 0x5f3759df.
Answer: More $8^{th}$ grade algebra.
Fast Exponentiation [2]
Question: What happens if you send $x = 1$ into $x^C$?
* Answer: $1^C = 1$ no matter what $C$ is. (Dude, Spoilers!)
This is the only crucial insight that we need.
Fast Exponentiation [3]
So let's use $x = 1$ as a fixed boundary. If $\textsf{foobar}_{M^,C}(x)$ is supposed to approximate $x^C$, then we should also expect that $\textsf{foobar}_{M^,C}(x = 1) = 1^C$.
$$
\textsf{foobar}_{M^,C}(1) = \textsf{l2f}\left(M^ + C \cdot \textsf{f2l}(1)\right) = 1^C = 1
$$
What do we do with that l2f? Recall that $\textsf{f2l}(\textsf{l2f}(a)) = a$, so let's apply f2l to both sides to cancel the l2f out.
$$
\textsf{f2l}\left(\textsf{l2f}\left(M^ + C \cdot \textsf{f2l}(1)\right)\right) = M^ + C \cdot \textsf{f2l}(1) = \textsf{f2l}(1)
$$
Unfortunately, $\textsf{f2l}(1) \ne 1$, but we can subtract the $C \cdot \textsf{f2l}(1)$ from both sides to get
$$
\begin{align}
\left(M^ + C \cdot \textsf{f2l}(1)\right) - C \cdot \textsf{f2l}(1) &= \left(\textsf{f2l}(1)\right) - C \cdot \textsf{f2l}(1) \
M^ &= \boxed{(1 - C) \cdot \textsf{f2l}(1)}
\end{align}
$$
Fast Exponentiations [Fin]
This seems to suggest that $\textsf{foobar}_{(1 - C)\textsf{f2l}(1),C}(x) \approx x^C$. How close is this to the truth?
End of explanation
def qexp(C):
# (1 - C) * f2l(1) + C * f2l(x)
return np.vectorize(lambda x: l2f((1 - C) * f2l(1) + C * f2l(x)))
Explanation: Hmm, weren't we expecting 0x5f3759df instead of 0x5f400000?
 * It turns out that magic constants that are close to each other have nearly identical behavior. Here, 0x5f3759df is only 0.035% away from 0x5f400000! Close enough.
Behold! The Fast Arbitrary Power Method!
End of explanation
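A quick sanity check of qexp before the plots (a sketch; the values are approximate by construction):
qsqrt = qexp(0.5)
xs = np.array([2.0, 10.0, 100.0])
print(qsqrt(xs))     # approximate square roots
print(np.sqrt(xs))   # exact values for comparison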
x = np.linspace(1, 1000, 5000)
properties = {
(0, 0): {'M': allM, 'C': -1},
(1, 0): {'M': allM, 'C': 2},
(0, 1): {'M': allM, 'C': 0.3},
(1, 1): {'M': allM, 'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
C = property['C']
handle, = axarr[key].plot(x, x ** C);
handles = [handle]
baz = qexp(C)
handle, = axarr[key].plot(x, baz(x))
handles.append(handle)
# axarr[key].set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
axarr[key].legend(
handles,
[
r'$x^{%s}$' % C,
            r'$M^* = $ %s' % hex(int((1 - C) * sharp(1))),
], loc=4)
hide_code_in_slideshow()
Explanation: ```c
#define MAGIC 0x3f800000
float qpow(float x, float exponent) {
long i = * ( long * ) &x; // floating point bit hack
i = (1 - exponent) * MAGIC + exponent * i; // what the fuck?
return * ( float * ) &i;
}```
End of explanation
from matplotlib.ticker import FuncFormatter
def to_percent(y, position):
# Ignore the passed in position. This has the effect of scaling the default
# tick locations.
s = str(int(100 * y))
# The percent symbol needs escaping in latex
if matplotlib.rcParams['text.usetex'] is True:
return s + r'$\%$'
else:
return s + '%'
# Create the formatter using the function to_percent. This multiplies all the
# default labels by 100, making them all percentages
formatter = FuncFormatter(to_percent)
# ax.yaxis.set_major_formatter(formatter)
hide_code_in_slideshow()
x = np.linspace(1, 1000, 5000)
properties = {
(0, 0): {'C': -1},
(1, 0): {'C': 2},
(0, 1): {'C': 0.3},
(1, 1): {'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
axarr[key].set_ylim(0, 0.5)
axarr[key].yaxis.set_major_formatter(formatter)
C = property['C']
baz = qexp(C)
handle, = axarr[key].plot(x, np.abs(x ** C - baz(x))/(x ** C));
axarr[key].set_title(r'Relative error for $x^{%s}$' % C)
axarr[key].legend(
[handle],
[r'Relative error for $x^{%s}$' % C])
hide_code_in_slideshow()
Explanation: Hey, that actually looks pretty good! But what about the errors?
End of explanation
%%html
<div id="meh">
<small style="font-size: 8px;">[Double Click for Code]</small>
<style>
.hide-in-slideshow-meh {
display: None ! important;
}
</style>
</div>
<script type="text/javascript">
var show_meh = function() {
var p = $("#meh");
var orig = p;
if (p.length==0) return;
while (!p.hasClass("cell")) {
p=p.parent();
if (p.prop("tagName") =="body") return;
}
var cell = p;
cell.dblclick(function() {
if (!orig.hasClass("hide-in-slideshow-meh")) {
cell.find(".input").removeClass("hide-in-slideshow-meh");
orig.addClass("hide-in-slideshow-meh");
} else {
cell.find(".input").addClass("hide-in-slideshow-meh");
orig.removeClass("hide-in-slideshow-meh");
}
});
cell.find(".input").addClass("hide-in-slideshow-meh");
}
show_meh();
</script>
<pre id="wee" class="language-c cm-s-ipython highlight">
// For x^(-0.5)
float qpow(float x) {
long i = * ( long * ) &x;
i = 0x5f400000 + -0.5 * i;
return * ( float * ) &i;
}
</pre>
<p>
<input type="text" id="pown" val="-0.5"/>
</p>
<script type="text/javascript">
require.config({
paths: {
"highlight": "https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.4.0/highlight.min",
}
});
require(["highlight"], function(hljs) {
hljs.configure({
classPrefix: 'cm-' // don't append class prefix
});
$('#wee').each(function(i, block) {
hljs.highlightBlock(block);
});
$("#pown").keyup(function() {
var C = Number($("#pown").val());
if (Number.isNaN(C)) return;
var M = Math.floor(1065353216 * (1 - C)) % 4294967296;
var sign = M >= 0 ? '' : '-';
var code = "// For x^(" + C + ")\nfloat qpow(float x) {\n long i = * ( long * ) &x;\n i = " + sign + "0x" + Math.abs(M).toString(16) + " + " + C + " * i;\n return * ( float * ) &i;\n}\n";
$("#wee").html(code);
$('#wee').each(function(i, block) {
hljs.highlightBlock(block);
});
});
});
</script>
Explanation: An error of around $10\%$? That's like nothing!
Conclusion
The idea behind Fast Inverse Sqrt isn't hard.
In fact, you can write a "Fast Exponentiation" method for any arbitrary exponent!
We're all wizards.
Next Steps
We didn't really talk about how people went from 0x5f400000 to 0x5f3759df. There's a wealth of resources for this, but as a shameless plug, check out my Fast Inverse Sqrt the Easy(ish) Way.
We can generate arbitrary fast power approximations, but it would be cool to generate something along the style of the original inverse-sqrt program.
End of explanation
<END_TASK> |
15,758 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Loading, saving and exporting data
Pymrio includes several functions for data reading and storing. This section presents the methods to use for saving and loading data already in a pymrio compatible format. For parsing raw MRIO data see the different tutorials for working with available MRIO databases.
Here, we use the included small test MRIO system to highlight the different function. The same functions are available for any MRIO loaded into pymrio. Expect, however, significantly decreased performance due to the size of real MRIO system.
Step1: Basic save and read
To save the full system, use
Step2: To read again from that folder do
Step3: The fileio activities are stored in the included meta data history field
Step4: Storage format
Internally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata as file as a file describing the data format ('file_parameters.json') are included in each folder.
Step5: The file format for storing the MRIO data can be switched to a binary pickle format with
Step6: This can be used to reduce the storage space required on the disk for large MRIO databases.
Archiving MRIOs databases
To archive a MRIO system after saving use pymrio.archive
Step7: Data can be read directly from such an archive by
Step8: Currently data can not be saved directly into a zip archive.
It is, however, possible to remove the source files after archiving
Step9: Several MRIO databases can be stored in the same archive
Step10: When loading from an archive which includes multiple MRIO databases, specify
one with the parameter 'path_in_arc'
Step11: The pymrio.load function can be used directly to only a specific satellite account
of a MRIO database from a zip archive
Step12: The archive function is a wrapper around python.zipfile module.
There are, however, some differences to the defaults choosen in the original
Step13: This can then be loaded again as separate satellite account
Step14: As all data in pymrio is stored as pandas DataFrame, the full pandas stack for exporting tables is available. For example, to export a table as excel sheet use | <ASSISTANT_TASK:>
Python Code:
import pymrio
import os
io = pymrio.load_test().calc_all()
Explanation: Loading, saving and exporting data
Pymrio includes several functions for data reading and storing. This section presents the methods to use for saving and loading data already in a pymrio compatible format. For parsing raw MRIO data see the different tutorials for working with available MRIO databases.
Here, we use the included small test MRIO system to highlight the different functions. The same functions are available for any MRIO loaded into pymrio. Expect, however, significantly decreased performance due to the size of real MRIO systems.
End of explanation
save_folder_full = '/tmp/testmrio/full'
io.save_all(path=save_folder_full)
Explanation: Basic save and read
To save the full system, use:
End of explanation
io_read = pymrio.load_all(path=save_folder_full)
Explanation: To read again from that folder do:
End of explanation
io_read.meta
Explanation: The fileio activities are stored in the included meta data history field:
End of explanation
import os
os.listdir(save_folder_full)
Explanation: Storage format
Internally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata as well as a file describing the data format ('file_parameters.json') are included in each folder.
End of explanation
save_folder_bin = '/tmp/testmrio/binary'
io.save_all(path=save_folder_bin, table_format='pkl')
os.listdir(save_folder_bin)
Explanation: The file format for storing the MRIO data can be switched to a binary pickle format with:
End of explanation
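Reading the binary save back works the same way; a sketch, assuming load_all picks the table format up from the 'file_parameters.json' stored alongside the data:
io_bin = pymrio.load_all(path=save_folder_bin)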
mrio_arc = '/tmp/testmrio/archive.zip'
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
pymrio.archive(source=save_folder_full, archive=mrio_arc)
Explanation: This can be used to reduce the storage space required on the disk for large MRIO databases.
Archiving MRIOs databases
To archive a MRIO system after saving use pymrio.archive:
End of explanation
tt = pymrio.load_all(mrio_arc)
Explanation: Data can be read directly from such an archive by:
End of explanation
tmp_save = '/tmp/testmrio/tmp'
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
io.save_all(tmp_save)
print("Directories before archiving: {}".format(os.listdir('/tmp/testmrio')))
pymrio.archive(source=tmp_save, archive=mrio_arc, remove_source=True)
print("Directories after archiving: {}".format(os.listdir('/tmp/testmrio')))
Explanation: Currently data can not be saved directly into a zip archive.
It is, however, possible to remove the source files after archiving:
End of explanation
# Remove a potentially existing archive from before
try:
os.remove(mrio_arc)
except FileNotFoundError:
pass
tmp_save = '/tmp/testmrio/tmp'
io.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version1/', remove_source=True)
io2 = io.copy()
del io2.emissions
io2.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version2/', remove_source=True)
Explanation: Several MRIO databases can be stored in the same archive:
End of explanation
io1_load = pymrio.load_all(mrio_arc, path_in_arc='version1/')
io2_load = pymrio.load_all(mrio_arc, path_in_arc='version2/')
print("Extensions of the loaded io1 {ver1} and of io2: {ver2}".format(
ver1=sorted(io1_load.get_extensions()),
ver2=sorted(io2_load.get_extensions())))
Explanation: When loading from an archive which includes multiple MRIO databases, specify
one with the parameter 'path_in_arc':
End of explanation
emissions = pymrio.load(mrio_arc, path_in_arc='version1/emissions')
print(emissions)
Explanation: The pymrio.load function can be used directly to load only a specific satellite account
of a MRIO database from a zip archive:
End of explanation
save_folder_em= '/tmp/testmrio/emissions'
io.emissions.save(path=save_folder_em)
Explanation: The archive function is a wrapper around the python zipfile module.
There are, however, some differences to the defaults chosen in the original:
In contrast to zipfile.write,
pymrio.archive raises an
error if the data (path + filename) are identical in the zip archive.
Background: the zip standard allows files with the same name and path
to be stored side by side in a zip file. This becomes an issue when unpacking
these files, as they overwrite each other upon extraction.
The default for the parameter 'compression' is set to ZIP_DEFLATED.
This is different from the zipfile default (ZIP_STORED), which would
not give any compression.
See the zipfile docs
for further information.
Depending on the value given for the parameter 'compression'
additional modules might be necessary (e.g. zlib for ZIP_DEFLATED).
Further information on this can also be found in the zipfile python docs.
Storing or exporting a specific table or extension
Each extension of the MRIO system can be stored separately with:
End of explanation
emissions = pymrio.load(save_folder_em)
emissions
emissions.D_cba
Explanation: This can then be loaded again as separate satellite account:
End of explanation
io.emissions.D_cba.to_excel('/tmp/testmrio/emission_footprints.xlsx')
Explanation: As all data in pymrio is stored as pandas DataFrame, the full pandas stack for exporting tables is available. For example, to export a table as excel sheet use:
End of explanation
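The same pandas machinery covers other formats as well, for example csv (the file name here is just an illustrative choice):
io.emissions.D_cba.to_csv('/tmp/testmrio/emission_footprints.csv')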
<END_TASK> |
15,759 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Objectives
The objectives of this notebook is to provide examples about working with AQUA MODIS L1B files, including
Step1: Content
Convert HDF4 format file to HDF5 format
Convert HDF4 to HDF5 on Windows
Convert HDF4 to HDF5 on Linux
Import modules & tools
Read MODIS L1B data through h5py
Get HDF5 file object
The structure of HDF file
Get attribute and sub-attribute information
Some external tools
Read MODIS L1B channel-31 and channel-1 data
Read raw data
Select the channel
Scale factor and offset value
Geolocation field
Reproject MODIS L1B data to a regular grid
Define function reproj_L1B
Test if reproj_L1B works well
Convert channel-31 and channel-1 data
Identify cloud and ocean via 2-D histogram
How to use numpy.histogram2d
Create 2-D Histgram for channel-31 and channel-1 data
Eliminate ocean points based on 2-D histgram
Test if the "criteria" works well
Calculate brightness temperature on the top of the clouds
Save the brightness temperature result as *.mat file
Plot cloud_TBright in mpl_toolkits.basemap
Convert HDF4 format file to HDF5 format
The hdf4 module in Python, <a href='http
Step2: We import h5py to read HDF5 files
Step3: scipy.io for saving data in *.mat format
Step4: For the map view of data, we need mpl_toolkits.basemap
Step5: Read MODIS L1B data through h5py
Searching all *.h5 files in _data/MODIS_L1B/ directory, Make sure that you have the required HDF5 files exist in it.
Step6: Get HDF5 file object
Step7: h5py.File('filename.h5', 'r') returns the object of HDF5 file.
Here we read MODIS L1B channe-31 and channel-1 data from MYD021KM.A*.h5 file and read geolocation fields from a corresponding MYD02QKM.A*.h5 file.
The structure of HDF file
Get attribute and sub-attribute information
Something abou using h5py to discover the structure of HDF files
Step8: Some external tools
If you have tools like <a href='http
Step9: Here the file has a size of Channels * Longitude * Latitude.
Select the channel
The channel information of 'EV_1KM_Emissive' and 'EV_500_Aggr1km_RefSB' can be found in
Step10: Then we can chose the channel we want
Step11: Scale factor and offset value
Simply read raw data with channel is not enough, in order to maximum the precision, MODIS L1B data formed as
$Data = (RawData - offset) \times scale$
this information is included in the attributes of each variable.
Here we use obj[...].attrs.items() to get all the attributes and see if we can find something interesting.
Step12: Here radiance_scales and radiance_offsets are what we want. Number of channels can also be seen through band_names.
We can use a_list=obj[...].attrs.values() to get these info.
Step13: radiance_scales and radiance_offsets are the 7th and 8th group of a_list, and channel-31 is the 11th element of the group.
Step14: We do the same thing for channel-1 data, but now we use reflectance_scales
Step15: Finally, we correct the data, numpy.ones is the same as ones.m in MATLAB.
Step16: Print the maximum data and see if it is reasonable.
Step17: Geolocation field
For the geolocation field, we do not using 'Geolocation Fields' in MYD021KM*.h5 file because 'EV_1KM_Emissive' and 'EV_250_Aggr1km_RefSB'as a resolution of 250m, but the resolution of latitude and longitude in 'Geolocation Fields' is 1km.
We use the 'Geolocation Field' in MYD03*.h5, they are in the following place
Step18: Reproject MODIS L1B data to a regular grid
Here we define function reproj_L1B to reproject data to a regular grid.
One can also use <a href='http
Step19: Test if reproj_L1B works well
Here we reproject channel-31 data to see if reproj_L1B works well.
Step20: Mask NaN for plot, also make sure that the data is not too big to plot.
Step21: Plot the result
Step22: Convert channel-31 and channel-1 data
Be careful to chose res here, high resolution makes your computer slow. Also, it affects the result on <a href='#Create-2-D-Histgram-for-channel-31-and-channel-1-data'>2-D histgram</a> part.
Step23: Identify cloud and ocean via 2-D histogram
Histogram is some kind of a basic <a href='http
Step24: Here, we define channel-1 data on x-axis and call np.histogram2d as what's in <a href='#How-to-use-numpy.histogram2d'>above section</a> to get bin_count value x_edges and y_edges. Noted that masked NumPy array has no attribute flat.
Step25: Then we make 2-D histgram to see the difference between clouds and ocean, the core idea is
Step26: We can see that, there are generally two place where data points are very dense, one is typical ocean, one is typical cloud.
One can set an arbitrary criteria to segment the image. An over strict criteria will wrongly eliminate the information of clouds, but a over loose one will view some of the ocean points as clouds. Here we use
Step27: Eliminate ocean points based on 2-D histgram
We replace the ocean data point to NaN based on the criteria above.
Using for loops and if command is a general way to do that, sometimes there will be more simple ways like
Step28: Test if the "criteria" works well
Then we mask and plot C31_clouds to see if our criteria works well
Step30: Calculate brightness temperature on the top of the clouds
We use planckInvert function as what we have done on satellite3.ipynb to get the brightness temperature at the center of channel-31 (11.02 $\mu m$).
Step31: Print the maximum and see if it is reasonable.
Step32: Save the output as *.mat
We use *.mat as the output data format.
Step33: Plot cloud_TBright in mpl_toolkits.basemap
Here we use mpl_toolkits.basemap to Visualize the result. | <ASSISTANT_TASK:>
Python Code:
__author__ = 'ATSC-301 UBC'
Explanation: Objectives
The objectives of this notebook are to provide examples of working with AQUA MODIS L1B files, including:
Convert HDF4 format data to HDF5 format;
Read *.HDF5 data through "h5py" module;
Correct the raw data by using scale_factor and offset
Reproject sinusoidal projection grid to a regular grid (cylindrical equidistant projection);
Identify cloud and ocean based on 2-D histograms;
Calculate brightness temperature on the top of the clouds and save the result as *.mat file;
Visualizing cloud data through mpl_toolkits.basemap.
The files required for this notebook include:
MYD021KM.A2006303.2220.006.2012078143305.hdf
MYD03.A2006303.2220.006.2012078135515.hdf
MYD021KM.A2006303.2220.006.2012078143305.h5
MYD03.A2006303.2220.006.2012078135515.h5
MYD021KM* are MODIS AQUA L1B files, version 6, observed on Oct 30 2006, which show an extra-tropical cyclone approaching Vancouver Island.
MYD03* is the geolocation file that indicates the MODIS AQUA L1B geolocation fields.
*.hdf is the HDF4 file which can be downloaded from <a href='http://ladsweb.nascom.nasa.gov/data/search.html'>LAADS Web</a>.
*.h5 is the HDF5 file which can be <a href='#Convert-HDF4-format-file-to-HDF5-format'>converted</a> from HDF4 files.
End of explanation
import glob
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division
from __future__ import print_function
% matplotlib inline
Explanation: Content
Convert HDF4 format file to HDF5 format
Convert HDF4 to HDF5 on Windows
Convert HDF4 to HDF5 on Linux
Import modules & tools
Read MODIS L1B data through h5py
Get HDF5 file object
The structure of HDF file
Get attribute and sub-attribute information
Some external tools
Read MODIS L1B channel-31 and channel-1 data
Read raw data
Select the channel
Scale factor and offset value
Geolocation field
Reproject MODIS L1B data to a regular grid
Define function reproj_L1B
Test if reproj_L1B works well
Convert channel-31 and channel-1 data
Identify cloud and ocean via 2-D histogram
How to use numpy.histogram2d
Create 2-D Histogram for channel-31 and channel-1 data
Eliminate ocean points based on 2-D histogram
Test if the "criteria" works well
Calculate brightness temperature on the top of the clouds
Save the brightness temperature result as *.mat file
Plot cloud_TBright in mpl_toolkits.basemap
Convert HDF4 format file to HDF5 format
The hdf4 module in Python, <a href='http://pysclint.sourceforge.net/pyhdf'>pyhdf</a>, is not included in the anaconda channel for either Windows or Mac. So here we plan to convert HDF4 into HDF5 or netCDF format and solve the problem.
But one can also try to install this module manually; the source code and documents can be obtained from <a href='https://github.com/fhs/python-hdf4'>here</a>. Linux users can try to install the python module <a href='https://conda.binstar.org/mutirri'>mutirri</a>, which includes pyhdf in it.
Convert HDF4 to HDF5 on Windows
Windows users can convert from HDF4 to HDF5 using <a href='http://www.hdfgroup.org/h4toh5/download.html'>h4toh5</a>, which is provided by the HDF Group.
One needs to do the following steps:
* Download h4h5tools-2.2.2-win32.zip or h4h5tools-2.2.2-win64.zip on local machine.
* Uncompress the *.zip file, run installation file *.exe
* Open DOS command line window
* cd to bin folder (e.g. cd F:\HDF_Group\H4TOH5\2.2.2\bin)
* use h4toh5 tool in the following ways:
bash
h4toh5convert input.hdf output.h5
h4toh5convert -na input.hdf output.h5
h4toh5convert -nc4 input.hdf output.h5`
* You can add h4toh5 to your system's PATH environment variable, or set PATH=F:\HDF_Group\H4TOH5\2.2.2\bin, and then you can use h4toh5convert everywhere.
Here is a screen shot of h4toh5convert in cmd:
<img src="_figures/01_h4toh5convert.PNG" width=550 align="left"/>
Convert HDF4 to HDF5 on Linux
<a href='http://www.hdfgroup.org/h4toh5/download.html'>h4toh5</a> has a Linux version, so we can use it by repeating the steps above and editing PATH in the .bashrc file.
Import modules & tools
Before you run the cell below, you need to install h5py through anaconda:
bash
conda install h5py
Here we use h5py to read HDF5 files. The h5py package is an open-source Pythonic interface to the HDF5 binary data format;
one can find something useful via h5py's <a href='http://docs.h5py.org/en/latest/index.html'>documentation</a> and their <a href='http://www.h5py.org/'>website</a>. Also, one can find more information about HDF5's file format from <a href='http://www.hdfgroup.org/HDF5/doc/H5.format.html'>here</a>.
End of explanation
import h5py
Explanation: We import h5py to read HDF5 files:
End of explanation
import scipy.io
Explanation: scipy.io for saving data in *.mat format
End of explanation
from mpl_toolkits.basemap import Basemap
Explanation: For the map view of data, we need mpl_toolkits.basemap
End of explanation
hdf5_L1B=glob.glob('_data/MODIS_L1B/MYD021*.h5')
print("MODIS L1B file found {}".format(hdf5_L1B))
hdf5_Geo=glob.glob('_data/MODIS_L1B/MYD03*.h5')
print("MODIS Geolocation file found {}".format(hdf5_Geo))
Explanation: Read MODIS L1B data through h5py
Searching all *.h5 files in the _data/MODIS_L1B/ directory. Make sure that the required HDF5 files exist in it.
End of explanation
hdf5_obj=h5py.File(hdf5_L1B[0], 'r')
geo_obj=h5py.File(hdf5_Geo[0], 'r')
Explanation: Get HDF5 file object
End of explanation
hdf5_attr=hdf5_obj.keys()
print('Attributes in {}'.format(hdf5_L1B))
print('=============================================================')
print('hdf5_attr=hdf5_obj.keys() \n\n{}'.format(hdf5_attr))
sub_attr=hdf5_obj['MODIS_SWATH_Type_L1B'].keys()
print("\n\n\tSub-attributes in 'MODIS_SWATH_Type_L1B'")
print('\t=============================================================')
print("\tsub_attr=hdf5_obj['MODIS_SWATH_Type_L1B'].keys() \n\n\t{}".format(sub_attr))
subsub_attr=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields'].keys()
print("\n\n\t\tSubsub-attributes in 'Data Fields'")
print('\t\t=============================================================')
print("\t\tsubsub_attr=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields'].keys() \n\n\t\t{}".format(subsub_attr))
Explanation: h5py.File('filename.h5', 'r') returns the object of HDF5 file.
Here we read MODIS L1B channel-31 and channel-1 data from the MYD021KM.A*.h5 file and read geolocation fields from the corresponding MYD03.A*.h5 file.
The structure of HDF file
Get attribute and sub-attribute information
Something about using h5py to discover the structure of HDF files:
Apply a_obj.keys() to get all the attributes in a file object.
HDF file usually has more than one layer attributes.
Call a_obj['an_attribute'].keys() to see the sub-attribute.
Use build-in function dir(a_obj['an_attribute']) to see if the option keys still available.
If you cannot find keys by dir(a_obj['an_attribute']['more_attr']), you reach the bottom layer of the file, and usually there are data in it.
Here we attempt to read MODIS L1B channel-31 data as an example. The data 'EV_1KM_Emissive' is in:
End of explanation
# Channel-31
L1B_emiss=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'][:];
print("Size of 'EV_1KM_Emissive':\n===========================\n{}".format(L1B_emiss.shape))
# Channel-1
L1B_ref=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_250_Aggr1km_RefSB'][:];
print("\nSize of 'EV_500_Aggr1km_RefSB':\n================================\n{}".format(L1B_ref.shape))
Explanation: Some external tools
If you have tools like <a href='http://hdfeos.org/software/hdfexplorer.php'>HDF Explorer</a> or <a href='http://www.hdfgroup.org/products/java/hdfview/'>HDF Viewer</a>, you may have a more direct view of the HDF file's structure.
<img src="_figures/01_HDF_file_structure.PNG" width=850 align="left"/>
Read MODIS L1B channel-31 and channel-1 data
Read raw data
Based on chapter <a href='#Structure-of-HDF-file'>3.B</a>, we read channel-31 and channel-1 data.
For channel-31, it is in 'EV_1KM_Emissive' (from channel-20 to 36, 16 different channels)
'MODIS_SWATH_Type_L1B' '/' 'Data Fields' '/' 'EV_1KM_Emissive'
For channel-1, it is in 'EV_250_Aggr1km_RefSB' (channel-1 and 2, 2 different channels)
'MODIS_SWATH_Type_L1B' '/' 'Data Fields' '/' 'EV_250_Aggr1km_RefSB'
End of explanation
band_info=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['Band_1KM_Emissive'][:]
print('List of MODIS L1B Channels\n=====================================\n{}'.format(band_info))
band_info=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['Band_250M'][:]
print('\nList of MODIS L1B Channels\n=====================================\n{}'.format(band_info))
Explanation: Here the file has a size of Channels * Longitude * Latitude.
Select the channel
The channel information of 'EV_1KM_Emissive' and 'EV_250_Aggr1km_RefSB' can be found in:
'MODIS_SWATH_Type_L1B' '/' 'Data Fields' '/' 'Band_1KM_Emissive'
'MODIS_SWATH_Type_L1B' '/' 'Data Fields' '/' 'Band_250M'
End of explanation
C31=L1B_emiss[10, :, :]
C1=L1B_ref[1, :, :]
Explanation: Then we can choose the channel we want:
End of explanation
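A small sanity check (a sketch added for illustration): confirm that index 10 of the emissive band list really corresponds to channel 31 before trusting the slice above.
band_emissive = hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['Band_1KM_Emissive'][:]
print(band_emissive[10])   # expect 31.0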
print('Channel-31 info\n===============================================================')
hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs.items()
Explanation: Scale factor and offset value
Simply reading the raw data for a channel is not enough; in order to maximize the precision, MODIS L1B data is stored as
$Data = (RawData - offset) \times scale$
this information is included in the attributes of each variable.
Here we use obj[...].attrs.items() to get all the attributes and see if we can find something interesting.
End of explanation
a_list=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_1KM_Emissive'].attrs.values()
print(a_list)
Explanation: Here radiance_scales and radiance_offsets are what we want. Number of channels can also be seen through band_names.
We can use a_list=obj[...].attrs.values() to get this info.
End of explanation
C31_scale=a_list[6][10]
C31_offset=a_list[7][10]
Explanation: radiance_scales and radiance_offsets are the 7th and 8th group of a_list, and channel-31 is the 11th element of the group.
End of explanation
C1_scale=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_250_Aggr1km_RefSB'].attrs.values()[9][0]
C1_offset=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_250_Aggr1km_RefSB'].attrs.values()[10][0]
#corrected_counts_scales=hdf5_obj['MODIS_SWATH_Type_L1B']['Data Fields']['EV_250_Aggr1km_RefSB'].attrs.values()[12][0]
Explanation: We do the same thing for the channel-1 data, but now we use reflectance_scales.
End of explanation
C31=(C31 - C31_offset * np.ones(C31.shape))*C31_scale
C1=(C1 - C1_offset * np.ones(C1.shape))*C1_scale
Explanation: Finally, we correct the data; numpy.ones is the analogue of ones.m in MATLAB.
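Incidentally, NumPy broadcasting makes the explicit np.ones unnecessary — a sketch of the equivalent one-liners:
```python
C31 = (C31 - C31_offset) * C31_scale  # the scalar offset broadcasts over the whole array
C1 = (C1 - C1_offset) * C1_scale
```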
End of explanation
np.max(C1)
Explanation: Print the maximum data and see if it is reasonable.
End of explanation
C_x=geo_obj['MODIS_Swath_Type_GEO']['Geolocation Fields']['Longitude'][:]
C_y=geo_obj['MODIS_Swath_Type_GEO']['Geolocation Fields']['Latitude'][:]
print('===================================================')
print('Size of Longitude: {}'.format(C_x.shape))
print('Longitude Interval: {} ~ {}'.format(np.min(C_x), np.max(C_x)))
print('===================================================')
print('Size of Latitude: {}'.format(C_y.shape))
print('Latitude Interval: {} ~ {}'.format(np.min(C_y), np.max(C_y)))
Explanation: Geolocation field
For the geolocation field, we do not use the 'Geolocation Fields' in the MYD021KM*.h5 file, because 'EV_1KM_Emissive' and 'EV_250_Aggr1km_RefSB' have a resolution of 250 m, while the latitude and longitude in that file's 'Geolocation Fields' only have a resolution of 1 km.
We use the 'Geolocation Field' in MYD03*.h5, they are in the following place:
'MODIS_Swath_Type_GEO' '/' 'Geolocation Fields' '/' 'Longitude'
'MODIS_Swath_Type_GEO' '/' 'Geolocation Fields' '/' 'Latitude'
End of explanation
def reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res):
'''
=========================================================================================
Reproject MODIS L1B file to a regular grid
-----------------------------------------------------------------------------------------
d_array, x_array, y_array, bin_count = reproj_L1B(raw_data, raw_x, raw_y, xlim, ylim, res)
-----------------------------------------------------------------------------------------
Input:
raw_data: L1B data, N*M 2-D array.
raw_x: longitude info. N*M 2-D array.
raw_y: latitude info. N*M 2-D array.
xlim: range of longitude, a list.
ylim: range of latitude, a list.
res: resolution, single value.
Output:
d_array: L1B reprojected data.
x_array: reprojected longitude.
y_array: reprojected latitude.
bin_count: how many raw data point included in a reprojected grid.
Note:
the function does not perform well if "res" is larger than the resolution of the input data.
size of "raw_data", "raw_x", "raw_y" must agree.
=========================================================================================
'''
import numpy as np
x_bins=np.arange(xlim[0], xlim[1], res)
y_bins=np.arange(ylim[0], ylim[1], res)
# x_indices=np.digitize(raw_x.flat, x_bins)
# y_indices=np.digitize(raw_y.flat, y_bins)
x_indices=np.searchsorted(x_bins, raw_x.flat, 'right')
y_indices=np.searchsorted(y_bins, raw_y.flat, 'right')
y_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
x_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
d_array=np.zeros([len(y_bins), len(x_bins)], dtype=np.float)
bin_count=np.zeros([len(y_bins), len(x_bins)], dtype=np.int)
for n in range(len(y_indices)): #indices
bin_row=y_indices[n]-1 # '-1' is because we call 'right' in np.searchsorted.
bin_col=x_indices[n]-1
bin_count[bin_row, bin_col] += 1
x_array[bin_row, bin_col] += raw_x.flat[n]
y_array[bin_row, bin_col] += raw_y.flat[n]
d_array[bin_row, bin_col] += raw_data.flat[n]
for i in range(x_array.shape[0]):
for j in range(x_array.shape[1]):
if bin_count[i, j] > 0:
x_array[i, j]=x_array[i, j]/bin_count[i, j]
y_array[i, j]=y_array[i, j]/bin_count[i, j]
d_array[i, j]=d_array[i, j]/bin_count[i, j]
else:
d_array[i, j]=np.nan
x_array[i, j]=np.nan
y_array[i,j]=np.nan
return d_array, x_array, y_array, bin_count
Explanation: Reproject MODIS L1B data to a regular grid
Here we define function reproj_L1B to reproject data to a regular grid.
One can also use <a href='http://www.whrc.org/education/rwanda/pdf/Baccini_MRT.pdf'>MODIS Reprojection Tool</a> to do the work.
Define function reproj_L1B
We now use a function to encapsulate the reprojection process, as was done in satellite3.ipynb. One small difference is that we use
```python
numpy.searchsorted(b, a, 'right')
```
to replace
```python
numpy.digitize(a, b)
```
Because <a href='http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html'>numpy.searchsorted</a> uses binary search while <a href='http://docs.scipy.org/doc/numpy/reference/generated/numpy.digitize.html'>numpy.digitize</a> is based on linear search, the latter can be slow when there is a lot of data.
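A quick sketch checking the equivalence of the two calls on toy data:
```python
import numpy as np
a = np.array([0.15, 0.25, 0.35])
b = np.array([0.1, 0.2, 0.3, 0.4])
print(np.searchsorted(b, a, 'right'))  # [1 2 3]
print(np.digitize(a, b))               # [1 2 3] -- the same bin indices
```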
End of explanation
xlim=[np.min(C_x), np.max(C_x)]
ylim=[np.min(C_y), np.max(C_y)]
C31_grid, longitude, latitude, bin_count = reproj_L1B(C31, C_x, C_y, xlim, ylim, 0.1)
Explanation: Test if reproj_L1B works well
Here we reproject channel-31 data to see if reproj_L1B works well.
End of explanation
C31_grid=np.ma.masked_where(np.isnan(C31_grid), C31_grid)
bin_count=np.ma.masked_where(np.isnan(bin_count), bin_count)
longitude=np.ma.masked_where(np.isnan(longitude), longitude)
latitude=np.ma.masked_where(np.isnan(latitude), latitude)
longitude.shape
Explanation: Mask the NaNs for plotting, and also make sure that the data are not too big to plot.
End of explanation
fig=plt.figure(figsize=(10.5, 9.5))
ax=plt.gca()
ax.set_xlim(xlim[0], xlim[1])
ax.set_ylim(ylim[0], ylim[1])
image=ax.pcolormesh(longitude, latitude, C31_grid)
#plt.colorbar(image)
plt.show()
Explanation: Plot the result
End of explanation
res=0.05;
xlim=[np.min(C_x), np.max(C_x)]
ylim=[np.min(C_y), np.max(C_y)]
C31_grid, longitude, latitude, bin_count = reproj_L1B(C31, C_x, C_y, xlim, ylim, res)
C1_grid, longitude, latitude, bin_count = reproj_L1B(C1, C_x, C_y, xlim, ylim, res)
Explanation: Convert channel-31 and channel-1 data
Be careful when choosing res here: a high resolution makes your computer slow. It also affects the result in the <a href='#Create-2-D-Histogram-for-channel-31-and-channel-1-data'>2-D histogram</a> part.
End of explanation
# create bins for channel-31
C31_bins = 100
C31_lim=[np.nanmin(C31_grid), np.nanmax(C31_grid)]
C31_bins=np.linspace(C31_lim[0], C31_lim[1], C31_bins, dtype=np.float)
# and channel-1
C1_bins = 150
C1_lim=[np.nanmin(C1_grid), np.nanmax(C1_grid)]
C1_bins=np.linspace(C1_lim[0], C1_lim[1], C1_bins, dtype=np.float)
Explanation: Identify cloud and ocean via 2-D histogram
Histogram thresholding is a basic <a href='http://en.wikipedia.org/wiki/Image_segmentation#Histogram-based_methods'>image segmentation</a> technique. Here we apply a 2-D histogram to distinguish clouds from ocean.
How to use numpy.histogram2d
<a href='http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html'>numpy.histogram2d</a> is the main function we use here to create a 2-D histogram; it partitions two 1-D arrays into two sets of 1-D bins and returns the 2-D counts for each combination of bins, as well as the 2-D bin edges.
The I/O format of numpy.histogram2d is not very clear; based on my understanding, a proper way is:
```python
H, y_edges, x_edges = np.histogram2d(y, x, bins=(y_bins, x_bins))
X, Y = np.meshgrid(x_edges[:-1], y_edges[:-1]) # '[:-1]' because number_bins = number_edges - 1
```
numpy.histogram2d is different from the
<a href='http://docs.scipy.org/doc/numpy/reference/generated/numpy.digitize.html'>numpy.digitize</a> approach we used before: numpy.digitize does not return the counts in each bin, so we had to accumulate them in a for loop (as we did in our function <a href='#Define-function-reproj_L1B'>reproj_L1B</a>).
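As an aside, the per-bin averaging that reproj_L1B does with explicit loops can also be vectorized with np.histogram2d itself via its weights argument — a sketch, using the same argument names as reproj_L1B (note it yields len(bins)-1 bins, a slightly different grid than above):
```python
# counts per bin and weighted sums per bin; their ratio is the bin mean
counts, _, _ = np.histogram2d(raw_y.ravel(), raw_x.ravel(), bins=(y_bins, x_bins))
sums, _, _ = np.histogram2d(raw_y.ravel(), raw_x.ravel(), bins=(y_bins, x_bins),
                            weights=raw_data.ravel())
means = sums / np.where(counts > 0, counts, np.nan)  # NaN where a bin is empty
```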
There is a counterpart of numpy.histogram2d for 1-D histograms, named
<a href='http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html'>numpy.histogram</a>. One can also make histograms through <a href='http://matplotlib.org/api/pyplot_api.html'>pyplot.hist</a> and <a href='http://matplotlib.org/api/pyplot_api.html'>pyplot.hist2d</a>.
MATLAB users can use the functions <a href='http://www.mathworks.com/help/matlab/ref/histcounts.html'>histcounts.m</a> and
<a href='http://www.mathworks.com/help/matlab/ref/histogram.html'>histogram.m</a> for 1-D histograms. There is no 2-D histogram function in MATLAB's official toolboxes, but one can find lots of them in MATLAB's file exchange center (e.g.
<a href='http://www.mathworks.com/matlabcentral/fileexchange/45325-efficient-2d-histogram--no-toolboxes-needed'>here</a>).
Create 2-D Histogram for channel-31 and channel-1 data
Create bins for channel-31 and channel-1
End of explanation
y=C31_grid.flat[:]; y_bins=C31_bins # y: C31
x=C1_grid.flat[:]; x_bins=C1_bins # x: C1
H, y_edges, x_edges = np.histogram2d(y, x, bins=(y_bins, x_bins))
X, Y = np.meshgrid(x_edges[:-1], y_edges[:-1])
Explanation: Here, we put the channel-1 data on the x-axis and call np.histogram2d as in the <a href='#How-to-use-numpy.histogram2d'>above section</a> to get the bin counts together with x_edges and y_edges. Note that a masked NumPy array has no attribute flat.
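If the grids happen to be masked arrays, one robust way to obtain plain 1-D values is to fill and then drop the NaNs — a sketch:
```python
vals = np.ma.filled(C31_grid, np.nan).ravel()  # works on masked and plain arrays alike
vals = vals[~np.isnan(vals)]                   # drop masked / NaN cells
```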
End of explanation
# make_axes_locatable ---> for axis control
from mpl_toolkits.axes_grid1 import make_axes_locatable
# set axis
left=0.1; width = 0.8; bottom=0.1; height = 0.65
gap=0.02; hist_len=0.2; cbar_len=0.12
# three boxes
rect_main = [left+hist_len+gap, bottom, width, height]
rect_histx = [left+hist_len+gap, left+height+gap, width-cbar_len, hist_len]
rect_histy = [left, bottom, hist_len, height]
# clev
#clevs=range(40, 281, 40)
clevs=np.arange(3, 31, 3)
CMap=plt.cm.PuBu
CMap.set_over(CMap(np.arange(256))[-1, 0:3])
CMap.set_under('w')
#
xlim_bin=[np.min(X), np.max(X)]
ylim_bin=[np.min(Y), np.max(Y)]
#
# ========== figure ========== #
fig=plt.figure(figsize=(9, 9))
# ========== Main ========== #
# axis
axMain=plt.axes(rect_main)
axMain.yaxis.tick_right()
axMain.yaxis.set_label_position('right')
axMain.set_xlim(xlim_bin)
axMain.set_ylim(ylim_bin)
axMain.set_xlabel('Channel-1', fontsize=12)
axMain.set_ylabel('Channel-31', fontsize=12)
axMain.set_title('2-D Histogram', fontsize=16, fontweight='bold', x=1.15, y=1.15)
divider=make_axes_locatable(axMain)
# grid and frame
plt.grid() # grid on
[i.set_linewidth(2) for i in axMain.spines.itervalues()] # a bold frame
CS=axMain.contourf(X, Y, H/np.max(H)*100, clevs, cmap=CMap, extend='both') # 2-D histgram
CAx=divider.append_axes('right', size='5%', pad=0.75)
CBar=plt.colorbar(CS, cax=CAx)
CBar.set_label('Percentage ( % )', fontsize=10)
CBar.ax.tick_params(axis='y', length=22.5)
# draw line
axMain.plot(x_edges, x_edges*5.5+6.5*np.ones(x_edges.shape), \
color='k', linestyle='--', linewidth=5)
axMain.text(0.4, 6.25, 'Cloud', fontsize=16, fontweight='bold', \
ha='center', va='center', color='k')
axMain.text(0.125, 8.0, 'Ocean', fontsize=16, fontweight='bold', \
ha='center', va='center', color='k')
#
# ========== Hist-x ========== #
axHistx=plt.axes(rect_histx)
axHistx.hist(x, bins=x_bins, color=[0.3, 0.6, 0.8])
axHistx.set_xlim(xlim_bin)
axHistx.axes.get_xaxis().set_visible(False)
# scientific notation for x, y-axis
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
[i.set_linewidth(2) for i in axHistx.spines.itervalues()]
#
# ========== Hist-y ========== #
axHisty = plt.axes(rect_histy)
axHisty.hist(y, bins=y_bins, color=[0.3, 0.6, 0.8], orientation='horizontal')
axHisty.set_ylim(ylim_bin)
axHisty.invert_xaxis()
axHisty.axes.get_yaxis().set_visible(False)
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
[i.set_linewidth(2) for i in axHisty.spines.itervalues()]
# savefig
plt.savefig('_figures/01_MODIS_L1B_histgram.png', dpi=450, facecolor='w', edgecolor='w',
orientation='portrait', papertype='a4', format='png',
transparent=True, bbox_inches='tight', pad_inches=0,
frameon=None)
# show
plt.show()
Explanation: Then we make a 2-D histogram to see the difference between clouds and ocean; the core idea is:
```python
# 2-D histogram
ax.contourf(X, Y, H/np.max(H)) # normalized, because H is sensitive to the resolution 'res' we used before
# try to distinguish clouds from ocean through a linear function (x is channel-1)
axMain.plot(x, x*5.5+6.5*np.ones(x.shape))
```
The rest is code for the figure and the axes.
End of explanation
fig=plt.figure(figsize=(8, 8))
ax=plt.gca()
plt.grid()
ax.set_xlim(xlim_bin[0], xlim_bin[1])
ax.set_ylim(ylim_bin[0], ylim_bin[1])
ax.set_xlabel('Channel-1', fontsize=12)
ax.set_ylabel('Channel-31', fontsize=12)
ax.plot(x, y, color=[0.5, 0.5, 0.5], marker='.', linestyle='None')
ax.plot(x_edges, x_edges*5.5+6.5*np.ones(x_edges.shape), linestyle='--', color='k', linewidth=5)
ax.text(0.4, 6.25, 'Cloud', fontsize=16, fontweight='bold', \
ha='center', va='center', color='k')
ax.text(0.10725, 7.75, 'Ocean', fontsize=16, fontweight='bold', \
ha='center', va='center', color='k')
plt.savefig('_figures/01_MODIS_L1B_Divide_Cloud_and_Ocean.png', dpi=450, facecolor='w', edgecolor='w',
orientation='portrait', papertype='a4', format='png',
transparent=True, bbox_inches='tight', pad_inches=0,
frameon=None)
plt.show()
Explanation: We can see that there are generally two places where the data points are very dense: one is typical ocean, the other typical cloud.
One can set an arbitrary criterion to segment the image. An overly strict criterion will wrongly eliminate cloud information, while an overly loose one will classify some ocean points as clouds. Here we use:
$data\left.\right|_{Channel\text{-}31} < data\left.\right|_{Channel\text{-}1} \times 5.5 + 6.5$
to identify clouds in all the data points.
We draw the line over all the data points to make it clear.
End of explanation
criteria_k=5.5 # slope of the threshold line: cloud if C31 < C1*criteria_k + criteria_b
criteria_b=6.5 # intercept of the threshold line
C1_clouds=np.empty((C31_grid.shape[0], C31_grid.shape[1],))
C1_clouds[:]=np.nan
C31_clouds=np.empty((C31_grid.shape[0], C31_grid.shape[1],))
C31_clouds[:]=np.nan
for i in range(C31_grid.shape[0]):
for j in range(C31_grid.shape[1]):
if(C31_grid[i, j] < C1_grid[i, j]*criteria_k+criteria_b):
# print(C31_grid[i, j])
C31_clouds[i, j]=C31_grid[i, j]
C1_clouds[i, j]=C1_grid[i, j]
Explanation: Eliminate ocean points based on the 2-D histogram
We replace the ocean data points with NaN based on the criterion above.
Using for loops and an if statement is a general way to do that; sometimes there are simpler ways, like:
```python
cloud_line = C1*5.5 + 6.5
C31[C31 < cloud_line] = np.nan
```
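Spelled out with a boolean mask and np.where, producing both cloud arrays at once — a sketch:
```python
is_cloud = C31_grid < C1_grid*criteria_k + criteria_b
C31_clouds = np.where(is_cloud, C31_grid, np.nan)
C1_clouds = np.where(is_cloud, C1_grid, np.nan)
```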
End of explanation
C31_clouds_masked=np.ma.masked_where(np.isnan(C31_clouds), C31_clouds)
fig=plt.figure(figsize=(10.5, 9.5))
ax=plt.gca()
ax.set_xlim(xlim[0], xlim[1])
ax.set_ylim(ylim[0], ylim[1])
image=ax.pcolormesh(longitude, latitude, C31_clouds_masked)
#plt.colorbar(image)
plt.show()
Explanation: Test if the criterion works well
Then we mask and plot C31_clouds to see if our criterion works well.
End of explanation
def planckInvert(wavel,Llambda):
'''
input wavelength in microns and Llambda in W/m^2/micron/sr;
output brightness temperature in K (note that we've removed the factor
of pi because we are working with radiances, not fluxes)
'''
c=2.99792458e+08 #m/s -- speed of light in vacuum
h=6.62606876e-34 #J s -- Planck's constant
kb=1.3806503e-23 # J/K -- Boltzman's constant
c1=2.*h*c**2.
c2=h*c/kb
Llambda=Llambda*1.e6 #convert to W/m^2/m/sr
wavel=wavel*1.e-6 #convert wavelength to m
Tbright=c2/(wavel*np.log(c1/(wavel**5.*Llambda) + 1.))
return Tbright
cloud_Tbright=planckInvert(11.02, C31_clouds)
Explanation: Calculate brightness temperature at the top of the clouds
We use the planckInvert function, as in satellite3.ipynb, to get the brightness temperature at the center of channel-31 (11.02 $\mu m$).
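A quick round-trip sanity check is to evaluate the forward Planck function at the recovered temperature and compare with the input radiance — a sketch, reusing the constants from planckInvert:
```python
def planck(wavel, Tbright):
    c=2.99792458e+08; h=6.62606876e-34; kb=1.3806503e-23
    c1=2.*h*c**2.
    c2=h*c/kb
    wavel=wavel*1.e-6                    # microns -> m
    Llambda=c1/(wavel**5.*(np.exp(c2/(wavel*Tbright)) - 1.))
    return Llambda*1.e-6                 # W/m^2/m/sr -> W/m^2/micron/sr

# planck(11.02, planckInvert(11.02, L)) should reproduce L
```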
End of explanation
np.nanmax(cloud_Tbright)
np.nanmin(cloud_Tbright)
Explanation: Print the maximum and minimum and see if they are reasonable.
End of explanation
import scipy.io
# save as *.mat
scipy.io.savemat('_share/01_MODIS_L1B_TBright', {'longitude': longitude, 'latitude': latitude, 'cloud_Tbright': cloud_Tbright})
Explanation: Save the output as *.mat
We use *.mat as the output data format.
End of explanation
# mask the result
cloud_Tbright_masked=np.ma.masked_where(np.isnan(cloud_Tbright), cloud_Tbright)
from mpl_toolkits.basemap import Basemap
lonlim=xlim
latlim=ylim
vancity_lat=49.25
vancity_lon=-123.1
# Colormap
CMap=plt.cm.hot_r#gist_heat # hot, afmhot, gnuplot
#
proj=Basemap(resolution='l', projection='lcc', \
lat_1=30, lat_2=60, lat_0=45, lon_0=-140, \
llcrnrlon=-155, llcrnrlat=30, \
urcrnrlon=-110, urcrnrlat=56)
# create figure, add axes
fig=plt.figure(figsize=(12, 12))
ax=plt.gca()
## parallels and meridians.
parallels=np.arange(-90, 90, 5)
meridians=np.arange(0, 360, 5)
proj.drawparallels(parallels, labels=[1, 0, 0, 0],\
fontsize=10, latmax=90)
proj.drawmeridians(meridians, labels=[0, 0, 0, 1],\
fontsize=10, latmax=90)
# mask ocean/land to distinguish clouds
proj.drawlsmask(land_color=[0.925, 0.875, 0.375], ocean_color=[0.375, 0.5, 0.75], \
lakes=False, resolution='l')
# draw coast
proj.drawcoastlines(linewidth=1.5, linestyle='solid', color=[0.25, 0.25, 0.25])
# compute native x,y coordinates of grid.
x, y=proj(longitude, latitude)
x_van, y_van=proj(vancity_lon, vancity_lat)
x_text, y_text=proj(vancity_lon+4.5, vancity_lat-0.25)
# pcolor plot
CS=proj.pcolor(x, y, cloud_Tbright_masked, cmap=CMap, vmin=210, vmax=275)
# colorbar
CBar=proj.colorbar(CS, 'right', size='5%', pad='5%')
CBar.set_label('Brightness Temperature ( K )', fontsize=12, fontweight='bold')
CBar.ax.tick_params(axis='y', length=0)
#CBar.ax.invert_yaxis()
# Vancouver
proj.plot(x_van, y_van, marker='o', markersize=18, mfc='k', mec='k')
plt.text(x_text, y_text, 'Vancouver', fontsize=16, fontweight='bold',
ha='center', va='center', color='k')
# title
ax.set_title('Brightness Temperature\nMYD021KM.A2006303.2220 channel-31 ',\
fontweight='bold', fontsize=14)
# Save figure
plt.savefig('_figures/01_MODIS_L1B_TBright.png', dpi=450, facecolor='w', edgecolor='w',
orientation='portrait', papertype='a4', format='png',
transparent=True, bbox_inches='tight', pad_inches=0,
frameon=None)
# Show
plt.show()
Explanation: Plot cloud_Tbright with mpl_toolkits.basemap
Here we use mpl_toolkits.basemap to visualize the result.
End of explanation
<END_TASK> |
15,760 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Infomap
Multi-level network clustering based on the Map equation.
The Map Equation
\begin{equation}
L(M) = q_\curvearrowright H(\mathcal{Q}) + \sum_{i = 1}^{m}{p_{\circlearrowright}^i H(\mathcal{P}^i)}
\end{equation}
$L(M)$ measures the amount of information it takes to describe a random walk on a network given a partition of the network into modules $M$. It is a sum of the amount of information needed to describe the movements between and within the modules, which balances the goodness of fit with the complexity of the model. For more information, see www.mapequation.org.
Import Infomap
Infomap includes an examples/python folder with some examples, including this notebook. Run make in that directory to build the python interface to a local folder.
The infomap package exposes two classes, Infomap and MemInfomap, that wrap an input network and an output tree, and provide a run method to run Infomap on the input network. The classes take a string of options as input.
Step1: Simple example
Step2: Memory networks
With memory networks, the flow between two nodes depends on how you arrived at the first node. These higher-order relationships can be described by trigrams, as in the example below.
Step3: Overlapping modules
Notice that node 2 in the example below exists in both module 0 and module 1. This is because MemInfomap partitions the higher-order state network, which can include multiple state nodes for each physical node. For trigrams, a state node is a (previousNode, node) pair. To keep the state network in the output tree, add the --expanded flag to configure MemInfomap.
Step4: As seen in the expanded output above, node 2 is represented by four state nodes partitioned into two modules depending on where you come from; if you go to node 2 from node 0 or 1 you are considered to be in module 1, but if you go to node 2 from node 3 or 4 you are considered to be in module 0.
Multi-layer networks
Step6: Infomap + NetworkX
Generate and draw a network with NetworkX, colored
according to the community structure found by Infomap. | <ASSISTANT_TASK:>
Python Code:
from infomap import infomap
Explanation: Infomap
Multi-level network clustering based on the Map equation.
The Map Equation
\begin{equation}
L(M) = q_\curvearrowright H(\mathcal{Q}) + \sum_{i = 1}^{m}{p_{\circlearrowright}^i H(\mathcal{P}^i)}
\end{equation}
$L(M)$ measures the amount of information it takes to describe a random walk on a network given a partition of the network into modules $M$. It is a sum of the amount of information needed to describe the movements between and within the modules, which balances the goodness of fit with the complexity of the model. For more information, see www.mapequation.org.
Import Infomap
Infomap includes an examples/python folder with some examples, including this notebook. Run make in that directory to build the python interface to a local folder.
The infomap package exposes two classes, Infomap and MemInfomap, that wrap an input network and an output tree, and provide a run method to run Infomap on the input network. The classes take a string of options as input.
End of explanation
infomapWrapper = infomap.Infomap("--two-level")
# Add link weight as an optional third argument
infomapWrapper.addLink(0, 1)
infomapWrapper.addLink(0, 2)
infomapWrapper.addLink(0, 3)
infomapWrapper.addLink(1, 0)
infomapWrapper.addLink(1, 2)
infomapWrapper.addLink(2, 1)
infomapWrapper.addLink(2, 0)
infomapWrapper.addLink(3, 0)
infomapWrapper.addLink(3, 4)
infomapWrapper.addLink(3, 5)
infomapWrapper.addLink(4, 3)
infomapWrapper.addLink(4, 5)
infomapWrapper.addLink(5, 4)
infomapWrapper.addLink(5, 3)
infomapWrapper.run()
tree = infomapWrapper.tree
print("Found %d modules with codelength: %f" % (tree.numTopModules(), tree.codelength()))
print("\n#node module")
for node in tree.leafIter():
print("%d %d" % (node.physIndex, node.moduleIndex()))
Explanation: Simple example
End of explanation
infomapWrapper = infomap.MemInfomap("--two-level")
# Trigrams represents a path from node A through B to C.
# Add link weight as an optional fourth argument
infomapWrapper.addTrigram(0, 2, 0)
infomapWrapper.addTrigram(0, 2, 1)
infomapWrapper.addTrigram(1, 2, 1)
infomapWrapper.addTrigram(1, 2, 0)
infomapWrapper.addTrigram(1, 2, 3)
infomapWrapper.addTrigram(3, 2, 3)
infomapWrapper.addTrigram(2, 3, 4)
infomapWrapper.addTrigram(3, 2, 4)
infomapWrapper.addTrigram(4, 2, 4)
infomapWrapper.addTrigram(4, 2, 3)
infomapWrapper.addTrigram(4, 3, 3)
infomapWrapper.run()
tree = infomapWrapper.tree
print("Found %d modules with codelength: %f" % (tree.numTopModules(), tree.codelength()))
print("\n#node module")
for node in tree.leafIter():
print("%d %d" % (node.physIndex, node.moduleIndex()))
Explanation: Memory networks
With memory networks, the flow between two nodes depends on how you arrived at the first node. These higher-order relationships can be described by trigrams, as in the example below:
Trigrams
End of explanation
# Store expanded state network
infomapWrapper = infomap.MemInfomap("--two-level --expanded")
infomapWrapper.addTrigram(0, 2, 0)
infomapWrapper.addTrigram(0, 2, 1)
infomapWrapper.addTrigram(1, 2, 1)
infomapWrapper.addTrigram(1, 2, 0)
infomapWrapper.addTrigram(1, 2, 3)
infomapWrapper.addTrigram(3, 2, 3)
infomapWrapper.addTrigram(2, 3, 4)
infomapWrapper.addTrigram(3, 2, 4)
infomapWrapper.addTrigram(4, 2, 4)
infomapWrapper.addTrigram(4, 2, 3)
infomapWrapper.addTrigram(4, 3, 3)
infomapWrapper.run()
tree = infomapWrapper.tree
print("Found %d modules with codelength: %f" % (tree.numTopModules(), tree.codelength()))
print("\n#previousNode node module")
for node in tree.leafIter():
print("%d %d %d" % (node.stateIndex, node.physIndex, node.moduleIndex()))
Explanation: Overlapping modules
Notice that node 2 in the example below exists in both module 0 and module 1. This is because MemInfomap partitions the higher-order state network, which can include multiple state nodes for each physical node. For trigrams, a state node is a (previousNode, node) pair. To keep the state network in the output tree, add the --expanded flag to configure MemInfomap:
End of explanation
infomapWrapper = infomap.MemInfomap("--two-level --expanded")
# from (layer, node) to (layer, node) weight
infomapWrapper.addMultiplexLink(2, 1, 1, 2, 1.0)
infomapWrapper.addMultiplexLink(1, 2, 2, 1, 1.0)
infomapWrapper.addMultiplexLink(3, 2, 2, 3, 1.0)
infomapWrapper.run()
tree = infomapWrapper.tree
print("Found %d modules with codelength: %f" % (tree.numTopModules(), tree.codelength()))
print("\n#layer node module:")
for node in tree.leafIter():
print("%d %d %d" % (node.stateIndex, node.physIndex, node.moduleIndex()))
Explanation: As seen in the expanded output above, node 2 is represented by four state nodes partitioned into two modules depending on where you come from; if you go to node 2 from node 0 or 1 you are considered to be in module 1, but if you go to node 2 from node 3 or 4 you are considered to be in module 0.
Multi-layer networks
End of explanation
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib.colors as colors
%matplotlib inline
def findCommunities(G):
'''
Partition the network with the Infomap algorithm.
Annotates nodes with a 'community' id and returns the number of communities found.
'''
infomapWrapper = infomap.Infomap("--two-level --silent")
print("Building Infomap network from a NetworkX graph...")
for e in G.edges_iter():
infomapWrapper.addLink(*e)
print("Find communities with Infomap...")
infomapWrapper.run();
tree = infomapWrapper.tree
print("Found %d modules with codelength: %f" % (tree.numTopModules(), tree.codelength()))
communities = {}
for node in tree.leafIter():
communities[node.originalLeafIndex] = node.moduleIndex()
nx.set_node_attributes(G, 'community', communities)
return tree.numTopModules()
def drawNetwork(G):
# position map
pos = nx.spring_layout(G)
# community ids
communities = [v for k,v in nx.get_node_attributes(G, 'community').items()]
numCommunities = max(communities) + 1
# color map from http://colorbrewer2.org/
cmapLight = colors.ListedColormap(['#a6cee3', '#b2df8a', '#fb9a99', '#fdbf6f', '#cab2d6'], 'indexed', numCommunities)
cmapDark = colors.ListedColormap(['#1f78b4', '#33a02c', '#e31a1c', '#ff7f00', '#6a3d9a'], 'indexed', numCommunities)
# Draw edges
nx.draw_networkx_edges(G, pos)
# Draw nodes
nodeCollection = nx.draw_networkx_nodes(G,
pos = pos,
node_color = communities,
cmap = cmapLight
)
# Set node border color to the darker shade
darkColors = [cmapDark(v) for v in communities]
nodeCollection.set_edgecolor(darkColors)
# Draw node labels
for n in G.nodes_iter():
plt.annotate(n,
xy = pos[n],
textcoords = 'offset points',
horizontalalignment = 'center',
verticalalignment = 'center',
xytext = [0, 0],
color = cmapDark(communities[n])
)
plt.axis('off')
# plt.savefig("karate.png")
plt.show()
G=nx.karate_club_graph()
findCommunities(G)
drawNetwork(G)
Explanation: Infomap + NetworkX
Generate and draw a network with NetworkX, colored
according to the community structure found by Infomap.
End of explanation
<END_TASK> |
15,761 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
EIS metadata validation script
Used to validate Planon output with spreadsheet input
1. Data import
Step1: Read data. There are two datasets
Step2: Unify the index: caps everything and strip trailing spaces.
Step3: Drop duplicates (shouldn't be any)
Step4: Split Planon import into loggers and meters
Drop duplicates (shouldn't be any)
Step5: Index unique? show number of duplicates in index
Step6: Meters are not unique. This is because of the spaces served. This is ok for now; we will deal with duplicates at the comparison stage. The same is true for loggers - in the unlikely event that there are duplicates in the future.
Step7: 2. Validation
Create list of all buildings present in Planon export. These are buildings to check the data against from Master.
Step8: 2.1. Meters
Create dataframe slice for validation from master_meterssensors where the only the buildings located in buildings are contained. Save this new slice into master_meterssensors_for_validation. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together.
Step9: Planon sensors are not unique because of the spaces served convention in the two data architectures. The Planon architecture devotes a new line for each space served - hence the not unique index. The Master architecture lists all the spaces only once, as a list, therefore it has a unique index. We will need to take this into account and create matching dataframe out of planon for comparison, with a unique index.
Step10: Sort datasets after index for easier comparison.
Step11: 2.1.1 Slicing of meters to only certain columns of comparison
Step12: Create dictionary that maps Planon column names onto Master.
From Nicola
Step13: Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to loss of information of spaces served, but also a unique index for the Planon dataframe, therefore bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or just trust the Planon space mapping), this needs to be modified.
Step14: Unify headers, drop duplicates (bear the mind the spaces argument, this where it needs to be brought back in in the future!).
Step15: Fiscal/Tenant meter name needs fixing from Yes/No and 1/0.
Step16: Cross-check missing meters
Step17: Check for duplicates in index, but not duplicates over the entire row
Step18: The duplicates are the nans. Remove these for now. Could revisit later to do an index-less comparison, only over row contents.
Step19: Do the comparison only on common indices. Need to revisit and identify the cause of the missing meters, both ways (5 Planon->Meters and 30 Meters->Planon in this example).
Step20: Still have duplicate indices. For now we just drop and keep the first.
Step21: 2.1.2. Primitive comparison
Step22: 2.1.3. Horizontal comparison
Number of cells matching
Step23: Percentage matching
Step24: 2.1.4. Vertical comparison
Step25: 2.1.5. Smart(er) comparison
Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps.
Step26: Some errors fixed, some left. Let's see which ones. These are either
Step27: Let us repeat the exercise for Logger Channel. Cross-validate, flag as highly likely error where both mismatch.
Step28: All errors fixed on logger channels.
Step29: New error percentage
Step30: 2.2. Loggers
Step31: Create dictionary that maps Planon column names onto Master.
From Nicola
Step32: Loggers matching
Step33: Percentage matching
Step34: Loggers not matching on Building Name.
Step35: That didn't help.
Step36: Follow up with lexical distance comparison. That would flag this as a match.
Loggers not matching on Serial Number.
Step37: Technically the same, but there is a number format error. Compare based on the float value; if they match, replace one of them. This needs to be amended, as it will throw a 'cannot convert to float' exception if strings are left in from the previous step.
Step38: New error percentage
Step39: (Bearing in my mind the above, this is technically 0) | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: EIS metadata validation script
Used to validate Planon output with spreadsheet input
1. Data import
End of explanation
planon=pd.read_excel('EIS Assets v2.xlsx',index_col = 'Code')
#master_loggerscontrollers_old = pd.read_csv('LoggersControllers.csv', index_col = 'Asset Code')
#master_meterssensors_old = pd.read_csv('MetersSensors.csv', encoding = 'macroman', index_col = 'Asset Code')
master='MASTER PlanonLoggersAndMeters 17 10 16.xlsx'
master_loggerscontrollers=pd.read_excel(master,sheetname='Loggers Controllers', index_col = 'Asset Code')
master_meterssensors=pd.read_excel(master,sheetname='Meters Sensors', encoding = 'macroman', index_col = 'Asset Code')
planon['Code']=planon.index
master_loggerscontrollers['Code']=master_loggerscontrollers.index
master_meterssensors['Code']=master_meterssensors.index
set(master_meterssensors['Classification Group'])
set(master_loggerscontrollers['Classification Group'])
new_index=[]
for i in master_meterssensors.index:
if '/' not in i:
new_index.append(i[:i.find('-')+1]+i[i.find('-')+1:].replace('-','/'))
else:
new_index.append(i)
master_meterssensors.index=new_index
master_meterssensors['Code']=master_meterssensors.index
new_index=[]
for i in master_meterssensors.index:
logger=i[:i.find('/')]
if master_loggerscontrollers.loc[logger]['Classification Group']=='BMS controller':
meter=i[i.find('/')+1:]
if meter[0] not in {'N','n','o','i'}:
new_index.append(i)
else:
new_index.append(i)
len(master_meterssensors)
master_meterssensors=master_meterssensors.loc[new_index]
len(master_meterssensors)
master_meterssensors.to_csv('meterssensors.csv')
master_loggerscontrollers.to_csv('loggerscontrollers.csv')
Explanation: Read data. There are two datasets: Planon and Master. The latter is the EIS data nomenclature that was created. Master is made up of two subsets: loggers and meters. Loggers are sometimes called controllers, and meters are sometimes called sensors. In rare cases meters or sensors are also called channels.
End of explanation
planon.index=[str(i).strip() for i in planon.index]
master_loggerscontrollers.index=[str(i).strip() for i in master_loggerscontrollers.index]
master_meterssensors.index=[str(i).strip() for i in master_meterssensors.index]
Explanation: Unify the index: caps everything and strip trailing spaces.
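Note that the list comprehensions above only cast to str and strip; if upper-casing is wanted as well, a sketch:
```python
planon.index = [str(i).strip().upper() for i in planon.index]
```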
End of explanation
planon.drop_duplicates(inplace=True)
master_loggerscontrollers.drop_duplicates(inplace=True)
master_meterssensors.drop_duplicates(inplace=True)
Explanation: Drop duplicates (shouldn't be any)
End of explanation
# Split the Planon file into 2, one for loggers & controllers, and one for meters & sensors.
planon_loggerscontrollers = planon.loc[(planon['Classification Group'] == 'EN.EN4 BMS Controller') | (planon['Classification Group'] == 'EN.EN1 Data Logger')]
planon_meterssensors = planon.loc[(planon['Classification Group'] == 'EN.EN2 Energy Meter') | (planon['Classification Group'] == 'EN.EN3 Energy Sensor')]
planon_loggerscontrollers.drop_duplicates(inplace=True)
planon_meterssensors.drop_duplicates(inplace=True)
Explanation: Split Planon import into loggers and meters
Drop duplicates (shouldn't be any)
End of explanation
len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
Explanation: Index unique? show number of duplicates in index
End of explanation
planon_meterssensors.head(3)
Explanation: Meters are not unique. This is becasue of the spaces served. This is ok for now, we will deal with duplicates at the comparison stage. Same is true for loggers - in the unlikely event that there are duplicates in the future.
End of explanation
buildings=set(planon_meterssensors['BuildingNo.'])
buildings
len(buildings)
Explanation: 2. Validation
Create list of all buildings present in Planon export. These are buildings to check the data against from Master.
End of explanation
master_meterssensors_for_validation = \
pd.concat([master_meterssensors.loc[master_meterssensors['Building Code'] == building] \
for building in buildings])
master_meterssensors_for_validation.head(2)
#alternative method
master_meterssensors_for_validation2 = \
master_meterssensors[master_meterssensors['Building Code'].isin(buildings)]
master_meterssensors_for_validation2.head(2)
Explanation: 2.1. Meters
Create a dataframe slice for validation from master_meterssensors in which only the buildings listed in buildings are contained. Save this new slice into master_meterssensors_for_validation. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together.
End of explanation
len(master_meterssensors_for_validation)
len(planon_meterssensors)-len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
Explanation: Planon sensors are not unique because of the spaces-served convention in the two data architectures. The Planon architecture devotes a new line to each space served - hence the non-unique index. The Master architecture lists all the spaces only once, as a list, and therefore has a unique index. We will need to take this into account and create a matching dataframe out of Planon for comparison, with a unique index.
End of explanation
master_meterssensors_for_validation.sort_index(inplace=True)
planon_meterssensors.sort_index(inplace=True)
Explanation: Sort datasets after index for easier comparison.
End of explanation
planon_meterssensors.T
master_meterssensors_for_validation.T
Explanation: 2.1.1 Slicing of meters to only certain columns of comparison
End of explanation
#Planon:Master
meters_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Channel",
"Tenant Meter.Name":"Tenant meter",
"Fiscal Meter.Name":"Fiscal meter",
"Code":"Code"
}
Explanation: Create dictionary that maps Planon column names onto Master.
From Nicola:
- Code (Asset Code)
- Description
- EIS ID (Channel)
- Utility Type
- Fiscal Meter
- Tenant Meter
Building code and Building name are implicitly included. Logger Serial Number, IP or MAC would be essential to include, as well as Make and Model. Additional Location Info is not essential but would be useful to have. Locations (Locations.Space.Space number and Space Name) are included in the Planon export - but this is their only viable data source, therefore they are not validated against.
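An equivalent, arguably tidier way to apply this mapping is pandas' rename, instead of reassigning .columns later — a sketch:
```python
planon_meterssensors_filtered = planon_meterssensors[list(meters_match_dict)] \
                                    .rename(columns=meters_match_dict)
```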
End of explanation
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation[list(meters_match_dict.values())]
planon_meterssensors_filtered=planon_meterssensors[list(meters_match_dict.keys())]
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
Explanation: Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to loss of information of spaces served, but also a unique index for the Planon dataframe, therefore bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or just trust the Planon space mapping), this needs to be modified.
End of explanation
planon_meterssensors_filtered.columns=[meters_match_dict[i] for i in planon_meterssensors_filtered]
planon_meterssensors_filtered.drop_duplicates(inplace=True)
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
planon_meterssensors_filtered.head(2)
Explanation: Unify headers, drop duplicates (bear the mind the spaces argument, this where it needs to be brought back in in the future!).
End of explanation
planon_meterssensors_filtered['Fiscal meter']=planon_meterssensors_filtered['Fiscal meter'].isin(['Yes'])
planon_meterssensors_filtered['Tenant meter']=planon_meterssensors_filtered['Tenant meter'].isin(['Yes'])
master_meterssensors_for_validation_filtered['Fiscal meter']=master_meterssensors_for_validation_filtered['Fiscal meter'].isin([1])
master_meterssensors_for_validation_filtered['Tenant meter']=master_meterssensors_for_validation_filtered['Tenant meter'].isin([1])
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
Explanation: Fiscal/Tenant meter name needs fixing from Yes/No and 1/0.
End of explanation
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
(set([i[:5] for i in meterssensors_not_in_planon]))
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
len(set([i for i in meterssensors_not_in_master]))
set([i[:9] for i in meterssensors_not_in_master])
set([i[:5] for i in meterssensors_not_in_master])
Explanation: Cross-check missing meters
End of explanation
print(len(planon_meterssensors_filtered.index))
print(len(set(planon_meterssensors_filtered.index)))
print(len(master_meterssensors_for_validation_filtered.index))
print(len(set(master_meterssensors_for_validation_filtered.index)))
master_meterssensors_for_validation_filtered[master_meterssensors_for_validation_filtered.index.duplicated()]
Explanation: Check for duplicates in index, but not duplicates over the entire row
End of explanation
good_index=[i for i in master_meterssensors_for_validation_filtered.index if str(i).lower().strip()!='nan']
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation_filtered.loc[good_index]
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
len(planon_meterssensors_filtered)
len(master_meterssensors_for_validation_filtered)
Explanation: The duplicates are the NaNs. Remove these for now. We could revisit this later and do an index-less comparison, over row contents only.
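Such an index-less, content-only comparison could be sketched with an outer merge and its indicator column:
```python
diff = pd.merge(planon_meterssensors_filtered.reset_index(drop=True),
                master_meterssensors_for_validation_filtered.reset_index(drop=True),
                how='outer', indicator=True)
diff[diff['_merge'] != 'both']  # rows present in only one of the two frames
```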
End of explanation
comon_index=list(set(master_meterssensors_for_validation_filtered.index).intersection(set(planon_meterssensors_filtered.index)))
len(comon_index)
master_meterssensors_for_validation_intersected=master_meterssensors_for_validation_filtered.loc[comon_index].sort_index()
planon_meterssensors_intersected=planon_meterssensors_filtered.loc[comon_index].sort_index()
len(master_meterssensors_for_validation_intersected)
len(planon_meterssensors_intersected)
Explanation: Do the comparison only on common indices. Need to revisit and identify the cause of the missing meters, both ways (5 Planon->Meters and 30 Meters->Planon in this example).
End of explanation
master_meterssensors_for_validation_intersected = master_meterssensors_for_validation_intersected[~master_meterssensors_for_validation_intersected.index.duplicated(keep='first')]
master_meterssensors_for_validation_intersected.head(2)
planon_meterssensors_intersected.head(2)
Explanation: Still have duplicate indices. For now we just drop and keep the first.
End of explanation
planon_meterssensors_intersected==master_meterssensors_for_validation_intersected
np.all(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected)
Explanation: 2.1.2. Primitive comparison
End of explanation
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()
Explanation: 2.1.3. Horizontal comparison
Number of cells matching
End of explanation
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
((planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100).plot(kind='bar')
Explanation: Percentage matching
End of explanation
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum())
df
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()/\
len(planon_meterssensors_intersected.T)*100)
df[df[0]<100]
Explanation: 2.1.4. Vertical comparison
End of explanation
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
planon_meterssensors_intersected['Description']=[str(s).lower().strip()\
.replace('  ',' ').replace('  ',' ').replace('nan','')\
for s in planon_meterssensors_intersected['Description'].values]
master_meterssensors_for_validation_intersected['Description']=[str(s).lower().strip()\
.replace('  ',' ').replace('  ',' ').replace('nan','')\
for s in master_meterssensors_for_validation_intersected['Description'].values]
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
Explanation: 2.1.5. Smart(er) comparison
Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps.
End of explanation
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Description'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Description'])
Explanation: Some errors fixed, some left. Let's see which ones. These are either:
- Wrong duplicate dropped
- Human input errors in the description.
- Actual errors somewhere in the indexing.
End of explanation
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
planon_meterssensors_intersected['Logger Channel']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ') for s in planon_meterssensors_intersected['Logger Channel'].values]
master_meterssensors_for_validation_intersected['Logger Channel']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ') for s in master_meterssensors_for_validation_intersected['Logger Channel'].values]
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
Explanation: Let us repeat the exercise for Logger Channel. Cross-validate, flag as highly likely error where both mismatch.
End of explanation
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Logger Channel'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Logger Channel'])
Explanation: All errors fixed on logger channels.
End of explanation
(planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
Explanation: New error percentage:
End of explanation
buildings=set(planon_loggerscontrollers['BuildingNo.'])
buildings
master_loggerscontrollers_for_validation = \
pd.concat([master_loggerscontrollers.loc[master_loggerscontrollers['Building Code'] == building] \
for building in buildings])
master_loggerscontrollers_for_validation.head(2)
len(master_loggerscontrollers_for_validation)
len(planon_loggerscontrollers)-len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
master_loggerscontrollers_for_validation.sort_index(inplace=True)
planon_loggerscontrollers.sort_index(inplace=True)
planon_loggerscontrollers.T
master_loggerscontrollers_for_validation.T
Explanation: 2.2. Loggers
End of explanation
#Planon:Master
loggers_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Serial Number",
"Make":"Make",
"Model":"Model",
"Code":"Code"
}
master_loggerscontrollers_for_validation_filtered=master_loggerscontrollers_for_validation[list(loggers_match_dict.values())]
planon_loggerscontrollers_filtered=planon_loggerscontrollers[list(loggers_match_dict.keys())]
master_loggerscontrollers_for_validation_filtered.head(2)
planon_loggerscontrollers_filtered.head(2)
planon_loggerscontrollers_filtered.columns=[loggers_match_dict[i] for i in planon_loggerscontrollers_filtered]
planon_loggerscontrollers_filtered.drop_duplicates(inplace=True)
master_loggerscontrollers_for_validation_filtered.drop_duplicates(inplace=True)
planon_loggerscontrollers_filtered.head(2)
master_loggerscontrollers_for_validation_filtered.head(2)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nLoggers in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nLoggers in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
print(len(planon_loggerscontrollers_filtered.index))
print(len(set(planon_loggerscontrollers_filtered.index)))
print(len(master_loggerscontrollers_for_validation_filtered.index))
print(len(set(master_loggerscontrollers_for_validation_filtered.index)))
master_loggerscontrollers_for_validation_filtered[master_loggerscontrollers_for_validation_filtered.index.duplicated()]
comon_index=list(set(master_loggerscontrollers_for_validation_filtered.index).intersection(set(planon_loggerscontrollers_filtered.index)))
master_loggerscontrollers_for_validation_intersected=master_loggerscontrollers_for_validation_filtered.loc[comon_index].sort_index()
planon_loggerscontrollers_intersected=planon_loggerscontrollers_filtered.loc[comon_index].sort_index()
master_loggerscontrollers_for_validation_intersected.head(2)
planon_loggerscontrollers_intersected.head(2)
planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected
Explanation: Create dictionary that maps Planon column names onto Master.
From Nicola:
- EIS ID (Serial Number)
- Make
- Model
- Description
- Code (Asset Code)
- Building Code
Building code and Building name are implicitly included. Logger IP or MAC would be essential to include, as well as Make and Model. Additional Location Info is not essential but would be useful to have. Locations (Locations.Space.Space number and Space Name) are included in the Planon export - but this is their only viable data source, therefore they are not validated against.
End of explanation
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()
Explanation: Loggers matching
End of explanation
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
((planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100).plot(kind='bar')
Explanation: Percentage matching
End of explanation
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
planon_loggerscontrollers_intersected['Building Name']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ') for s in planon_loggerscontrollers_intersected['Building Name'].values]
master_loggerscontrollers_for_validation_intersected['Building Name']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ') for s in master_loggerscontrollers_for_validation_intersected['Building Name'].values]
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
Explanation: Loggers not matching on Building Name.
End of explanation
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Building Name'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name'])
Explanation: That didn't help.
End of explanation
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ').replace('{','').replace('}','') for s in planon_loggerscontrollers_intersected['Logger Serial Number'].values]
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=[str(s).lower().strip().replace('  ',' ').replace('  ',' ').replace('{','').replace('}','') for s in master_loggerscontrollers_for_validation_intersected['Logger Serial Number'].values]
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
Explanation: Follow up with lexical distance comparison. That would flag this as a match.
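The lexical-distance idea could be sketched with difflib from the standard library (the cutoff value is illustrative):
```python
import difflib
def similar(a, b, cutoff=0.8):
    # ratio() is a similarity score in [0, 1]
    return difflib.SequenceMatcher(None, str(a), str(b)).ratio() >= cutoff
```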
Loggers not matching on Serial Number.
End of explanation
z1=[]
z2=[]
for i in planon_loggerscontrollers_intersected.index:
if planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']:
if float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])==\
float(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']):
z1.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
z2.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=z1
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=z2
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
Explanation: Technically the same, but there is a number format error. Compare based on the float value; if they match, replace one of them. This needs to be amended, as it will throw a 'cannot convert to float' exception if strings are left in from the previous step.
End of explanation
(planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
Explanation: New error percentage:
End of explanation
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
q1=pd.DataFrame(meterssensors_not_in_planon)
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
q2=pd.DataFrame(meterssensors_not_in_master)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nLoggers in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
q3=pd.DataFrame(loggerscontrollers_not_in_planon)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nLoggers in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
q4=pd.DataFrame(loggerscontrollers_not_in_master)
q5=pd.DataFrame((planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100)
q6=pd.DataFrame((planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100)
w1=[]
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
w1.append({"Meter":i,'Planon':planon_meterssensors_intersected.loc[i]['Description'],
'Master':master_meterssensors_for_validation_intersected.loc[i]['Description']})
q7=pd.DataFrame(w1)
w2=[]
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
w2.append({"Logger":i,'Planon':planon_loggerscontrollers_intersected.loc[i]['Building Name'],
'Master':master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name']})
q8=pd.DataFrame(w2)
writer = pd.ExcelWriter('final5b.xlsx')
q1.to_excel(writer,'Meters Master, not Planon')
q2.to_excel(writer,'Meters Planon, not Master')
q3.to_excel(writer,'Loggers Master, not Planon')
q4.to_excel(writer,'Loggers Planon, not Master')
q5.to_excel(writer,'Meters error perc')
q6.to_excel(writer,'Loggers error perc')
q7.to_excel(writer,'Meters naming conflicts')
q8.to_excel(writer,'Loggers naming conflicts')
q1
q9=[]
try:
for i in q1[0].values:
if i[:i.find('/')] not in set(q3[0].values):
q9.append(i)
except Exception: pass
pd.DataFrame(q9).to_excel(writer,'Meters Master, not Planon, not Logger')
q10=[]
try:
for i in q1[0].values:
if 'L82' not in i:
q10.append(i)
except Exception: pass
pd.DataFrame(q10).to_excel(writer,'Meters Master, not Planon, not L82')
q11=[]
try:
for i in q1[0].values:
if 'MC210' not in i:
q11.append(i)
except Exception: pass
pd.DataFrame(q11).to_excel(writer,'Meters Master, not Planon, not 210')
writer.save()
test=[]
for i in planon_meterssensors_intersected.index:
test.append(i[:9])
planon_meterssensors_intersected['test']=test
planon_meterssensors_intersected.set_index(['test','Code'])
Explanation: (Bearing in mind the above, this is technically 0.)
End of explanation
<END_TASK> |
15,762 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Problem Set - Statistics 3
Task
Step1: Data
Bring In Titanic Passenger Data
Step2: The data we care about for this hypothesis(Survived, Fare) has no NaN values so no need to modify.
Setup a Survivor Data Set and a Non-Survivor Data Set.
Step3: Hypothesis
Step4: Based on this graph it is clear that these two distributions are best compared using a Mann-Whitney U-test
Step5: Based on the high U-stat and the very low p-value we can reject the null hypothesis that there is no difference in fare paid between the survivors and the dead.
Task
Step6: There are NaN ages which must be dealt with. In this case they will be ignored.
Step7: Visualize This Data With a Scatter Plot focusing on the highest density area.
Step10: Create a Linear Function and chi-squared statistic function. These will be used to find the best slope for the linear model.
Step11: The initial range of (-20,20,1) was narrowed down to (0,2,100) because it brought chi-squared closer to 1.
Step12: Visualize the linear model over the data. | <ASSISTANT_TASK:>
Python Code:
import pandas
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Problem Set - Statistics 3
Task: Did survivors pay more for their tickets than those that didn’t? If so, to what significance level?
Import Necessary Python Modules
End of explanation
td = pandas.read_csv('titanic_train.csv')
td.info()
Explanation: Data
Bring In Titanic Passenger Data
End of explanation
survivors = td[td.Survived==1]
dead = td[td.Survived==0]
Explanation: The data we care about for this hypothesis(Survived, Fare) has no NaN values so no need to modify.
Setup a Survivor Data Set and a Non-Survivor Data Set.
End of explanation
plt.figure(figsize=(13,6))
plt.hist(survivors.Fare, alpha=.5, bins=np.arange(0,300,10), label="Survivors")
plt.hist(dead.Fare, alpha=.5, bins=np.arange(0,300,10), label="Died")
plt.legend()
plt.title('Fare Distribution of Passenger Groups')
plt.xlabel('Fare Payed')
plt.ylabel('Number of Passengers')
plt.show()
Explanation: Hypothesis: There is a significant difference in fare paid between the survivors and the dead.
Null-Hypothesis: There is no difference in fare paid between the survivors and the dead.
First establish a distribution of fares and number of surviors/dead
End of explanation
from scipy.stats import mannwhitneyu
u, p = mannwhitneyu(survivors.Fare, dead.Fare)
print("Results:\n\tU-statistic: %.5f\n\tp-value: %g" % (u, p * 2))
Explanation: Based on this graph it is clear that these two distributions are best compared using a Mann-Whitney U-test
End of explanation
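# A possible follow-up (a sketch, not part of the original analysis): turn U
# into a rank-biserial correlation to gauge the size of the effect.
n1, n2 = len(survivors), len(dead)
r = 1 - (2.0 * u) / (n1 * n2)
print("Rank-biserial correlation: %.3f" % r)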
td.info()
Explanation: Based on the high U-stat and the very low p-value we can reject the null hypothesis that there is no difference in fare paid between the survivors and the dead.
Task: Find The Best Linear Model For The Relationship Between Fare and Age.
The goal is to find the best slope so that the model
Fare = slope * Age
fits the data.
Looking again at the original data.
End of explanation
valid_age = td.Age[td.Age>0]
valid_fare = td.Fare[td.Age>0]
Explanation: There are NaN ages which must be dealt with. In this case they will be ignored.
End of explanation
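# Equivalent NaN handling (a sketch): the same filtering via pandas' dropna,
# which generalizes if more columns need to be valid.
subset = td[['Age', 'Fare']].dropna(subset=['Age'])
valid_age, valid_fare = subset.Age, subset.Fare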
plt.figure(figsize=(7,4))
plt.scatter(valid_age, valid_fare)
plt.xlim(0,80)
plt.ylim(0,150)
plt.title('Comparision of Age and Fare')
plt.xlabel('Age')
plt.ylabel('Fare')
plt.show()
Explanation: Visualize This Data With a Scatter Plot focusing on the highest density area.
End of explanation
def linear(data, slope):
    """A Linear Function Method"""
    return data * slope

def chi_sq(data, model, std, dof=1):
    """Function to Determine The chi-squared statistic"""
    return sum(((data - model)/std)**2) / (len(data) - dof)
Explanation: Create a Linear Function and chi-squared statistic function. These will be used to find the best slope for the linear model.
End of explanation
slopes = np.linspace(0,2,100)
chi_results = []
for s in slopes:
model_fare = linear(valid_age,s)
chi_results.append(chi_sq(valid_fare, model_fare, valid_fare.std(), dof=1))
chi_results = np.array(chi_results)
print("Best Chi_Squared: {}".format(chi_results[chi_results.argmin()]))
print("Best Slope: {}".format(slopes[chi_results.argmin()]))
Explanation: The initial range of (-20,20,1) was narrowed down to (0,2,100) because it brought chi-squared closer to 1.
End of explanation
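# Cross-check on the grid search (a sketch): for a through-the-origin linear
# model the least-squares slope has a closed form, sum(x*y) / sum(x*x).
closed_form_slope = np.sum(valid_age * valid_fare) / np.sum(valid_age ** 2)
print("Closed-form least-squares slope:", closed_form_slope)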
plt.figure(figsize=(7,4))
plt.scatter(td.Age,td.Fare)
plt.xlim(0,80)
plt.ylim(0,150)
plt.plot(td.Age,linear(td.Age,slopes[chi_results.argmin()]))
plt.show()
Explanation: Visualize the linear model over the data.
End of explanation
<END_TASK> |
15,763 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
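# Hypothetical example (the name and email are placeholders, not real authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")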
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
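# Illustrative sketch only -- the overview text is a placeholder, not an
# official MESSy model description:
# DOC.set_value("EMAC couples the ECHAM general circulation model with "
#               "MESSy submodels for atmospheric chemistry.")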
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
<END_TASK> |
15,764 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Import relevant modules
Step1: Merge multiple network definitions that share the same data layers into a single definition to train within the same single process
Step2: After training, we look at the learning curves of the individual sub-networks | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
from cycler import cycler
from pylab import rcParams
rcParams['figure.figsize'] = 8, 6
rcParams.update({'font.size': 15})
# color and linestyle cycle
#colors = [x['color'] for x in list(rcParams['axes.prop_cycle'])]
colors_base = ['b', 'g', 'r', 'c', 'm', 'y', 'k', '0.3', '0.5', '0.75', 'chartreuse']
print 'colors_base', colors_base
colors = [item for sublist in [colors_base]*len(colors_base) for item in sublist] # replicate and flatten
print 'colors', colors, len(list(rcParams['axes.prop_cycle']))
lnstyl = [[l] * len(colors_base) for l in ['-', '--', ':', '.', '-.', '*', 'x']] # replicate per color
print 'lnstyl', lnstyl
lnstyl = [item for sublist in lnstyl for item in sublist] # flatten
n = min(len(colors), len(lnstyl))  # cycler addition requires equal-length cyclers
plt.rc('axes', prop_cycle=(cycler('color', colors[:n]) + cycler('linestyle', lnstyl[:n]))) # define cycler
from nideep.eval.learning_curve import LearningCurve
from nideep.eval.eval_utils import Phase
import nideep.eval.log_utils as lu
def moving_avg(x, window_size):
window = np.ones(int(window_size)) / float(window_size)
return np.convolve(x, window, 'valid')
classnames = ['alarm', 'baby', 'crash', 'dog', 'engine', 'femaleSpeech', 'fire', 'footsteps',\
'knock', 'phone', 'piano']
classnames_scalar = ['alarm', 'baby', 'crash', 'dog', 'engine', 'femaleSpeech', 'fire', 'footsteps', 'general',\
'knock', 'phone', 'piano']
print("Done importing")
Explanation: Import relevant modules
End of explanation
from nideep.proto.proto_utils import Parser
from nideep.nets.net_merge import merge_indep_net_spec
# select network definitions to merge into a single prototxt
# You can also just repeat the same network over and over if you want to train the same network with different random initializations
p0 = './train_val_00.prototxt'
p1 = './train_val_01.prototxt'
p2 = './train_val_02.prototxt'
# load each network definition from file
nets = [Parser().from_net_params_file(p) for p in [p0,p1,p2]]
# merge and save merged prototxt to file
p_dst = './train_val_00_01_02.prototxt'
with open(p_dst, 'w') as f:
f.write(merge_indep_net_spec(nets))
# use p_dst file in your solver and train this 'network ensemble' like you would any single network.
Explanation: Merge multiple network definitions that share the same data layers into a single definition to train within the same single process:
End of explanation
logs = [\
'./xD/caffe.eltanin.kashefy.log.INFO.20160818-105955.20804',
'./xE_03/caffe.eltanin.kashefy.log.INFO.20160818-145600.31621',
'./xE_04/caffe.eltanin.kashefy.log.INFO.20160818-150354.710',
]
print("Found %d logs" % (len(logs),))
for phase in [Phase.TRAIN, Phase.TEST]:
print phase
plt.figure()
for p in logs:
e = LearningCurve(p)
lc_keys = e.parse()[phase == Phase.TEST]
num_iter = e.list('NumIters', phase)
print('%s: %d %s iterations' % (os.path.basename(os.path.dirname(p)), num_iter.size, phase))
for lck_idx, lck in enumerate(lc_keys):
if 'nidx' in lck or ('NumIters' not in lck and 'rate' not in lck.lower() and 'seconds' not in lck.lower()):
try:
loss = e.list(lck, phase)
plt.plot(num_iter, loss, label='%s %s' % (os.path.basename(os.path.dirname(p)), lck))
except KeyError as kerr:
print("Inavlid values for %s %s" % (phase, lck))
ticks, _ = plt.xticks()
plt.xticks(ticks, ["%dK" % int(t/1000) for t in ticks])
plt.title(phase)
plt.xlabel('iterations')
plt.ylabel(' '.join([phase, 'cross entropy loss']))
#plt.xlim([0,20e3])
#plt.xlim([0,300e3])
plt.ylim([1,20])
plt.title('on %s set' % phase)
plt.legend(loc='upper right')
plt.grid()
Explanation: After training, we look at the learning curves of the individual sub-networks
End of explanation
<END_TASK> |
15,765 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Notebook
Step1: using a cache_dir
Caching can speed the development, and provides consistent results.
Step2: using NQQ25 dataset
Step3: Previous collection contains only a dataset named 'nama_gdp_c'
Step4: Get value for year 2012. | <ASSISTANT_TASK:>
Python Code:
# all import here
import os
import jsonstat
Explanation: Notebook: using jsonstat.py with cso.ie dataset
This Jupyter notebook shows the python library jsonstat.py in action.
It shows how to explore a dataset from the cso.ie data provider.
End of explanation
cache_dir = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.cso.ie"))
jsonstat.cache_dir(cache_dir)
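# downloaded responses are stored under cache_dir, so re-running the notebook
# gives consistent results and can work offline; quick sanity check:
print(os.path.isdir(cache_dir))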
Explanation: using a cache_dir
Caching can speed the development, and provides consistent results.
End of explanation
base_uri = 'http://www.cso.ie/StatbankServices/StatbankServices.svc/jsonservice/responseinstance/'
uri = base_uri + "NQQ25"
filename = "cso_ie-NQQ25.json"
collection_1 = jsonstat.from_url(uri, filename)
collection_1
Explanation: using NQQ25 dataset
End of explanation
dataset = collection_1.dataset(0)
dataset
dataset.dimension('Sector')
dataset.dimension('Quarter')
dataset.dimension('Statistic')
Explanation: Previous collection contains only a dataset named 'nama_gdp_c'
End of explanation
dataset.data(Sector='03', Quarter='1997Q4', Statistic='NQQ25S1')
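# the call above addresses a single cell of the cube by its category ids;
# assuming jsonstat.py's usual return type, the numeric payload is on .value
value = dataset.data(Sector='03', Quarter='1997Q4', Statistic='NQQ25S1')
print(value.value)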
Explanation: Get the value for quarter 1997Q4.
End of explanation
<END_TASK> |
15,766 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Slack Bot
This notebook implements the Slack outgoing webhook API to parse certain messages as code to eval on a kernel. This notebook also interfaces with the Slack incoming webhook API to send text and image output from a kernel back to Slack.
To use this example notebook, it must be running in an environment with one additional port open for the HTTP server defined herein to receive webhook calls from Slack. The default is to listen for HTTP requests on port 9001 for demo purposes.
Open and run all the cells in the discussion notebook.
Create an incoming webhook integration in your Slack project.
Select a channel to which the bot will post responses.
Note its webhook URL.
Create an outcoming webhook integration in your Slack project.
Select which channel will be monitored for select trigger words.
State the URL of this notebook server, port 9001.
Note the token.
Put the URL for the incoming webhook in the SLACK_URL global variable below. This notebook will forward select output from the kernel associated with the discussion notebook to that URL for posting in your configured channel.
Put the token for the outgoing webhook in the SLACK_TOKEN global variable below. This notebook will only accept messages bearing that token to avoid spoofing.
Run this notebook top to bottom.
If all is well, the bot should report itself available in the configured output channel.
Interact with the bot by typing your configured trigger followed by code to eval.
To Slack
This section defines the logic that takes output from a kernel and forwards it to Slack in an appropriate manner.
Step1: We'll set some hard coded values up front. These should really get read from the environment but I'm lazy at the moment.
Step2: Try connecting to an existing kernel connection file if one exists specifically for the defrag demo. If it's not there, Start kernel separate from this one. Keeps user code out of the namespace of this service. Avoids hard-to-reason-about async request handling all within a single kernel.
Step3: Slack doesn't accept base64 encoded blobs back as images. We can only send back a real URL. We'll make a local directory where we'll dump any plots generated by the kernel to disk. Later, we'll serve them up through the same web server that we'll use to handle the Slack outgoing webhook calls.
Step4: Connect to the iopub socket to receive kernel output.
Step5: Define functions to handle message types of interest. The generic on_reply below dispatches to these.
Step6: The doc says these two types of messages are equivalent. So just alias the function.
Step7: Build a HTTP client to use to push messages back to Slack.
Step8: Process all kernel replies as they come in. Use the kernel manager methods for converting ZeroMQ stream byte strings to nice Python dictionaries.
Step9: Hook the on_reply to the iopub stream.
Step10: Create a client that can be used to execute code on the kernel.
Step11: From Slack
This section defines the logic that takes messages from Slack and forwards them to a kernel in an appropriate manner.
Step12: Define a simple HTTP handler for Slack POSTs. Support a GET for liveliness checks too.
Step13: Map the handler and start listening.
Step14: Say hi. | <ASSISTANT_TASK:>
Python Code:
import json
import base64
import os
import hashlib
from pprint import pprint # for debug
from jupyter_client.ioloop import IOLoopKernelManager
from tornado.httpclient import AsyncHTTPClient, HTTPClient
Explanation: Slack Bot
This notebook implements the Slack outgoing webhook API to parse certain messages as code to eval on a kernel. This notebook also interfaces with the Slack incoming webhook API to send text and image output from a kernel back to Slack.
To use this example notebook, it must be running in an environment with one additional port open for the HTTP server defined herein to receive webhook calls from Slack. The default is to listen for HTTP requests on port 9001 for demo purposes.
Open and run all the cells in the discussion notebook.
Create an incoming webhook integration in your Slack project.
Select a channel to which the bot will post responses.
Note its webhook URL.
Create an outgoing webhook integration in your Slack project.
Select which channel will be monitored for select trigger words.
State the URL of this notebook server, port 9001.
Note the token.
Put the URL for the incoming webhook in the SLACK_URL global variable below. This notebook will forward select output from the kernel associated with the discussion notebook to that URL for posting in your configured channel.
Put the token for the outgoing webhook in the SLACK_TOKEN global variable below. This notebook will only accept messages bearing that token to avoid spoofing.
Run this notebook top to bottom.
If all is well, the bot should report itself available in the configured output channel.
Interact with the bot by typing your configured trigger followed by code to eval.
To Slack
This section defines the logic that takes output from a kernel and forwards it to Slack in an appropriate manner.
End of explanation
SLACK_URL = 'https://hooks.slack.com/services/XXXXXX/XXXXXX/XXXXXXX'
SLACK_TOKEN = 'XXXXXXXXX'
Explanation: We'll set some hard coded values up front. These should really get read from the environment but I'm lazy at the moment.
End of explanation
if os.path.exists('/tmp/defrag_demo'):
km = IOLoopKernelManager(connection_file='/tmp/defrag_demo')
km.load_connection_file()
print('connecting to existing kernel')
else:
km = IOLoopKernelManager()
km.start_kernel()
Explanation: Try connecting to an existing kernel connection file if one exists specifically for the defrag demo. If it's not there, Start kernel separate from this one. Keeps user code out of the namespace of this service. Avoids hard-to-reason-about async request handling all within a single kernel.
End of explanation
PLOT_DIR = '/home/jovyan/plots'
!mkdir -p $PLOT_DIR
def b64_to_file(b64_str, ext):
'''Dump a base64 encoded string to disk as a binary file with the given extension.'''
# decode base64 image and write to disk under a unique ID
img = base64.decodebytes(b64_str.encode('utf-8'))
# hash to filename
name = hashlib.sha1(img).hexdigest()
with open(os.path.join(PLOT_DIR, name+'.'+ext), 'wb') as f:
f.write(img)
return name
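# quick round-trip check with a tiny fake payload -- the helper only cares
# about bytes, so any base64 text will do (not a real image)
print(b64_to_file(base64.encodebytes(b'not-really-a-png').decode('utf-8'), 'png'))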
Explanation: Slack doesn't accept base64 encoded blobs back as images. We can only send back a real URL. We'll make a local directory where we'll dump any plots generated by the kernel to disk. Later, we'll serve them up through the same web server that we'll use to handle the Slack outgoing webhook calls.
End of explanation
if 'iopub' in locals():
iopub.close()
iopub = km.connect_iopub()
Explanation: Connect to the iopub socket to receive kernel output.
End of explanation
def on_stream(content):
'''Handles stdout, stderr.'''
return dict(text=content['text'])
def on_display_data(content):
'''Handles rich output.'''
data = content['data']
response = {}
# prefer images
attachments = []
for key in data.keys():
if key.startswith('image'):
_, ext = key.split('/')
name = b64_to_file(data[key], ext)
# point to plot on the web
# TODO: don't hard code the server URL
attachments.append({
"fallback": "Oh noes! The plot didn't render!",
'image_url': 'http://parente.cloudet.xyz:9001/static/{}.png'.format(name)
})
if len(attachments):
response['attachments'] = attachments
# fallback on text
if 'text/plain' in data:
response['text'] = data['text/plain']
return response if len(response) else None
Explanation: Define functions to handle message types of interest. The generic on_reply below dispatches to these.
End of explanation
on_execute_result = on_display_data
Explanation: The doc says these two types of messages are equivalent. So just alias the function.
End of explanation
http_client = AsyncHTTPClient()
Explanation: Build an HTTP client to use to push messages back to Slack.
End of explanation
def on_reply(stream, msg_list):
# process raw messages
idents, msg_list = km.session.feed_identities(msg_list)
msg = km.session.deserialize(msg_list)
# get delegate based on message type
func = globals().get('on_'+msg['msg_type'])
if func is not None:
# get an optional response
response = func(msg['content'])
if response:
# dump the response as JSON to Slack
http_client.fetch(SLACK_URL, method='POST',
body=json.dumps(response),
headers={'Content-Type' : 'application/json'})
Explanation: Process all kernel replies as they come in. Use the kernel manager methods for converting ZeroMQ stream byte strings to nice Python dictionaries.
End of explanation
iopub.on_recv_stream(on_reply)
Explanation: Hook the on_reply to the iopub stream.
End of explanation
kc = km.client()
Explanation: Create a client that can be used to execute code on the kernel.
End of explanation
import tornado.web
import tornado.httpserver
import json
Explanation: From Slack
This section defines the logic that takes messages from Slack and forwards them to a kernel in an appropriate manner.
End of explanation
class IncomingHandler(tornado.web.RequestHandler):
def get(self):
self.finish('{"status": "ok"}')
def post(self):
token = self.get_body_argument('token')
if token != SLACK_TOKEN:
return self.send_error(401)
# get code to run
code = self.get_body_argument('text')
# remove command prefix, up to first space
code = code[code.find(' ')+1:].strip()
# execute the code in the other kernel
kc.execute(code)
# return nothing for now
self.finish()
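# manual smoke test from a shell (hypothetical trigger word "bot"):
#   curl -d "token=XXXXXXXXX" -d "text=bot print('hi')" http://localhost:9001/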
Explanation: Define a simple HTTP handler for Slack POSTs. Support a GET for liveness checks too.
End of explanation
application = tornado.web.Application([
(r"/", IncomingHandler)
], static_path=PLOT_DIR)
if 'server' in locals():
server.stop()
server = tornado.httpserver.HTTPServer(application)
server.listen(9001, '0.0.0.0')
Explanation: Map the handler and start listening.
End of explanation
kc.execute('print("Hi, all")')
Explanation: Say hi.
End of explanation
<END_TASK> |
15,767 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Brand New Concordance
A quick and dirty way of building a concordance
Step4: Testing and learning stuff
Step5: A result in a resultset refers to a recordStore
(which in the case of the chapter-idx are chapters)
Step6: The occurences value is wrong, from what I can gather.
Step7: The values in proxInfo are not sorted
Step8: From the result one can get an actual result
which we can use to get_xml (as a string)
or get_dom (an XML tree).
Step9: Types of XPATH queries needed for the concordance
Step10: If one has the eid and the character ofset (which
is also part of the w elements!) one can the actual
word that was a hit, but one can also go a bit further
and get the sentence and paragraphs id's.
Step11: Three ways to get the book, chapter and sentence id
Step12: If one creates a smaller tree, the computations
are more efficient.
Step16: Searching for fog
Step21: Specs / Questions / Todos
what if the search terms are more complex? how do you then do their len()?
handle quotes etc.
doing pagination
adding query builder
highlighting in a form if a word is frequent and it will thus take some time
searches for more than a word (either a phrase or an or search)
a transformer somewhere?
In quotes | <ASSISTANT_TASK:>
Python Code:
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
def build_concordance(term, context):
    query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/proxinfo c3.chapter-idx all/proxinfo "{}")'''.format(term))
result_set = db.search(session, query)
concordance = []
for result in result_set:
record = result.fetch_record(session)
tree = record.get_dom(session)
text_nodes = tree.xpath('//txt/text()')
text_only = ' '.join(sentence for sentence in text_nodes)
for hit in result.proxInfo:
word_id = hit[0][1]
char_location = hit[0][2]
concordance_line = text_only[char_location - context : char_location + len(term) + context]
#NOTE in these cases record.process_xpath(session, xpath) is not faster
#TODO check there is only one result
#sentence = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::s/@id' % int(word_id + 1))
#paragraph = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::p/@id' % int(word_id + 1))
#concordance.append((concordance_line, sentence[0], paragraph[0]))
concordance.append((hit, concordance_line, text_only, tree))
return concordance
#%timeit -n6 concordance = build_concordance("fog", 25)
concordance = build_concordance("fog", 25)
print len(concordance)
concordance[55][:2]
text = concordance[55][2]
print text[15292:19500]
# is the fourth element the number of w nodes?
from lxml import etree
xmlstring = etree.tostring(concordance[55][3])
snippet = '<w o="94">home</w><n>.</n></toks></s><s sid="194" id="GE.c15.s194" eid="456"><txt>Beyond town, we found a heavy mist out, and it fell wet and thick.</txt><toks><w o="0">Beyond</w><n> </n><w o="7">town</w><n>,</n><n> </n><w o="13">we</w><n> </n><w o="16">found</w><n> </n><w o="22">a</w><n> </n><w o="24">heavy</w><n> </n><w o="30">mist</w><n> </n><w o="35">out</w><n>,</n><n> </n><w o="40">and</w><n> </n><w o="44">it</w><n> </n><w o="47">fell</w><n> </n><w o="52">wet</w><n> </n><w o="56">and</w><n> </n><w o="60">thick</w><n>.</n></toks></s><s sid="195" id="GE.c15.s195" eid="457"><txt>The turnpike lamp was a blur, quite out of the lamp\'s usual place apparently, and its rays looked solid substance on the fog.</txt><toks><w o="0">The</w><n> </n><w o="4">turnpike</w><n> </n><w o="13">lamp</w><n> </n><w o="18">was</w><n> </n><w o="22">a</w><n> </n><w o="24">blur</w><n>,</n><n> </n><w o="30">quite</w><n> </n><w o="36">out</w><n> </n><w o="40">of</w><n> </n><w o="43">the</w><n> </n><w o="47">lamp\'s</w><n> </n><w o="54">usual</w><n> </n><w o="60">place</w><n> </n><w o="66">apparently</w><n>,</n><n> </n><w o="78">and</w><n> </n><w o="82">its</w><n> </n><w o="86">rays</w><n> </n><w o="91">looked</w><n> </n><w o="98">solid</w><n> </n><w o="104">substance</w><n> </n><w o="114">on</w><n> </n><w o="117">the</w><n> </n><w o="121">fog</w><n>.</n></toks></s><s sid="196" id="GE.c15.s196" eid="458"><txt>We were noticing this, and saying how that the mist rose with a change of wind from a certain quarter of our marshes, when we came upon a man, slouching under the lee of the turnpike house.</txt><toks><w o="0">We</w><n> </n><w o="3">were</w><n> </n><w o="8">noticing</w><n> </n><w o="17">this</w><n>,</n><n> </n><w o="23">and</w><n> </n><w o="27">saying</w><n> </n><w o="34">how</w><n> </n><w o="38">that</w><n> </n><w o="43">the</w><n> </n><w o="47">mist</w><n> </n><w o="52">rose</w><n> </n><w o="57">with</w><n> </n><w o="62">a</w><n> </n><w o="64">change</w><n> </n><w o="71">of</w><n> </n><w o="74">wind</w><n> </n><w o="79">from</w><n> </n><w o="84">a</w><n> </n><w o="86">certain</w><n> </n><w o="94">quarter</w><n> </n><w o="102">of</w><n> </n><w o="105">our</w><n> </n><w o="109">marshes</w><n>,</n><n> </n><w o="118">when</w><n> </n><w o="123">we</w><n> </n><w o="126">c'  # truncated sample pasted from xmlstring
print snippet
# idea: use a tokenmerger?
xmlstring
# concordance = build_concordance("lamp", 25)
print len(concordance)
for line in concordance[55:56]:
print concordance.index(line), line[0], line[1], "########", line[2][:100]
for line in concordance[0:1000]:
print concordance.index(line)
def build_concordance_with_locations(term, context, max_hits):
    query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "{}")'''.format(term))
result_set = db.search(session, query)
concordance = []
count = 0
for result in result_set:
if count < max_hits:
record = result.fetch_record(session)
tree = record.get_dom(session)
text_nodes = tree.xpath('//txt/text()')
text_only = ' '.join(sentence for sentence in text_nodes)
for hit in result.proxInfo:
if count < max_hits:
count +=1
word_id = hit[0][1]
char_location = hit[0][2]
concordance_line = text_only[char_location - context : char_location + len(term) + context]
#NOTE in these cases record.process_xpath(session, xpath) is not faster
#TODO check there is only one result
sentence = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::s/@id' % int(word_id + 1))
paragraph = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::p/@id' % int(word_id + 1))
concordance.append((concordance_line, sentence[0], paragraph[0]))
return concordance
%timeit build_concordance_with_locations("fog", 25, 100)
%timeit build_concordance_with_locations("the", 25, 100)
%timeit build_concordance_with_locations("the", 25, 1000)
%prun build_concordance_with_locations("the", 25, 1000)  # max_hits argument was missing
%timeit build_concordance_with_locations("the", 25, 10000)
%timeit build_concordance_with_locations("the", 25, 100000)
%timeit -n1 concordance = build_concordance_with_locations("the", 25, 1000000)
concordance = build_concordance_with_locations("dense fog", 25, 1000)
len(concordance)
for line in concordance:
print line
Explanation: Brand New Concordance
A quick and dirty way of building a concordance
End of explanation
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
query = qf.get_query(session, '(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "fog")')
result_set = db.search(session, query)
Explanation: Testing and learning stuff
End of explanation
result = result_set[0]
print result
Explanation: A result in a resultset refers to a recordStore
(which in the case of the chapter-idx are chapters)
End of explanation
result.occurences
Explanation: The occurences value is wrong, from what I can gather.
End of explanation
result.proxInfo[:15]
Explanation: The values in proxInfo are not sorted
End of explanation
record = result.fetch_record(session)
record.byteCount
?record.fetch_proxVector
record.get_xml(session)
tree = record.get_dom(session)
Explanation: From the result one can fetch the actual record
which we can use to get_xml (as a string)
or get_dom (an XML tree).
End of explanation
%timeit tree.xpath("//txt/text()")
one = tree.xpath("//txt/text()")
len(one)
%timeit tree.xpath("/div/p/s/txt/text()")
two = tree.xpath("/div/p/s/txt/text()")
len(two)
Explanation: Types of XPATH queries needed for the concordance
End of explanation
tree.xpath("/div/p/s[@eid=183]/toks/w[@o=215]/text()")
tree.xpath("/div/p/s[@eid=183]/toks")
tree.xpath("/div/p/s[@eid=183]/toks/w/text()")
tree.xpath("/div/p/self::s[@eid=183]/toks/w[@o=215]")
tree.xpath("/div/p/s/toks/w[@o=215]")
Explanation: If one has the eid and the character offset (which
is also part of the w elements!) one can get the actual
word that was a hit, but one can also go a bit further
and get the sentence and paragraphs id's.
End of explanation
%timeit tree.xpath("//p/s[@eid=183]/@id")
tree.xpath("//p/s[@eid=183]/@id")
%timeit tree.xpath("//*[@eid=183]/@id")
# see how slow this is!
tree.xpath("//*[@eid=183]/@id")
%timeit tree.xpath("//s[@eid=183]/@id")
tree.xpath("//s[@eid=183]/@id")
Explanation: Three ways to get the book, chapter and sentence id:
End of explanation
s = tree.xpath("//p/s[@eid=%i]" % 183)
s
print s[0]
s[0].xpath("self::s/@id")
s[0].xpath("attribute::id")
%timeit tree.xpath("//s[@eid=183]/ancestor::p/@id")
tree.xpath("//s[@eid=183]/ancestor::p/@id")
%timeit s[0].xpath("ancestor::p/@id")
s[0].xpath("ancestor::p/@id")
sentence = tree.xpath("//p/s[@eid=%i]" % 1) # format: ['BH.c1.s93']
sentence
Explanation: If one creates a smaller tree, the computations
are more efficient.
End of explanation
query = qf.get_query(session, '(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "fog")')
result_set = db.search(session, query)
result = result_set[0]
proxinfo = result.proxInfo[:15]
from pprint import pprint # to "pretty print" a nested list nicely
pprint(proxinfo)
record.get_xml(session)[6409:8000]
tree = record.get_dom(session)
prox1 = proxinfo[0][0][0]
prox2 = proxinfo[0][0][1]
prox3 = proxinfo[0][0][2]
prox4 = proxinfo[0][0][3]
for prox in (prox1, prox2, prox3, prox4): print prox
# output is a boolean
tree.xpath('/div/p/s/toks/w/@o=%i' % prox1)
w = tree.xpath('/div/p/s/toks/w[@o=%i]/text()' % prox1)
print w
# o = 0 in this case; this is interesting to see
# quickly what words sentences start with.
w = tree.xpath('/div/descendant::w[%i]' % prox2)
print w
w = tree.xpath('/div/descendant::w[%i]/text()' % int(prox2 + 1))
print w
tree.xpath('/div/descendant::w[%i]/text()' % int(prox2 + 1))
tree.xpath('/div/descendant::w[%i]/ancestor-or-self::s/@id' % int(prox2 + 1))
tree.xpath('/div/descendant::w[%i]/ancestor-or-self::p/@id' % int(prox2 + 1))
#TODO time
record.process_xpath(session, '//*[@eid="%d"]/following::w[%d+1]/ancestor-or-self::s' % (prox1, prox2))
def build_concordance_with_location(term, context):
    query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "{}")'''.format(term))
result_set = db.search(session, query)
concordance = []
for result in result_set:
record = result.fetch_record(session)
tree = record.get_dom(session)
text_nodes = tree.xpath('//txt/text()')
text_only = ' '.join(sentence for sentence in text_nodes)
for hit in result.proxInfo:
word_id = hit[0][1]
char_location = hit[0][2]
concordance_line = text_only[char_location - context : char_location + len(term) + context]
#NOTE in these cases record.process_xpath(session, xpath) is not faster
#TODO check there is only one result
sentence = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::s/@id' % int(word_id + 1))
paragraph = tree.xpath('/div/descendant::w[%i]/ancestor-or-self::p/@id' % int(word_id + 1))
concordance.append((concordance_line, sentence[0], paragraph[0]))
return concordance
%timeit -n1 concordance = build_concordance_with_locations("the", 25, 1000000)
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
%timeit build_concordance_with_locations("the", 25, 100000)
term = 'fog'
'''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "{}")'''.format(term)
query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "{}")'''.format(term))
result_set = db.search(session, query)
len(result_set)
#build_concordance(result_set, 'we', 25)
# %timeit build_concordance(result_set, 'the', 25)
# option 1: get chapter as string
# option 2: get sentence as string with location info as well.
#TODO implement eid for quotes and suspensions
#TODO implement start and end result for pagination
the_location_dickens = []
def test():
for result in result_set:
proxinfo = result.proxInfo
record = result.fetch_record(session)
tree = record.get_dom(session)
for hit in proxinfo:
word_location = hit[0][1]
char_location = hit[0][2]
word_location = tree.xpath("/div/p/s/toks/w[@o=215]")
# word_location = tree.xpath('//w[%i]' % word_location)
the_location_dickens.append(word_location)
#location = tree.xpath('//w[word_location]')
#the_in_dickens.append([concordance_line, location])
#%timeit test()
test()
len(the_location_dickens)
# option 1: merge all txt together and then do xpath to get the location
# option 2: get txt with location info as well.
for result in result_set:
proxinfo = result.proxInfo
record = result.fetch_record(session)
tree = record.get_dom(session)
tree.xpath('//w[1]')
the_in_dickens = []
for result in result_set:
proxinfo = result.proxInfo
record = result.fetch_record(session)
tree = record.get_dom(session)
raw_tree = tree.xpath('//txt/text()')
raw_str = ' '.join(sentence for sentence in raw_tree)
for hit in proxinfo:
char_location = hit[0][2]
concordance_line = raw_str[char_location-20:char_location+len("the")+20]
the_in_dickens.append(concordance_line)
for result in result_set:
proxinfo = result.proxInfo
record = result.fetch_record(session)
tree = record.get_dom(session)
raw_tree = tree.xpath('//txt/text()')
raw_str = ' '.join(sentence for sentence in raw_tree)
for hit in proxinfo:
char_location = hit[0][2]
print raw_str[char_location-20:char_location+len("the")+20]
for hit in proxinfo:
char_location = hit[0][2]
print raw_str[char_location-20:char_location+len("the")+20]
Explanation: Searching for fog
End of explanation
def build_concordance_with_locations(term, idx, context, max_hits):
    query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.{} any/proxinfo "{}")'''.format(idx, term))
result_set = db.search(session, query)
concordance = []
count = 0
for result in result_set:
if count < max_hits:
record = result.fetch_record(session)
tree = record.get_dom(session)
text_nodes = tree.xpath('//txt/text()')
text_only = ' '.join(sentence for sentence in text_nodes)
for hit in result.proxInfo:
if count < max_hits:
count +=1
element_id = hit[0][0]
word_id = hit[0][1]
char_location = hit[0][2]
## the eid only differs from 0 for indexes other than the chapter index
if element_id:
el = tree.xpath('//*[@eid="{}"]'.format(element_id))[0]
el_char_offset = el.get('offset')
word_id = el.get('wordOffset')
char_location = int(el_char_offset) + char_location - 1
concordance_line = text_only[char_location - context : char_location + len(term) + context]
#NOTE in these cases record.process_xpath(session, xpath) is not faster
#TODO check there is only one result
sentence = tree.xpath('/div/descendant::w[{}]/ancestor-or-self::s/@id'.format(int(word_id) + 1))
paragraph = tree.xpath('/div/descendant::w[{}]/ancestor-or-self::p/@id'.format(int(word_id) + 1))
concordance.append((concordance_line, sentence[0], paragraph[0]))
return concordance
concordance = build_concordance_with_locations("fog", "quote-idx", 25, 100)
for line in concordance:
print line[0]
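# optional: bracket the matched term inside each line for readability
# (tiny hypothetical helper, not part of cheshire3)
def highlight(line, term):
    return line.replace(term, '**%s**' % term)
print highlight(concordance[0][0], "fog")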
build_concordance_with_locations("fog", "quote-idx", 25, 100)
build_concordance_with_locations("fog", "non-quote-idx", 25, 100)
query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.{} any "{}")'''.format("quote-idx", "fog"))
result_set = db.search(session, query)
for result in result_set:
print result.proxInfo
one_result = result_set[0]
one_fetched_result = one_result.fetch_record(session)
dom = one_fetched_result.get_dom(session)
qs = dom.xpath('//*[@eid="1"]')
qs[0].attrib
dom_text_nodes = dom.xpath('//txt/text()')
dom_text_only = ' '.join(sentence for sentence in dom_text_nodes)
dom_text_only[30639+2:30700]
second_result = result_set[1]
print second_result.proxInfo
second_fetched_result = second_result.fetch_record(session)
second_dom = second_fetched_result.get_dom(session)
second_qs = second_dom.xpath('//qs[@eid="94"]')[0]
second_qs.attrib
second_dom_text_nodes = second_dom.xpath('//txt/text()')
second_dom_text_only = ' '.join(sentence for sentence in second_dom_text_nodes)
second_dom_text_only[3259+4:3400]
query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.{} any "{}")'''.format("chapter-idx", "fog"))
result_set = db.search(session, query)
for result in result_set:
print result.proxInfo
query = qf.get_query(session, '''(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.{} = "{}")'''.format("chapter-idx", "substance on the fog"))
result_set = db.search(session, query)
print len(result_set)
for result in result_set:
print result.proxInfo
# record.get_xml(session)
result = result_set[0]
result = result.fetch_record(session)
record = result.get_xml(session)
record.find('fog')
record[143708:143800]
type(record)
dom = result.get_dom(session)
dom.xpath('//w[@o="19257"]')
def xml_to_clean_txt(xmltree):
    text_nodes = xmltree.xpath('//txt/text()')
    text_only = ' '.join(sentence for sentence in text_nodes)
    return text_only
xml_to_clean_txt(dom)[19257:19500]
xml_to_clean_txt(dom)[19000:19500]
Explanation: Specs / Questions / Todos
what if the search terms are more complex? how do you then do their len()?
handle quotes etc.
doing pagination
adding query builder
highlighting in a form if a word is frequent and it will thus take some time
searches for more than a word (either a phrase or an or search)
a transformer somewhere?
In quotes
End of explanation
<END_TASK> |
15,768 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer.
We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon.
Step1: Entity Resolution
Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables.
To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies.
Step2: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time.
Step3: Merging Data Tables
Now we want to merge these two tables based on their Person-ID values. Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa.
Step4: Coloring
We now assign a float value [0 --> 1] to each person. This isn't neccesary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time.
Step5: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale.
Step6: Animations
Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below.
The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes is not followed.
There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. While it may seem that they're people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, there's a general trend of everyone edging close to having a stable and relatively equal ratio of commits to emails. | <ASSISTANT_TASK:>
Python Code:
# Load the raw email and git data (imports below assume BigBang's module layout)
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation
import bigbang.entity_resolution
from bigbang.archive import Archive
from bigbang import repo_loader
url = "http://mail.python.org/pipermail/scipy-dev/"
arx = Archive(url,archive_dir="../archives")
mailInfo = arx.data
repo = repo_loader.get_repo("bigbang")
gitInfo = repo.commit_data;
Explanation: Introduction
In group efforts, there is sometimes the impression that there are those who work, and those who talk. A naive question to ask is whether or not the people that tend to talk a lot actually get any work done. This is an obviously and purposefully obtuse question with an interesting answer.
We can use BigBang's newest feature, git data collection, to compare all of the contributors to a project, in this case Scipy, based on their email and git commit activity. The hypothesis in this case was that people who commit a lot will also tend to email a lot, and vice versa, since their involvement in a project would usually require them to do both. This hypothesis was proven to be correct. However, the data reveals many more interesting phenomenon.
End of explanation
entityResolve = bigbang.entity_resolution.entityResolve
mailAct = mailInfo.apply(entityResolve, axis=1, args =("From",None))
gitAct = gitInfo.apply(entityResolve, axis=1, args =("Committer Email","Committer Name"))
Explanation: Entity Resolution
Git and Email data comes from two different datatables. To observe a single person's git and email data, we need a way to identify that person across the two different datatables.
To solve this problem, I wrote an entity resolution client that will parse a Pandas dataframe and add a new column to it called "Person-ID" which gives each row an ID that represents one unique contributor. A person may go by many names ("Robert Smith, Rob B. Smith, Bob S., etc.) and use many different emails. However, this client will read through these data tables in one pass and consolidate these identities based on a few strategies.
End of explanation
NUM_SLICES = 1500 # Number of animation frames. More means more loading time
mailAct.sort("Date")
gitAct.sort("Time")
def getSlices(df, numSlices):
sliceSize = len(df)/numSlices
slices = []
for i in range(1, numSlices + 1):
start = 0
        slice_end = i * sliceSize  # avoid shadowing the builtin next()
        slice_end = min(slice_end, len(df) - 1) # make sure we don't go out of bounds
        slice = df.iloc[start:slice_end]
slices.append(slice)
return slices
mailSlices = getSlices(mailAct, NUM_SLICES)
gitSlices = getSlices(gitAct, NUM_SLICES)
Explanation: After we've run entity resolution on our dataframes, we split the dataframe into slices based on time. So for the entire life-span of the project, we will have NUM_SLICES different segments to analyze. We will be able to look at the git and email data up until that certain date, which can let us analyze these changes over time.
End of explanation
def processSlices(slices) :
for i in range(len(slices)):
slice = slices[i]
slice = slice.groupby("Person-ID").size()
slice.sort()
slices[i] = slice
def concatSlices(slicesA, slicesB) :
# assumes they have the same number of slices
# First is emails, second is commits
ansSlices = []
for i in range(len(slicesA)):
sliceA = slicesA[i]
sliceB = slicesB[i]
ans = pd.concat({"Emails" : sliceA, "Commits": sliceB}, axis = 1)
ans = ans[pd.notnull(ans["Emails"])]
ans = ans[pd.notnull(ans["Commits"])]
ansSlices.append(ans);
return ansSlices
processSlices(mailSlices)
processSlices(gitSlices)
finalSlices = concatSlices(mailSlices, gitSlices)
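# peek at the merged table for the final slice: one row per contributor with
# that person's cumulative email and commit counts
print(finalSlices[-1].head())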
Explanation: Merging Data Tables
Now we want to merge these two tables based on their Person-ID values. Basically, we first count how many emails / commits a certain contributor had in a certain slice. We then join all the rows with the same Person-ID to each other, so that we have the number of emails and the number of commits of each person in one row per person in one consolidated dataframe. We then delete all the rows where both of these values aren't defined. These represent people for whom we have git data but not mail data, or vice versa.
End of explanation
def idToFloat(id):
return id*1.0/400.0;
for i in range(len(finalSlices)):
slice = finalSlices[i]
toSet = []
for i in slice.index.values:
i = idToFloat(i)
toSet.append(i)
slice["color"] = toSet
Explanation: Coloring
We now assign a float value [0 --> 1] to each person. This isn't neccesary, but can let us graph these changes in a scatter plot and give each contributor a unique color to differentiate them. This will help us track an individual as their dot travels over time.
End of explanation
data = finalSlices[len(finalSlices)-1] # Will break if there are 0 slices
fig = plt.figure(figsize=(8, 8))
d = data
x = d["Emails"]
y = d["Commits"]
c = d["color"]
ax = plt.axes(xscale='log', yscale = 'log')
plt.scatter(x, y, c=c, s=75)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
ax.set_xlabel("Emails")
ax.set_ylabel("Commits")
plt.plot([0, 1000],[0, 1000], linewidth=5)
plt.show()
Explanation: Here we graph our data. Each dot represents a unique contributor's number of emails and commits. As you'll notice, the graph is on a log-log scale.
End of explanation
from IPython.display import YouTubeVideo
display(YouTubeVideo('GCcYJBq1Bcc', width=500, height=500))
display(YouTubeVideo('uP-z4jJqxmI', width=500, height=500))
fig = plt.figure(figsize=(8, 8))
a = finalSlices[0]
print(type(plt))
ax = plt.axes(xscale='log', yscale = 'log')
graph, = ax.plot(x ,y, 'o', c='red', alpha=1, markeredgecolor='none')
ax.set_xlabel("Emails")
ax.set_ylabel("Commits")
plt.ylim(0, 10000)
plt.xlim(0, 10000)
def init():
graph.set_data([],[]);
return graph,
def animate(i):
a = finalSlices[i]
x = a["Emails"]
y = a["Commits"]
graph.set_data(x, y)
return graph,
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=NUM_SLICES, interval=1, blit=True)
anim.save('t1.mp4', fps=15)
def main():
data = finalSlices
first = finalSlices[0]
fig = plt.figure(figsize=(8, 8))
d = data
x = d[0]["Emails"]
y = d[0]["Commits"]
c = d[0]["color"]
ax = plt.axes(xscale='log', yscale='log')
scat = plt.scatter(x, y, c=c, s=100)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
plt.xscale('log')
plt.yscale('log')
ani = animation.FuncAnimation(fig, update_plot, frames=NUM_SLICES,
fargs=(data, scat), blit=True)
ani.save('test.mp4', fps=10)
#plt.show()
def update_plot(i, d, scat):
x = d[i]["Emails"]
y = d[i]["Commits"]
c = d[i]["color"]
plt.cla()
ax = plt.axes()
ax.set_xscale('log')
ax.set_yscale('log')
scat = plt.scatter(x, y, c=c, s=100)
plt.ylim(0, 10000)
plt.xlim(0, 10000)
plt.xlabel("Emails")
plt.ylabel("Commits")
return scat,
main()
Explanation: Animations
Below this point, you'll find the code for generating animations. This can take a long time (~30 mins) for a large number of slices. However, the pre-generated videos are below.
The first video just shows all the contributors over time without unique colors. The second video has a color for each contributor, but also contains a Matplotlib bug where the minimum x and y values for the axes is not followed.
There is a lot to observe. As to our hypothesis, it's clear that people who email more commit more. In our static graph, we could see many contributors on the x-axis -- people who only email -- but this dynamic graph allows us to see the truth. While it may seem that they're people who only email, the video shows that even these contributors eventually start committing. Most committers don't really get past 10 commits without starting to email the rest of the project, for pretty clear reasons. However, the emailers can "get away with" exclusively emailing for longer, but eventually they too start to commit. In general, not only is there a positive correlation, there's a general trend of everyone edging close to having a stable and relatively equal ratio of commits to emails.
End of explanation
<END_TASK> |
15,769 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Reducing the size of the data to test on a cpu
Step1: Splitting the data into validation and train + One Hot Encoding the labels + data augumentation
Step2: Recizing the images into appropriate dimensions that VGG16 expects (48x48 - 224x224)
Step3: Plotting sample resized image and it's augumented copies
Step4: Barebones Convolutional model
Step5: Using VGG16 Bottleneck Features | <ASSISTANT_TASK:>
Python Code:
def reducex(X, y, reduce_classes=None, reduce_percent=.2):
# import pdb; pdb.set_trace()
idxs = []
if reduce_classes:
for c in reduce_classes:
try:
idxs += list(np.where(y == c)[0])
except IndexError:
continue
np.random.seed(1000)
new_size = int(np.round(len(idxs) * reduce_percent))
np.random.shuffle(idxs)
    return (X[idxs[:new_size]], y[idxs[:new_size]] == 3)  # binary labels: True for class 3 (cat)
batch_size = 16
Explanation: Reducing the size of the data to test on a cpu
End of explanation
(Xtrain, ytrain), (Xtest, ytest) = cifar10.load_data()
train_datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
Xtrain, ytrain = reducex(Xtrain, ytrain, reduce_classes=[3,5], reduce_percent=0.3)
Xtest, ytest = reducex(Xtest, ytest, reduce_classes=[3,5], reduce_percent=0.3)
yetrain = keras.utils.to_categorical(ytrain)
yetest = keras.utils.to_categorical(ytest)
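# sanity check on the reduced, binarized split (sizes depend on the 30%
# subsample of classes 3 and 5)
print(Xtrain.shape, yetrain.shape, Xtest.shape, yetest.shape)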
Explanation: Splitting the data into validation and train + One Hot Encoding the labels + data augmentation
End of explanation
######## Resize cifar10 images to 3x48x48 #############
rsize = 48
temp = np.zeros((Xtrain.shape[0], 3, rsize, rsize))
for i, row in enumerate(Xtrain):
temp[i] = imresize(row, size=(rsize,rsize)).transpose(2,1,0)
temp2 = np.zeros((Xtest.shape[0], 3, rsize, rsize))
for i, row in enumerate(Xtest):
    temp2[i] = imresize(row, size=(rsize,rsize)).transpose(2,1,0)
Xtrain = temp
Xtest = temp2
Explanation: Resizing the images into appropriate dimensions that VGG16 expects (48x48 - 224x224)
End of explanation
from keras.applications import VGG16
vmodel = VGG16(include_top=True, weights='imagenet')#, input_shape=(3,48,48))
vmodel.layers.pop()
for layr in vmodel.layers:
layr.trainable = False
last = vmodel.output
# x = Flatten(last)
x = Dense(2, activation='sigmoid')(last)
vvmodel = keras.models.Model(vmodel.input, x)
# vmodel.add(Dense(2))
# vmodel.add(Activation('sigmoid'))
vvmodel.summary()
opt = RMSprop(lr=0.01)
vvmodel.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
def fit_model(model, batches, val_batches, epochs=1):
model.fit_generator(batches, steps_per_epoch=1500 // batch_size, epochs=epochs,
validation_steps=700 // batch_size,
validation_data=val_batches)
vvmodel.fit(Xtrain[0:100], yetrain[0:100], verbose=True)
fit_model(vvmodel, train_generator, validation_generator, epochs=2)
# model.fit_generator(
# train_generator,
# steps_per_epoch=500 // batch_size,
# epochs=15,
# validation_data=validation_generator,
# validation_steps=300 // batch_size)
# img = load_img('data/train/cats/cat.0.jpg') # this is a PIL image
# x = img_to_array(img) # this is a Numpy array with shape (3, 150, 150)
x = Xtrain[8]
x = x.reshape((1,) + x.shape) # this is a Numpy array with shape (1, 3, 150, 150)
# the .flow() command below generates batches of randomly transformed images
# and saves the results to the `preview/` directory
i = 0
j = 0
xims = []
fig, axes = plt.subplots(2,2,figsize=(12,8))
for batch in train_datagen.flow(x, batch_size=1):  # datagen was never defined; use the train generator
axes[j][i].set_axis_off()
axes[j][i].imshow(batch.reshape(x.shape[1:]).transpose(2,1,0), interpolation='nearest')
if i >= 1:
j += 1
i = -1
if j == 2:
break
i +=1
    print(j, i)
# plt.imshow(Xtrain[0].reshape(32,32,3))
# fig, axes1 = plt.subplots(2,2,figsize=(12,8))
# for j in range(5):
# for k in range(5):
# i = np.random.choice(range(len(X)))
# axes1[j][k].set_axis_off()
# axes1[j][k].imshow(X[i:i+1][0], interpolation='nearest')
print(ytrain[8])
# train_generator = datagen.flow(Xtrain,yetrain,
# batch_size=batch_size) # since we use binary_crossentropy loss, we need binary labels
# # this is a similar generator, for validation data
# validation_generator = test_datagen.flow(Xtest, yetest,
# batch_size=batch_size)
Explanation: Plotting a sample resized image and its augmented copies
End of explanation
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('sigmoid'))
print(model.summary())
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
batch_size = 64
model.fit_generator(
train_generator,
steps_per_epoch=500 // batch_size,
epochs=15,
validation_data=validation_generator,
validation_steps=300 // batch_size)
import datetime
now = str(datetime.datetime.now())
# model.save_weights('first_try.h5') # always save your weights after training or during training
weights = model.weights[6:]
weights[0].eval().shape
Explanation: Barebones Convolutional model
End of explanation
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(Xtrain,
batch_size=batch_size, shuffle=True) # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = test_datagen.flow(Xtest,
batch_size=batch_size, shuffle=True)
# train_batches = train_generator.flow(Xtrain, yetrain, batch_size=batch_size, shuffle=True)
# val_batches = test_datagen.flow(Xtest, yetest, batch_size=batch_size, shuffle=False)
bottleneck_features_train = vmodel.predict_generator(
train_generator, 500)
np.save(open('bottleneck_features_train.npy', 'w'),
bottleneck_features_train)
top_model = Sequential()
top_model.add(Flatten(input_shape=(256,32)))  # Flatten takes no weights of its own
top_model.add(Dense(32))
top_model.add(Activation('relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(2))
top_model.add(Activation('sigmoid'))
# top_model.set_weights(weights)
# transplant the trained Dense kernel/bias from the small convnet (a sketch:
# layers[0] is the weightless Flatten, layers[1] is Dense(32), and Keras'
# set_weights expects the [kernel, bias] pair)
top_model.layers[1].set_weights([weights[0].eval(), weights[1].eval()])
top_model_weights_path = 'first_try.h5'
top_model.load_weights(top_model_weights_path)
# add the model on top of the convolutional base; VGG16 here is a functional
# Model (it has no .add()), so compose the two with the functional API instead
vmodel = keras.models.Model(vmodel.input, top_model(vmodel.output))
# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
layer.trainable = False
# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
vmodel.compile(loss='binary_crossentropy',
optimizer=SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# prepare data augmentation configuration
# train_datagen = ImageDataGenerator(
# rescale=1. / 255,
# shear_range=0.2,
# zoom_range=0.2,
# horizontal_flip=True)
# test_datagen = ImageDataGenerator(rescale=1. / 255)
# train_generator = train_datagen.flow_from_directory(
# train_data_dir,
# target_size=(img_height, img_width),
# batch_size=batch_size,
# class_mode='binary')
# validation_generator = test_datagen.flow_from_directory(
# validation_data_dir,
# target_size=(img_height, img_width),
# batch_size=batch_size,
# class_mode='binary')
# fine-tune the model
epochs = 25
vmodel.fit_generator(
        train_generator,
        steps_per_epoch=500 // batch_size,  # nb_train_samples was never defined; reuse the earlier setting
        epochs=epochs,
        validation_data=validation_generator,
        validation_steps=300 // batch_size)
Explanation: Using VGG16 Bottleneck Features
End of explanation
<END_TASK> |
15,770 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Advanced
Step1: Accessing/Changing Multiprocessing Settings
To check the number of processors that will be used whenever multiprocessing is invoked, call phoebe.multiprocessing_get_nprocs. By default, this will be the number of detected CPUs on the machine.
Step2: To disable multiprocessing, we can call phoebe.multiprocessing_off.
Step3: To re-enable multiprocessing with all available CPUs on the machine, we can call phoebe.multiprocessing_on.
Step4: Or to manually set the number of processors to use, we can call phoebe.multiprocessing_set_nprocs. | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
Explanation: Advanced: Running PHOEBE with Multiprocessing
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
print(phoebe.multiprocessing_get_nprocs())
Explanation: Accessing/Changing Multiprocessing Settings
To check the number of processors that will be used whenever multiprocessing is invoked, call phoebe.multiprocessing_get_nprocs. By default, this will be the number of detected CPUs on the machine.
End of explanation
phoebe.multiprocessing_off()
print(phoebe.multiprocessing_get_nprocs())
Explanation: To disable multiprocessing, we can call phoebe.multiprocessing_off.
End of explanation
phoebe.multiprocessing_on()
print(phoebe.multiprocessing_get_nprocs())
Explanation: To re-enable multiprocessing with all available CPUs on the machine, we can call phoebe.multiprocessing_on.
End of explanation
phoebe.multiprocessing_set_nprocs(2)
print(phoebe.multiprocessing_get_nprocs())
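# restore the default (all detected CPUs) when done
phoebe.multiprocessing_on()
print(phoebe.multiprocessing_get_nprocs())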
Explanation: Or to manually set the number of processors to use, we can call phoebe.multiprocessing_set_nprocs.
End of explanation
<END_TASK> |
15,771 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Thermo equation
Statement
$$
\frac{du}{dt} = \frac{d^2u}{dx^2} + f(x, t),\
0 < x < 1, 0 < t \leq T,\
u(x, 0) = u_0(x),\
u(0, t) = y_1(t),\
u(1, t) = y_2(t)
$$
We are going to solve this problem with next substitutions
$$
f(x, t) = \frac{1}{2} x^2 - t,\
u(x, 0) = x,\
u(0, t) = 0,\
u(1, t) = \frac{1}{2} t + 1,\
T = 5, N = 10, K = 8
$$
Step1: Explicit scheme
Step2: Implicit scheme
Step3: Stability of solutions
Let's see how error depends on K | <ASSISTANT_TASK:>
Python Code:
T, N, K = 5, 10, 8
x = np.linspace(0, 1, N)
t = np.linspace(0, T, K)
h, tau = 1/(N - 1), T/(K - 1)
u0 = 1*x
y1 = np.zeros_like(t)
y2 = 1/2*t + 1
xv, tv = np.meshgrid(x, t, sparse=True)
f = xv**2 / 2 - tv
u_ans = xv**2 / 2 * tv + xv
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_ans)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Solution')
plt.show()
Explanation: Thermo equation
Statement
$$
\frac{du}{dt} = \frac{d^2u}{dx^2} + f(x, t),\
0 < x < 1, 0 < t \leq T,\
u(x, 0) = u_0(x),\
u(0, t) = y_1(t),\
u(1, t) = y_2(t)
$$
We are going to solve this problem with the following substitutions
$$
f(x, t) = \frac{1}{2} x^2 - t,\
u(x, 0) = x,\
u(0, t) = 0,\
u(1, t) = \frac{1}{2} t + 1,\
T = 5, N = 10, K = 8
$$
End of explanation
def explicit(h, tau, u0, y1, y2, f):
K = y1.size
N = u0.size
y = np.empty((K, N))
y[0, :] = u0
y[:, 0] = y1
y[:, N - 1] = y2
for j in range(1, K):
for i in range(1, N - 1):
y[j, i] = y[j - 1, i] + tau*(y[j - 1, i + 1] - 2*y[j - 1, i] + y[j - 1, i - 1])/h**2 + tau*f[j - 1, i]
return y
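# note: the explicit scheme is only stable for tau <= h**2 / 2; here
# tau = 5/7 and h = 1/9, so the condition is strongly violated and the
# scheme is expected to blow up (see the stability section below)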
u_exp = explicit(h, tau, u0, y1, y2, f)
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_exp)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Explicit scheme')
plt.show()
np.linalg.norm(u_exp - u_ans)
Explanation: Explicit scheme
End of explanation
def thomas(a, b, c, d):
n = len(d)
A = np.empty_like(d)
B = np.empty_like(d)
A[0] = -c[0]/b[0]
B[0] = d[0]/b[0]
for i in range(1, n):
A[i] = -c[i] / (b[i] + a[i]*A[i - 1])
B[i] = (d[i] - a[i]*B[i - 1])/(b[i] + a[i]*A[i - 1])
y = np.empty_like(d)
y[n - 1] = B[n - 1]
for i in range(n - 2, -1, -1):
y[i] = A[i]*y[i + 1] + B[i]
return y
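# quick check of the tridiagonal solver against numpy on a small system
A_full = np.diag([2., 2., 2.]) + np.diag([1., 1.], -1) + np.diag([1., 1.], 1)
d_test = np.array([1., 2., 3.])
print(thomas(np.array([0., 1., 1.]), np.array([2., 2., 2.]), np.array([1., 1., 0.]), d_test))
print(np.linalg.solve(A_full, d_test))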
def implicit(h, tau, u0, y1, y2, f):
K = y1.size
N = u0.size
y = np.empty((K, N))
y[0, :] = u0
y[:, 0] = y1
y[:, N - 1] = y2
for j in range(1, K):
a = np.ones(N - 2) * 1/h**2
a[0] = 0
b = np.ones(N - 2) * -(1/tau + 2/h**2)
c = np.ones(N - 2) * 1/h**2
c[-1] = 0
d = -y[j - 1, 1:-1]/tau - f[j, 1:-1]
d[0] = d[0] - y1[j]/h**2
d[-1] = d[-1] - y2[j]/h**2
y[j, 1:-1] = thomas(a, b, c, d)
return y
u_imp = implicit(h, tau, u0, y1, y2, f)
fig = plt.figure(figsize=(15, 10))
ax = fig.gca(projection = '3d')
ax.plot_wireframe(xv, tv, u_imp)
ax.set_xlabel('position')
ax.set_ylabel('time')
ax.set_zlabel('temp')
ax.set_title('Implicit scheme')
plt.show()
np.linalg.norm(u_imp - u_ans)
Explanation: Implicit scheme
End of explanation
N = 100
Ks = range(2, 8)
err_exp = []
err_imp = []
for K in Ks:
x = np.linspace(0, 1, N)
t = np.linspace(0, T, K)
h, tau = 1/(N - 1), T/(K - 1)
u0 = 1*x
y1 = np.zeros_like(t)
y2 = 1/2*t + 1
xv, tv = np.meshgrid(x, t, sparse=True)
f = xv**2 / 2 - tv
u_ans = xv**2 / 2 * tv + xv
u_exp = explicit(h, tau, u0, y1, y2, f)
u_imp = implicit(h, tau, u0, y1, y2, f)
err_exp.append(np.linalg.norm(u_exp - u_ans))
err_imp.append(np.linalg.norm(u_imp - u_ans))
plt.figure(figsize=(15, 10))
plt.semilogy(Ks, err_exp, Ks, err_imp)
plt.legend(['explicit', 'implicit'], loc='best')
plt.xlabel('K')
plt.ylabel('norm')
plt.title('Error (Frobenius norm) of explicit vs implicit schemes')
plt.show()
Explanation: Stability of solutions
Let's see how error depends on K
End of explanation
<END_TASK> |
15,772 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using Python to solve Regexp CrossWord Puzzles
Have a look at the amazing https
Step1: The keys "right_lines" and "bottom_columns" can be empty, as for easier problems there are no constraints on the right and bottom.
Each line and column (but not each square) contains a regular expression, on a common alphabet of letters and symbols.
Let's write $\Sigma$ for this alphabet, which in the most general case is $\Sigma={$ A, B, ..., Z, 0, ..., 9, :, ?, ., $, -$}$.
Step2: Difficult problem of size $(5,5)$ with 20 constraints
Defining the second problem is just a question of more copy-pasting
Step3: And its alphabet
Step4: An intermediate problem of size $(3,3)$ with 12 constraints
Defining the third problem is just a question of more copy-pasting
Step5: And its alphabet
Step6: A few useful functions
Let's first extract the dimension of a problem
Step7: Now let's write a representation of a grid, a solution (or partial solution) of a problem
Step8: As well as a few complete grids which are NOT solutions
Step9: We also write these short functions to extract the $i$-th line or $j$-th column
Step10: A partial solution for the intermediate problem
Step11: And a partial solution for the harder problem
Step12: Let's extract the dimension of a grid, just to check it
Step13: Two more checks
We also have to check if a word is in an alphabet
Step14: And also check that a word matches a regexp
Step15: As the documentation explains it
Step16: Now we are ready to write our "smart" match function
Step17: Let's compare the time of the first match and next ones
Step18: Well, of course it's not different for a tiny test like this.
Step19: That should be enough to start the first "easy" task.
Step20: We can see that our "memoization trick" indeed helped to speed up the time required to check a regexp, by about a factor of 2, even for very small tests like this.
First easy task
Step21: Let's try it!
Step22: That was long, but it works fine!
Step23: For columns
We are ready to check the one or two constraints of a line.
The same function will be written for columns, just below.
Step24: Let's try it!
Step25: That was long, but it works fine!
Step26: Second easy task
Step27: Let's try it!
For the easy problem
For a partial grid, of course it's going to be invalid just because '_' is not in the alphabet $\Sigma$.
Step28: For a complete grid, let's check that our solution is valid
Step29: And let's also check that the few wrong solutions are indeed not valid
Step30: We can see that for each wrong grid, at least one of the constraints is violated!
That's pretty good!
For the intermediate problem
My solution for the intermediate problem problem3 is indeed valid
Step31: For the hard problem
Well I don't have a solution yet, so I cannot check it!
Third easy task
Step32: Just a quick check
Step33: The time and memory complexity of this function should be $\mathcal{O}(|\Sigma|^k)$ for words of size $k\in\mathbb{N}^*$.
Step34: We can quickly check that even for the larger alphabet of size ~40, it's quite quick for small words of length $\leq 5$
Step35: Whoa, it takes 12 seconds just to generate all the possible words for the largest problem (which is just of size $(5,5)$)...
I'm afraid that my naive approach to solve the puzzle will be VERY slow...
Fourth easy task
Step36: How long does it take and how many grids for the easy problem?
Step37: That's still pretty small and fast!
How long does it take and how many grids for the hard problem?
Step38: Just for $(n, m) = (2, 2)$ it takes about 7 seconds...
So to scale for $(n, m) = (5, 5)$ would just take... WAY TOO MUCH TIME!
Step39: For a grid of size $(5,5)$, the number of different possible grids is about $10^{40}$, that is CRAZY large, we have no hope of solving this problem with a brute force approach.
How much time would that require, just to generate the grids?
Step40: This rough estimate gives about $5 * 10^{22}$ seconds, about $10^{15}$ years, so about a million billion years!
Step41: First difficult task
Step42: Let's try it!
Solving the easy problem
Let's check that we can quickly find one solution
Step43: Then can we find more solutions?
Step44: No there is indeed a unique solution here for the first "easy" problem!
Solving the intermediate problem
Step45: That was so long...
I could try to use Pypy3 IPython kernel, to speed things up?
Yes it's possible to use a Pypy kernel from your regular Python notebook!
See https://stackoverflow.com/questions/33850577/is-it-possible-to-run-a-pypy-kernel-in-the-jupyter-notebook
Step46: My first idea was to try to tackle each constraint independently, and generate the set of words that satisfy this constraint. (by naively checking check(constraint, word) for each word in $\Sigma^n$ or $\Sigma^m$)
if there are two line constraints (left/right), get the intersection of the two sets of words;
then, for each line we have a set of possible words
Step47: So let's write this algorithm.
I'm using a tqdm.tqdm() wrapper on the for loops, to keep an eye on the progress.
Step48: And let's try it
Step49: So it worked!
🚀 It was also BLAZING fast compared to the naive approach
Step50: 🚀 It was also BLAZING fast compared to the naive approach | <ASSISTANT_TASK:>
Python Code:
problem1 = {
"left_lines": [
r"HE|LL|O+", # HE|LL|O+ line 1
r"[PLEASE]+", # [PLEASE]+ line 2
],
"right_lines": None,
"top_columns": [
r"[^SPEAK]+", # [^SPEAK]+ column 1
r"EP|IP|EF", # EP|IP|EF column 2
],
"bottom_columns": None,
}
Explanation: Using Python to solve Regexp CrossWord Puzzles
Have a look at the amazing https://regexcrossword.com/ website.
I played during about two hours, and could manually solve almost all problems, quite easily for most of them.
But then I got stucked on this one.
Soooooo. I want to use Python3 regular expressions and try to solve any such cross-word puzzles.
Warning: This notebook will not explain the concept and syntax of regular expressions, go read on about it on Wikipedia or in a good book. The Python documentation gives a nice introduction here.
Author: Lilian Besson (@Naereen) ;
License: MIT License ;
Date: 28-02-2021.
Representation of a problem
Here is a screenshot from the game webpage.
As you can see, an instance of this game is determined by its rectangular size, let's denote it $(m, n)$, so here there are $m=5$ lines and $n=5$ columns.
I'll also use this easy problem:
Let's define both, in a small dictionary containing two to four lists of regexps.
Easy problem of size $(2,2)$ with four constraints
End of explanation
alphabet1 = {
'H', 'E', 'L', 'O',
'P', 'L', 'E', 'A', 'S', 'E',
'S', 'P', 'E', 'A', 'K',
'E', 'P', 'I', 'P', 'I', 'F',
}
print(f"alphabet1 = \n{sorted(alphabet1)}")
Explanation: The keys "right_lines" and "bottom_columns" can be empty, as for easier problems there are no constraints on the right and bottom.
Each line and column (but not each square) contains a regular expression, on a common alphabet of letters and symbols.
Let's write $\Sigma$ for this alphabet, which in the most general case is $\Sigma={$ A, B, ..., Z, 0, ..., 9, :, ?, ., $, -$}$.
For the first beginner problem, the alphabet can be shortened:
End of explanation
problem2 = {
"left_lines": [
r"(N3|TRA|N7)+", # left line 1
r"[1LOVE2?4]+.", # left line 2
r"(A|D)M[5-8$L]+", # left line 3
r"[^\s0ILAD]+", # left line 4
r"[B-E]+(.)\1.", # left line 5
],
"right_lines": [
r"[^OLD\s]+", # right line 1
r"(\d+)[LA\s$?]+", # right line 2
r"(\-P|5\$|AM|Z|L)+", # right line 3
r"(\-D|\-WE)+[^L4-9N$?]+", # right line 4
r"[FED$?]+", # right line 5
],
"top_columns": [
r"[2TAIL\-D]+", # top column 1
r"(WE|R4|RY|M)+", # top column 2
r"[FEAL3-5S]+", # top column 3
r"[^FA\sT1-2]+F", # top column 4
r"[LO\s\?5-8]+", # top column 5
],
"bottom_columns": [
r"[^ILYO]+", # top column 1
r".+[MURDEW]+", # top column 2
r"[1ALF5$E\s]+", # top column 3
r"[\dFAN$?]+", # top column 4
r".+\s.+\?", # top column 5
],
}
Explanation: Difficult problem of size $(5,5)$ with 20 constraints
Defining the second problem is just a question of more copy-pasting:
End of explanation
import string
alphabet2 = set(string.digits) \
| set(string.ascii_uppercase) \
| { ':', '?', '.', '$', '-' }
print(f"alphabet2 = \n{sorted(alphabet2)}")
Explanation: And its alphabet:
End of explanation
problem3 = {
"left_lines": [
r"[ONE]*[SKA]", # left line 1
r".*(RE|ER)", # left line 2
r"A+[TUB]*", # left line 3
],
"right_lines": [
r".*(O|S)*", # right line 1
r"[^GOA]*", # right line 2
r"[STUPA]+", # right line 3
],
"top_columns": [
r".*[GAF]*", # top column 1
r"(P|ET|O|TEA)*", # top column 2
r"[RUSH]+", # top column 3
],
"bottom_columns": [
r"(NF|FA|A|FN)+", # top column 1
r".*(A|E|I).*", # top column 2
r"[SUPER]*", # top column 3
],
}
Explanation: An intermediate problem of size $(3,3)$ with 12 constraints
Defining the third problem is just a question of more copy-pasting:
End of explanation
alphabet3 = {
'O', 'N', 'E', 'S', 'K', 'A',
'R', 'E', 'E', 'R',
'A', 'T', 'U', 'B',
'O', 'S',
'G', 'O', 'A',
'S', 'T', 'U', 'P', 'A',
'G', 'A', 'F',
'P', 'E', 'T', 'O', 'T', 'E', 'A',
'R', 'U', 'S', 'H',
'N', 'F', 'F', 'A', 'A', 'F', 'N',
'A', 'E', 'I',
'S', 'U', 'P', 'E', 'R',
}
print(f"alphabet3 = \n{sorted(alphabet3)}")
Explanation: And its alphabet:
End of explanation
def dimension_problem(problem):
m = len(problem['left_lines'])
if problem['right_lines'] is not None:
assert m == len(problem['right_lines'])
n = len(problem['top_columns'])
if problem['bottom_columns'] is not None:
assert n == len(problem['bottom_columns'])
return (m, n)
problem1
dimension_problem(problem1)
Explanation: A few useful functions
Let's first extract the dimension of a problem:
End of explanation
___ = "_" # represents an empty answer, as _ is not in the alphabet
grid1_partial = [
[ 'H', ___ ],
[ ___, 'P' ],
]
grid1_solution = [
[ 'H', 'E' ],
[ 'L', 'P' ],
]
Explanation: Now let's write a representation of a grid, a solution (or partial solution) of a problem:
End of explanation
grid1_wrong1 = [
[ 'H', 'E' ],
[ 'L', 'F' ],
]
grid1_wrong2 = [
[ 'H', 'E' ],
[ 'E', 'P' ],
]
grid1_wrong3 = [
[ 'H', 'E' ],
[ 'O', 'F' ],
]
grid1_wrong4 = [
[ 'O', 'E' ],
[ 'O', 'F' ],
]
Explanation: As well as a few complete grids which are NOT solutions
End of explanation
def nth_line(grid, line):
return "".join(grid[line])
def nth_column(grid, column):
return "".join(grid[line][column] for line in range(len(grid)))
[ nth_line(grid1_solution, line) for line in range(len(grid1_solution)) ]
[ nth_column(grid1_solution, column) for column in range(len(grid1_solution[0])) ]
Explanation: We also write these short functions to extract the $i$-th line or $j$-th column:
End of explanation
___ = "_" # represents an empty answer, as _ is not in the alphabet
grid3_solution = [
[ 'N', 'O', 'S' ],
[ 'F', 'E', 'R' ],
[ 'A', 'T', 'U' ],
]
Explanation: A partial solution for the intermediate problem:
End of explanation
___ = "_" # represents an empty answer, as _ is not in the alphabet
grid2_partial = [
[ 'T', 'R', 'A', 'N', '7' ],
[ '2', '4', ___, ___, ' ' ],
[ 'A', ___, ___, ___, ___ ],
[ '-', ___, ___, ___, ___ ],
[ 'D', ___, ___, ___, '?' ],
]
Explanation: And a partial solution for the harder problem:
End of explanation
def dimension_grid(grid):
m = len(grid)
n = len(grid[0])
assert all(n == len(grid[i]) for i in range(1, m))
return (m, n)
print(f"Grid grid1_partial has dimension: {dimension_grid(grid1_partial)}")
print(f"Grid grid1_solution has dimension: {dimension_grid(grid1_solution)}")
print(f"Grid grid2_partial has dimension: {dimension_grid(grid2_partial)}")
def check_dimensions(problem, grid):
return dimension_problem(problem) == dimension_grid(grid)
assert check_dimensions(problem1, grid1_partial)
assert check_dimensions(problem1, grid1_solution)
assert not check_dimensions(problem2, grid1_partial)
assert check_dimensions(problem2, grid2_partial)
assert not check_dimensions(problem1, grid2_partial)
Explanation: Let's extract the dimension of a grid, just to check it:
End of explanation
def check_alphabet(alphabet, word, debug=True):
result = True
for i, letter in enumerate(word):
new_result = letter in alphabet
if debug and result and not new_result:
print(f"The word {repr(word)} is not in alphabet {repr(alphabet)}, as its #{i}th letter {letter} is not present.")
result = result and new_result
return result
assert check_alphabet(alphabet1, 'H' 'E') # concatenate the strings
assert check_alphabet(alphabet1, 'H' 'E')
assert check_alphabet(alphabet1, 'L' 'P')
assert check_alphabet(alphabet1, 'H' 'L')
assert check_alphabet(alphabet1, 'E' 'P')
assert check_alphabet(alphabet2, "TRAN7")
Explanation: Two more checks
We also have to check if a word is in an alphabet:
End of explanation
import re
Explanation: And also check that a word matches a regexp:
End of explanation
memory_of_compiled_regexps = dict()
Explanation: As the documentation explains it:
but using prog = re.compile(regexp) and saving the resulting regular expression object prog for reuse is more efficient when the expression will be used several times in a single program.
I don't want to have to think about compiling a regexp before using it, so... I'm gonna memoize them!
End of explanation
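Aside (a sketch, not used later in this notebook): the standard re module already keeps a small internal cache of compiled patterns, and functools.lru_cache would give the same memoization with less bookkeeping; compile_cached below is a hypothetical helper name, not something the notebook defines:
from functools import lru_cache
import re
@lru_cache(maxsize=None)
def compile_cached(regexp):
    return re.compile(regexp)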
def match(regexp, word, debug=True):
global memory_of_compiled_regexps
if regexp not in memory_of_compiled_regexps:
prog = re.compile(regexp)
memory_of_compiled_regexps[regexp] = prog
print(f"For the first time seeing this regexp {repr(regexp)}, compiling it and storing in memory_of_compiled_regexps, now of size {len(memory_of_compiled_regexps)}.")
else:
prog = memory_of_compiled_regexps[regexp]
# use the cached compiled pattern; the extra re.fullmatch(regexp, word) call that was here was redundant
result = prog.fullmatch(word)
entire_match = result is not None
# entire_match = result.group(0) == word
if debug:
if entire_match:
print(f"The word {repr(word)} is matched by {repr(regexp)}")
else:
print(f"The word {repr(word)} is NOT matched by {repr(regexp)}")
return entire_match
Explanation: Now we are ready to write our "smart" match function:
End of explanation
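One detail worth stressing (an illustrative aside): fullmatch is essential here, because match would accept mere prefixes, which is not the crossword semantics:
import re
assert re.match(r"HE|LL|O+", "HELP") is not None      # the prefix "HE" matches
assert re.fullmatch(r"HE|LL|O+", "HELP") is None      # but the whole word does not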
%%time
match(r"(N3|TRA|N7)+", "TRAN7")
%%time
match(r"(N3|TRA|N7)+", "TRAN8")
Explanation: Let's compare the time of the first match and next ones:
End of explanation
match(r"(N3|TRA|N7)+", "")
match(r"(N3|TRA|N7)+", "TRA")
Explanation: Well, of course it's not different for a tiny test like this.
End of explanation
%timeit match(r"(N3|TRA|N7)+", "TRA", debug=False)
%timeit re.fullmatch(r"(N3|TRA|N7)+", "TRA")
Explanation: That should be enough to start the first "easy" task.
End of explanation
def check_line(problem, alphabet, word, position, debug=True, early=False):
if not check_alphabet(alphabet, word, debug=debug):
return False
m, n = dimension_problem(problem)
if len(word) != n:
if debug:
print(f"Word {repr(word)} does not have correct size n = {n} for lines")
return False
assert 0 <= position < m
constraints = []
if "left_lines" in problem and problem["left_lines"] is not None:
constraints += [ problem["left_lines"][position] ]
if "right_lines" in problem and problem["right_lines"] is not None:
constraints += [ problem["right_lines"][position] ]
# okay we have one or two constraint for this line,
assert len(constraints) in {1, 2}
# let's check them!
result = True
for cnb, constraint in enumerate(constraints):
if debug:
print(f"For line constraint #{cnb} {repr(constraint)}:")
new_result = match(constraint, word, debug=debug)
if early and not new_result: return False
result = result and new_result
return result
Explanation: We can see that our "memoization trick" indeed helped to speed up the time required to check a regexp, by about a factor of 2, even for very small tests like this.
First easy task: check that a line/column word validates its constraints
Given a problem $P$ of dimension $(m, n)$, its alphabet $\Sigma$, a position $i \in [| 0, m-1 |]$ of a line or $j \in [| 0, n-1 |]$ of a column, and a word $w \in \Sigma^k$ (with $k=n$ for a line or $k=m$ for a column), I want to write a function that checks the validity of each (left/right) line, or (top/bottom) column constraints.
To ease debugging, and with the goal of using this Python program to improve my skills in solving such puzzles, I don't want this function to just reply True or False, but to also print for each constraint whether it is satisfied or not.
Bonus: for each regexp constraint, highlight the parts which corresponded to each letter of the word?
For lines
We are ready to check the one or two constraints of a line.
The same function will be written for columns, just below.
End of explanation
problem1, alphabet1, grid1_solution
n, m = dimension_problem(problem1)
for line in range(n):
word = nth_line(grid1_solution, line)
print(f"- For line number {line}, checking word {repr(word)}:")
result = check_line(problem1, alphabet1, word, line)
n, m = dimension_problem(problem1)
fake_words = ["OK", "HEY", "NOT", "HELL", "N", "", "HU", "OO", "EA"]
for word in fake_words:
print(f"# For word {repr(word)}:")
for line in range(n):
result = check_line(problem1, alphabet1, word, line)
print(f" => {result}")
Explanation: Let's try it!
End of explanation
n, m = dimension_problem(problem2)
for line in [0]:
word = nth_line(grid2_partial, line)
print(f"- For line number {line}, checking word {repr(word)}:")
result = check_line(problem2, alphabet2, word, line)
print(f" => {result}")
n, m = dimension_problem(problem2)
fake_words = [
"TRAN8", "N2TRA", # violate first constraint
"N3N3N7", "N3N3", "TRA9", # smaller or bigger dimension
"O L D", "TRA ", # violate second contraint
]
for word in fake_words:
for line in [0]:
print(f"- For line number {line}, checking word {repr(word)}:")
result = check_line(problem2, alphabet2, word, line)
print(f" => {result}")
Explanation: That was long, but it works fine!
End of explanation
def check_column(problem, alphabet, word, position, debug=True, early=False):
if not check_alphabet(alphabet, word, debug=debug):
return False
m, n = dimension_problem(problem)
if len(word) != m:
if debug:
print(f"Word {repr(word)} does not have correct size n = {n} for columns")
return False
assert 0 <= position < n
constraints = []
if "top_columns" in problem and problem["top_columns"] is not None:
constraints += [ problem["top_columns"][position] ]
if "bottom_columns" in problem and problem["bottom_columns"] is not None:
constraints += [ problem["bottom_columns"][position] ]
# okay we have one or two constraint for this column,
assert len(constraints) in {1, 2}
# let's check them!
result = True
for cnb, constraint in enumerate(constraints):
if debug:
print(f"For column constraint #{cnb} {repr(constraint)}:")
new_result = match(constraint, word, debug=debug)
if early and not new_result: return False
result = result and new_result
return result
Explanation: For columns
We are ready to check the one or two constraints of a line.
The same function will be written for columns, just below.
End of explanation
problem1, alphabet1, grid1_solution
n, m = dimension_problem(problem1)
for column in range(m):
word = nth_column(grid1_solution, column)
print(f"- For column number {column}, checking word {repr(word)}:")
result = check_column(problem1, alphabet1, word, column)
n, m = dimension_problem(problem1)
fake_words = ["OK", "HEY", "NOT", "HELL", "N", "", "HU", "OO", "EA"]
for word in fake_words:
print(f"# For word {repr(word)}:")
for column in range(m):
result = check_column(problem1, alphabet1, word, column)
print(f" => {result}")
Explanation: Let's try it!
End of explanation
n, m = dimension_problem(problem2)
for column in [0]:
word = nth_column(grid2_partial, column)
print(f"- For column number {column}, checking word {repr(word)}:")
result = check_column(problem2, alphabet2, word, column)
print(f" => {result}")
n, m = dimension_problem(problem2)
fake_words = [
"TRAN8", "N2TRA", # violate first constraint
"N3N3N7", "N3N3", "TRA9", # smaller or bigger dimension
"O L D", "TRA ", # violate second contraint
]
for word in fake_words:
for line in [0]:
print(f"- For line number {line}, checking word {repr(word)}:")
result = check_column(problem2, alphabet2, word, line)
print(f" => {result}")
Explanation: That was long, but it works fine!
End of explanation
def check_grid(problem, alphabet, grid, debug=True, early=False):
m, n = dimension_problem(problem)
ok_lines = [False] * m
for line in range(m):
word = nth_line(grid, line)
ok_lines[line] = check_line(problem, alphabet, word, line, debug=debug, early=early)
ok_columns = [False] * n
for column in range(n):
word = nth_column(grid, column)
ok_columns[column] = check_column(problem, alphabet, word, column, debug=debug, early=early)
return all(ok_lines) and all(ok_columns)
Explanation: Second easy task: check that a proposed grid is a valid solution
I think it's easy, as we just have to use $m$ times the check_line and $n$ times the check_column functions.
End of explanation
check_grid(problem1, alphabet1, grid1_partial)
Explanation: Let's try it!
For the easy problem
For a partial grid, of course it's going to be invalid just because '_' is not in the alphabet $\Sigma$.
End of explanation
check_grid(problem1, alphabet1, grid1_solution)
Explanation: For a complete grid, let's check that our solution is valid:
End of explanation
check_grid(problem1, alphabet1, grid1_wrong1)
check_grid(problem1, alphabet1, grid1_wrong2)
check_grid(problem1, alphabet1, grid1_wrong3)
check_grid(problem1, alphabet1, grid1_wrong4)
Explanation: And let's also check that the few wrong solutions are indeed not valid:
End of explanation
check_grid(problem3, alphabet3, grid3_solution)
Explanation: We can see that for each wrong grid, at least one of the constraints is violated!
That's pretty good!
For the intermediate problem
My solution for the intermediate problem problem3 is indeed valid:
End of explanation
import itertools
def all_words_of_alphabet(alphabet, size):
yield from itertools.product(alphabet, repeat=size)
Explanation: For the hard problem
Well I don't have a solution yet, so I cannot check it!
Third easy task: generate all words of a given size in the alphabet
Using itertools.product and the alphabet defined above, it's going to be easy.
Note that I'll first try with a smaller alphabet, to check the result (for problem 1).
End of explanation
list(all_words_of_alphabet(['0', '1'], 3))
Explanation: Just a quick check:
End of explanation
alphabet0 = ['0', '1']
len_alphabet = len(alphabet0)
for k in [2, 3, 4, 5]:
print(f"Generating {len_alphabet**k} words of size = {k} takes about")
%timeit list(all_words_of_alphabet(alphabet0, k))
%timeit list(all_words_of_alphabet(['0', '1', '2', '3'], 10))
Explanation: The time and memory complexity of this function should be $\mathcal{O}(|\Sigma|^k)$ for words of size $k\in\mathbb{N}^*$.
End of explanation
len_alphabet = len(alphabet1)
for k in [2, 3, 4, 5]:
print(f"Generating {len_alphabet**k} words of size = {k} takes about")
%timeit list(all_words_of_alphabet(alphabet1, k))
len_alphabet = len(alphabet2)
for k in [2, 3, 4, 5]:
print(f"Generating {len_alphabet**k} words of size = {k} takes about")
%timeit list(all_words_of_alphabet(alphabet2, k))
Explanation: We can quickly check that even for the larger alphabet of size ~40, it's quite quick for small words of length $\leq 5$:
End of explanation
def all_grids_of_alphabet(alphabet, lines, columns):
all_words = list(itertools.product(alphabet, repeat=columns))
all_words = [ "".join(words) for words in all_words ]
all_grids = itertools.product(all_words, repeat=lines)
for pre_tr_grid in all_grids:
tr_grid = [
[
pre_tr_grid[line][column]
for line in range(lines)
]
for column in range(columns)
]
yield tr_grid
for alphabet in ( ['0', '1'], ['T', 'A', 'C', 'G'] ):
for (n, m) in [ (1, 1), (2, 2), (1, 2), (2, 1), (3, 3), (3, 2), (2, 3) ]:
assert len(list(all_grids_of_alphabet(alphabet, n, m))) == len(alphabet)**(n*m)
print(list(all_grids_of_alphabet(alphabet0, n, m))[0])
print(list(all_grids_of_alphabet(alphabet0, n, m))[-1])
print(f"For the alphabet {alphabet0} of size = {len(alphabet0)} :")
for (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:
%time all_these_grids = list(all_grids_of_alphabet(alphabet0, n, m))
print(f"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}")
Explanation: Whoa, it takes 12 seconds just to generate all the possible words for the largest problem (which is just of size $(5,5)$)...
I'm afraid that my naive approach to solve the puzzle will be VERY slow...
Fourth easy task: generate all grids of a given size
End of explanation
print(f"For the alphabet {alphabet1} of size = {len(alphabet1)} :")
for (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:
%time all_these_grids = list(all_grids_of_alphabet(alphabet1, n, m))
print(f"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}")
Explanation: How long does it take and how many grids for the easy problem?
End of explanation
print(f"For the alphabet {alphabet2} of size = {len(alphabet2)} :")
for (n, m) in [ (1, 1), (2, 1), (1, 2), (2, 2) ]:
%time all_these_grids = list(all_grids_of_alphabet(alphabet2, n, m))
print(f"For (n, m) = {(n, m)} the number of grids is {len(all_these_grids)}")
41**(2*3)
Explanation: That's still pretty small and fast!
How long does it take and how many grids for the hard problem?
End of explanation
n, m = 5, 5
41**(5*5)
import math
math.log10(41**(5*5))
Explanation: Just for $(n, m) = (2, 2)$ it takes about 7 seconds...
So to scale for $(n, m) = (5, 5)$ would just take... WAY TOO MUCH TIME!
End of explanation
s = 7
estimate_of_running_time = 7*s * len(alphabet1)**(5*5) / len(alphabet1)**(2*2)
estimate_of_running_time # in seconds
Explanation: For a grid of size $(5,5)$, the number of different possible grids is about $10^{40}$, that is CRAZY large, we have no hope of solving this problem with a brute force approach.
How much time would that require, just to generate the grids?
End of explanation
math.log10( estimate_of_running_time / (60*60*24*365) )
Explanation: This rough estimate gives about $5 * 10^{22}$ seconds, about $10^{15}$ years, so about a million billion years!
End of explanation
def naive_solve(problem, alphabet, debug=False, early=True):
n, m = dimension_problem(problem)
good_grids = []
for possible_grid in all_grids_of_alphabet(alphabet, n, m):
is_good_grid = check_grid(problem, alphabet, possible_grid, debug=debug, early=early)
if is_good_grid:
if early:
return [ possible_grid ]
good_grids.append(possible_grid)
return good_grids
Explanation: First difficult task: for each possible grid, check if it's valid
End of explanation
%%time
good_grids1 = naive_solve(problem1, alphabet1, debug=False, early=True)
print(f"For problem 1\n{problem1}\nOn alphabet\n{alphabet1}\n==> We found one solution:\n{good_grids1}")
Explanation: Let's try it!
Solving the easy problem
Let's check that we can quickly find one solution:
End of explanation
%%time
good_grids1 = naive_solve(problem1, alphabet1, debug=False, early=False)
print(f"For problem 1\n{problem1}\nOn alphabet\n{alphabet1}\n==> We found these solutions:\n{good_grids1}")
Explanation: Then can we find more solutions?
End of explanation
%%time
good_grids3 = naive_solve(problem3, alphabet3, debug=False, early=True)
print(f"For problem 3\n{problem3}\nOn alphabet\n{alphabet3}\n==> We found one solution:\n{good_grids3}")
Explanation: No there is indeed a unique solution here for the first "easy" problem!
Solving the intermediate problem
End of explanation
%%time
good_grids2 = naive_solve(problem2, alphabet2, debug=False, early=True)
print(f"For problem 2\n{problem2}\nOn alphabet\n{alphabet2}\n==> We found one solution:\n{good_grids2}")
Explanation: That was so long...
I could try to use Pypy3 IPython kernel, to speed things up?
Yes it's possible to use a Pypy kernel from your regular Python notebook!
See https://stackoverflow.com/questions/33850577/is-it-possible-to-run-a-pypy-kernel-in-the-jupyter-notebook
Solving the hard problem
Most probably, it will run forever if I use the naive approach of:
generate all grids of $m$ words of size $n$ in given alphabet $\Sigma$ ;
for each grid:
test it using the naive algorithm
if it's valid: add it to the list of good grids
There are $|\Sigma|^{n \times m}$ possible grids, so this approach scales as $|\Sigma|^{n^2}$ for square grids, i.e. exponentially in the grid area.
I must think of a better approach...
Being just exponential in $\max(m, n)$ would imply that it's practical for the harder problem of size $(5,5)$.
End of explanation
n, m = dimension_problem(problem1)
problem1
alphabet1
len(list(all_words_of_alphabet(alphabet1, n)))
["".join(word) for word in list(all_words_of_alphabet(alphabet1, n))][:10]
[
[ "".join(word)
for word in all_words_of_alphabet(alphabet1, n)
if check_line(problem1, alphabet1, "".join(word), line, debug=False, early=True)
]
for line in range(m)
]
[
[ "".join(word)
for word in all_words_of_alphabet(alphabet1, m)
if check_column(problem1, alphabet1, "".join(word), column, debug=False, early=True)
]
for column in range(n)
]
Explanation: My first idea was to try to tackle each constraint independently, and generate the set of words that satisfy this constraint. (by naively checking check(constraint, word) for each word in $\Sigma^n$ or $\Sigma^m$)
if there are two line constraints (left/right), get the intersection of the two sets of words;
then, for each line we have a set of possible words:
we can build each column, and then check that the top/bottom constraint is valid or not
if valid, continue to next column until the last
if all columns are valid, then these lines/columns form a possible grid!
(if we want only one solution, stop now, otherwise continue)
Second difficult task: a more efficient approach to solve any problem
End of explanation
from tqdm.notebook import trange, tqdm
def smart_solve(problem, alphabet, debug=False, early=True):
m, n = dimension_problem(problem)  # (lines, columns), matching the order dimension_problem returns
good_grids = []
possible_words_for_lines = [
[ word
for word in ("".join(letters) for letters in all_words_of_alphabet(alphabet, n))
if check_line(problem, alphabet, word, line, debug=False, early=True)
]
for line in range(m)
]
number_of_combinations = 1
for line in range(m):
number_of_combinations *= len(possible_words_for_lines[line])
print(f"- There are {len(possible_words_for_lines[line])} different words for line #{line}")
print(f"=> There are {number_of_combinations} combinations of words for lines #{0}..#{m-1}")
for possible_words in tqdm(
list(itertools.product(*possible_words_for_lines)),
desc="lines"
):
if debug: print(f" Trying possible_words from line constraints = {possible_words}")
column = 0
no_wrong_column = True
while no_wrong_column and column < n:
word_column = "".join(possible_words[line][column] for line in range(m))
if debug: print(f" For column #{column}, word = {word_column}, checking constraint...")
if not check_column(problem, alphabet, word_column, column, debug=False, early=True):
# this word is NOT valid for this column, so let's go to the next word
if debug: print(f" This word {word_column} is NOT valid for this column {column}, so let's go to the next word")
no_wrong_column = False
# break: this was failing... broke the outer for-loop and not the inner one
column += 1
if no_wrong_column:
print(f" These words seemed to satisfy the column constraints!\n{possible_words}")
# so all columns are valid! this choice of words is good!
possible_grid = [
list(word) for word in possible_words
]
print(f"Giving this grid:\n{possible_grid}")
# let's check it, just in case (this takes a short time, compared to the rest)
is_good_grid = check_grid(problem, alphabet, possible_grid, debug=debug, early=early)
if is_good_grid:
if early:
return [ possible_grid ]
good_grids.append(possible_grid)
# after the outer for loop on possible_words
return good_grids
Explanation: So let's write this algorithm.
I'm using a tqdm.tqdm() wrapper on the for loops, to keep an eye on the progress.
End of explanation
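Rough cost note (an aside): the filtering step performs about $m \cdot |\Sigma|^n$ regexp checks, and the main loop then only visits the product of the per-line candidate counts, which is typically minuscule compared to the $|\Sigma|^{n \cdot m}$ grids of the brute-force approach. For the hard problem:
print(f"upfront word checks: {5 * 41**5:,} vs {41**25:.3e} grids for brute force")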
grid1_solution
%%time
good_grids1 = smart_solve(problem1, alphabet1)
good_grids1
Explanation: And let's try it:
For the easy problem
End of explanation
%%time
#assert False # uncomment when ready
good_grids3 = smart_solve(problem3, alphabet3)
good_grids3
Explanation: So it worked!
🚀 It was also BLAZING fast compared to the naive approach: 160ms against about 900µs, almost a 160x speed-up factor!
🤔 I don't understand why it's slower now: I did get a time of 900 µs on the first try, but now it's about 90 ms... just a 2x speed-up factor.
Let's try for the harder problem!
For the intermediate problem
End of explanation
%%time
#assert False # uncomment when ready
good_grids2 = smart_solve(problem2, alphabet2)
good_grids2
Explanation: 🚀 It was also BLAZING fast compared to the naive approach: 90 ms, while the naive approach took so long that I killed it...
For the harder problem
End of explanation
<END_TASK> |
15,773 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: The plt interface is what we will use most often, as we shall see throughout this chapter.
Setting Styles
We will use the plt.style directive to choose appropriate aesthetic styles for our figures.
Here we will set the classic style, which ensures that the plots we create use the classic Matplotlib style
Step2: Throughout this section, we will adjust this style as needed.
Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.
For more information on stylesheets, see Customizing Matplotlib
Step3: After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic
Step4: Saving Figures to File
One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
Saving a figure can be done using the savefig() command.
For example, to save the previous figure as a PNG file, you can run this
Step5: We now have a file called my_figure.png in the current working directory
Step6: To confirm that it contains what we think it contains, let's use the IPython Image object to display the contents of this file
Step7: In savefig(), the file format is inferred from the extension of the given filename.
Depending on what backends you have installed, many different file formats are available.
The list of supported file types can be found for your system by using the following method of the figure canvas object
Step8: Note that when saving your figure, it's not necessary to use plt.show() or related commands discussed earlier.
Two Interfaces for the Price of One
A potentially confusing feature of Matplotlib is its dual interfaces
Step9: It is important to note that this interface is stateful | <ASSISTANT_TASK:>
Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Further Resources | Contents | Simple Line Plots >
Visualization with Matplotlib
We'll now take an in-depth look at the Matplotlib package for visualization in Python.
Matplotlib is a multi-platform data visualization library built on NumPy arrays, and designed to work with the broader SciPy stack.
It was conceived by John Hunter in 2002, originally as a patch to IPython for enabling interactive MATLAB-style plotting via gnuplot from the IPython command line.
IPython's creator, Fernando Perez, was at the time scrambling to finish his PhD, and let John know he wouldn’t have time to review the patch for several months.
John took this as a cue to set out on his own, and the Matplotlib package was born, with version 0.1 released in 2003.
It received an early boost when it was adopted as the plotting package of choice of the Space Telescope Science Institute (the folks behind the Hubble Telescope), which financially supported Matplotlib’s development and greatly expanded its capabilities.
One of Matplotlib’s most important features is its ability to play well with many operating systems and graphics backends.
Matplotlib supports dozens of backends and output types, which means you can count on it to work regardless of which operating system you are using or which output format you wish.
This cross-platform, everything-to-everyone approach has been one of the great strengths of Matplotlib.
It has led to a large user base, which in turn has led to an active developer base and Matplotlib’s powerful tools and ubiquity within the scientific Python world.
In recent years, however, the interface and style of Matplotlib have begun to show their age.
Newer tools like ggplot and ggvis in the R language, along with web visualization toolkits based on D3js and HTML5 canvas, often make Matplotlib feel clunky and old-fashioned.
Still, I'm of the opinion that we cannot ignore Matplotlib's strength as a well-tested, cross-platform graphics engine.
Recent Matplotlib versions make it relatively easy to set new global plotting styles (see Customizing Matplotlib: Configurations and Style Sheets), and people have been developing new packages that build on its powerful internals to drive Matplotlib via cleaner, more modern APIs—for example, Seaborn (discussed in Visualization With Seaborn), ggpy, HoloViews, Altair, and even Pandas itself can be used as wrappers around Matplotlib's API.
Even with wrappers like these, it is still often useful to dive into Matplotlib's syntax to adjust the final plot output.
For this reason, I believe that Matplotlib itself will remain a vital piece of the data visualization stack, even if new tools mean the community gradually moves away from using the Matplotlib API directly.
General Matplotlib Tips
Before we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package.
Importing Matplotlib
Just as we use the np shorthand for NumPy and the pd shorthand for Pandas, we will use some standard shorthands for Matplotlib imports:
End of explanation
plt.style.use('classic')
Explanation: The plt interface is what we will use most often, as we shall see throughout this chapter.
Setting Styles
We will use the plt.style directive to choose appropriate aesthetic styles for our figures.
Here we will set the classic style, which ensures that the plots we create use the classic Matplotlib style:
End of explanation
%matplotlib inline
Explanation: Throughout this section, we will adjust this style as needed.
Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.
For more information on stylesheets, see Customizing Matplotlib: Configurations and Style Sheets.
show() or No show()? How to Display Your Plots
A visualization you can't see won't be of much use, but just how you view your Matplotlib plots depends on the context.
The best use of Matplotlib differs depending on how you are using it; roughly, the three applicable contexts are using Matplotlib in a script, in an IPython terminal, or in an IPython notebook.
Plotting from a script
If you are using Matplotlib from within a script, the function plt.show() is your friend.
plt.show() starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.
So, for example, you may have a file called myplot.py containing the following:
```python
------- file: myplot.py ------
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x))
plt.show()
```
You can then run this script from the command-line prompt, which will result in a window opening with your figure displayed:
$ python myplot.py
The plt.show() command does a lot under the hood, as it must interact with your system's interactive graphical backend.
The details of this operation can vary greatly from system to system and even installation to installation, but matplotlib does its best to hide all these details from you.
One thing to be aware of: the plt.show() command should be used only once per Python session, and is most often seen at the very end of the script.
Multiple show() commands can lead to unpredictable backend-dependent behavior, and should mostly be avoided.
Plotting from an IPython shell
It can be very convenient to use Matplotlib interactively within an IPython shell (see IPython: Beyond Normal Python).
IPython is built to work well with Matplotlib if you specify Matplotlib mode.
To enable this mode, you can use the %matplotlib magic command after starting ipython:
```ipython
In [1]: %matplotlib
Using matplotlib backend: TkAgg
In [2]: import matplotlib.pyplot as plt
```
At this point, any plt plot command will cause a figure window to open, and further commands can be run to update the plot.
Some changes (such as modifying properties of lines that are already drawn) will not draw automatically: to force an update, use plt.draw().
Using plt.show() in Matplotlib mode is not required.
Plotting from an IPython notebook
The IPython notebook is a browser-based interactive data analysis tool that can combine narrative, code, graphics, HTML elements, and much more into a single executable document (see IPython: Beyond Normal Python).
Plotting interactively within an IPython notebook can be done with the %matplotlib command, and works in a similar way to the IPython shell.
In the IPython notebook, you also have the option of embedding graphics directly in the notebook, with two possible options:
%matplotlib notebook will lead to interactive plots embedded within the notebook
%matplotlib inline will lead to static images of your plot embedded in the notebook
For this book, we will generally opt for %matplotlib inline:
End of explanation
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
Explanation: After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic:
End of explanation
fig.savefig('my_figure.png')
Explanation: Saving Figures to File
One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
Saving a figure can be done using the savefig() command.
For example, to save the previous figure as a PNG file, you can run this:
End of explanation
!ls -lh my_figure.png
Explanation: We now have a file called my_figure.png in the current working directory:
End of explanation
from IPython.display import Image
Image('my_figure.png')
Explanation: To confirm that it contains what we think it contains, let's use the IPython Image object to display the contents of this file:
End of explanation
fig.canvas.get_supported_filetypes()
Explanation: In savefig(), the file format is inferred from the extension of the given filename.
Depending on what backends you have installed, many different file formats are available.
The list of supported file types can be found for your system by using the following method of the figure canvas object:
End of explanation
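For instance (an illustrative aside), the same figure can be written to a vector format just by changing the extension:
fig.savefig('my_figure.pdf')  # format inferred from the '.pdf' extension
fig.savefig('my_figure.svg')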
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
Explanation: Note that when saving your figure, it's not necessary to use plt.show() or related commands discussed earlier.
Two Interfaces for the Price of One
A potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here.
MATLAB-style Interface
Matplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.
The MATLAB-style tools are contained in the pyplot (plt) interface.
For example, the following code will probably look quite familiar to MATLAB users:
End of explanation
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
Explanation: It is important to note that this interface is stateful: it keeps track of the "current" figure and axes, which are where all plt commands are applied.
You can get a reference to these using the plt.gcf() (get current figure) and plt.gca() (get current axes) routines.
While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.
For example, once the second panel is created, how can we go back and add something to the first?
This is possible within the MATLAB-style interface, but a bit clunky.
Fortunately, there is a better way.
Object-oriented interface
The object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.
Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are methods of explicit Figure and Axes objects.
To re-create the previous plot using this style of plotting, you might do the following:
End of explanation
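As a small aside (a sketch, not from the book), here is the "clunky" stateful route back to the first panel that the text alludes to:
plt.sca(plt.gcf().axes[0])  # make panel 1 the "current" axes again
plt.plot(x, 0 * x, ':')     # subsequent plt commands now target panel 1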
<END_TASK> |
15,774 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MPI-M
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
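For example (placeholder values, to be replaced with the real author):
DOC.set_author("Jane Doe", "jane.doe@example.org")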
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
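For instance (an illustrative sketch; the values must come from the controlled vocabulary listed in the cell above, and repeated set_value calls are assumed to accumulate for cardinality 1.N):
DOC.set_value("Sea ice concentration")
DOC.set_value("Sea ice thickness")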
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
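For example (illustrative only; substitute your model's actual constant):
DOC.set_value(-1.8)  # deg C, a commonly used constant freezing point for seawater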
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
<END_TASK> |
15,775 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Widget List
This lecture will serve as a reference for widgets, providing a list of the GUI widgets available!
Complete list
For a complete list of the GUI widgets available to you, you can list the registered widget types. Widget and DOMWidget, not listed below, are base classes.
Step1: Numeric widgets
There are 8 widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
FloatSlider
Step2: Sliders can also be displayed vertically.
Step3: FloatProgress
Step4: BoundedFloatText
Step5: FloatText
Step6: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
Step7: Checkbox
Step8: Valid
The valid widget provides a read-only indicator.
Step9: Selection widgets
There are four widgets that can be used to display single selection lists, and one that can be used to display multiple selection lists. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list. You can also specify the enumeration as a dictionary, in which case the keys will be used as the item displayed in the list and the corresponding value will be returned when an item is selected.
Dropdown
Step10: The following is also valid
Step11: RadioButtons
Step12: Select
Step13: ToggleButtons
Step14: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
Step15: String widgets
There are 4 widgets that can be used to display a string value. Of those, the Text and Textarea widgets accept input. The Latex and HTML widgets display the string as either Latex or HTML respectively, but do not accept input.
Text
Step16: Textarea
Step17: Latex
Step18: HTML
Step19: Button | <ASSISTANT_TASK:>
Python Code:
import ipywidgets as widgets
# Show all available widgets!
widgets.Widget.widget_types.values()
Explanation: Widget List
This lecture will serve as a reference for widgets, providing a list of the GUI widgets available!
Complete list
For a complete list of the GUI widgets available to you, you can list the registered widget types. Widget and DOMWidget, not listed below, are base classes.
End of explanation
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
Explanation: Numeric widgets
There are 8 widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
FloatSlider
End of explanation
widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test',
orientation='vertical',
)
Explanation: Sliders can also be displayed vertically.
End of explanation
widgets.FloatProgress(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Loading:',
)
Explanation: FloatProgress
End of explanation
widgets.BoundedFloatText(
value=7.5,
min=5.0,
max=10.0,
description='Text:',
)
Explanation: BoundedFloatText
End of explanation
widgets.FloatText(
value=7.5,
description='Any:',
)
Explanation: FloatText
End of explanation
widgets.ToggleButton(
description='Click me',
value=False,
)
Explanation: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
End of explanation
widgets.Checkbox(
description='Check me',
value=True,
)
Explanation: Checkbox
End of explanation
widgets.Valid(
value=True,
)
Explanation: Valid
The valid widget provides a read-only indicator.
End of explanation
from IPython.display import display
w = widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
)
display(w)
# Show value
w.value
Explanation: Selection widgets
There are four widgets that can be used to display single selection lists, and one that can be used to display multiple selection lists. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list. You can also specify the enumeration as a dictionary, in which case the keys will be used as the item displayed in the list and the corresponding value will be returned when an item is selected.
Dropdown
End of explanation
w = widgets.Dropdown(
options={'One': 1, 'Two': 2, 'Three': 3},
value=2,
description='Number:')
display(w)
w.value
Explanation: The following is also valid:
End of explanation
widgets.RadioButtons(
description='Pizza topping:',
options=['pepperoni', 'pineapple', 'anchovies'],
)
Explanation: RadioButtons
End of explanation
widgets.Select(
description='OS:',
options=['Linux', 'Windows', 'OSX'],
)
Explanation: Select
End of explanation
widgets.ToggleButtons(
description='Speed:',
options=['Slow', 'Regular', 'Fast'],
)
Explanation: ToggleButtons
End of explanation
w = widgets.SelectMultiple(
description="Fruits",
options=['Apples', 'Oranges', 'Pears'])
display(w)
w.value
Explanation: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
End of explanation
widgets.Text(
description='String:',
value='Hello World',
)
Explanation: String widgets
There are 4 widgets that can be used to display a string value. Of those, the Text and Textarea widgets accept input. The Latex and HTML widgets display the string as either Latex or HTML respectively, but do not accept input.
Text
End of explanation
widgets.Textarea(
description='String:',
value='Hello World',
)
Explanation: Textarea
End of explanation
widgets.Latex(
value="$$\\frac{n!}{k!(n-k)!}$$",
)
Explanation: Latex
End of explanation
widgets.HTML(
value="Hello <b>World</b>"
)
Explanation: HTML
End of explanation
widgets.Button(description='Click me')
Explanation: Button
End of explanation
<END_TASK> |
15,776 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Playing around with spaCy
spaCy
Using the basic introduction to spaCy, then playting with it. Let's load spaCy's english dictionary.
Step1: spaCy tokenizes words, then treats each token as a Token object. Each token has an integer and string representation. Each token also has things like
Step2: Given a test sentence (in this case
Step3: Find similar words to 'quickly' via cosine similarity
Step4: We can focus on one meaning of quickly and find similar words if we average over related words
Step5: Let's look at other parts of speech from our original sentence
Step6: How closely does one test noun match each noun found in our sentence? That is, if we say, "barrier", is it closer to "wall," "Frank", or "robot"? How about "car" or "agent"? | <ASSISTANT_TASK:>
Python Code:
from __future__ import unicode_literals # If Python 2
import spacy.en
from spacy.tokens import Token
from spacy.parts_of_speech import ADV
nlp = spacy.en.English()
# Find log probability of Nth most frequent word
probs = [lex.prob for lex in nlp.vocab]
probs.sort()
words = [w for w in nlp.vocab if w.has_repvec]
Explanation: Playing around with spaCy
spaCy
Using the basic introduction to spaCy, then playting with it. Let's load spaCy's english dictionary.
End of explanation
tokens = nlp(u'"I ran to the wall quickly," Frank explained to the robot.')
ran = tokens[2]
quickly = tokens[6]
run = nlp(ran.lemma_)[0]  # the lemma of "ran" is "run"
# the integer and string representations of "moved" and its head
print (ran.orth, ran.orth_, ran.head.lemma, ran.head.lemma_)
print (quickly.orth, quickly.orth_, quickly.lemma, quickly.lemma_,)
print (quickly.head.orth_, quickly.head.lemma_)
print (ran.prob, run.prob, quickly.prob)
print (ran.cluster, run.cluster, quickly.cluster)
Explanation: spaCy tokenizes words, then treats each token as a Token object. Each token has an integer and string representation. Each token also has things like:
orth
The form of the word with no string normalization or processing, as it appears in the string, without trailing whitespace. i.e. " Frank " -> "frank"
head
The Token that is the immediate syntactic head of the word. If the word is the root of the dependency tree, the same word is returned.
lemma
The “base” of the word, with no inflectional suffixes, e.g. the lemma of “developing” is “develop”, the lemma of “geese” is “goose”, etc. Note that derivational suffixes are not stripped, e.g. the lemma of “instutitions” is “institution”, not “institute”. Lemmatization is performed using the WordNet data, but extended to also cover closed-class words such as pronouns. By default, the WN lemmatizer returns “hi” as the lemma of “his”. We assign pronouns the lemma -PRON-.
prob
The unigram log-probability of the word, estimated from counts from a large corpus, smoothed using Simple Good Turing estimation.
cluster
The Brown cluster ID of the word. These are often useful features for linear models. If you’re using a non-linear model, particularly a neural net or random forest, consider using the real-valued word representation vector, in Token.repvec, instead.
repvec
A “word embedding” representation: a dense real-valued vector that supports similarity queries between words. By default, spaCy currently loads vectors produced by the Levy and Goldberg (2014) dependency-based word2vec model.
End of explanation
is_adverb = lambda tok: tok.pos == ADV and tok.prob < probs[-1000]
str_ = u'"I ran to the wall quickly," Frank explained to the robot.'
tokens = nlp(str_)
print(u''.join(tok.string.upper() if is_adverb(tok) else tok.string for tok in tokens))
quickly = tokens[6]
Explanation: Given a test sentence (in this case: "I ran to the wall quickly," Frank explained to the robot.), we can highlight parts of speech (i.e. adverbs):
End of explanation
from numpy import dot
from numpy.linalg import norm
cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
words.sort(key=lambda w: cosine(w.repvec, quickly.repvec))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n100-110:')
print('\n'.join(w.orth_ for w in words[100:110]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
print('\n50000-50010:')
print('\n'.join(w.orth_ for w in words[50000:50010]))
Explanation: Find similar words to 'quickly' via cosine similarity:
End of explanation
say_adverbs = ['quickly', 'swiftly', 'speedily', 'rapidly']
say_vector = sum(nlp.vocab[adverb].repvec for adverb in say_adverbs) / len(say_adverbs)
words.sort(key=lambda w: cosine(w.repvec, say_vector))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
Explanation: We can focus on one meaning of quickly and find similar words if we average over related words:
End of explanation
from spacy.parts_of_speech import NOUN
is_noun = lambda tok: tok.pos == NOUN and tok.prob < probs[-1000]
print(u''.join(tok.string.upper() if is_noun(tok) else tok.string for tok in tokens))
nouns = [tok for tok in tokens if is_noun(tok)]
Explanation: Let's look at other parts of speech from our original sentence:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
barrier = nlp('barrier')[0]
car = nlp('car')[0]
agent = nlp('android')[0]
test_nouns = nouns + [barrier] + [car] + [agent]
n = len(test_nouns)
barrier_relations = np.zeros(n)
car_relations = np.zeros(n)
agent_relations = np.zeros(n)
for i, noun in enumerate(test_nouns):
barrier_relations[i] = cosine(barrier.repvec, noun.repvec)
car_relations[i] = cosine(car.repvec, noun.repvec)
agent_relations[i] = cosine(agent.repvec, noun.repvec)
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n)
bar_width = 0.2
opacity = 0.4
rects1 = plt.bar(index, barrier_relations, bar_width,
alpha=opacity,
color='b',
label=barrier.orth_)
rects2 = plt.bar(index + bar_width, car_relations, bar_width,
alpha=opacity,
color='r',
label=car.orth_)
rects3 = plt.bar(index + 2 * bar_width, agent_relations, bar_width,
alpha=opacity,
color='g',
label=agent.orth_)
labels = [tok.orth_ for tok in test_nouns]
plt.xlabel('Test Word')
plt.ylabel('Similarity')
plt.title('Similarity of words')
plt.xticks(index + bar_width, labels)
plt.legend()
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: How closely does one test noun match each noun found in our sentence? That is, if we say, "barrier", is it closer to "wall," "Frank", or "robot"? How about "car" or "agent"?
End of explanation
<END_TASK> |
15,777 |
<SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
making new class prediction for a classification problem
| <ASSISTANT_TASK:>
Python Code::
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler
from numpy import array
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
scalar = MinMaxScaler()
scalar.fit(X)
X = scalar.transform(X)
model = Sequential()
model.add(Dense(4, input_shape=(2,), activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=500, verbose=0)
Xnew = array([[0.89337759, 0.65864154]])
ynew = model.predict_classes(Xnew)
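# Optional follow-up (assuming the same Keras Sequential API): model.predict(Xnew)
# returns the sigmoid output, i.e. the estimated probability of class 1, rather
# than the hard class label produced by predict_classes above.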
<END_TASK>
|
15,778 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Activity - Making PoppyTorso dance
Part one
Step1: Next, you will create an object called poppy, which is a robot of type PoppyTorso. You can give your robot whatever name you like. You simply need to write
Step2: Like everything in the Python language, our robot poppy is an object that contains other objects, which are its motors.
That's it: if you manage to access Poppy's motors, you will be able to make it move...
You therefore need to access poppy's motors (named "motors"), which live inside Poppy; to do so, type
Step3: All movements are based on rotations of the motors located at the joints. You simply set the desired angle for a motor. To do this, we can use the method
Step4: Now choose a motor at random from the list of motors obtained previously and make it move in order to locate it on the robot.
You must fill in the following table with the names of the 13 motors
Step5: If your robot no longer responds and you do not understand why, the robot's control program or the Jupyter interface may be down; in that case you can reload the programs by choosing Kernel and then Restart in the Jupyter menu. You then have to start again from the beginning of this guide.
Now it is up to you to put your robot's arms in the horizontal position.
Step6: You have no doubt noticed that the movements of all the motors execute at the same time, simultaneously.
It can be useful to break the movements down. For example, to put the arms in the horizontal position
Step7: The arms are horizontal; put them back in their starting position, that is, with the motor angles at 0 degrees.
Step8: Now that you know how to make your robot move, be creative and invent a dance for it!
Step9: To end the simulation, you must stop the robot
Step10: Part two | <ASSISTANT_TASK:>
Python Code:
from poppy.creatures import PoppyTorso
Explanation: Activity - Making PoppyTorso dance
Part one: using the V-REP simulator:
Skills targeted by this activity:
Knowing how to use modules by retrieving classes from them. Instantiating an object from a class. Using a method and an attribute attached to an object.
Making the link between the rotation of the motors and the robot's position in space.
Showing creativity by developing a choreography.
Links with the school curricula, see:
For ICN in the seconde (10th grade) class: http://www.poppy-prof.fr/?page_id=4&id=67<br>
For mathematics in the seconde (10th grade) class: http://www.poppy-prof.fr/?page_id=4&id=37
To make our robot work, we need to use Python, but not only Python. We will also need what is called a library. The library that lets us use our robot is called Pypot, and it is written entirely in the Python language.
This library was built by very competent researchers, and we will simply learn how to use it.
The first thing to do is to fetch from the Pypot library the right "books", the ones we are going to need. These "books" are called modules in Python vocabulary.
All instructions will be passed to the robot through the interface on which you are currently reading these lines. This interface is called Jupyter or Notebook.
To execute the instructions written in a Jupyter cell, you must:<br>
_Select the cell by clicking on it.<br>
_Click the play button located in the menu bar: <img src="images/play.jpg" alt="play" /><br>
_Or press shift+enter simultaneously.<br>
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy = PoppyTorso(simulator='vrep')
Explanation: Next, you will create an object called poppy, which is a robot of type PoppyTorso. You can give your robot whatever name you like. You simply need to write:
<span style="color:green">nom_du_robot</span><span style="color:red"> = PoppyTorso(simulator='vrep')</span>
The syntax above gives an instruction that you must adapt to your liking. The text written in red cannot be modified; it is a Python language instruction that you cannot change.
On the other hand, the green text represents a variable name (here nom_du_robot, "robot name"), and you may use any name you like, as long as you do not
use special characters (' " / - ).
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy.motors
Explanation: Like everything in the Python language, our robot poppy is an object that contains other objects, which are its motors.
That's it: if you manage to access Poppy's motors, you will be able to make it move...
You therefore need to access poppy's motors (named "motors"), which live inside Poppy; to do so, type:
<span style="color:green">nom_du_robot</span><span style="color:red">.motors</span>
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy.head_z.goto_position(90,1)
Explanation: All movements are based on rotations of the motors located at the joints. You simply set the desired angle for a motor. To do this, we can use the method:
<span style="color:red">goto_position(</span><span style="color:green">angle_en_degrées</span><span style="color:red">,</span><span style="color:green">temps</span><span style="color:red">)</span>
In the syntax above, angle_en_degrées (angle in degrees) must be replaced by a value between 0 and 180. temps (time) must be replaced by the duration in seconds that you want to give to the movement. A long duration (5) produces a slow movement; a short duration (0.5) produces a fast movement.
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
# to reset the simulation:
poppy.reset_simulation()
Explanation: Now choose a motor at random from the list of motors obtained previously and make it move in order to locate it on the robot.
You must fill in the following table with the names of the 13 motors:
<img src="./images/moteur_torso2.jpg" alt="poppy-torso" style="height: 500px;"/>
The following table is to be filled in by the students.
The answers are given for reference:
Name of motor 1: ........... <br>
Name of motor 2: ........... <br>
Name of motor 3: ........... <br>
Name of motor 4: ........... <br>
Name of motor 5: ........... <br>
Name of motor 6: ........... <br>
Name of motor 7: ........... <br>
Name of motor 8: ........... <br>
Name of motor 9: ........... <br>
Name of motor 10: ........... <br>
Name of motor 11: ........... <br>
Name of motor 12: ........... <br>
Name of motor 13: ........... <br>
If during your experiments you make your robot fall over, it is important to know the instruction that resets the simulation:
<span style="color:green">nom_du_robot<span style="color:red">.reset_simulation()
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
# to put the arms in the horizontal position
poppy.r_shoulder_x.goto_position(-100,1)
poppy.l_shoulder_x.goto_position(100,1)
poppy.r_elbow_y.goto_position(100,1)
poppy.l_elbow_y.goto_position(100,1)
Explanation: If your robot no longer responds and you do not understand why, the robot's control program or the Jupyter interface may be down; in that case you can reload the programs by choosing Kernel and then Restart in the Jupyter menu. You then have to start again from the beginning of this guide.
Now it is up to you to put your robot's arms in the horizontal position.
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
# to put the arms in the horizontal position
poppy.r_shoulder_x.goto_position(-100,1)
poppy.l_shoulder_x.goto_position(100,1,wait=True)
poppy.r_elbow_y.goto_position(100,1)
poppy.l_elbow_y.goto_position(100,1)
Explanation: You have no doubt noticed that the movements of all the motors execute at the same time, simultaneously.
It can be useful to break the movements down. For example, to put the arms in the horizontal position: first move the shoulders, then the elbows. To do this, you must add a wait=True argument to the goto_position() method:
<span style="color:green">nom_du_robot.nom_du_moteur<span style="color:red">.goto_position(<span style="color:green">angle_en_degrées,temps<span style="color:red">,wait=True)
Now put the arms in the horizontal position by moving the shoulders first and then the elbows:
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
# to put the arms back in their starting position:
poppy.r_elbow_y.goto_position(0,1)
poppy.l_elbow_y.goto_position(0,1,wait=True)
poppy.r_shoulder_x.goto_position(0,1)
poppy.l_shoulder_x.goto_position(0,1,wait=True)
Explanation: The arms are horizontal; put them back in their starting position, that is, with the motor angles at 0 degrees.
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy.head_z.goto_position(40,1,wait=True)
poppy.head_z.goto_position(-40,1,wait=True)
poppy.head_z.goto_position(40,1,wait=True)
poppy.head_z.goto_position(-40,1,wait=True)
poppy.head_z.goto_position(0,1,wait=True)
poppy.r_shoulder_x.goto_position(-90,2)
poppy.l_shoulder_x.goto_position(90,2)
poppy.l_arm_z.goto_position(90,2)
poppy.r_arm_z.goto_position(50,2,wait=True)
poppy.r_shoulder_x.goto_position(0,2)
poppy.l_shoulder_x.goto_position(0,2)
poppy.l_arm_z.goto_position(0,2)
poppy.r_arm_z.goto_position(0,2,wait=True)
poppy.r_shoulder_x.goto_position(-90,2)
poppy.l_shoulder_x.goto_position(90,2)
poppy.l_arm_z.goto_position(-50,2)
poppy.r_arm_z.goto_position(-90,2,wait=True)
poppy.r_shoulder_x.goto_position(0,2)
poppy.l_shoulder_x.goto_position(0,2)
poppy.l_arm_z.goto_position(0,2)
poppy.r_arm_z.goto_position(0,2,wait=True)
poppy.l_arm_z.goto_position(90,3)
poppy.r_arm_z.goto_position(-90,3,wait=True)
poppy.r_arm_z.goto_position(0,3)
poppy.l_arm_z.goto_position(0,3,wait=True)
poppy.l_arm_z.goto_position(90,3)
poppy.r_arm_z.goto_position(-90,3,wait=True)
poppy.r_arm_z.goto_position(0,3)
poppy.l_arm_z.goto_position(0,3,wait=True)
poppy.r_shoulder_x.goto_position(-90,3)
poppy.l_shoulder_x.goto_position(90,3,wait=True)
poppy.r_shoulder_y.goto_position(30,3)
poppy.l_shoulder_y.goto_position(-30,3,wait=True)
poppy.r_shoulder_y.goto_position(-30,3)
poppy.l_shoulder_y.goto_position(30,3,wait=True)
for m in poppy.motors:
m.goto_position(0,1)
Explanation: Now that you know how to make your robot move, be creative and invent a dance for it!
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy.close()
Explanation: To end the simulation, you must stop the robot:
<span style="color:green">nom_du_robot<span style="color:red">.close()
End of explanation
# Write your code below and execute it.
# A sample solution is given for reference:
poppy = PoppyTorso()
Explanation: Part two: using a real robot:
All of the code developed with the simulator should normally work unchanged on a real robot.
You simply instantiate the robot class without the simulator argument:
<span style="color:green">robot_name<span style="color:red"> = PoppyTorso()
Note that when controlling a real PoppyTorso, the code must be executed in a Jupyter interface that points to the robot's network hostname, not to localhost as with the simulator.
End of explanation
<END_TASK> |
15,779 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje 2019/2020
http
Step1: Exercises
1. Simple regression
You are given the set of examples $\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (here $4\times 1$) and with a label vector $\textbf{y}$ of dimensions $N\times 1$ (here $4\times 1$), as follows
Step2: (a)
Study the PolynomialFeatures function from the sklearn library and use it to generate a design matrix $\mathbf{\Phi}$ that does not map into a higher-dimensional space (each example only gets a dummy one prepended; $m=n+1$).
Step3: (b)
Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then verify that the same result can be obtained by computing the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{\Phi}^+\mathbf{y}$, using the pinv function.
Step4: For clarity, in what follows the vector $\mathbf{x}$ with the dummy one $x_0=1$ prepended is denoted $\tilde{\mathbf{x}}$.
(c)
Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error according to $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N(\tilde{\mathbf{y}}^{(i)} - h(\tilde{\mathbf{x}}))^2$. You may use the mean squared error function mean_squared_error from the sklearn.metrics module.
Q
Step5: (d)
Verify that for the examples from $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that we really do need the pseudoinverse.
Q
Step6: (e)
Study the LinearRegression class from the sklearn.linear_model module. Verify that the weights computed by that class (available through the coef_ and intercept_ attributes) are equal to those you computed above. Compute the model's predictions (the predict method) and verify that the training error is identical to the one you computed earlier.
Step7: 2. Polynomial regression and the effect of noise
(a)
Let us now consider regression on a larger number of examples. Define a function make_labels(X, f, noise=0) that takes a matrix of unlabeled examples $\mathbf{X}{N\times n}$ and generates the vector of their labels $\mathbf{y}{N\times 1}$. The labels are generated as $y^{(i)} = f(x^{(i)})+\mathcal{N}(0,\sigma^2)$, where $f
Step8: Plot this dataset with the scatter function.
Step9: (b)
Train a polynomial regression model of degree $d=3$. On the same plot, show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.
Step10: 3. Model selection
(a)
On the dataset from exercise 2, train five linear regression models $\mathcal{H}_d$ of different complexity, where $d$ is the polynomial degree, $d\in{1,3,5,10,20}$. Show the training set and the functions $h_d(\mathbf{x})$ for all five models on the same plot (we recommend using plot inside a for loop). Compute the training error of each model.
Q
Step11: (b)
Split the examples from exercise 2 using the model_selection.train_test_split function into a training set and a test set in a ratio of 1
Step12: (c)
The accuracy of a model depends on (1) its complexity (polynomial degree $d$), (2) the number of examples $N$, and (3) the amount of noise. To analyze this, draw error plots as in 3b, but for all combinations of the number of examples $N\in{100,200,1000}$ and noise levels $\sigma\in{100,200,500}$ (9 plots in total). Use the subplots function to lay the plots out neatly in a $3\times 3$ grid. The data is generated in the same way as in exercise 2.
NB
Step13: Q
Step14: (b)
Study the Ridge class from the sklearn.linear_model module, which implements the L2-regularized regression model. The parameter $\alpha$ corresponds to the parameter $\lambda$. Apply the model to the same examples as in the previous exercise and print the weights $\mathbf{w}$ (the coef_ and intercept_ attributes).
Q
Step15: 5. Regularized polynomial regression
(a)
Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in{0,100}$ and $d\in{2,10}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on one plot; we recommend using plot inside a for loop).
Q
Step16: (b)
As in exercise 3b, split the examples into a training set and a test set in a ratio of 1
Step17: 6. L1 regularization and L2 regularization
The purpose of regularization is to push the model weights $\mathbf{w}$ towards zero, so that the model is as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically the L2 norm or the L1 norm. For a trained model we can also compute the number of non-zero features, i.e. the L0 norm, using the following function that takes the weight vector $\mathbf{w}$
Step18: (a)
For this exercise use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=10$, varying the hyperparameter $\lambda$ over the range ${1,2,\dots,100}$. For each trained model, compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and plot them as a function of $\lambda$. Pay attention to what exactly you pass into the norm-computing functions.
Q
Step19: (b)
The main advantage of L1-regularized regression (or LASSO regression) over L2-regularized regression is that L1-regularized regression results in sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the above experiment with L1-regularized regression, implemented in the Lasso class of the sklearn.linear_model module. Ignore the warnings.
Step20: 7. Features of different scales
In practice we often encounter data in which the features do not all have the same magnitude. One example of such a dataset is the regression dataset grades, in which a student's university grade point average (1--5) is predicted from two features
Step21: a) Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
Step22: b) Train an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y
Step23: Now repeat the above experiment, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For this purpose, use StandardScaler.
Step24: Q
Step25: Again, train an L2-regularized regression model ($\lambda = 0.01$) on this dataset.
Step26: Q
Step27: Q
Python Code:
# Load the basic libraries...
import numpy as np
import sklearn
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%pylab inline
Explanation: Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje (Machine Learning) 2019/2020
http://www.fer.unizg.hr/predmet/su
Laboratory exercise 1: Regression
Version: 1.2
Last updated: 27 September 2019.
(c) 2015-2019 Jan Šnajder, Domagoj Alagić
Published: 30 September 2019.
Submission deadline: 21 October 2019 at 07:00
Instructions
The first laboratory exercise consists of ten tasks. In what follows, follow the instructions given in the text cells. Completing the exercise comes down to filling in this notebook: inserting one or more cells below the text of a task, writing the appropriate code, and evaluating the cells.
Make sure you fully understand the code you have written. When submitting the exercise, you must be able, at the request of the assistant (or demonstrator), to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what we covered in the lectures. Below some tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions in the notebook). So do not limit yourself to merely solving the task; feel free to experiment. That is precisely the purpose of these exercises.
You must do the exercises on your own. You may consult others about the general approach to solving them, but in the end you must complete the exercise yourself. Otherwise the exercise is pointless.
End of explanation
X = np.array([[0],[1],[2],[4]])
y = np.array([4,1,2,5])
X1 = X
y1 = y
Explanation: Exercises
1. Simple regression
You are given the set of examples $\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (here $4\times 1$) and with a label vector $\textbf{y}$ of dimensions $N\times 1$ (here $4\times 1$), as follows:
End of explanation
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(1)
phi = poly.fit_transform(X)
print(phi)
# Your code here
Explanation: (a)
Study the PolynomialFeatures function from the sklearn library and use it to generate a design matrix $\mathbf{\Phi}$ that does not map into a higher-dimensional space (each example only gets a dummy one prepended; $m=n+1$).
End of explanation
from numpy import linalg
pinverse1 = pinv(phi)
pinverse2 = matmul(inv(matmul(transpose(phi), phi)), transpose(phi))
#print(pinverse1)
#print(pinverse2)
w = matmul(pinverse2, y)
print(w)
# Your code here
Explanation: (b)
Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then verify that the same result can be obtained by computing the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{\Phi}^+\mathbf{y}$, using the pinv function.
End of explanation
import sklearn.metrics as mt
wt = w #(np.array([w]))
print(wt)
print(phi)
hx = np.dot(phi, w)
E = mt.mean_squared_error(hx, y)  # note: the task's E(h|D) equals (N/2) * MSE
print(E)
# plot the examples and the fitted line, as the task asks
plt.scatter(X, y)
plt.plot(X, hx)
plt.show()
# Your code here
Explanation: For clarity, in what follows the vector $\mathbf{x}$ with the dummy one $x_0=1$ prepended is denoted $\tilde{\mathbf{x}}$.
(c)
Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error according to $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N(\tilde{\mathbf{y}}^{(i)} - h(\tilde{\mathbf{x}}))^2$. You may use the mean squared error function mean_squared_error from the sklearn.metrics module.
Q: The error function $E(h|\mathcal{D})$ defined above and the mean squared error function are not quite identical. What is the difference? Which one is more "realistic"?
End of explanation
# Your code here
try:
w = matmul(inv(phi), y)
print(w)
except LinAlgError as err:
print("Exception")
print(err)
Explanation: (d)
Verify that for the examples from $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that we really do need the pseudoinverse.
Q: Why is that the case? Could the problem be solved by mapping the examples into a higher dimension? If so, would that always work, regardless of the dataset $\mathcal{D}$? Show this on an example.
End of explanation
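# A minimal illustrative sketch (added; not part of the original solution):
# with a degree-3 polynomial mapping, the design matrix of this dataset becomes
# square (4 examples x 4 features), so the ordinary inverse happens to exist.
phi_sq = PolynomialFeatures(3).fit_transform(X)
w_sq = matmul(inv(phi_sq), y)
print(w_sq)
Explanation: The sketch above reuses X and y from exercise 1 and shows that a higher-dimensional mapping can make $\mathbf{\Phi}$ square and invertible for this particular dataset. Note that this is a coincidence of $N=m$, not a general fix: with a different number of examples the matrix would again be non-square.
End of explanation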
from sklearn.linear_model import LinearRegression
# Your code here
lr = LinearRegression().fit(X, y)
#print(lr.score(X, y))
#print(lr.coef_)
#print(lr.intercept_)
print([lr.intercept_, lr.coef_])
print(wt)
pr = lr.predict(X)
E = mt.mean_squared_error(pr, y)
print(E)
Explanation: (e)
Study the LinearRegression class from the sklearn.linear_model module. Verify that the weights computed by that class (available through the coef_ and intercept_ attributes) are equal to those you computed above. Compute the model's predictions (the predict method) and verify that the training error is identical to the one you computed earlier.
End of explanation
from numpy.random import normal
def make_labels(X, f, noise=0):
    # Your code here
    N = numpy.random.normal
y = [ f(x) + N(0, noise) for x in X ]
return y
def make_instances(x1, x2, N) :
return np.array([np.array([x]) for x in np.linspace(x1,x2,N)])
Explanation: 2. Polynomial regression and the effect of noise
(a)
Let us now consider regression on a larger number of examples. Define a function make_labels(X, f, noise=0) that takes a matrix of unlabeled examples $\mathbf{X}{N\times n}$ and generates the vector of their labels $\mathbf{y}{N\times 1}$. The labels are generated as $y^{(i)} = f(x^{(i)})+\mathcal{N}(0,\sigma^2)$, where $f:\mathbb{R}^n\to\mathbb{R}$ is the true function that generated the data (which in reality is unknown to us), and $\sigma$ is the standard deviation of the Gaussian noise, defined by the noise parameter. You can use the numpy.random.normal function to generate the noise.
Generate a training set of $N=50$ examples uniformly distributed on the interval $[-5,5]$ using the function $f(x) = 5 + x -2 x^2 -5 x^3$ with noise $\sigma=200$:
End of explanation
# Your code here
N = 50
def f(x):
return 5 + x - 2*x*x - 5*x*x*x
noise = 200
X2 = make_instances(-5, 5, N)
y2 = make_labels(X2, f, noise)
#print(X)
#print(y)
s = scatter(X2, y2)
Explanation: Plot this dataset with the scatter function.
End of explanation
# Your code here
import sklearn.linear_model as lm
def polyX(d):
p3 = PolynomialFeatures(d).fit_transform(X2)
l2 = LinearRegression().fit(p3, y2)
h2 = l2.predict(p3)
E = mt.mean_squared_error(h2, y2)
print('d: ' + str(d) + ' E: ' + str(E))
#print(p3)
plot(X2, h2, label = str(d))
scatter(X2, y2)
polyX(3)
Explanation: (b)
Train a polynomial regression model of degree $d=3$. On the same plot, show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.
End of explanation
# Your code here
figure(figsize=(15,10))
scatter(X2, y2)
polyX(1)
polyX(3)
polyX(5)
polyX(10)
polyX(20)
s = plt.legend(loc="center right")
Explanation: 3. Model selection
(a)
On the dataset from exercise 2, train five linear regression models $\mathcal{H}_d$ of different complexity, where $d$ is the polynomial degree, $d\in{1,3,5,10,20}$. Show the training set and the functions $h_d(\mathbf{x})$ for all five models on the same plot (we recommend using plot inside a for loop). Compute the training error of each model.
Q: Which model has the lowest training error, and why?
End of explanation
from sklearn.model_selection import train_test_split
# Your code here
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
testError = []
trainError = []
for d in range(1,33):
polyXTrain = PolynomialFeatures(d).fit_transform(xTr)
polyXTest = PolynomialFeatures(d).fit_transform(xTest)
l2 = LinearRegression().fit(polyXTrain, yTr)
h2 = l2.predict(polyXTest)
E = mt.mean_squared_error(h2, yTest)
#print('d: ' + str(d) + ' E: ' + str(E))
testError.append(E)
h2 = l2.predict(polyXTrain)
E = mt.mean_squared_error(h2, yTr)
#print('d: ' + str(d) + ' E: ' + str(E))
trainError.append(E)
#print(p3)
#plot(polyXTest, h2, label = str(d))
plot(numpy.log(numpy.array(testError)), label='test')
plot(numpy.log(numpy.array(trainError)), label='train')
legend()
Explanation: (b)
Split the examples from exercise 2 using the model_selection.train_test_split function into a training set and a test set in a 1:1 ratio. On one plot, show the training error and the test error for polynomial regression models $\mathcal{H}_d$ with polynomial degree $d$ in the range $d\in [1,2,\ldots,20]$. Since the squared error grows quickly for higher polynomial degrees, plot the logarithms of the errors instead of the error values themselves.
NB: The split into training and test sets must be identical for all five models.
Q: Is the result in line with your expectations? Which model would you choose, and why?
Q: Run the plotting several times. What is the problem? Would the problem be equally pronounced if we had more examples? Why?
End of explanation
# Your code here
figure(figsize=(15,15))
N = 1000
def f(x):
return 5 + x - 2*x*x - 5*x*x*x
X3 = make_instances(-5, 5, N)
xAllTrain, xAllTest = train_test_split(X3, test_size=0.5)
i = 0
j = 0
for N in [100, 200, 1000]:
for noise in [100, 200, 500]:
j += 1
xTrain = xAllTrain[:N]
xTest = xAllTest[:N]
yTrain = make_labels(xTrain, f, noise)
yTest = make_labels(xTest, f, noise)
trainError = []
testError = []
for d in range(1,21):
polyXTrain = PolynomialFeatures(d).fit_transform(xTrain)
polyXTest = PolynomialFeatures(d).fit_transform(xTest)
l2 = LinearRegression().fit(polyXTrain, yTrain)
h2 = l2.predict(polyXTest)
testE = mt.mean_squared_error(h2, yTest)
testError.append(testE)
h2 = l2.predict(polyXTrain)
trainE = mt.mean_squared_error(h2, yTrain)
trainError.append(trainE)
#print('d: ' + str(d) + ' E: ' + str(E))
#print(p3)
#plot(polyXTest, h2, label = str(d))
subplot(3,3,j, title = "N: " + str(N) + " noise: " + str(noise))
plot(numpy.log(numpy.array(trainError)), label = 'train')
plot(numpy.log(numpy.array(testError)), label = 'test')
plt.legend(loc="center right")
#print(X)
#print(y)
#s = scatter(X2, y2)
Explanation: (c)
The accuracy of a model depends on (1) its complexity (polynomial degree $d$), (2) the number of examples $N$, and (3) the amount of noise. To analyze this, draw error plots as in 3b, but for all combinations of the number of examples $N\in{100,200,1000}$ and noise levels $\sigma\in{100,200,500}$ (9 plots in total). Use the subplots function to lay the plots out neatly in a $3\times 3$ grid. The data is generated in the same way as in exercise 2.
NB: Make sure all plots are generated over comparable datasets, as follows. First generate all 1000 examples and split them into a training set and a test set (two sets of 500 examples each). Then make three different versions of both the training set and the test set, each with a different amount of noise (2x3=6 versions of the data in total). To simulate the dataset size, sample one third, two thirds, and all of the data from these 6 datasets. This gives you 18 datasets -- a training set and a test set for each of the nine plots.
End of explanation
# Your code here
phi4 = PolynomialFeatures(3).fit_transform(X1)
def reg2(lambd):
    # the matrix being inverted is m x m, where m is the number of features (here 4)
    w = matmul(matmul(inv(matmul(transpose(phi4), phi4) + lambd * identity(phi4.shape[1])), transpose(phi4)), y1)
    print(w)
reg2(0)
reg2(1)
reg2(10)
Explanation: Q: Are the results as expected? Explain.
4. Regularized regression
(a)
In the experiments above we did not use regularization. Let us first return to the example from exercise 1. On the examples from that exercise, compute the weights $\mathbf{w}$ for a polynomial regression model of degree $d=3$ with L2 regularization (so-called ridge regression), according to $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Compute the weights for the regularization factors $\lambda=0$, $\lambda=1$ and $\lambda=10$ and compare the obtained weights.
Q: What are the dimensions of the matrix that has to be inverted?
Q: How do the obtained weights differ, and is that difference expected? Explain.
End of explanation
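# Illustrative one-liner (added): the inverted matrix is m x m,
# where m is the number of polynomial features (m = 4 for degree 3 here).
print((matmul(transpose(phi4), phi4) + identity(phi4.shape[1])).shape)
Explanation: The line above simply confirms the dimensions asked about in the question; it reuses phi4 from the previous cell.
End of explanation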
from sklearn.linear_model import Ridge
#for s in ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']:
for l in [0, 1, 10]:
r = Ridge(l, fit_intercept = False).fit(phi4, y1)
print(r.coef_)
print(r.intercept_)
# Your code here
Explanation: (b)
Study the Ridge class from the sklearn.linear_model module, which implements the L2-regularized regression model. The parameter $\alpha$ corresponds to the parameter $\lambda$. Apply the model to the same examples as in the previous exercise and print the weights $\mathbf{w}$ (the coef_ and intercept_ attributes).
Q: Are the weights identical to those from exercise 4a? If not, explain why that is and how you would fix it.
End of explanation
# Your code here
N = 50
figure(figsize = (15, 15))
x123 = scatter(X2, y2)
for lambd in [0, 100]:
for d in [2, 10]:
phi2 = PolynomialFeatures(d).fit_transform(X2)
r = Ridge(lambd).fit(phi2, y2)
h2 = r.predict(phi2)
#print(d)
plot(X2, h2, label="lambda " + str(lambd) + " d " + str(d))
x321 = plt.legend(loc="center right")
Explanation: 5. Regularized polynomial regression
(a)
Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in{0,100}$ and $d\in{2,10}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on one plot; we recommend using plot inside a for loop).
Q: Are the results as expected? Explain.
End of explanation
# Your code here
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
figure(figsize=(10,10))
trainError = []
testError = []
#print(xTr)
for lambd in range(0,51):
polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
polyXTest = PolynomialFeatures(10).fit_transform(xTest)
l2 = Ridge(lambd).fit(polyXTrain, yTr)
h2 = l2.predict(polyXTest)
E = mt.mean_squared_error(h2, yTest)
#print('d: ' + str(d) + ' E: ' + str(E))
testError.append(log( E))
h2 = l2.predict(polyXTrain)
E = mt.mean_squared_error(h2, yTr)
trainError.append(log(E))
#print(p3)
#plot(polyXTest, h2, label = str(d))
#print(numpy.log(numpy.array(testError)))
plot(numpy.array(testError), label="test")  # values were already logged when appended
plot(numpy.array(trainError), label="train")
grid()
legend()
Explanation: (b)
As in exercise 3b, split the examples into a training set and a test set in a 1:1 ratio. Plot the curves of the logarithms of the training error and the test error for the model $\mathcal{H}_{d=10,\lambda}$, tuning the regularization factor $\lambda$ over the range $\lambda\in{0,1,\dots,50}$.
Q: Which side of the plot corresponds to overfitting, and which to underfitting? Why?
Q: Which value of $\lambda$ would you choose based on these plots, and why?
End of explanation
def nonzeroes(coef, tol=1e-6):
return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])
Explanation: 6. L1 regularization and L2 regularization
The purpose of regularization is to push the model weights $\mathbf{w}$ towards zero, so that the model is as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically the L2 norm or the L1 norm. For a trained model we can also compute the number of non-zero features, i.e. the L0 norm, using the following function that takes the weight vector $\mathbf{w}$:
End of explanation
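# Quick usage sketch for the helper above (added):
print(nonzeroes(np.array([0.0, 1.5, 1e-9, -2.0])))  # -> 2
Explanation: A tiny sanity check of nonzeroes: values within the tolerance of zero are not counted as non-zero weights.
End of explanation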
# Your code here
d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(0,101):
polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
polyXTest = PolynomialFeatures(10).fit_transform(xTest)
r = Ridge(lambd).fit(polyXTrain, yTr)
r.coef_[0] = r.intercept_
l0.append(nonzeroes(r.coef_))
#print(r.coef_)
l1.append(numpy.linalg.norm(r.coef_, ord=1))
l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10,10))
plot(l0, label="l0")
legend()
grid()
figure(figsize=(10,10))
plot(l1, label="l1")
legend()
grid()
figure(figsize=(10,10))
plot(l2, label="l2")
legend()
grid()
Explanation: (a)
For this exercise use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=10$, varying the hyperparameter $\lambda$ over the range ${1,2,\dots,100}$. For each trained model, compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and plot them as a function of $\lambda$. Pay attention to what exactly you pass into the norm-computing functions.
Q: Explain the shape of both curves. Will the curve for $\|\mathbf{w}\|_2$ reach zero? Why? Is that a problem? Why?
Q: For $\lambda=100$, what percentage of the model weights are equal to zero, i.e. how sparse is the model?
End of explanation
# Your code here
d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(0,101):
polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
polyXTest = PolynomialFeatures(10).fit_transform(xTest)
r = sklearn.linear_model.Lasso(lambd).fit(polyXTrain, yTr)
r.coef_[0] = r.intercept_
l0.append(nonzeroes(r.coef_))
#print(r.coef_)
l1.append(numpy.linalg.norm(r.coef_, ord=1))
l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10,10))
plot(l0, label="l0")
legend()
figure(figsize=(10,10))
plot(l1, label="l1")
legend()
figure(figsize=(10,10))
plot(l2, label="l2")
legend()
Explanation: (b)
The main advantage of L1-regularized regression (or LASSO regression) over L2-regularized regression is that L1-regularized regression results in sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the above experiment with L1-regularized regression, implemented in the Lasso class of the sklearn.linear_model module. Ignore the warnings.
End of explanation
n_data_points = 500
np.random.seed(69)
# Generate the entrance-exam scores using a normal distribution and clip them to the interval [0, 3000].
exam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points)
exam_score = np.round(exam_score)
exam_score[exam_score > 3000] = 3000
exam_score[exam_score < 0] = 0
# Generate the high-school grades using a normal distribution and clip them to the interval [1, 5].
grade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)
grade_in_highschool[grade_in_highschool > 5] = 5
grade_in_highschool[grade_in_highschool < 1] = 1
# The design matrix.
grades_X = np.array([exam_score,grade_in_highschool]).T
# Finally, generate the output values.
rand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)
exam_influence = 0.9
grades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \
* (1.0 - exam_influence)) * 5.0 + rand_noise
grades_y[grades_y < 1] = 1
grades_y[grades_y > 5] = 5
Explanation: 7. Features of different scales
In practice we often encounter data in which the features do not all have the same magnitude. One example of such a dataset is the regression dataset grades, in which a student's university grade point average (1--5) is predicted from two features: the entrance-exam score (1--3000) and the high-school grade point average. The university GPA is computed as a weighted sum of these two features with added noise.
Use the following code to generate this dataset.
End of explanation
# Your code here
figure(figsize=(10,10))
scatter(exam_score, grades_y, label="exam score")
legend()
figure(figsize=(10,10))
scatter(grade_in_highschool, grades_y, label="high-school grade")
legend()
Explanation: a) Plot the dependence of the target value (y-axis) on the first and on the second feature (x-axis). Draw two separate plots.
End of explanation
# Your code here
r7b = Ridge(0.01).fit(grades_X, grades_y)
h2 = r7b.predict(grades_X)
E = mt.mean_squared_error(h2, grades_y)
print(E)
Explanation: b) Train an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y:
End of explanation
from sklearn.preprocessing import StandardScaler
# Your code here
ssX = StandardScaler().fit_transform(grades_X)
ssY = StandardScaler().fit_transform(grades_y.reshape(-1, 1))
r = Ridge(0.01).fit(ssX, ssY)
h2 = r.predict(ssX)
E = mt.mean_squared_error(h2, ssY)
print(E)
Explanation: Now repeat the above experiment, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For this purpose, use StandardScaler.
End of explanation
# Your code here
grades_X_fixed_colinear = [ [x[0], x[1], x[1]] for x in ssX]
#print(grades_X_fixed_colinear)
Explanation: Q: Looking at the plots from subtask (a), which feature should have the larger magnitude, i.e. importance, when predicting the university GPA? Do the weights match your intuition? Explain.
8. Multicollinearity and matrix condition
a) Create the dataset grades_X_fixed_colinear by duplicating the last column (the high-school grade) of the grades_X_fixed dataset from exercise 7b. This effectively introduces perfect multicollinearity.
End of explanation
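# Illustrative check (added): duplicating a column makes the design matrix rank-deficient.
print(np.linalg.matrix_rank(np.array(grades_X_fixed_colinear)))  # 2, despite 3 columns
Explanation: The check above reuses grades_X_fixed_colinear from the previous cell; a rank of 2 with 3 columns is exactly the perfect multicollinearity the task describes.
End of explanation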
# Your code here
r8a = Ridge(0.01).fit(grades_X_fixed_colinear, ssY)
h2 = r8a.predict(grades_X_fixed_colinear)
E = mt.mean_squared_error(h2, ssY)
print(E)
print(r7b.coef_)
print(r8a.coef_)
Explanation: Again, train an L2-regularized regression model ($\lambda = 0.01$) on this dataset.
End of explanation
# Your code here
for lambd in [0.01, 1000]:
print(lambd)
ws1 = []
ws2 = []
ws3 = []
for i in range(10):
xTrain, xTest, yTrain, yTest = train_test_split(grades_X_fixed_colinear, ssY, test_size=0.5)
        l2 = Ridge(lambd).fit(xTrain, yTrain)
        print(l2.coef_)
ws1.append(l2.coef_[0][0])
ws2.append(l2.coef_[0][1])
ws3.append(l2.coef_[0][2])
print("std dev: " + str(np.std(ws1)))
print("std dev: " + str(np.std(ws2)))
print("std dev: " + str(np.std(ws3)))
Explanation: Q: Compare the magnitudes of the weights with those you obtained in exercise 7b. What happened?
b) Randomly sample 50% of the elements from the grades_X_fixed_colinear dataset and train two L2-regularized regression models, one with $\lambda=0.01$ and one with $\lambda=1000$. Repeat this experiment 10 times (each time with a different 50% subset). For each model, print the obtained weight vector in all 10 repetitions, and print the standard deviation of each weight's values (six standard deviations in total, each computed over 10 values).
End of explanation
# Your code here
#print(grades_X_fixed_colinear)
for l in [0.01, 10]:
#print(l * identity(len(grades_X_fixed_colinear)))
mm = matmul(transpose(grades_X_fixed_colinear), grades_X_fixed_colinear)
matr = mm + l * identity(len(mm))
print(matr)
print(np.linalg.cond(matr))
Explanation: Q: How does regularization affect the stability of the weights?
Q: Are the coefficients of the same magnitude as in the previous experiment? Explain why.
c) Using numpy.linalg.cond, compute the condition number of the matrix $\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I}$, where $\mathbf{\Phi}$ is the design matrix (grades_X_fixed_colinear). Repeat for both $\lambda=0.01$ and $\lambda=10$.
End of explanation
<END_TASK> |
15,780 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
We wrote a script for importing streamgage data through the USGS web services, cleaning up the formatting, plotting the discharge over time, and saving the figure into a file. We would like to turn this script into a tool that we can reuse for different stations and date ranges without having to rewrite the code. Let's look at the code again (I deleted the rows that were commented out)
Step1: The station number and date range we are interested in are part of the URL that we use to communicate with the web services. The specific file we receive when the read_csv command runs doesn't exist -- when our script requests the data, the server reads the URL to see what we want, pulls data from a database, packages it, and passes it on to us. The API (the protocol that governs the communication between machines) establishes the "formula" for writing the URL. As long as we follow that formula (and request data that exists), the server will provide it for us.
Let's decompose the URL into its parts and combine them back into a single string
Step2: Python dictionaries to URLs {.callout}
Another useful data type built into Python is the dictionary. While lists and other sequences are indexed by a range of numbers, dictionaries are indexed by keys. A dictionary is an unordered collection of key
Step3: Just like there is the Numpy library for matrices and Pandas for tabular data, there is a Python library that provides a simple interface for accessing resources through URLs (take a look at the most popular package repository
Step4: This is not the most elegant way to write the URL but it accomplishes the job! To clean things up a bit, we can replace the values we want to be able to change with variables
Step5: We can now combine it with the rest of our code
Step6: Creating Functions
If we wanted to import data from a different station or for a different date range, we would manually change the first three variables and run the code again. It would be a lot less work than having to download the file and plot it by hand, but it could still be very tedious! At this point, our code is also getting long and complicated; what if we had thousands of datasets but didn't want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We’d like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called functions - a shorthand way of re-executing longer pieces of code.
Let's start by defining a function fahr_to_kelvin that converts temperatures from Fahrenheit to Kelvin
Step7: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces.
When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it.
Notice that nothing happened when we ran the cell that contains the function. Python became aware of the function and what it is supposed to do, but until we call it, there is nothing for the function to do. Calling our own function is no different from calling any other function (see the resemblance with the help file for read_csv?)
Step8: The boiling point of water in Kelvin should be 373.15 K, not 273.15 K!
Functions make code easier to debug by isolating each possible source of error. In this case, the first term of the equation, ((temp - 32) * (5/9)), is returning 0 (instead of 100) when the temperature is 212 F. If we look at each part of that expression, we find
Step9: 5 divided by 9 should be 0.5556, but when we ask Python 2 to divide to integers, it returns an integer! If we want to want to keep the fractional part of the division, we need to convert one or the other number to floating point
Step10: You can also turn an integer into a float by casting
Step11: Casting {.challenge}
What happens when you type float(5/9)?
Integer division in Python 3 {.callout}
The problem of integer division does not exist in Python 3, where division always returns a floating point number. We use Python 2.7 because it is much more commonly used in our community, but always keep integer division in mind as it will be a common source of bugs in your code. And as annoying as it may seem, there are memory benefits to integer division!
Let's rewrite our function with the fixed bug
Step12: Composing Functions
Now that we’ve seen how to turn Fahrenheit into Kelvin, it’s easy to turn Kelvin into Celsius
Step13: What about converting Fahrenheit to Celsius? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created
Step14: This is our first taste of how larger programs are built
Step15: We can make another function plot_discharge to compose to plot and save the figures
Step16: The function plot_discharge produces output that is visible to us but has no return statement because it doesn't need to give anything back when it is called.
We can also wrap up the script for composing URLs into a function called generate_URL
Step17: Now that these three functions exist, we can rewrite our previous code in a much simpler script
Step18: Testing and Documenting
It doesn't take long to forget what the code we wrote in the past was supposed to do. We should always write some documentation for our functions to remind ourselves later what they are for and how they are supposed to be used.
The usual way to put documentation in software is to add comments
Step19: There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation. A string like this is called a docstring (one set of quotes for single line strings, three sets for multi-line strings!)
Step20: This is better because we can now ask Python’s built-in help system to show us the documentation for the function
Step21: Defining Defaults
Step22: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
url = 'http://waterservices.usgs.gov/nwis/iv/?format=rdb&sites=09380000&startDT=2016-01-01&endDT=2016-01-10¶meterCd=00060,00065'
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)
plt.xlabel('Time')
plt.ylabel('Discharge (cfs)')
plt.savefig('data/discharge_' + new_station_name + '.png')
plt.show()
Explanation: We wrote a script for importing streamgage data through the USGS web services, cleaning up the formatting, plotting the discharge over time, and saving the figure into a file. We would like to turn this script into a tool that we can reuse for different stations and date ranges without having to rewrite the code. Let's look at the code again (I deleted the rows that were commented out):
End of explanation
url_root = 'http://waterservices.usgs.gov/nwis/iv/?' # root of URL
url_1 = 'format=' + 'rdb' # file format
url_2 = 'sites=' + '09380000' # station number
url_3 = 'startDT=' + '2016-01-01' # start date
url_4 = 'endDT=' + '2016-01-10' # end date
url_5 = 'parameterCd=' + '00060,00065' # data fields
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
print url
Explanation: The station number and date range we are interested in are part of the URL that we use to communicate with the web services. The specific file we receive when the read_csv command runs doesn't exist -- when our script requests the data, the server reads the URL to see what we want, pulls data from a database, packages it, and passes it on to us. The API (the protocol that governs the communication between machines) establishes the "formula" for writing the URL. As long as we follow that formula (and request data that exists), the server will provide it for us.
Let's decompose the URL into its parts and combine them back into a single string:
End of explanation
url_dict = {} # create an empty dictionary
url_dict['format'] = 'rdb'
url_dict['sites'] = '09380000'
url_dict['startDT'] = '2016-01-01'
url_dict['endDT'] = '2016-01-10'
url_dict['parameterCd'] = ['00060','00065']
print url_dict
Explanation: Python dictionaries to URLs {.callout}
Another useful data type built into Python is the dictionary. While lists and other sequences are indexed by a range of numbers, dictionaries are indexed by keys. A dictionary is an unordered collection of key:value pairs. Keys must be unique (within any one dictionary) and can be strings or numbers. Values in a dictionary can be of any type, and different pairs in one dictionary can have different types of values.
We can store the parameters of our URL in a dictionary. Here's one of several ways to add entries to a dictionary:
End of explanation
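# Quick sketch (added): dictionary values are looked up by key, not by position
print url_dict['sites']
Explanation: The line above reuses url_dict from the previous cell to show key-based lookup; note that this lesson uses Python 2 print statements.
End of explanation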
import urllib
# need to set the parameter doseq to 1 to handle the list in url_dict['parameterCd']
url_parameters = urllib.urlencode(url_dict, doseq=1)
print url_root + url_parameters
Explanation: Just like there is the Numpy library for matrices and Pandas for tabular data, there is a Python library that provides a simple interface for accessing resources through URLs (take a look at the most popular package repository: https://pypi.python.org/). Many of the most popular and useful libraries for scientific computing come pre-installed with the Anaconda distribution.
We can use the urllib package to convert the dictionary into a URL following the standard format used by web services. The order of the parameters doesn't matter to the server!
End of explanation
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + this_station
url_3 = 'startDT=' + startDate
url_4 = 'endDT=' + endDate
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
print url
Explanation: This is not the most elegant way to write the URL but it accomplishes the job! To clean things up a bit, we can replace the values we want to be able to change with variables:
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
########## change these values ###########
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
##########################################
# create the URL
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + this_station
url_3 = 'startDT=' + startDate
url_4 = 'endDT=' + endDate
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
# import the data
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
# fix formatting
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
# plot and save figure
data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)
plt.xlabel('Time')
plt.ylabel('Discharge (cfs)')
plt.savefig('data/discharge_' + new_station_name + '.png')
plt.show()
Explanation: We can now combine it with the rest of our code:
End of explanation
def fahr_to_kelvin(temp):
return ((temp - 32) * (5/9)) + 273.15
Explanation: Creating Functions
If we wanted to import data from a different station or for a different date range, we would manually change the first three variables and run the code again. It would be a lot less work than having to download the file and plot it by hand, but it could still be very tedious! At this point, our code is also getting long and complicated; what if we had thousands of datasets but didn't want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We’d like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called functions - a shorthand way of re-executing longer pieces of code.
Let's start by defining a function fahr_to_kelvin that converts temperatures from Fahrenheit to Kelvin:
End of explanation
print 'freezing point of water:', fahr_to_kelvin(32)
print 'boiling point of water:', fahr_to_kelvin(212)
Explanation: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces.
When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it.
Notice that nothing happened when we ran the cell that contains the function. Python became aware of the function and what it is supposed to do, but until we call it, there is nothing for the function to do. Calling our own function is no different from calling any other function (see the resemblance with the help file for read_csv?):
End of explanation
5/9
Explanation: The boiling point of water in Kelvin should be 373.15 K, not 273.15 K!
Functions make code easier to debug by isolating each possible source of error. In this case, the first term of the equation, ((temp - 32) * (5/9)), is returning 0 (instead of 100) when the temperature is 212 F. If we look at each part of that expression, we find:
End of explanation
print 'two integers:', 5/9
print '5.0/9:', 5.0/9
print '5/9.0:', 5/9.0
Explanation: 5 divided by 9 should be 0.5556, but when we ask Python 2 to divide to integers, it returns an integer! If we want to want to keep the fractional part of the division, we need to convert one or the other number to floating point:
End of explanation
float(5)/9
Explanation: You can also turn an integer into a float by casting:
End of explanation
def fahr_to_kelvin(temp):
return ((temp - 32) * (5./9)) + 273.15
print 'freezing point of water:', fahr_to_kelvin(32)
print 'boiling point of water:', fahr_to_kelvin(212)
Explanation: Casting {.challenge}
What happens when you type float(5/9)?
Integer division in Python 3 {.callout}
The problem of integer division does not exist in Python 3, where division always returns a floating point number. We use Python 2.7 because it is much more commonly used in our community, but always keep integer division in mind as it will be a common source of bugs in your code. And as annoying as it may seem, there are memory benefits to integer division!
Let's rewrite our function with the fixed bug:
End of explanation
def kelvin_to_celsius(temp_k):
return temp_k - 273.15
print 'absolute zero in Celsius:', kelvin_to_celsius(0.0)
Explanation: Composing Functions
Now that we’ve seen how to turn Fahrenheit into Kelvin, it’s easy to turn Kelvin into Celsius:
End of explanation
def fahr_to_celsius(temp_f):
temp_k = fahr_to_kelvin(temp_f)
temp_c = kelvin_to_celsius(temp_k)
return temp_c
print 'freezing point of water in Celsius:', fahr_to_celsius(32.0)
Explanation: What about converting Fahrenheit to Celsius? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created:
End of explanation
def import_streamgage_data(url):
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
# fix formatting
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
return data
Explanation: This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-large chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on.
Tidying up
Now that we know how to wrap bits of code in functions, we can make our streamgage data plotting code easier to read and easier to reuse. First, let's make a import_streamgage_data function to pull the data file from the server and fix the formatting:
End of explanation
def plot_discharge(data):
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
Explanation: We can make another function plot_discharge to compose to plot and save the figures:
End of explanation
def generate_URL(station, startDT, endDT):
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + station
url_3 = 'startDT=' + startDT
url_4 = 'endDT=' + endDT
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
return url
Explanation: The function plot_discharge produces output that is visible to us but has no return statement because it doesn't need to give anything back when it is called.
We can also wrap up the script for composing URLs into a function called generate_URL:
End of explanation
########## change these values ###########
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
##########################################
url = generate_URL(this_station, startDate, endDate)
data = import_streamgage_data(url)
plot_discharge(data)
Explanation: Now that these three functions exist, we can rewrite our previous code in a much simpler script:
End of explanation
# plot_discharge(data): take a DataFrame containing streamgage data, plot the discharge and save a figure to file.
def plot_discharge(data):
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
Explanation: Testing and Documenting
It doesn't take long to forget what the code we wrote in the past was supposed to do. We should always write some documentation for our functions to remind ourselves later what they are for and how they are supposed to be used.
The usual way to put documentation in software is to add comments:
End of explanation
def plot_discharge(data):
'''
Take a DataFrame containing streamgage data,
plot the discharge and save a figure to file.
'''
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
Explanation: There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation. A string like this is called a docstring (one set of quotes for single line strings, three sets for multi-line strings!):
End of explanation
help(plot_discharge)
Explanation: This is better because we can now ask Python’s built-in help system to show us the documentation for the function:
End of explanation
def display(a=1, b=2, c=3):
print 'a:', a, 'b:', b, 'c:', c
print 'no parameters:'
display()
print 'one parameter:'
display(55)
print 'two parameters:'
display(55, 66)
Explanation: Defining Defaults:
When we use the read_csv method, we pass parameters in two ways: directly, as in pd.read_csv(url), and by name, as we did for the parameter sep in pd.read_csv(url, sep = '\t').
If we look at the documentation for read_csv, all parameters but the first (filepath_or_buffer) have a default value in the function definition (sep=','). The function will not run if the parameters without default values are not provided, but all parameters with defaults are optional. This is handy: if we usually want a function to work one way but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier.
The example below shows how Python matches values to parameters:
End of explanation
print('only setting the value of c')
display(c=77)
Explanation: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in:
End of explanation
<END_TASK> |
15,781 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Initialize
Define the Training Data Set
Define the training dataset for the independent and dependent variables
Step1: Define the Test Set
Define the test dataset for the independent variables. In this case it is a "continuous" curve
Step2: Train the Model
Instantiate the kernels, instantiate the GPR with the kernel, and train the model.
Step3: Regression
Perform the regression based on the set of training data. The best estimate of the prediction is given by the mean of the distribution from which the posterior samples are drawn.
Predict (Initial Hyperparameters)
Perform regression using the initial user-specified hyperparameters.
Step4: Optimize Hyperparameters
Optimize over the hyperparameters.
Step5: array([ 1.47895967, 3.99711988, 0.16295754])
array([ 1.80397587, 4.86011667, 0.18058626])
Predict (Optimized Hyperparameters)
Perform the regression from the hyperparameters that optimize the log marginal likelihood. Note the improvement in the fit in comparison to the actual function (red dotted line). | <ASSISTANT_TASK:>
Python Code:
x = np.random.RandomState(0).uniform(-5, 5, 20)
#x = np.random.uniform(-5, 5, 20)
y = x*np.sin(x)
#y += np.random.normal(0,0.5,y.size)
y += np.random.RandomState(34).normal(0,0.5,y.size)
Explanation: Initialize
Define the Training Data Set
Define the training dataset for the independent and dependent variables
End of explanation
x_star = np.linspace(-5,5,500)
Explanation: Define the Test Set
Define the test dataset for the independent variables. In this case it is a "continuous" curve
End of explanation
#Define the basic kernels
k1 = SqExp(0.45,2)
k2 = RQ(0.5,2,3)
k3 = ExpSine(0.1,2,30)
k4 = WhiteNoise(0.01)
#Define the combined kernel
k1 = k1+k4
#Instantiate the GP predictor object with the desired kernel
gp = GPR(k1)
#Train the model
gp.train(x,y)
Explanation: Train the Model
Instantiate the kernels, instantiate the GPR with the kernel, and train the model.
End of explanation
#Predict a new set of test data given the independent variable observations
y_mean1,y_var1 = gp.predict(x_star,False)
#Convert the variance to the standard deviation
y_err1 = np.sqrt(y_var1)
plt.scatter(x,y,s=30)
plt.plot(x_star,x_star*np.sin(x_star),'r:')
plt.plot(x_star,y_mean1,'k-')
plt.fill_between(x_star,y_mean1+y_err1,y_mean1-y_err1,alpha=0.5)
Explanation: Regression
Perform the regression based on the set of training data. The best estimate of the prediction is given by the mean of the distribution from which the posterior samples are drawn.
Predict (Initial Hyperparameters)
Perform regression using the initial user-specified hyperparameters.
End of explanation
gp.optimize('SLSQP')
Explanation: Optimize Hyperparameters
Optimize over the hyperparameters.
End of explanation
#Predict a new set of test data given the independent variable observations
y_mean2,y_var2 = gp.predict(x_star,False)
#Convert the variance to the standard deviation
y_err2 = np.sqrt(y_var2)
plt.scatter(x,y,s=30)
plt.plot(x_star,x_star*np.sin(x_star),'r:')
plt.plot(x_star,y_mean2,'k-')
plt.fill_between(x_star,y_mean2+y_err2,y_mean2-y_err2,alpha=0.5)
Explanation: array([ 1.47895967, 3.99711988, 0.16295754])
array([ 1.80397587, 4.86011667, 0.18058626])
Predict (Optimized Hyperparameters)
Perform the regression from the hyperparameters that optimize the log marginal likelihood. Note the improvement in the fit in comparison to the actual function (red dotted line).
End of explanation
<END_TASK> |
15,782 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial 4. Immediate mode
In this tutorial we will talk about a cute feature of Caffe2
Step1: Now, as we have known before, in the normal mode, when you create an operator, we are declaring it only, and nothing gets actually executed. Let's re-confirm that.
Step2: Entering and exiting immediate mode.
Entering immediate mode is easy
Step3: Now that you have enabled immediate mode, any operators you run will simultaneously be executed in a separate immediate workspace. Note - the main workspace that you are working on is not affected. We designed the immediate workspace to be separate from the main workspace, so that nothing in the main workspace gets polluted.
Step4: Manually feeding blobs
But wait, you say - what if I want to create an operator that uses an input that is "declared" but not present yet? Since the immediate workspace does not have the input, we will encounter an exception
Step5: This is because immediate mode, being completely imperative, requires any input to be used to already exist in the immediate workspace. To make the immediate mode aware of such external inputs, we can manually feed blobs to the immediate workspace.
Step6: When is immediate mode useful?
You might want to use immediate mode when you are not very sure about the shape of the intermediate results, such as in a CNN where there are multiple convolution and pooling layers. Let's say that you are creating an MNIST convnet model but don't want to calculate the number of dimensions for the final FC layer. Here is what you might want to do. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from caffe2.python import cnn, core, visualize, workspace, model_helper, brew
import numpy as np
import os
core.GlobalInit(['caffe2', '--caffe2_log_level=-1'])
Explanation: Tutorial 4. Immediate mode
In this tutorial we will talk about a cute feature of Caffe2: immediate mode.
From the previous tutorials you have seen that Caffe2 declares a network, and during this declaration phase, nothing gets actually executed - it's like writing the source of a program, and "compilation/execution" only happens later.
This sometimes gets a bit tricky if we are in a researchy mind, and want to inspect typical intermediate outputs as we go. This is when the immediate mode comes to help. At a high level, what the immediate mode does is to run the corresponding operators as you write them. The results live under a special workspace that can then be accessed via FetchImmediate() and FeedImmediate() runs.
Let's show some examples.
End of explanation
workspace.ResetWorkspace()
# declaration
op = core.CreateOperator("GaussianFill", [], "X", shape=[3, 5])
print('Before execution, workspace contains X: {}'
.format(workspace.HasBlob("X")))
# execution
workspace.RunOperatorOnce(op)
print('After execution, workspace contains X: {}'
.format(workspace.HasBlob("X")))
Explanation: Now, as we have known before, in the normal mode, when you create an operator, we are declaring it only, and nothing gets actually executed. Let's re-confirm that.
End of explanation
workspace.StartImmediate()
Explanation: Entering and exiting immediate mode.
Entering immediate mode is easy: you basically invoke workspace.StartImmediate(). Since immediate mode has quite a lot of side effects, it would be good to read through the warning message to make sure you understand the implications.
(If you don't want to see the messages, pass i_know=True to StartImmediate to suppress that.)
End of explanation
# declaration, and since we are in immediate mode, run it in the immediate workspace.
op = core.CreateOperator("GaussianFill", [], "X", shape=[3, 5])
print('Before execution, does workspace contain X? {}'
.format(workspace.HasBlob("X")))
print('But we can access it using the Immediate related functions.'
'Here is a list of immediate blobs:')
print(workspace.ImmediateBlobs())
print('The content is like this:')
print(workspace.FetchImmediate('X'))
# After the immediate execution, you can invoke StopImmediate() to clean up.
workspace.StopImmediate()
Explanation: Now that you have enabled immediate mode, any operators you run will simultaneously be executed in a separate immediate workspace. Note - the main workspace that you are working on is not affected. We designed the immediate workspace to be separate from the main workspace, so that nothing in the main workspace gets polluted.
End of explanation
workspace.StartImmediate(i_know=True)
op = core.CreateOperator("Relu", "X", "Y")
Explanation: Manually feeding blobs
But wait, you say - what if I want to create an operator that uses an input that is "declared" but not present yet? Since the immediate workspace does not have the input, we will encounter an exception:
End of explanation
X = np.random.randn(2, 3).astype(np.float32)
workspace.FeedImmediate("X", X)
# Now, we can safely run CreateOperator since immediate mode knows what X looks like
op = core.CreateOperator("Relu", "X", "Y")
print("Example input is:\n{}".format(workspace.FetchImmediate("X")))
print("Example output is:\n{}".format(workspace.FetchImmediate("Y")))
workspace.StopImmediate()
Explanation: This is because immediate mode, being completely imperative, requires any input to be used to already exist in the immediate workspace. To make the immediate mode aware of such external inputs, we can manually feed blobs to the immediate workspace.
End of explanation
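# A small optional check (a sketch using only the APIs shown above): blobs fed
# with FeedImmediate live in the separate immediate workspace, so the main
# workspace stays untouched. The blob name "Z" is just a made-up example.
workspace.StartImmediate(i_know=True)
workspace.FeedImmediate("Z", np.ones((2, 2), dtype=np.float32))
print("Main workspace has Z:", workspace.HasBlob("Z"))                  # False
print("Immediate workspace has Z:", "Z" in workspace.ImmediateBlobs())  # True
workspace.StopImmediate()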
model = model_helper.ModelHelper(name="mnist")
# Start the immediate mode.
workspace.StartImmediate(i_know=True)
data_folder = os.path.join(os.path.expanduser('~'), 'caffe2_notebooks', 'tutorial_data')
data_uint8, label = model.TensorProtosDBInput(
[], ["data_uint8", "label"], batch_size=64,
db=os.path.join(data_folder, 'mnist/mnist-train-nchw-leveldb'),
db_type='leveldb')
data = model.net.Cast(data_uint8, "data", to=core.DataType.FLOAT)
data = model.net.Scale(data, data, scale=float(1./256))
data = model.net.StopGradient(data, data)
conv1 = brew.conv(model, data, 'conv1', 1, 20, 5)
pool1 = brew.max_pool(model, conv1, 'pool1', kernel=2, stride=2)
conv2 = brew.conv(model, pool1, 'conv2', 20, 50, 5)
pool2 = brew.max_pool(model, conv2, 'pool2', kernel=2, stride=2)
# What is the shape of pool2 again...?
feature_dimensions = workspace.FetchImmediate("pool2").shape[1:]
print("Feature dimensions before FC layer: {}".format(feature_dimensions))
fc3 = brew.fc(model, pool2, 'fc3', int(np.prod(feature_dimensions)), 500)
fc3 = brew.relu(model, fc3, fc3)
pred = brew.fc(model, fc3, 'pred', 500, 10)
softmax = brew.softmax(model, pred, 'softmax')
# Let's see if the dimensions are all correct:
for blob in ["data", "conv1", "pool1", "conv2", "pool2", "fc3", "pred"]:
print("Blob {} has shape: {}".format(
blob, workspace.FetchImmediate(blob).shape))
# Let's also visualize a sample input.
print("Sample input:")
visualize.NCHW.ShowMultiple(workspace.FetchImmediate("data"))
workspace.StopImmediate()
Explanation: When is immediate mode useful?
You might want to use immediate mode when you are not very sure about the shape of the intermediate results, such as in a CNN where there are multiple convolution and pooling layers. Let's say that you are creating an MNIST convnet model but don't want to calculate the number of dimensions for the final FC layer. Here is what you might want to do.
End of explanation
<END_TASK> |
15,783 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python for Probability, Statistics, and Machine Learning
Step1: Monte Carlo Sampling Methods
So far, we have studied analytical ways to transform random variables and how
to augment these methods using Python. In spite of all this, we frequently must
resort to purely numerical methods to solve real-world problems. Hopefully,
now that we have seen the deeper theory, these numerical methods feel more
concrete. Suppose we want to generate samples of a given density, $f(x)$,
given we already can generate samples from a uniform distribution,
$\mathcal{U}[0,1]$. How do we know a random sample $v$ comes from the $f(x)$
distribution? One approach is to look at how a histogram of samples of $v$
approximates $f(x)$. Specifically,
<!-- Equation labels as ordinary links -->
<div id="eq
Step2: The next block uses pd.cut to map the individual samples to
the set $\left\{1,2,\ldots,6\right\}$ labeled v.
Step3: This is what the dataframe contains. The v column contains
the samples drawn from the fair die.
Step4: The following is a count of the number of samples in each group. There
should be roughly the same number of samples in each group because the die is fair.
Step5: So far, so good. We now have a way to simulate a fair
die from a uniformly distributed random variable.
To extend this to an unfair die, we need only make some small adjustments to this
code. For example, suppose that we want an unfair die so that
$\mathbb{P}(1)=\mathbb{P}(2)=\mathbb{P}(3)=1/12$ and
$\mathbb{P}(4)=\mathbb{P}(5)=\mathbb{P}(6)=1/4$. The only change we have to
make is with pd.cut as follows,
Step6: where now these are the individual probabilities of each digit. You
can take more than 100 samples to get a clearer view of the individual
probabilities but the mechanism for generating them is the same. The method is
called the inverse CDF [^CDF] method because the CDF
(namely,$\texttt{[0,1/12,2/12,3/12,2/4,3/4,1]}$) in the last example has been
inverted (using the pd.cut method) to generate the samples.
The inversion is easier to see for continuous variables, which we consider
next.
[^CDF]
Step7: Now, we have the samples from the exponential distribution, but how
do we know the method is correct with samples distributed accordingly?
Fortunately, scipy.stats already has an exponential distribution, so we can
check our work against the reference using a probability plot (i.e., also
known as a quantile-quantile plot). The following code sets up the
probability plot from scipy.stats.
Step8: Note that we have to supply an axes object (ax) for it to draw on.
The result is Figure. The more the samples
line match the diagonal line, the more they match the reference distribution
(i.e., exponential distribution in this case). You may also want to try
dist=norm in the code above to see what happens when the normal distribution
is the reference distribution.
<!-- dom
Step9: Figure shows a histogram of the
so-generated samples that nicely fits the probability density function. The
title in the figure shows the efficiency, which is poor. It means that we threw
away most of the proposed samples. Thus, even though there is nothing
conceptually wrong with this result, the low efficiency must be fixed, as a
practical matter. Figure shows where the
proposed samples were rejected. Samples under the curve were retained (i.e.,
$u_2 < \frac{f(u_1)}{M}$) but the vast majority of the samples are outside this
umbrella.
<!-- dom
Step10: <!-- dom
Step11: <!-- dom
Step12: <!-- dom | <ASSISTANT_TASK:>
Python Code:
from __future__ import division
%pylab inline
Explanation: Python for Probability, Statistics, and Machine Learning
End of explanation
import pandas as pd
import numpy as np
from pandas import DataFrame
u= np.random.rand(100)
df = DataFrame(data=u,columns=['u'])
Explanation: Monte Carlo Sampling Methods
So far, we have studied analytical ways to transform random variables and how
to augment these methods using Python. In spite of all this, we frequently must
resort to purely numerical methods to solve real-world problems. Hopefully,
now that we have seen the deeper theory, these numerical methods feel more
concrete. Suppose we want to generate samples of a given density, $f(x)$,
given we already can generate samples from a uniform distribution,
$\mathcal{U}[0,1]$. How do we know a random sample $v$ comes from the $f(x)$
distribution? One approach is to look at how a histogram of samples of $v$
approximates $f(x)$. Specifically,
<!-- Equation labels as ordinary links -->
<div id="eq:mc01"></div>
$$
\begin{equation}
\mathbb{P}( v \in N_{\Delta}(x) ) = f(x) \Delta x
\end{equation}
\label{eq:mc01} \tag{1}
$$
<!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_000.png, width=500 frac=0.75] The histogram approximates the target probability density. <div id="fig:Sampling_Monte_Carlo_000"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_000"></div>
<p>The histogram approximates the target probability density.</p>
<img src="fig-probability/Sampling_Monte_Carlo_000.png" width=500>
<!-- end figure -->
which says that the probability that a sample is in some $N_\Delta$
neighborhood of $x$ is approximately $f(x)\Delta x$. Figure shows the target probability density function
$f(x)$ and a histogram that approximates it. The histogram is generated from
samples $v$. The hatched rectangle in the center illustrates Equation (1).
The area of this rectangle is approximately $f(x)\Delta x$ where
$x=0$, in this case. The width of the rectangle is $N_{\Delta}(x)$. The quality
of the approximation may be clear visually, but to know that $v$ samples are
characterized by $f(x)$, we need the statement of Equation (1), which
says that the proportion of samples $v$ that fill the hatched rectangle is
approximately equal to $f(x)\Delta x$.
Now that we know how to evaluate samples $v$ that are characterized by the density
$f(x)$, let's consider how to create these samples for both discrete and
continuous random variables.
Inverse CDF Method for Discrete Variables
Suppose we want to generate samples from a fair six-sided die. Our workhorse
uniform random variable is defined continuously over the unit interval and the
fair six-sided die is discrete. We must first create a mapping between the
continuous random variable $u$ and the discrete outcomes of the die. This
mapping is shown in Figure where the unit
interval is broken up into segments, each of length $1/6$. Each individual
segment is assigned to one of the die outcomes. For example, if $u \in
[1/6,2/6)$, then the outcome for the die is $2$. Because the die is fair, all
segments on the unit interval are the same length. Thus, our new random
variable $v$ is derived from $u$ by this assignment.
<!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_0001.png, width=500 frac=0.75] A uniform distribution random variable on the unit interval is assigned to the six outcomes of a fair die using these segements. <div id="fig:Sampling_Monte_Carlo_0001"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_0001"></div>
<p>A uniform distribution random variable on the unit interval is assigned to the six outcomes of a fair die using these segments.</p>
<img src="fig-probability/Sampling_Monte_Carlo_0001.png" width=500>
<!-- end figure -->
For example, for $v=2$, we have,
$$
\mathbb{P}(v=2) = \mathbb{P}(u\in [1/6,2/6)) = 1/6
$$
where, in the language of Equation (1), $f(x)=1$
(uniform distribution), $\Delta x = 1/6$, and $N_\Delta (2)=[1/6,2/6)$.
Naturally, this pattern holds for all the other die outcomes in
$\left\{1,2,3,\ldots,6\right\}$. Let's consider a quick simulation to make this
concrete. The following code generates uniform random samples and stacks them
in a Pandas dataframe.
End of explanation
labels = [1,2,3,4,5,6]
df['v']=pd.cut(df.u,np.linspace(0,1,7),
include_lowest=True,labels=labels)
Explanation: The next block uses pd.cut to map the individual samples to
the set $\left\{1,2,\ldots,6\right\}$ labeled v.
End of explanation
>>> df.head()
Explanation: This is what the dataframe contains. The v column contains
the samples drawn from the fair die.
End of explanation
>>> df.groupby('v').count()
Explanation: The following is a count of the number of samples in each group. There
should be roughly the same number of samples in each group because the die is fair.
End of explanation
df['v']=pd.cut(df.u,[0,1/12,2/12,3/12,2/4,3/4,1],
include_lowest=True,labels=labels)
>>> df.groupby('v').count()/df.shape[0]
Explanation: So far, so good. We now have a way to simulate a fair
die from a uniformly distributed random variable.
To extend this to an unfair die, we need only make some small adjustments to this
code. For example, suppose that we want an unfair die so that
$\mathbb{P}(1)=\mathbb{P}(2)=\mathbb{P}(3)=1/12$ and
$\mathbb{P}(4)=\mathbb{P}(5)=\mathbb{P}(6)=1/4$. The only change we have to
make is with pd.cut as follows,
End of explanation
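# An equivalent inverse-CDF lookup without pandas (an optional sketch):
# np.searchsorted maps each uniform sample to the first entry of the cumulative
# distribution that exceeds it, which implements the same segment assignment.
cdf = np.array([1/12, 2/12, 3/12, 2/4, 3/4, 1.0])
samples = np.searchsorted(cdf, np.random.rand(100000)) + 1
print(np.bincount(samples)[1:] / 100000.)  # roughly [1/12]*3 + [1/4]*3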
from numpy import array, log
import scipy.stats
alpha = 1. # distribution parameter
nsamp = 1000 # num of samples
# define uniform random variable
u=scipy.stats.uniform(0,1)
# define inverse function
Finv=lambda u: 1/alpha*log(1/(1-u))
# apply inverse function to samples
v = array(list(map(Finv,u.rvs(nsamp))))  # list(...) so NumPy builds a numeric array on Python 3
Explanation: where now these are the individual probabilities of each digit. You
can take more than 100 samples to get a clearer view of the individual
probabilities but the mechanism for generating them is the same. The method is
called the inverse CDF [^CDF] method because the CDF
(namely,$\texttt{[0,1/12,2/12,3/12,2/4,3/4,1]}$) in the last example has been
inverted (using the pd.cut method) to generate the samples.
The inversion is easier to see for continuous variables, which we consider
next.
[^CDF]: Cumulative distribution function. Namely, $F(x)=\mathbb{P}(X < x)$.
Inverse CDF Method for Continuous Variables
The method above applies to continuous random variables, but now we have to use
squeeze the intervals down to individual points. In the example above, our
inverse function was a piecewise function that operated on uniform random
samples. In this case, the piecewise function collapses to a continuous inverse
function. We want to generate random samples for a CDF that is invertible.
As before, the criterion for generating an appropriate sample $v$ is the
following,
$$
\mathbb{P}(F(x) < v < F(x+\Delta x)) = F(x+\Delta x) - F(x) = \int_x^{x+\Delta x} f(u) du \approx f(x) \Delta x
$$
which is saying that the probability that the sample $v$ is contained
in a $\Delta x$ interval is approximately equal to the density function, $f(x)
\Delta x$, at that point. Once again, the trick is to use a uniform random
sample $u$ and an invertible CDF $F(x)$ to construct these samples. Note
that for a uniform random variable $u \sim \mathcal{U}[0,1]$, we have,
$$
\begin{align}
\mathbb{P}(x < F^{-1}(u) < x+\Delta x) & = \mathbb{P}(F(x) < u < F(x+\Delta x)) \\
& = F(x+\Delta x) - F(x) \\
& = \int_x^{x+\Delta x} f(p) dp \approx f(x) \Delta x
\end{align}
$$
This means that $ v=F^{-1}(u) $ is distributed according to $f(x)$,
which is what we want.
Let's try this to generate samples from the
exponential distribution,
$$
f_{\alpha}(x) = \alpha e^{ -\alpha x }
$$
which has the following CDF,
$$
F(x) = 1-e^{ -\alpha x }
$$
and corresponding inverse,
$$
F^{-1}(u) = \frac{1}{\alpha}\ln \frac{1}{(1-u)}
$$
Now, all we have to do is generate some uniformly distributed
random samples and then feed them into $F^{-1}$.
End of explanation
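# A quick optional sanity check: the exponential distribution with rate alpha
# has mean 1/alpha and variance 1/alpha**2, so the sample moments of v should
# land close to those values.
print('sample mean: %.3f (expected %.3f)' % (v.mean(), 1/alpha))
print('sample variance: %.3f (expected %.3f)' % (v.var(), 1/alpha**2))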
%matplotlib inline
from matplotlib.pylab import setp, subplots
fig,ax = subplots()
fig.set_size_inches((7,5))
_=scipy.stats.probplot(v,(1,),dist='expon',plot=ax)
line=ax.get_lines()[0]
_=setp(line,'color','k')
_=setp(line,'alpha',.1)
line=ax.get_lines()[1]
_=setp(line,'color','gray')
_=setp(line,'lw',3.0)
_=setp(ax.yaxis.get_label(),'fontsize',18)
_=setp(ax.xaxis.get_label(),'fontsize',18)
_=ax.set_title('Probability Plot',fontsize=18)
_=ax.grid()
fig.tight_layout()
#fig.savefig('fig-probability/Sampling_Monte_Carlo_005.png')
fig,ax=subplots()
scipy.stats.probplot(v,(1,),dist='expon',plot=ax)
Explanation: Now, we have the samples from the exponential distribution, but how
do we know the method is correct with samples distributed accordingly?
Fortunately, scipy.stats already has an exponential distribution, so we can
check our work against the reference using a probability plot (i.e., also
known as a quantile-quantile plot). The following code sets up the
probability plot from scipy.stats.
End of explanation
import numpy as np
x = np.linspace(0.001,15,100)
f= lambda x: np.exp(-(x-1)**2/2./x)*(x+1)/12.
fx = f(x)
M=0.3 # scale factor
u1 = np.random.rand(10000)*15 # uniform random samples scaled out
u2 = np.random.rand(10000) # uniform random samples
idx,= np.where(u2<=f(u1)/M) # rejection criterion
v = u1[idx]
fig,ax=subplots()
fig.set_size_inches((9,5))
_=ax.hist(v,normed=1,bins=40,alpha=.3,color='gray')
_=ax.plot(x,fx,'k',lw=3.,label='$f(x)$')
_=ax.set_title('Estimated Efficiency=%3.1f%%'%(100*len(v)/len(u1)),
fontsize=18)
_=ax.legend(fontsize=18)
_=ax.set_xlabel('$x$',fontsize=24)
fig.tight_layout()
#fig.savefig('fig-probability/Sampling_Monte_Carlo_007.png')
Explanation: Note that we have to supply an axes object (ax) for it to draw on.
The result is Figure. The more the samples
line match the diagonal line, the more they match the reference distribution
(i.e., exponential distribution in this case). You may also want to try
dist=norm in the code above to see what happens when the normal distribution
is the reference distribution.
<!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_005.png, width=500 frac=0.85] The samples created using the inverse cdf method match the exponential reference distribution. <div id="fig:Sampling_Monte_Carlo_005"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_005"></div>
<p>The samples created using the inverse cdf method match the exponential reference distribution.</p>
<img src="fig-probability/Sampling_Monte_Carlo_005.png" width=500>
<!-- end figure -->
Rejection Method
In some cases, inverting the CDF may be impossible. The rejection
method can handle this situation. The idea is to pick two uniform random
variables $u_1,u_2 \sim \mathcal{U}[a,b]$ so that
$$
\mathbb{P}\left(u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{f(u_1)}{M} \right) \hspace{0.5em} \approx \frac{\Delta x}{b-a} \frac{f(u_1)}{M}
$$
where we take $x=u_1$ and $f(x) < M $. This is a two-step process.
First, draw $u_1$ uniformly from the interval $[a,b]$. Second, feed it into
$f(x)$ and if $u_2 < f(u_1)/M$, then you have a valid sample for $f(x)$. Thus,
$u_1$ is the proposed sample from $f$ that may or may not be rejected depending
on $u_2$. The only job of the $M$ constant is to scale down the $f(x)$ so that
the $u_2$ variable can span the range. The efficiency of this method is the
probability of accepting $u_1$ which comes from integrating out the above
approximation,
$$
\int \frac{f(x)}{M(b-a)} dx = \frac{1}{M(b-a)} \int f(x)dx =\frac{1}{M(b-a)}
$$
This means that we don't want an unnecessarily large $M$ because that
makes it more likely that samples will be discarded.
Let's try this method for a density that does not have a continuous inverse [^normalization].
[^normalization]: Note that this example density does not exactly integrate
out to one like a probability density function should, but the normalization
constant for this is distracting for our purposes here.
$$
f(x) = \exp\left(-\frac{(x-1)^2}{2x} \right) (x+1)/12
$$
where $x>0$. The following code implements the rejection plan.
End of explanation
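# Optional cross-check of the efficiency estimate (a sketch): since this f(x)
# is not exactly normalized (see the footnote), the acceptance probability is
# approximately its integral divided by M*(b-a) rather than 1/(M*(b-a)).
from scipy.integrate import quad
area, _ = quad(f, 0, 15)
print('predicted efficiency: %3.1f%%' % (100*area/(M*15)))
print('empirical efficiency: %3.1f%%' % (100*len(v)/len(u1)))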
fig,ax=subplots()
fig.set_size_inches((9,5))
_=ax.plot(u1,u2,'+',label='rejected',alpha=.3,color='gray')
_=ax.plot(u1[idx],u2[idx],'.',label='accepted',alpha=.3,color='k')
_=ax.legend(fontsize=22)
fig.tight_layout()
#fig.savefig('fig-probability/Sampling_Monte_Carlo_008.png')
Explanation: Figure shows a histogram of the
so-generated samples that nicely fits the probability density function. The
title in the figure shows the efficiency, which is poor. It means that we threw
away most of the proposed samples. Thus, even though there is nothing
conceptually wrong with this result, the low efficiency must be fixed, as a
practical matter. Figure shows where the
proposed samples were rejected. Samples under the curve were retained (i.e.,
$u_2 < \frac{f(u_1)}{M}$) but the vast majority of the samples are outside this
umbrella.
<!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_007.png, width=500 frac=0.75] The rejection method generate samples in the histogram that nicely match the target distribution. Unfortunately, the efficiency is not so good. <div id="fig:Sampling_Monte_Carlo_007"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_007"></div>
<p>The rejection method generates samples in the histogram that nicely match the target distribution. Unfortunately, the efficiency is not so good.</p>
<img src="fig-probability/Sampling_Monte_Carlo_007.png" width=500>
<!-- end figure -->
End of explanation
ch=scipy.stats.chi2(4) # chi-squared
h = lambda x: f(x)/ch.pdf(x) # h-function
fig,axs=subplots(1,2,sharex=True)
fig.set_size_inches(12,4)
_=axs[0].plot(x,fx,label='$f(x)$',color='k')
_=axs[0].plot(x,ch.pdf(x),'--',lw=2,label='$g(x)$',color='gray')
_=axs[0].legend(loc=0,fontsize=24)
_=axs[0].set_xlabel(r'$x$',fontsize=22)
_=axs[1].plot(x,h(x),'-k',lw=3)
_=axs[1].set_title('$h(x)=f(x)/g(x)$',fontsize=24)
_=axs[1].set_xlabel(r'$x$',fontsize=22)
fig.tight_layout()
#fig.savefig('fig-probability/Sampling_Monte_Carlo_009.png')
Explanation: <!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_008.png, width=500 frac=0.75] The proposed samples under the curve were accepted and the others were not. This shows the majority of samples were rejected. <div id="fig:Sampling_Monte_Carlo_008"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_008"></div>
<p>The proposed samples under the curve were accepted and the others were not. This shows the majority of samples were rejected.</p>
<img src="fig-probability/Sampling_Monte_Carlo_008.png" width=500>
<!-- end figure -->
The rejection method uses $u_1$ to select along the domain of $f(x)$ and the
other $u_2$ uniform random variable decides whether to accept or not. One idea
would be to choose $u_1$ so that $x$ values are coincidentally those that are
near the peak of $f(x)$, instead of uniformly anywhere in the domain,
especially near the tails, which are low probability anyway. Now, the trick is
to find a new density function $g(x)$ to sample from that has a similar
concentration of probability density. One way is to familiarize oneself with
the probability density functions that have adjustable parameters and fast random
sample generators already. There are lots of places to look and, chances are,
there is likely already such a generator for your problem. Otherwise, the
family of $\beta$ densities is a good place to start.
To be explicit, what we want is $u_1 \sim g(x)$ so that, returning to our
earlier argument,
$$
\mathbb{P}\left( u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{f(u_1)}{M} \right) \approx g(x) \Delta x \frac{f(u_1)}{M}
$$
but this is not what we need here. The problem is with the
second part of the logical $\bigwedge$ conjunction. We need to put
something there that will give us something proportional to $f(x)$.
Let us define the following,
<!-- Equation labels as ordinary links -->
<div id="eq:rej01"></div>
$$
\begin{equation}
h(x) = \frac{f(x)}{g(x)}
\end{equation}
\label{eq:rej01} \tag{2}
$$
with corresponding maximum on the domain as $h_{\max}$ and
then go back and construct the second part of the clause as
$$
\mathbb{P}\left(u_1 \in N_{\Delta}(x) \bigwedge u_2 < \frac{h(u_1)}{h_{\max}} \right) \approx g(x) \Delta x \frac{h(u_1)}{h_{\max}} = f(x)\Delta x/h_{\max}
$$
Recall that satisfying this criterion means that $u_1=x$. As before,
we can estimate the probability of acceptance of the $u_1$ as $1/h_{\max}$.
Now, how to construct the $g(x)$ function in the denominator of Equation
(2)? Here's where familiarity with some standard probability densities
pays off. For this case, we choose the chi-squared distribution. The following
plots the $g(x)$ and $f(x)$ (left plot) and the corresponding $h(x)=f(x)/g(x)$
(right plot). Note that $g(x)$ and $f(x)$ have peaks that almost coincide,
which is what we are looking for.
End of explanation
hmax=h(x).max()
u1 = ch.rvs(5000) # samples from chi-square distribution
u2 = np.random.rand(5000)# uniform random samples
idx = (u2 <= h(u1)/hmax) # rejection criterion
v = u1[idx] # keep these only
fig,ax=subplots()
fig.set_size_inches((7,3))
_=ax.hist(v,normed=1,bins=40,alpha=.3,color='gray')
_=ax.plot(x,fx,color='k',lw=3.,label='$f(x)$')
_=ax.set_title('Estimated Efficiency=%3.1f%%'%(100*len(v)/len(u1)))
_=ax.axis(xmax=15)
_=ax.legend(fontsize=18)
#fig.savefig('fig-probability/Sampling_Monte_Carlo_010.png')
Explanation: <!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_009.png, width=500 frac=0.95] The plot on the right shows $h(x)=f(x)/g(x)$ and the one on the left shows $f(x)$ and $g(x)$ separately. <div id="fig:Sampling_Monte_Carlo_009"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_009"></div>
<p>The plot on the right shows $h(x)=f(x)/g(x)$ and the one on the left shows $f(x)$ and $g(x)$ separately.</p>
<img src="fig-probability/Sampling_Monte_Carlo_009.png" width=500>
<!-- end figure -->
Now, let's generate some samples from this $\chi^2$
distribution with the rejection method.
End of explanation
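# Optional cross-check (a sketch): with the h(x)/hmax criterion the acceptance
# probability is approximately the integral of f divided by hmax (it would be
# exactly 1/hmax if f were normalized to one).
from scipy.integrate import quad
area, _ = quad(f, 0, np.inf)
print('predicted efficiency: %3.1f%%' % (100*area/hmax))
print('empirical efficiency: %3.1f%%' % (100*len(v)/len(u1)))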
fig,ax=subplots()
fig.set_size_inches((7,4))
_=ax.plot(u1,u2,'+',label='rejected',alpha=.3,color='gray')
_=ax.plot(u1[idx],u2[idx],'g.',label='accepted',alpha=.3,color='k')
_=ax.plot(x,h(x)/hmax,color='k',lw=3.,label='$h(x)$')
_=ax.legend(fontsize=16,loc=0)
_=ax.set_xlabel('$x$',fontsize=24)
_=ax.set_ylabel('$h(x)$',fontsize=24)
_=ax.axis(xmax=15,ymax=1.1)
fig.tight_layout()
#fig.savefig('fig-probability/Sampling_Monte_Carlo_011.png')
Explanation: <!-- dom:FIGURE: [fig-probability/Sampling_Monte_Carlo_010.png, width=500 frac=0.85] Using the updated method, the histogram matches the target probability density function with high efficiency. <div id="fig:Sampling_Monte_Carlo_010"></div> -->
<!-- begin figure -->
<div id="fig:Sampling_Monte_Carlo_010"></div>
<p>Using the updated method, the histogram matches the target probability density function with high efficiency.</p>
<img src="fig-probability/Sampling_Monte_Carlo_010.png" width=500>
<!-- end figure -->
Using the $\chi^2$ distribution with the rejection method results in throwing
away less than 10% of the generated samples compared with our prior example
where we threw out at least 80%. This is dramatically more
efficient. Figure shows that the histogram
and the probability density function match. For completeness, Figure shows the samples with the corresponding
threshold $h(x)/h_{\max}$ that was used to select them.
End of explanation
<END_TASK> |
15,784 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
An Exclusive Guide to Exclusion Limits
In this notebook we will place exclusion limits using Python; it's the companion piece to an introduction to exclusion limits we wrote that explains the theory and ideas behind them. We will be restricting ourselves to basic numerical packages such as numpy, scipy and matplotlib for visualisation, and the first step is to import them.
Step1: We need something to place a limit on, and in our first example we are looking for a bump in an otherwise smooth background. This could be a gamma-ray line from dark matter annihilation, or maybe a particle resonance at the LHC.
In this first toy example the shape of the background and the width and position of the signal bump are all kept fixed. The two parameters are then the normalisations of the signal bump and the smooth background.
Furthermore we define the data to be a measured number of events in $n=30$ energy bins, and define a data-generating function with the signal spectrum being a Gaussian at $E = 10$ with variance $\sigma^2 = 2$. The background is a power law with index $-1$.
Step2: We define a function to visualise two realisations of this model.
Step3: On the left the signal is buried below the background and is completely invisible, and placing an upper exclusion limit on the signal normalisation $\theta_s$ would be natural. In the right panel, however, the signal is large enough to be visible in the data, and an upper limit would not be the thing to do.
Defining the likelihood and the test statistic
The likelihood $\mathcal{L}(\theta_s, \theta_b \,|\, D)$ is simply the product of Poisson distributions for each bin. We define and work with $\ln \mathcal{L}$ as it is numerically better behaved.
Step4: Our test statistic of choice is the logarithm of the maximum likelihood ratio which we defined as
$$ \mathrm{TS}(\theta) = -2 \ln \frac{\max_{\nu} \mathcal{L}( \theta, \nu \,|\, D)}{\max_{\theta, \nu} \mathcal{L}( \theta, \nu \,|\, D)}$$
where $\theta$ are the parameter(s) we want to constrain, and $\nu$ are the remaining (nuisance) parameters. So if we wanted to constrain the normalisation of the signal, $\theta_s$, the test statistic to consider is
$$ \mathrm{TS}(\theta_s) = -2 \ln \frac{\max_{\theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
Step5: The optional bestfit argument is there to spare us the burden of finding the minimum every time we evaluate the test statistic. As always, it is useful to visualise, and as examples we take the two realisations we already plotted.
Step6: Where the black x marks the true parameter values of each of the data sets. We see that the test statistic is small in the region of the true values, not perfectly so as we have statistical noise.
We can also consider the full 2D case where
$$ \mathrm{TS}(\theta_s, \theta_b) = -2 \ln \frac{ \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
which is simply the likelihood normalised to its maximum.
Step7: where the white x marks the true parameter values of each of the data sets. We see that, as expected, the test statistic is smaller closer to the true value.
The white contour is simply us moving ahead of ourselves and plotting the $95$% confidence limit region using Wilks' theorem, i.e. by assuming that our TS is distributed as $\chi^2_{k=2}$. Ignore this for now.
Confidence Intervals
From the visualisation of the test statistics above we see that they are deep valleys close to the true values. This can be used to construct confidence intervals. The interval is simply
$$\{\, \theta \, | \, \mathrm{TS}(\theta) < c \, \}$$
where $c$ is a threshold value such that we have the desired coverage. To determine $c$ we need to know how the test statistic is distributed under the null, i.e. when $\theta$ is the true value. If it applies, Wilks' theorem states that it's asymptotically chi-squared distributed with as many degrees of freedom $k$ as parameters of interest. So for 1D and 2D we can compute
Step8: With this we can now find the 68% CL intervals of $\theta_s$ for our two examples.
Step9: Check coverage
To check that our procedure works we will do 100 experiments and see if the true value is covered at the correct frequency, e.g. 68ish times if we want 68% CL.
Step10: Not bad, with more experiments this should be better. Try it yourself!
Verifying that Wilks' theorem applies
We can verify that our test statistic is $\chi^2_{k=1}$ distributed by doing Monte Carlo experiments. We can then compare the empirical distribution of the test statistic with the chi-squared one.
Step11: As we can see, in this case the empirical distribution is well-described by a $\chi_{k=1}^2$ distribution.
Upper Limits
We've seen how the maximum likelihood test statistic leads to two-sided intervals, but our aim is to do exclusion limits. We modify our test statistic as follows.
$$\mathrm{TS}(\theta_s) =
\begin{cases}
\mathrm{TS}(\theta_s) & \quad \theta_s \geq \hat{\theta_s}\\
0 & \quad \text{elsewise}\\
\end{cases}
$$
This is for an upper limit on $\theta_s$.
Step12: This is now distributed as
$$\mathrm{TS}_{\mathrm{ul}} \sim \frac{1}{2}\delta(0) + \frac{1}{2} \chi^2_{\mathrm{df}=1}$$
from this expression we can determine the required threshold for various confidence levels. Again, the threshold for a $100n$% confidence level is simply
$$ \mathrm{CDF}(x) = n $$
Step13: We can now use a root finder to find for which $\theta_s$ our new TS has this threshold value, and this is our upper limit!
Step14: Check coverage for the upper limit
We perform a multitude of 'experiments' and investigate if the upper limit covers the true value as often as it should.
Step15: Again, not bad for the number of trials.
TODO
Extend the problem by letting the bump position vary.
Make a Brazil plot | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import poisson, norm, chi2
from scipy.optimize import minimize, brentq
import warnings; warnings.simplefilter('ignore') # ignore some numerical errors
Explanation: An Exclusive Guide to Exclusion Limits
In this notebook we will place exclusion limits using Python; it's the companion piece to an introduction to exclusion limits we wrote that explains the theory and ideas behind them. We will be restricting ourselves to basic numerical packages such as numpy, scipy and matplotlib for visualisation, and the first step is to import them.
End of explanation
E = np.logspace(0,2,30+1)
E = (E[1:] + E[:-1]) / 2 # bin centers
def signal_shape(E=E, mu=10, var=2):
return norm.pdf(E, mu, var)
def background_shape(E=E, gamma=-1):
return E**gamma
def generate_data(bkg_norm, sig_norm, sig_mu=10, sig_var=2, E=E, seed=None):
np.random.seed(seed)
return np.random.poisson(sig_norm * signal_shape(E, mu=sig_mu, var=sig_var) +
bkg_norm * background_shape(E))
Explanation: We need something to place a limit on, and in our first example we are looking for a bump in an otherwise smooth background. This could be a gamma-ray line from dark matter annihilation, or maybe a particle resonance at the LHC.
In this first toy example the shape of the background and the width and position of the signal bump are all kept fixed. The two parameters are then the normalisations of the signal bump and the smooth background.
Furthermore we define the data to be a measured number of events in $n=30$ energy bins, and define a data-generating function with the signal spectrum being a Gaussian at $E = 10$ with variance $\sigma^2 = 2$. The background is a power law with index $-1$.
End of explanation
def visualise_model(bkg_norm, sig_norm, sig_mu=10, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
x = np.logspace(0,2,200)
b = bkg_norm*background_shape(x)
s = sig_norm*signal_shape(x, mu=sig_mu)
ax.plot(x, b, label='Background')
ax.plot(x, s, label='Signal')
ax.plot(x, s+b, color='black', linestyle='dotted', label='S+B')
N = generate_data(bkg_norm, sig_norm, sig_mu=sig_mu)
ax.errorbar(E, N, yerr=np.sqrt(N), fmt='o', color='grey', label='Data')
ax.set_ylim(0.4, 2*np.maximum(s.max(), b.max()))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('E')
ax.set_ylabel('dN/dE')
ax.set_title(title)
ax.legend(frameon=False)
return N
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
data_small_sig = visualise_model(bkg_norm=1000, sig_norm=10, ax=axes[0], title='Small signal');
data_large_sig = visualise_model(bkg_norm=1000, sig_norm=600, ax=axes[1], title='Large signal');
Explanation: We define a function to visualise two realisations of this model.
End of explanation
def lnLike(bkg_norm, sig_norm, data, gamma=-1, mu=10, var=2):
s = sig_norm*signal_shape(mu=mu, var=var)
b = bkg_norm*background_shape(gamma=gamma)
return np.log(poisson.pmf(data, mu=s+b)).sum()
Explanation: On the left the signal is buried below the background and is completely invisible, and placing an upper exclusion limit on the signal normalisation $\theta_s$ would be natural. In the right panel, however, the signal is large enough to be visible in the data, and an upper limit would not be the thing to do.
Defining the likelihood and the test statistic
The likelihood $\mathcal{L}(\theta_s, \theta_b \,|\, D)$ is simply the product of Poisson distributions for each bin. We define and work with $\ln \mathcal{L}$ as it is numerically better behaved.
End of explanation
def TS_sig(sig_norm, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator.fun - bestfit.fun
Explanation: Our test statistic of choice is the logarithm of the maximum likelihood ratio which we defined as
$$ \mathrm{TS}(\theta) = -2 \ln \frac{\max_{\nu} \mathcal{L}( \theta, \nu \,|\, D)}{\max_{\theta, \nu} \mathcal{L}( \theta, \nu \,|\, D)}$$
where $\theta$ are the parameter(s) we want to constrain, and $\nu$ are the remaining (nuisance) parameters. So if we wanted to constrain the normalisation of the signal, $\theta_s$, the test statistic to consider is
$$ \mathrm{TS}(\theta_s) = -2 \ln \frac{\max_{\theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
End of explanation
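# A quick optional spot check: evaluated at the true parameter values the TS
# should be small, typically O(1), since it is roughly chi-squared distributed
# with one degree of freedom.
print('TS at truth (small signal): %.2f' % TS_sig(10, data_small_sig))
print('TS at truth (large signal): %.2f' % TS_sig(600, data_large_sig))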
def visualise_TS_sig(data, siglim=(0, 1000), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
x = np.linspace(*siglim, 100)
ts = np.array([TS_sig(s, data, bestfit=bestfit) for s in x])
ax.plot(x, ts)
ax.set_ylim(0,10)
ax.set_xlim(*siglim)
ax.set_title(title)
ax.set_xlabel('Signal Normalisation')
ax.set_ylabel('TS')
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
visualise_TS_sig(data_small_sig, siglim=(-90,130), ax=axes[0], title='Small signal')
axes[0].scatter(10, 0.5, color='black', marker='x')
visualise_TS_sig(data_large_sig, siglim=(400,720), ax=axes[1], title='Large signal');
axes[1].scatter(600, 0.5, color='black', marker='x');
Explanation: The optional bestfit argument is there to spare us the burden of finding the minimum every time we evaluate the test statistic. As always, it is useful to visualise, and as examples we take the two realisations we already plotted.
End of explanation
def TS_2d(bkg_norm, sig_norm, data, bestfit=None):
numerator = -2*lnLike(bkg_norm, sig_norm, data)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator - bestfit.fun
def visualise_TS_2d(data, siglim=(-100, 1000), bkglim=(800,1200), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
bkg_norms = np.linspace(*bkglim, 100)
sig_norms = np.linspace(*siglim, 100)
ts = [[TS_2d(b, s, data, bestfit=bestfit) for s in sig_norms] for b in bkg_norms]
X, Y = np.meshgrid(bkg_norms, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*bkglim)
ax.set_ylim(*siglim)
ax.set_xlabel('Background Normalisation')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=2, figsize=(12,4))
visualise_TS_2d(data_small_sig, ax=axes[0], title='Small signal')
axes[0].scatter(1000, 10, color='white', marker='x')
visualise_TS_2d(data_large_sig, ax=axes[1], title='Large signal')
axes[1].scatter(1000, 600, color='white', marker='x');
Explanation: Where the black x marks the true parameter values of each of the data sets. We see that the test statistic is small in the region of the true values, not perfectly so as we have statistical noise.
We can also consider the full 2D case where
$$ \mathrm{TS}(\theta_s, \theta_b) = -2 \ln \frac{ \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
which is simply the likelihood normalised to its maximum.
End of explanation
from functools import partial
def threshold(cl, cdf):
return brentq(lambda x: cl-cdf(x), 0, 10)
threshold_1d = partial(threshold, cdf=partial(chi2.cdf, df=1))
threshold_2d = partial(threshold, cdf=partial(chi2.cdf, df=2))
print('68%% and 95%% thresholds for 1D: %.3f and %.3f'
% tuple([threshold_1d(x) for x in [0.68, 0.95]]))
print('68%% and 95%% thresholds for 2D: %.3f and %.3f'
% tuple([threshold_2d(x) for x in [0.68, 0.95]]))
Explanation: where the white x marks the true parameter values of each of the data sets. We see that, as expected, the test statistic is smaller closer to the true value.
The white contour is simply us moving ahead of ourselves and plotting the $95$% confidence limit region using Wilks' theorem, i.e. by assuming that our TS is distributed as $\chi^2_{k=2}$. Ignore this for now.
Confidence Intervals
From the visualisation of the test statistics above we see that they are deep valleys close to the true values. This can be used to construct confidence intervals. The interval is simply
$$\{\, \theta \, | \, \mathrm{TS}(\theta) < c \, \}$$
where $c$ is a threshold value such that we have the desired coverage. To determine $c$ we need to know how the test statistic is distributed under the null, i.e. when $\theta$ is the true value. If it applies, Wilks' theorem states that it's asymptotically chi-squared distributed with as many degrees of freedom $k$ as parameters of interest. So for 1D and 2D we can compute
End of explanation
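# The same thresholds can be read off directly from the inverse CDF (an
# optional cross-check; chi2.ppf(0.95, df=2) is the 5.991 used for the
# white contour above).
print(chi2.ppf(0.95, df=1), chi2.ppf(0.95, df=2))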
def confidence_interval(data, CL=0.68, bestfit=None,
ts=TS_sig, threshold_fun=threshold_1d):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_fun(CL)
# Simple way to find starting points for the root finder.
# We need (a, b) for which TS-ts_threshold have different sign.
step = 10+bestfit.x[1]/2
u = bestfit.x[1] + step
while ts(u, data, bestfit=bestfit) <= threshold:
u += step
# The TS tend to be symmetrical which we can use to do a better initial guess.
l = 2*bestfit.x[1] - u
while ts(l, data, bestfit=bestfit) <= threshold:
l -= step
upper_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
bestfit.x[1], u)
lower_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
l, bestfit.x[1])
return lower_bound, upper_bound
print(confidence_interval(data_small_sig))
print(confidence_interval(data_large_sig))
Explanation: With this we can now find the 68% CL intervals of $\theta_s$ for our two examples.
End of explanation
def coverage_check(sig_norm, CL=0.68, bkg_norm=1000, n=100):
covered = 0
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
l, u = confidence_interval(d, CL=CL)
if l < sig_norm and u > sig_norm:
covered += 1
return covered/n
print('Coverage small signal: %.3f' % coverage_check(10))
print('Coverage large signal: %.3f' % coverage_check(600))
Explanation: Check coverage
To check that our procedure works we will do 100 experiments and see if the true value is covered at the correct frequency, e.g. 68ish times if we want 68% CL.
End of explanation
def mc(sig_norm, bkg_norm=1000, n=100):
ts = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
bf = minimize(lambda x: -2*lnLike(x[0], x[1], d), (1e3,1e3))
ts.append(TS_sig(sig_norm, d, bestfit=bf))  # evaluate the TS at the true value
return np.array(ts)
mc_small_signal = mc(sig_norm=10)
x = np.linspace(np.min(mc_small_signal), np.max(mc_small_signal), 100)
plt.hist(mc_small_signal, bins=20, normed=True, alpha=0.5, label='MC')
plt.plot(x, chi2.pdf(x, df=1), lw=4, label='chi2 df=1')
plt.legend(frameon=False)
plt.xlabel('TS');
Explanation: Not bad, with more experiments this should be better. Try it yourself!
Verifying that Wilks' theorem applies
We can verify that our test statistic is $\chi^2_{k=1}$ distributed by doing Monte Carlo experiments. We can then compare the empirical distribution of the test statistic with the chi-squared one.
End of explanation
def TS_upper_limit(sig_norm, data, bestfit=None):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
if sig_norm < bestfit.x[1]:
return 0.0
else:
return TS_sig(sig_norm, data, bestfit=bestfit)
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data_small_sig), (1e3,1e3))
x = np.linspace(-100, 100, 30)
y = [TS_sig(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y);
y = [TS_upper_limit(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y)
plt.xlabel('Signal Normalisation')
plt.ylabel('TS');
Explanation: As we can see, in this case the empirical distribution is well-described by a $\chi_{k=1}^2$ distribution.
Upper Limits
We've seen how the maximum likelihood test statistic leads to two-sided intervals, but our aim is to do exclusion limits. We modify our test statistic as follows.
$$\mathrm{TS}(\theta_s) =
\begin{cases}
\mathrm{TS}(\theta_s) & \quad \theta_s \geq \hat{\theta_s}\\
0 & \quad \text{elsewise}\\
\end{cases}
$$
This is for an upper limit on $\theta_s$.
End of explanation
threshold_ul = partial(threshold, cdf = lambda x: 0.5 + 0.5*chi2.cdf(x, df=1))
print('Threshold for 90%% CL upper limit: %.3f' % threshold_ul(0.90))
Explanation: This is now distributed as
$$\mathrm{TS}_{\mathrm{ul}} \sim \frac{1}{2}\delta(0) + \frac{1}{2} \chi^2_{\mathrm{df}=1}$$
from this expression we can determine the required threshold for various confidence levels. Again, the threshold for a $100n$% confidence level is simply
$$ \mathrm{CDF}(x) = n $$
End of explanation
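# Optional closed-form cross-check: solving 1/2 + 1/2*F_chi2(x) = CL for x
# gives x = chi2.ppf(2*CL - 1, df=1), which should match the brentq result.
print(chi2.ppf(2*0.90 - 1, df=1))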
def upper_limit(data, bestfit=None, CL=0.90):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_ul(CL)
return brentq(lambda x: TS_upper_limit(x, data, bestfit=bestfit)-threshold,
-1000, 1000)
print('90%% CL upper limit for small signal: %.2f' % upper_limit(data_small_sig))
print('90%% CL upper limit for large signal: %.2f' % upper_limit(data_large_sig))
Explanation: We can now use a root finder to find for which $\theta_s$ our new TS has this threshold value, and this is our upper limit!
End of explanation
def coverage_check_ul(sig_norm, CL=0.90, bkg_norm=1000, n=100):
upper_limits = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
upper_limits.append(upper_limit(d, CL=CL))
upper_limits = np.array(upper_limits)
not_excluded = (upper_limits >= sig_norm).sum()
return not_excluded/n
print('Coverage small signal: %.3f' % coverage_check_ul(10))
print('Coverage large signal: %.3f' % coverage_check_ul(600))
Explanation: Check coverage for the upper limit
We perform a multitude of 'experiments' and investigate if the upper limit covers the true value as often as it should.
End of explanation
def find_bestfit(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
start = points[np.argmin(ts),:]
return minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), start)
def TS_pos(sig_norm, sig_pos, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data, mu=sig_pos), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]),
(1e3, 1e3, 1e1))
return numerator.fun - bestfit.fun
def visualise_TS_pos(data, signormlim=(500, 1500), sigposlim=(1,100), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
def starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
return points[np.argmin(ts),:]
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), starting_point(data))
print(bestfit)
sig_pos = np.logspace(*np.log10(sigposlim), 50)
sig_norms = np.linspace(*signormlim, 100)
ts = [[TS_pos(n, p, data, bestfit=bestfit) for n in sig_norms] for p in sig_pos]
X, Y = np.meshgrid(sig_pos, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*sigposlim)
ax.set_ylim(*signormlim)
ax.set_xscale('log')
ax.set_xlabel('Signal Position')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(14,8))
for i, p in enumerate([3, 10, 20]):
d = visualise_model(bkg_norm=1000, sig_norm=1000, sig_mu=p,
ax=axes[0,i], title='Peak at E=%i' % p)
visualise_TS_pos(d, ax=axes[1,i], title='')
d = visualise_model(bkg_norm=1000, sig_norm=1000, sig_mu=20,
ax=axes[0,i], title='Peak at E=%i' % p)
def find_starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
i = np.argmin(ts)
return (i, points[i,:], np.min(ts))
find_starting_point(d)
#bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], d, mu=x[2]), (1e3,1e3,2e1),
# bounds=[(0, 2000)]*2+[(1,100)])
#print(bestfit)
Explanation: Again, not bad for the number of trials.
TODO
Extend the problem by letting the bump position vary.
Make a Brazil plot
End of explanation
<END_TASK> |
15,785 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Use TPUs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: TPU initialization
TPUs are typically Cloud TPU workers, which are different from the local process running the user's Python program. Thus, some initialization work needs to be done to connect to the remote cluster and initialize the TPUs. Note that the tpu argument to tf.distribute.cluster_resolver.TPUClusterResolver is a special address just for Colab. If you are running on Google Compute Engine (GCE), you should instead pass in the name of your Cloud TPU.
Note
Step3: Manual device placement
After the TPU is initialized, you can use manual device placement to place the computation on a single TPU device.
Step4: Distribution strategies
Usually, a model is run on multiple TPUs in parallel. To distribute a model on multiple TPUs (or other accelerators), TensorFlow offers several distribution strategies. You can swap in a different distribution strategy and the model will run on any given (TPU) device. Check the distribution strategy guide for more information.
To demonstrate this, create a tf.distribute.TPUStrategy object.
Step5: To replicate a computation so it can run in all TPU cores, pass it to the strategy.run API. In the example below, all the cores receive the same inputs (a, b) and perform the matrix multiplication on each core independently. The outputs are the values from all the replicas.
Step6: Classification on TPUs
Having covered the basic concepts, consider a more concrete example. This section demonstrates how to train a Keras model on a Cloud TPU using the tf.distribute.TPUStrategy distribution strategy.
Define a Keras model
Begin by defining a Sequential Keras model for image classification on the MNIST dataset. It is no different from what you would use when training on CPUs or GPUs. Note that the Keras model creation needs to happen inside strategy.scope, so that the variables are created on each TPU device. Other parts of the code do not need to be inside the strategy scope.
Step7: Load the dataset
Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU, since it is impossible to use Cloud TPUs unless you can feed them data quickly enough. See the input pipeline performance guide for details on dataset performance.
For all but the simplest experiments (using tf.data.Dataset.from_tensor_slices or other in-graph data), you need to store all data files read by the Dataset in Google Cloud Storage (GCS) buckets.
For most use cases, it is recommended to convert your data into the TFRecord format and use a tf.data.TFRecordDataset to read it. Check the TFRecord and tf.Example tutorial for how to do this. It is not a hard requirement, and you can use other dataset readers (tf.data.FixedLengthRecordDataset or tf.data.TextLineDataset) instead.
You can load entire small datasets into memory with tf.data.Dataset.cache.
Regardless of the data format used, it is strongly recommended to use large files on the order of 100 MB. This is especially important in this networked setting, since the overhead of opening a file is significantly higher.
As shown in the code below, use the tensorflow_datasets module to get a copy of the MNIST training data. try_gcs is specified so that a copy available in a public GCS bucket is used; without it, the TPU cannot access the downloaded data.
Step8: Train the model using Keras high-level APIs
You can train the model with the Keras fit and compile APIs. There is nothing TPU-specific in this step; you write the code just as you would when using multiple GPUs with a MirroredStrategy (rather than the TPUStrategy). You can learn more in the Distributed training with Keras tutorial.
Step9: To reduce Python overhead and maximize the performance of your TPU, pass the steps_per_execution argument to Model.compile. In this example, it increases throughput by about 50%.
Step12: Train the model using a custom training loop
You can also create and train a model using the tf.function and tf.distribute APIs directly. The strategy.experimental_distribute_datasets_from_function API is used to distribute a dataset given a dataset function. Note that in the example below the batch size passed to the dataset is the per-replica batch size, not the global batch size. See the Custom training with tf.distribute.Strategy tutorial for more.
First, create the model, the dataset, and the tf.function.
Step13: Then, run the training loop.
Step16: Improving performance with multiple steps inside tf.function
You can improve performance by running multiple steps within a tf.function. This is achieved by wrapping the strategy.run call in a tf.range inside the tf.function; AutoGraph converts it to a tf.while_loop on the TPU worker.
Despite the improved performance, there are tradeoffs compared to running a single step inside a tf.function. Running multiple steps in a tf.function is less flexible: you cannot run things eagerly or execute arbitrary Python code within the steps. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import os
import tensorflow_datasets as tfds
Explanation: Use TPUs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/tpu"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/tpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/tpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/tpu.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Before you run this Colab notebook, check your notebook settings under Runtime > Change runtime type > Hardware accelerator > TPU to confirm that the hardware accelerator is a TPU.
Setup
End of explanation
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
Explanation: TPU initialization
TPUs are typically Cloud TPU workers, which are different from the local process running the user's Python program. Thus, some initialization work is needed to connect to the remote cluster and initialize the TPUs. Note that the tpu argument to tf.distribute.cluster_resolver.TPUClusterResolver is a special address just for Colab. If you are running on Google Compute Engine (GCE), you should instead pass in the name of your Cloud TPU.
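For example, on GCE the resolver call might look like this (a minimal sketch, not part of the original notebook; 'my-tpu-name' is a hypothetical placeholder for your Cloud TPU's name):
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name')  # hypothetical TPU name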
Note: The TPU initialization code has to be at the beginning of your program.
End of explanation
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
with tf.device('/TPU:0'):
c = tf.matmul(a, b)
print("c device: ", c.device)
print(c)
Explanation: Manual device placement
After the TPU is initialized, you can use manual device placement to place the computation on a single TPU device.
End of explanation
strategy = tf.distribute.TPUStrategy(resolver)
Explanation: Distribution strategies
Usually, you run your model on multiple TPUs in parallel. To distribute your model on multiple TPUs (or other accelerators), TensorFlow offers several distribution strategies. You can replace your distribution strategy and the model will run on any given (TPU) device. See the distribution strategy guide for details.
To demonstrate this, create a tf.distribute.TPUStrategy object.
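As a sketch of how interchangeable these strategies are (assuming a multi-GPU machine rather than this notebook's TPU runtime), the same training code could swap in a different strategy:
strategy = tf.distribute.MirroredStrategy()  # multi-GPU alternative; not used in this notebook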
End of explanation
@tf.function
def matmul_fn(x, y):
z = tf.matmul(x, y)
return z
z = strategy.run(matmul_fn, args=(a, b))
print(z)
Explanation: To replicate a computation so it can run in all TPU cores, pass it into the strategy.run API. The example below shows all cores receiving the same inputs (a, b) and performing matrix multiplication on each core independently. The outputs will be the values from all the replicas.
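If you need the individual per-replica tensors rather than the combined value, one option (a sketch, assuming TF 2.x) is Strategy.experimental_local_results:
local_z = strategy.experimental_local_results(z)  # tuple with one tensor per TPU core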
End of explanation
def create_model():
return tf.keras.Sequential(
[tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(256, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)])
Explanation: Classification on TPUs
Having covered the basic concepts, consider a more concrete example. This section demonstrates how to train a Keras model on a Cloud TPU using the distribution strategy tf.distribute.TPUStrategy.
Define a Keras model
Start by defining a Sequential Keras model that performs image classification on the MNIST dataset using Keras. It is no different from what you would use when training on CPUs or GPUs. Note that Keras model creation needs to happen inside strategy.scope, so that the variables can be created on each TPU device. Other parts of the code do not need to be inside the strategy scope.
End of explanation
def get_dataset(batch_size, is_training=True):
split = 'train' if is_training else 'test'
dataset, info = tfds.load(name='mnist', split=split, with_info=True,
as_supervised=True, try_gcs=True)
# Normalize the input data.
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
dataset = dataset.map(scale)
# Only shuffle and repeat the dataset in training. The advantage of having an
# infinite dataset for training is to avoid the potential last partial batch
# in each epoch, so that you don't need to think about scaling the gradients
# based on the actual batch size.
if is_training:
dataset = dataset.shuffle(10000)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
return dataset
Explanation: Load the dataset
Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU, since it is impossible to use Cloud TPUs unless you can feed them data quickly enough. See the Input pipeline performance guide for details on dataset performance.
For all but the simplest experiments (using tf.data.Dataset.from_tensor_slices or other in-graph data), you need to store all data files read by the Dataset in Google Cloud Storage (GCS) buckets.
For most use cases, it is recommended to convert your data into the TFRecord format and use a tf.data.TFRecordDataset to read it; see the "TFRecord and tf.Example tutorial" for how to do this. This is not a hard requirement, and you can use other dataset readers (tf.data.FixedLengthRecordDataset or tf.data.TextLineDataset) instead.
Small datasets can be loaded entirely into memory with tf.data.Dataset.cache.
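A minimal sketch of those two options (not from the original notebook; 'gs://my-bucket/data.tfrecord' is a hypothetical path):
ds = tf.data.TFRecordDataset('gs://my-bucket/data.tfrecord')  # recommended reader for TFRecord files
ds = ds.cache()  # for small datasets, keep everything in memory after the first pass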
Regardless of the data format used, it is strongly recommended that you use large files on the order of 100 MB. This is especially important in this networked setting, since the overhead of opening a file is significantly higher.
As shown in the code below, you should use the tensorflow_datasets module to get a copy of the MNIST training data. try_gcs is specified to use a copy that is available in a public GCS bucket; if you don't specify this, the TPU will not be able to access the downloaded data.
End of explanation
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
batch_size = 200
steps_per_epoch = 60000 // batch_size
validation_steps = 10000 // batch_size
train_dataset = get_dataset(batch_size, is_training=True)
test_dataset = get_dataset(batch_size, is_training=False)
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
Explanation: Train the model using Keras high-level APIs
You can train your model with the Keras fit and compile APIs. There is nothing TPU-specific in this step; you write the code as if you were using multiple GPUs and a MirroredStrategy instead of the TPUStrategy. See the "Distributed training with Keras" tutorial for details.
End of explanation
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
# Anything between 2 and `steps_per_epoch` could help here.
steps_per_execution = 50,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
Explanation: To reduce Python overhead and maximize the performance of your TPU, pass the argument steps_per_execution to Model.compile. In this example, it increases throughput by about 50%.
End of explanation
# Create the model, optimizer and metrics inside the strategy scope, so that the
# variables can be mirrored on each device.
with strategy.scope():
model = create_model()
optimizer = tf.keras.optimizers.Adam()
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
# Calculate per replica batch size, and distribute the datasets on each TPU
# worker.
per_replica_batch_size = batch_size // strategy.num_replicas_in_sync
train_dataset = strategy.experimental_distribute_datasets_from_function(
lambda _: get_dataset(per_replica_batch_size, is_training=True))
@tf.function
def train_step(iterator):
  """The step function for one training step."""
def step_fn(inputs):
    """The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
strategy.run(step_fn, args=(next(iterator),))
Explanation: Train the model using a custom training loop
You can also create and train your model using the tf.function and tf.distribute APIs directly. The strategy.experimental_distribute_datasets_from_function API is used to distribute the dataset given a dataset function. Note that in the example below the batch size passed into the dataset is the per-replica batch size, not the global batch size. See the "Custom training with tf.distribute.Strategy" tutorial for details.
First, create the model, the dataset, and the tf.function.
End of explanation
steps_per_eval = 10000 // batch_size
train_iterator = iter(train_dataset)
for epoch in range(5):
print('Epoch: {}/5'.format(epoch))
for step in range(steps_per_epoch):
train_step(train_iterator)
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
training_loss.reset_states()
training_accuracy.reset_states()
Explanation: Then, run the training loop.
End of explanation
@tf.function
def train_multiple_steps(iterator, steps):
  """The step function for one training step."""
def step_fn(inputs):
    """The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
for _ in tf.range(steps):
strategy.run(step_fn, args=(next(iterator),))
# Convert `steps_per_epoch` to `tf.Tensor` so the `tf.function` won't get
# retraced if the value changes.
train_multiple_steps(train_iterator, tf.convert_to_tensor(steps_per_epoch))
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
Explanation: Improve performance with multiple steps inside tf.function
You can improve performance by running multiple steps within a tf.function. This is achieved by wrapping the strategy.run call with a tf.range inside the tf.function, which AutoGraph converts to a tf.while_loop on the TPU worker.
Despite the improved performance, there are trade-offs with this method compared to running a single step inside a tf.function. Running multiple steps in a tf.function is less flexible: you cannot run things eagerly or execute arbitrary Python code within the steps.
End of explanation
<END_TASK> |
15,786 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data
scikit-chem provides a simple interface to chemical datasets, and a framework for constructing these datasets. The data module uses fuel to make complex out of memory iterative functionality straightforward (see the fuel documentation). It also offers an abstraction to allow easy loading of smaller datasets, that can fit in memory.
In memory datasets
Datasets consist of sets and sources. Simply put, sets are collections of molecules in the dataset, and sources are types of data relating to these molecules.
For demonstration purposes, we will use the Bursi Ames dataset. This has 3 sets
Step1: And many sources
Step2: For this example, we will load the X_morg and the y sources for all the sets. These are circular fingerprints, and the target labels (in this case, whether the molecule was a mutagen).
We can load the data for requested sets and sources using the in memory API
Step3: The requested data is loaded as nested tuples, sorted first by set, and then by source, which can easily be unpacked as above.
Step4: The raw data is loaded as numpy arrays
Step5: Which should be ready to use as fuel for modelling!
Data as pandas objects
The data is originally saved as pandas objects, and can be retrieved as such using the read_frame class method.
Features are available under the 'feats' namespace
Step6: Target variables under 'targets'
Step7: Set membership masks under 'indices'
Step8: Finally, molecules are accessible via 'structure' | <ASSISTANT_TASK:>
Python Code:
skchem.data.BursiAmes.available_sets()
Explanation: Data
scikit-chem provides a simple interface to chemical datasets, and a framework for constructing these datasets. The data module uses fuel to make complex out of memory iterative functionality straightforward (see the fuel documentation). It also offers an abstraction to allow easy loading of smaller datasets, that can fit in memory.
In memory datasets
Datasets consist of sets and sources. Simply put, sets are collections of molecules in the dataset, and sources are types of data relating to these molecules.
For demonstration purposes, we will use the Bursi Ames dataset. This has 3 sets:
End of explanation
skchem.data.BursiAmes.available_sources()
Explanation: And many sources:
End of explanation
kws = {'sets': ('train', 'valid', 'test'), 'sources':('X_morg', 'y')}
(X_train, y_train), (X_valid, y_valid), (X_test, y_test) = skchem.data.BursiAmes.load_data(**kws)
Explanation: For this example, we will load the X_morg and the y sources for all the sets. These are circular fingerprints, and the target labels (in this case, whether the molecule was a mutagen).
We can load the data for requested sets and sources using the in memory API:
End of explanation
print('train shapes:', X_train.shape, y_train.shape)
print('valid shapes:', X_valid.shape, y_valid.shape)
print('test shapes:', X_test.shape, y_test.shape)
Explanation: The requested data is loaded as nested tuples, sorted first by set, and then by source, which can easily be unpacked as above.
End of explanation
X_train
y_train
Explanation: The raw data is loaded as numpy arrays:
End of explanation
skchem.data.BursiAmes.read_frame('feats/X_morg')
Explanation: Which should be ready to use as fuel for modelling!
Data as pandas objects
The data is originally saved as pandas objects, and can be retrieved as such using the read_frame class method.
Features are available under the 'feats' namespace:
End of explanation
skchem.data.BursiAmes.read_frame('targets/y')
Explanation: Target variables under 'targets':
End of explanation
skchem.data.BursiAmes.read_frame('indices/train')
Explanation: Set membership masks under 'indices':
End of explanation
skchem.data.BursiAmes.read_frame('structure')
Explanation: Finally, molecules are accessible via 'structure':
End of explanation
<END_TASK> |
15,787 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Deriving coefficients for the implicit scheme
The ice sheet energy balance model uses an implicit scheme to solve the
heat equation for $N$ layers. It uses the Crank-Nicholson scheme to discretise
the equations. For an equation in one space dimension $x$
$\frac{df}{dt} = F$,
the Crank-Nicholson scheme discretises the equation as
$\frac{f^{i+1}_x - f^{i}_x}{\Delta t} = 0.5\left [ F^{i+1}_x + F^{i}_x \right ]$
where the superscript is time and the subscript is space.
Step1: The coefficients for the $i+1$ temperature (predicted) are
Step2: The coefficients for the $i$ temperature (current) are | <ASSISTANT_TASK:>
Python Code:
from sympy import *
init_printing()
tnew_x = Symbol('T^{i+1}_x')
tnew_xprev = Symbol('T^{i+1}_{x-1}')
tnew_xafter = Symbol('T^{i+1}_{x+1}')
told_x = Symbol('T^{i}_x')
told_xprev = Symbol('T^{i}_{x-1}')
told_xafter = Symbol('T^{i}_{x+1}')
u_x = Symbol('\kappa_x')
u_xprev = Symbol('\kappa_{x-1}')
u_xafter = Symbol('\kappa_{x+1}')
delta_t = Symbol('\Delta t')
delta_x = Symbol('\Delta x')
told_x, u_xprev, tnew_xafter, delta_x
lhs = (tnew_x - told_x)/delta_t
lhs # The time derivative
rhs_new = 0.5*(u_x*(tnew_xprev - 2*tnew_x + tnew_xafter)/delta_x**2 +
((tnew_x - tnew_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_old = 0.5*(u_x*(told_xprev - 2*told_x + told_xafter)/delta_x**2 +
((told_x - told_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_new, rhs_old # The two parts of the crank-nicholson RHS.
expr = lhs - rhs_new - rhs_old
expr
poly_form = Poly(expr, tnew_x, tnew_xafter, tnew_xprev, told_x, told_xafter, told_xprev)
poly_form
Explanation: Deriving coefficients for the implicit scheme
The ice sheet energy balance model uses an implicit scheme to solve the
heat equation for $N$ layers. It uses the Crank-Nicholson scheme to discretise
the equations. For an equation in one space dimension $x$
$\frac{df}{dt} = F$,
the Crank-Nicholson scheme discretises the equation as
$\frac{f^{i+1}_x - f^{i}_x}{\Delta t} = 0.5\left [ F^{i+1}_x + F^{i}_x \right ]$
where the superscript is time and the subscript is space.
End of explanation
(poly_form.coeff_monomial(tnew_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_x)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_xafter)*delta_t).simplify()
Explanation: The coefficients for the $i+1$ temperature (predicted) are
End of explanation
-(poly_form.coeff_monomial(told_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(told_x)*-delta_t).simplify(), -(poly_form.coeff_monomial(told_xafter)*delta_t).simplify()
Explanation: The coefficients for the $i$ temperature (current) are
End of explanation
<END_TASK> |
15,788 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below
Step1: Exercise 03 - Due Friday, September 30 at 12pm
Objectives
Step2: Verify that it's what we think it is on the commandline
Step3: Looks right. How many records are there?
Step4: So that should be 145, counting the header. And the basic stats
Step5: Looks about right!
Note, though, that the column names have spaces, punctuation, and Upper Cased Names. That's annoying! First let's rename the header line.
Step6: Okay, using output redirection and tail we can write a new header line.
Step7: Much easier to work with now.
Next we convert the updated csv file into a SQLite database using CSVkit.
Step8: To work with it, we'll need the ipython-sql extension loaded, and then we'll need to connect to the db.
Step9: Note if you see a pink box above with six lines of "ShimWarning" and "UserWarning", don't panic. This is just a warning message from the ipython-sql package. You are good to go.
Step10: Looks like the same number of rows! We're good to go.
Basic queries
In the following queries, we'll do some basic exploration of the data. Let's first see what a few records look like.
Step11: This should look familiar!
Let's look at just the "change" types.
Step12: How many is that, and which year had the most?
Step13: Which year had the most of these transactions?
Step14: ...alright, your turn.
Question 1
Use DISTINCT to determine the unique set of years in this dataset.
Step15: Question 2
Use DISTINCT to determine the unique set of transaction types in this dataset.
Step16: Question 3
Use GROUP BY to determine the overall number of transactions (across all types) per year.
Step17: Question 4
Use SUM and GROUP BY to determine the overall number of transactions (across all years) per type.
Step18: Question 5
Use ORDER BY and LIMIT to determine the top five types of transactions overall.
Step19: Question 6
Using ORDER BY and LIMIT again, what were the top five types of transactions in 2010?
Step20: Question 7
Use a wildcard search to determine how many transactions in 2012 were renewals.
Step21: Question 8
How many transactions overall involve canoes?
Step22: Question 9
How many transactions in 2011 involved a new registration (as opposed to a renewal or change of type)?
Step23: Question 10
How do the transaction trends over time involving pontoons compare to overall boating transaction activity? Discuss as appropriate, adding Markdown cells for your discussion after your exploratory queries. | <ASSISTANT_TASK:>
Python Code:
NAME = "dchud"
COLLABORATORS = ""
Explanation: Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
End of explanation
!wget --quiet -O boating.csv "https://data.ct.gov/api/views/mrb6-7ee5/rows.csv?accessType=DOWNLOAD"
Explanation: Exercise 03 - Due Friday, September 30 at 12pm
Objectives: Gain experience loading a CSV dataset into a database and using SQL to explore its contents. Write and execute a number of SQL queries using common syntax and functions.
Grading criteria: The tasks should all be completed, and questions should all be answered with SQL queries in the space provided, unless a text answer is requested. Results should be correct, according to the embedded tests. The notebook itself should be completely reproducible; from start to finish, another person should be able to use the same code to obtain the same results as yours.
Deadline: Friday, September 30, 12pm. Submit your notebook to Blackboard and push it to your GitHub repository.
In this notebook we'll download a clean CSV dataset from data.gov, load it into a SQLite database, and perform a series of queries to answer several questions. For each problem, write and execute queries that provides the answer in the cells provided, with your SQL queries in the places marked. For each problem after executing your query, immediately execute the following test cell. If the tests complete without error, you have completed that question successfully. If errors arise, you might be missing something. Do not change the tests, just execute them as they are, and update or refine your query until the tests pass.
For this assignment, you need not add narrative description to most of your queries (except where explicitly noted), although you may do so if something you see in the data prompts you. If you do, add new text cells and use Markdown formatting.
Suggestion: if you have worked through the Software Carpentry SQL lessons and have run through the last two lecture notes notebooks, this should all be fairly easy. If you have done neither, do them now, before you begin.
Setup - obtain data and create database
The Connecticut DMV Boating Registrations dataset comprises several years of summary records. It is available from data.gov.
First we download the dataset (note: it might take several seconds):
End of explanation
!head boating.csv | csvlook
Explanation: Verify that it's what we think it is on the commandline:
End of explanation
!wc -l boating.csv
Explanation: Looks right. How many records are there?
End of explanation
!csvstat boating.csv
Explanation: So that should be 145, counting the header. And the basic stats:
End of explanation
!mv boating.csv boating-orig.csv
Explanation: Looks about right!
Note, though, that the column names have spaces, punctuation, and Upper Cased Names. That's annoying! First let's rename the header line.
End of explanation
!echo "year,tx_type,num" > boating.csv
!tail -n +2 boating-orig.csv >> boating.csv
!head boating.csv | csvlook
Explanation: Okay, using output redirection and tail we can write a new header line.
End of explanation
!csvsql --db sqlite:///boating.db --insert boating.csv
Explanation: Much easier to work with now.
Next we convert the updated csv file into a SQLite database using CSVkit.
End of explanation
%load_ext sql
Explanation: To work with it, we'll need the ipython-sql extension loaded, and then we'll need to connect to the db.
End of explanation
%sql sqlite:///boating.db
%%sql
SELECT COUNT(*)
FROM boating;
Explanation: Note if you see a pink box above with six lines of "ShimWarning" and "UserWarning", don't panic. This is just a warning message from the ipython-sql package. You are good to go.
End of explanation
%%sql
SELECT *
FROM boating
LIMIT 10;
Explanation: Looks like the same number of rows! We're good to go.
Basic queries
In the following queries, we'll do some basic exploration of the data. Let's first see what a few records look like.
End of explanation
%%sql
SELECT *
FROM boating
WHERE tx_type = "BOAT CHANGE OF TYPE";
Explanation: This should look familiar!
Let's look at just the "change" types.
End of explanation
%%sql
SELECT COUNT(*)
FROM boating
WHERE tx_type = "BOAT CHANGE OF TYPE";
Explanation: How many is that, and which year had the most?
End of explanation
%%sql
SELECT *
FROM boating
WHERE tx_type = "BOAT CHANGE OF TYPE"
ORDER BY num DESC;
Explanation: Which year had the most of these transactions?
End of explanation
%%sql
SELECT year
FROM boating;
df = _.DataFrame()
for y in [2008, 2009, 2010, 2011, 2012, 2013, 2014]:
assert y in df.year.values
assert len(df) == 7
Explanation: ...alright, your turn.
Question 1
Use DISTINCT to determine the unique set of years in this dataset.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
assert len(df) == 21
for tx_type in ["BOAT, DUPLICATE REGISTRATION", "REN BOAT PONTOON", "BOAT REG, CANOE"]:
assert tx_type in df.tx_type.values
Explanation: Question 2
Use DISTINCT to determine the unique set of transaction types in this dataset.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
df.rename(columns=lambda x: "year" if x == "year" else "count", inplace=True)
assert (df.loc[df['year'] == 2009]['count'] == 21).all()
assert (df.loc[df['year'] == 2013]['count'] == 20).all()
Explanation: Question 3
Use GROUP BY to determine the overall number of transactions (across all types) per year.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
df.rename(columns=lambda x: "tx_type" if x == "tx_type" else "sum", inplace=True)
assert (df.loc[df['tx_type'] == "BOAT REG, CANOE"]['sum'] == 1129).all()
assert (df.loc[df['tx_type'] == "REN BOAT REGISTRATION"]['sum'] == 640790).all()
Explanation: Question 4
Use SUM and GROUP BY to determine the overall number of transactions (across all years) per type.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
df.rename(columns=lambda x: "tx_type" if x == "tx_type" else "sum_num", inplace=True)
assert len(df) == 5
assert (df.loc[df['tx_type'] == "REN BOAT PONTOON"]['sum_num'] == 15556).all()
assert (df.loc[df['tx_type'] == "REN BOAT REGISTRATION"]['sum_num'] == 640790).all()
assert df['sum_num'].max() == 640790
assert df['sum_num'].min() == 2390
Explanation: Question 5
Use ORDER BY and LIMIT to determine the top five types of transactions overall.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
assert len(df) == 5
assert (df.loc[df['tx_type'] == "REN BOAT PONTOON"]['num'] == 2118).all()
assert (df.loc[df['tx_type'] == "BOAT REG, NEW"]['num'] == 12569).all()
assert df['num'].max() == 94005
assert df['num'].min() == 555
Explanation: Question 6
Using ORDER BY and LIMIT again, what were the top five types of transactions in 2010?
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
assert df.values[0][0] in [7, 9]
Explanation: Question 7
Use a wildcard search to determine how many transactions in 2012 were renewals.
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
assert df.values[0][0] == 14
Explanation: Question 8
How many transactions overall involve canoes?
End of explanation
-- YOUR CODE HERE
df = _.DataFrame()
assert df.values[0][0] in [12071, 14466]
Explanation: Question 9
How many transactions in 2011 involved a new registration (as opposed to a renewal or change of type)?
End of explanation
-- YOUR CODE HERE
Explanation: Question 10
How do the transaction trends over time involving pontoons compare to overall boating transaction activity? Discuss as appropriate, adding Markdown cells for your discussion after your exploratory queries.
End of explanation
<END_TASK> |
15,789 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Integration Exercise 2
Imports
Step1: Indefinite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
$$ I_1 = \int_0^a \sqrt{a^2 - x^2} dx = \frac{\pi a^2}{4}$$
Step3: Integral 2
$$ I_2 = \int_0^{\frac{\pi}{2}} \sin^2 x dx = \frac{\pi}{4} $$
Step4: Integral 3
$$ I_3 = \int_0^\infty e^{-ax} cos(bx) dx = \frac{a}{a^2+b^2} $$
Step5: Integral 4
$$ I_4 = \int_0^\infty e^{-ax^2 - b/x^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} e^{-2\sqrt{ab}} $$
Step6: Integral 5
$$ I_5 = \int_0^\infty \frac{x}{\sinh ax} dx = \frac{\pi^2}{4a^2}$$ | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
def X(x):
return x**2
I,e=integrate.quad(X,0,3)
I
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx funciton that uses scipy.integrate.quad to peform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
def integrand(x,a):
return (a**2-x**2)**(1/2)
def integral_approx(a):
I1,e1=integrate.quad(integrand,0,a,args=(a,))
return I1
def integral_exact(a):
return (np.pi*a**2)/4
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
$$ I_1 = \int_0^a \sqrt{a^2 - x^2} dx = \frac{\pi a^2}{4}$$
End of explanation
def integrand(x):
return np.sin(x)**2
def integral_approx():
I2,e2=integrate.quad(integrand,0,np.pi/2)
return I2
def integral_exact():
return np.pi/4
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Explanation: Integral 2
$$ I_2 = \int_0^{\frac{\pi}{2}} \sin^2 x dx = \frac{\pi}{4} $$
End of explanation
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integral_approx(a,b):
I3,e3=integrate.quad(integrand,0,np.inf,args=(a,b,))
return I3
def integral_exact(a,b):
return a/(a**2+b**2)
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact : ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 3
$$ I_3 = \int_0^\infty e^{-ax} cos(bx) dx = \frac{a}{a^2+b^2} $$
End of explanation
def integrand(x,a,b):
return np.exp(-a*x**2-(b/x**2))
def integral_approx(a,b):
I4,e4=integrate.quad(integrand,0,np.inf,args=(a,b,))
return I4
def integral_exact(a,b):
return (1/2)*(np.pi/a)**(1/2)*np.exp(-2*(a*b)**(1/2))
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact : ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 4
$$ I_4 = \int_0^\infty e^{-ax^2 - b/x^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} e^{-2\sqrt{ab}} $$
End of explanation
def integrand(x,a):
return x/(np.sinh(a*x))
def integral_approx(a):
I5,e5=integrate.quad(integrand,0,np.inf,args=(a,))
return I5
def integral_exact(a):
return np.pi**2/4*a**2
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 5
$$ I_5 = \int_0^\infty \frac{x}{\sinh ax} dx = \frac{\pi^2}{4a^2}$$
End of explanation
<END_TASK> |
15,790 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Mie Scattering Efficiencies
Scott Prahl
Jan 2022
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
Step1: When a monochromatic plane wave is incident on a sphere, it scatters and absorbs light depending on the properties of the light and sphere. The sphere has radius $r$ and index of refraction $m=m_\mathrm{re}- j\,m_\mathrm{im}$. The sphere size parameter $x=2\pi x/\lambda$ where $\lambda$ is the wavelength of the plane wave in a vacuum.
Step2: Efficiencies
miepython.mie(m,x) calculates three dimensionless efficiencies for a sphere with complex index of refraction $m$ and dimensionless size parameter $x$
Step3: Scattering and absorption coefficients
The scattering cross section may be related to the transmission of a beam
through a dispersion of scatterers of equal size. For $\rho$ particles per
unit volume, the attenuation due to scattering is
$$
-\frac{dI}{dx} = \rho \sigma_\mathrm{sca} I
$$
The transmission is
$$
T = I/I_0 = \exp(-\rho \sigma_\mathrm{sca} x) = \exp(-\mu_s x)
$$
and the coefficients for a sphere with radius r is
$$
\mu_\mathrm{sca} = \rho \sigma_\mathrm{sca} = \rho \pi r^2 Q_\mathrm{sca}
$$
$$
\mu_\mathrm{ext} = \rho \sigma_\mathrm{ext} = \rho \pi r^2 Q_\mathrm{ext}
$$
$$
\mu_\mathrm{abs} = \rho \sigma_\mathrm{abs} = \rho \pi r^2 (Q_\mathrm{ext}-Q_\mathrm{sca})
$$
Kerker, p. 38.
Backscattering Cross Section
For plane-wave radiation incident on a scattering object or a scattering medium, the ratio of the intensity [W/sr] scattered in the direction toward the source to the incident irradiance [W/area].
So defined, the backscattering cross section has units of area per unit solid angle.
In common usage, synonymous with radar cross section, although this can be confusing because the radar cross section is $4\pi$ times the backscattering cross section as defined above and has units of area.
If $Q_{sca}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{sca}$ [area]
$$
\sigma_\mathrm{sca} = \pi r^2 Q_{sca}
$$
Thus if $Q_{back}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{back}$ [area]
$$
\sigma_\mathrm{back} = \pi r^2 Q_{back}
$$
Now the phase function is normalized to one ($S_1(\theta)$ has units of sr$^{-0.5}$)
$$
\int_{4\pi} \frac{|S_1(\theta)|^2+|S_2(\theta)|^2)}{2}\,d\Omega =1
$$
Now since
$$
|S_1(-180^\circ)|^2=|S_2(-180^\circ)|^2=|S_1(180^\circ)|^2=|S_2(180^\circ)|^2
$$
The differential scattering cross section [area/sr] in the backwards direction will be
$$
\left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|{180^\circ} =\sigma\mathrm{sca} |S_1(-180^\circ)|^2
$$
and the backscattering cross section will be $4\pi$ times this
$$
\sigma_\mathrm{back} = 4\pi \left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|{180^\circ} = 4\pi \sigma\mathrm{sca} |S_1(-180^\circ)|^2
$$
Step4: Efficiencies
To create a non-dimensional quantity, the scattering efficiency may be defined as
$$
Q_\mathrm{sca} = \frac{\sigma_\mathrm{sca}}{ \pi r^2}
$$
where the scattering cross section is normalized by the geometric cross section. Thus when the scattering efficiency is unity, then the portion of the incident plane wave that is affected is equal to the cross sectional area of the sphere.
Similarly the absorption efficiency
$$
Q_\mathrm{abs} = \frac{\sigma_\mathrm{abs}}{ \pi r^2}
$$
And finally the extinction cross section is
$$
Q_{ext}=Q_{sca}+Q_{abs}
$$
where $Q_{sca}$ is the scattering efficiency and $Q_{abs}$ is the absorption
efficiency. $Q_{sca}$ and $Q_{ext}$ are determined by the
Mie scattering program and $Q_{abs}$ is obtained by subtraction.
Step5: Radiation Pressure
The radiation pressure is given by [e.g., Kerker, p. 94]
$$
Q_\mathrm{pr}=Q_\mathrm{ext}-g Q_\mathrm{sca}
$$
and is the momentum given to the scattering particle [van de Hulst, p. 13] in the direction of the incident wave. The radiation pressure cross section $\sigma_\mathrm{pr}$ is just the efficiency multiplied by the geometric cross section
$$
\sigma_\mathrm{pr} = \pi r^2 Q_\mathrm{pr}
$$
The radiation pressure cross section $\sigma_\mathrm{pr}$ can be interpreted as the area of a black wall that would receive the same force from the same incident wave. The actual force on the particle is
$$
F = E_0 \frac{\sigma_\mathrm{pr}}{c}
$$
where $E_0$ is the irradiance (W/m$^2$) on the sphere and $c$ is the velocity of the radiation in the medium. If the irradiance has N photons per geometric cross section ($\pi r^2$) then this can be rewritten as
$$
F = N \frac{h}{\lambda} \sigma_\mathrm{pr} = N \cdot \mbox{(photon momentum)} \cdot \sigma_\mathrm{pr}
$$
Step6: Graph of backscattering efficiency
van de Hulst has a nice graph of backscattering efficiency that we can replicate | <ASSISTANT_TASK:>
Python Code:
#!pip install --user miepython
import importlib.resources
import numpy as np
import matplotlib.pyplot as plt
try:
import miepython
except ModuleNotFoundError:
print('miepython not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
Explanation: Mie Scattering Efficiencies
Scott Prahl
Jan 2022
If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)
End of explanation
# import the Johnson and Christy data for silver
# ag = np.genfromtxt('https://refractiveindex.info/tmp/data/main/Ag/Johnson.txt', delimiter='\t')
nname = "data/ag-Johnson.txt"
ref = importlib.resources.files('miepython').joinpath(nname)
ag = np.genfromtxt(ref, delimiter='\t')
# data is stacked so need to rearrange
N = len(ag)//2
ag_lam = ag[1:N,0]
ag_mre = ag[1:N,1]
ag_mim = ag[N+1:,1]
plt.scatter(ag_lam*1000,ag_mre,s=2,color='blue')
plt.scatter(ag_lam*1000,ag_mim,s=2,color='red')
plt.xlim(300,800)
plt.ylim(0,5)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Refractive Index')
plt.text(350, 1.2, '$m_{re}$', color='blue', fontsize=14)
plt.text(350, 2.2, '$m_{im}$', color='red', fontsize=14)
plt.title('Complex Refractive Index of Silver')
plt.show()
Explanation: When a monochromatic plane wave is incident on a sphere, it scatters and absorbs light depending on the properties of the light and sphere. The sphere has radius $r$ and index of refraction $m=m_\mathrm{re}- j\,m_\mathrm{im}$. The sphere size parameter $x=2\pi x/\lambda$ where $\lambda$ is the wavelength of the plane wave in a vacuum.
End of explanation
r = 0.3 #radius in microns
geometric_cross_section = np.pi * r**2
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
absorb = (qext - qsca) * geometric_cross_section
scatt = qsca * geometric_cross_section
extinct = qext * geometric_cross_section
plt.plot(ag_lam*1000,absorb,color='blue')
plt.plot(ag_lam*1000,scatt,color='red')
plt.plot(ag_lam*1000,extinct,color='green')
plt.text(350, 0.35,'$\sigma_{abs}$', color='blue', fontsize=14)
plt.text(350, 0.54,'$\sigma_{sca}$', color='red', fontsize=14)
plt.text(350, 0.84,'$\sigma_{ext}$', color='green', fontsize=14)
plt.xlabel("Wavelength (nm)")
plt.ylabel("Cross Section (1/microns$^2$)")
plt.title("Cross Sections for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.show()
Explanation: Efficiencies
miepython.mie(m,x) calculates three dimensionless efficiencies for a sphere with complex index of refraction $m$ and dimensionless size parameter $x$:
$Q_{ext}$ the extinction efficiency
$Q_{sca}$ the scattering efficiency
$Q_{back}$ the back-scattering efficiency
as well as the dimensionless average cosine of the scattering angle
$g$ scattering anisotropy.
Cross Sections
Scattering and absorption cross sections $\sigma$ have units of area and can be obtained from the efficiencies by multiplying by the geometric cross section $\pi r^2$ of the sphere.
$$
\sigma_\mathrm{sca} = \pi r^2 Q_\mathrm{sca}
$$
$$
\sigma_\mathrm{ext} = \pi r^2 Q_\mathrm{ext}
$$
$$
\sigma_\mathrm{back} = \pi r^2 Q_\mathrm{back}
$$
For example, the scattering cross section $\sigma_\mathrm{sca}$ is effective area of a the incident plane wave that interacts and produces scattered light.
Since some of the incident light may be absorbed (when $m_\mathrm{im}$ is non-zero) then there is also an area of the incident wave that is absorbed $\sigma_\mathrm{abs}$.
$$
Q_\mathrm{ext} = Q_\mathrm{abs}+Q_\mathrm{sca}
$$
and so
$$
\sigma_\mathrm{abs} = \sigma_\mathrm{ext}-\sigma_\mathrm{sca}
$$
End of explanation
lambda0 = 1 # microns
a = lambda0/10 # also microns
k = 2*np.pi/lambda0 # per micron
m = 1.5
x = a * k
geometric_cross_section = np.pi * a**2
theta = np.linspace(-180,180,180)
mu = np.cos(theta/180*np.pi)
s1,s2 = miepython.mie_S1_S2(m,x,mu)
phase = (abs(s1[0])**2+abs(s2[0])**2)/2
print(' unpolarized =',phase)
print(' |s1[-180]|**2 =',abs(s1[0]**2))
print(' |s2[-180]|**2 =',abs(s2[0]**2))
print(' |s1[ 180]|**2 =',abs(s1[179]**2))
print(' |s2[ 180]|**2 =',abs(s2[179]**2))
print()
qext, qsca, qback, g = miepython.mie(m,x)
Cback = qback * geometric_cross_section
Csca = qsca * geometric_cross_section
print(' Csca =',Csca)
print(' Cback =',Cback)
print('4*pi*Csca*p(180) =',4*np.pi*Csca*phase)
Explanation: Scattering and absorption coefficients
The scattering cross section may be related to the transmission of a beam
through a dispersion of scatterers of equal size. For $\rho$ particles per
unit volume, the attenuation due to scattering is
$$
-\frac{dI}{dx} = \rho \sigma_\mathrm{sca} I
$$
The transmission is
$$
T = I/I_0 = \exp(-\rho \sigma_\mathrm{sca} x) = \exp(-\mu_s x)
$$
and the coefficients for a sphere with radius r is
$$
\mu_\mathrm{sca} = \rho \sigma_\mathrm{sca} = \rho \pi r^2 Q_\mathrm{sca}
$$
$$
\mu_\mathrm{ext} = \rho \sigma_\mathrm{ext} = \rho \pi r^2 Q_\mathrm{ext}
$$
$$
\mu_\mathrm{abs} = \rho \sigma_\mathrm{abs} = \rho \pi r^2 (Q_\mathrm{ext}-Q_\mathrm{sca})
$$
Kerker, p. 38.
Backscattering Cross Section
For plane-wave radiation incident on a scattering object or a scattering medium, the ratio of the intensity [W/sr] scattered in the direction toward the source to the incident irradiance [W/area].
So defined, the backscattering cross section has units of area per unit solid angle.
In common usage, synonymous with radar cross section, although this can be confusing because the radar cross section is $4\pi$ times the backscattering cross section as defined above and has units of area.
If $Q_{sca}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{sca}$ [area]
$$
\sigma_\mathrm{sca} = \pi r^2 Q_{sca}
$$
Thus if $Q_{back}$ [unitless] is the backscattering efficiency then the scattering cross section $\sigma_\mathrm{back}$ [area]
$$
\sigma_\mathrm{back} = \pi r^2 Q_{back}
$$
Now the phase function is normalized to one ($S_1(\theta)$ has units of sr$^{-0.5}$)
$$
\int_{4\pi} \frac{|S_1(\theta)|^2+|S_2(\theta)|^2)}{2}\,d\Omega =1
$$
Now since
$$
|S_1(-180^\circ)|^2=|S_2(-180^\circ)|^2=|S_1(180^\circ)|^2=|S_2(180^\circ)|^2
$$
The differential scattering cross section [area/sr] in the backwards direction will be
$$
\left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|{180^\circ} =\sigma\mathrm{sca} |S_1(-180^\circ)|^2
$$
and the backscattering cross section will be $4\pi$ times this
$$
\sigma_\mathrm{back} = 4\pi \left. \frac{d\sigma_\mathrm{sca}}{d\Omega}\right|{180^\circ} = 4\pi \sigma\mathrm{sca} |S_1(-180^\circ)|^2
$$
End of explanation
r = 0.3 #radius in microns
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(ag_lam*1000,qext - qsca,color='blue')
plt.plot(ag_lam*1000,qsca,color='red')
plt.plot(ag_lam*1000,qext,color='green')
plt.text(350, 1.2,'$Q_{abs}$', color='blue', fontsize=14)
plt.text(350, 1.9,'$Q_{sca}$', color='red', fontsize=14)
plt.text(350, 3.0,'$Q_{ext}$', color='green', fontsize=14)
plt.xlabel("Wavelength (nm)")
plt.ylabel("Efficiency (-)")
plt.title("Mie Efficiencies for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.show()
Explanation: Efficiencies
To create a non-dimensional quantity, the scattering efficiency may be defined as
$$
Q_\mathrm{sca} = \frac{\sigma_\mathrm{sca}}{ \pi r^2}
$$
where the scattering cross section is normalized by the geometric cross section. Thus when the scattering efficiency is unity, then the portion of the incident plane wave that is affected is equal to the cross sectional area of the sphere.
Similarly the absorption efficiency
$$
Q_\mathrm{abs} = \frac{\sigma_\mathrm{abs}}{ \pi r^2}
$$
And finally the extinction cross section is
$$
Q_{ext}=Q_{sca}+Q_{abs}
$$
where $Q_{sca}$ is the scattering efficiency and $Q_{abs}$ is the absorption
efficiency. $Q_{sca}$ and $Q_{ext}$ are determined by the
Mie scattering program and $Q_{abs}$ is obtained by subtraction.
End of explanation
r = 0.3 #radius in microns
x = 2*np.pi*r/ag_lam;
m = ag_mre - 1.0j * ag_mim
qext, qsca, qback, g = miepython.mie(m,x)
qpr = qext - g*qsca
plt.plot(ag_lam*1000,qpr,color='blue')
plt.xlabel("Wavelength (nm)")
plt.ylabel("Efficiency $Q_{pr}$ (-)")
plt.title("Radiation Pressure Efficiency for %.1f$\mu$m Silver Spheres" % (r*2))
plt.xlim(300,800)
plt.ylim(1,2.5)
plt.show()
Explanation: Radiation Pressure
The radiation pressure is given by [e.g., Kerker, p. 94]
$$
Q_\mathrm{pr}=Q_\mathrm{ext}-g Q_\mathrm{sca}
$$
and is the momentum given to the scattering particle [van de Hulst, p. 13] in the direction of the incident wave. The radiation pressure cross section $\sigma_\mathrm{pr}$ is just the efficiency multiplied by the geometric cross section
$$
\sigma_\mathrm{pr} = \pi r^2 Q_\mathrm{pr}
$$
The radiation pressure cross section $\sigma_\mathrm{pr}$ can be interpreted as the area of a black wall that would receive the same force from the same incident wave. The actual force on the particle is
$$
F = E_0 \frac{\sigma_\mathrm{pr}}{c}
$$
where $E_0$ is the irradiance (W/m$^2$) on the sphere and $c$ is the velocity of the radiation in the medium. If the irradiance has N photons per geometric cross section ($\pi r^2$) then this can be rewritten as
$$
F = N \frac{h}{\lambda} \sigma_\mathrm{pr} = N \cdot \mbox{(photon momentum)} \cdot \sigma_\mathrm{pr}
$$
End of explanation
x = np.linspace(0.1,4,50)
m = 3.41-1.94j
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(x,qback)
plt.text(0.6,0,"m=3.41-1.94j")
m = 10000
qext, qsca, qback, g = miepython.mie(m,x)
plt.plot(x,qback)
plt.text(1.2,3.0,"m=10,000")
plt.xlabel("Size Parameter")
plt.ylabel(r"$Q_{back}$")
plt.title("van de Hulst Figure 61")
plt.grid(True)
plt.show()
Explanation: Graph of backscattering efficiency
van de Hulst has a nice graph of backscattering efficiency that we can replicate
End of explanation
<END_TASK> |
15,791 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exercise numpy
The ultimate goal of this exercise is to compare the position of stars in a patch of sky as measured in two different surveys. The main task at hand is to identify matching positions of stars between the surveys. For this, we will need to compare the positions of all stars in one survey to the position of all stars in the other survey. This task can be extremely time consuming if not implemented properly, we will therefore use this to compare different coding style and their impact on computation time.
If time allows, we will move on to represent the results of our analysis in a meaningfull way.
Step1: Check that the loaded data are consistent with what we expect
Step2: To begin with, let's write a function that returns the algebraic distance between two points
Step3: Now let's test it by comparing the distance between the first point of each dataset.
Step4: Let's take it one step further and compare the distance between one point and a set of points
Step5: Let's test it on the first 10 points in the HSC catalog and the first point of the HST catalog
Step6: Now let's get to work. We would like to associate stars in one survey to their conterpart (if it exists) in the other survey. We will start by comparing the positions between each point of one survey to the position of each point in the other survey.
First, write a function that takes two sets of coordinates (hsc and hst) and returns the distance from each point of one survey to each point of the other, such that the output should be an array of size (n_hst x n_hsc) or (n_hsc x n_hst).
PS
Step7: Now, let us take a look at the computation times
Step8: Identifying matching stars (optional)
Now that we know all the distances, let us find the stars in each datasets that correspond to one another.
This is done by finding, for each star, the minimum distance recorded between the two datasets.
One problem that arises with deriving an array that computes all the distances is that we end up with a very LARGE array, which becomes impractical for fast computations. Instead, we will modify one of the previous functions so that it returns the coordinates of stars that have a match in both datasets along with their distance.
Because all stars in a given set do not have a counter part in the other, we will only accept a match if the minimum distance between two points is smaller than 0.17 arcseconds (the size of an HSC pixel).
In other words, for each star in one dataset, find the star in the other dataset that is the closest (minimum distance), check wether that star is closer that 0.17 arcseconds, if yes, store its coordinates along with the computed distance. At the end of the function, return arrays with the matching star coordinates and their distance to their match in the other dataset.
Step9: Now I would like to have a representation for the work we have done that informs me about what is in my datasets. Namely, what is the error on star positions between the two datasets? We would like to have a global view of this error but also an impression of the error as a function of the position on the field. For the latter, I suggest you use the 'scatter' function from matplotlib. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt #We might need this
#First, let us load the data
#Catalog from HSC
cat_hsc = np.loadtxt('./Catalog_HSC.csv')
x_hsc = cat_hsc[:,0]
y_hsc = cat_hsc[:,1]
#Catalog from HST
cat_hst = np.loadtxt('./Catalog_HST.csv')
x_hst = cat_hst[:,0]
y_hst = cat_hst[:,1]
Explanation: Exercise numpy
The ultimate goal of this exercise is to compare the position of stars in a patch of sky as measured in two different surveys. The main task at hand is to identify matching positions of stars between the surveys. For this, we will need to compare the positions of all stars in one survey to the position of all stars in the other survey. This task can be extremely time consuming if not implemented properly, we will therefore use this to compare different coding style and their impact on computation time.
If time allows, we will move on to represent the results of our analysis in a meaningfull way.
End of explanation
#First, check the number of stars in each survey:
ns_hst = #fill in
ns_hsc = #...
#Print the result
print()
#This is a graphic representation of our data content:
%matplotlib qt
plt.title('star catalogs in COSMOS')
plt.plot(x_hsc, y_hsc, 'or', label = 'hsc catalog')
plt.plot(x_hst, y_hst, 'ob', label = 'hst catalog')
plt.legend()
plt.xlabel('ra')
plt.ylabel('dec')
plt.show()
Explanation: Check that the loaded data are consistent with what we expect: (ra, dec) coordinates of the same patch of sky
End of explanation
def distance(point1, point2):
''' Returns the distance between two points with coordinates (x,y).
Parameters
----------
point1: list
2D coordinates of a point
point2: list
2D coordinates of a point
Returns
-------
d: float
the distance between point1 and point2
'''
return
Explanation: To begin with, let's write a function that returns the algebraic distance between two points
End of explanation
point1 = [x_hst[0], y_hst[0]]
point2 = [x_hsc[0], y_hsc[0]]
print(distance(point1, point2))
# Answer should be 0.6648994838877168
Explanation: Now let's test it by comparing the distance between the first point of each dataset.
End of explanation
def point_to_points_distance(point, coordinates):
''' Returns the distance between one point and all the points in coordinates.
Parameters
----------
point: list
2D coordinates of a point
coordinates: list
set of N 2D coordinates stored in a list with shape Nx2
Returns
-------
d: list
the distance between point and each point in coordinates in an array with size N
'''
#Declaring an empty list
d = []
for c in coordinates:
# for each point in coordinates, take the distance to point and concatenate it to d
d.append(distance(point, c))
#make d a numpy array and return it
return np.array(d)
Explanation: Let's take it one step further and compare the distance between one point and a set of points
End of explanation
coords = np.concatenate((x_hsc[:10,None], y_hsc[:10,None]), axis = 1)
print(point_to_points_distance(point1, coords))
# The answer should look like [0.66489948 0.4628197 0.39672485 0.43854084 0.32165335 0.30223269
# 0.65765909 0.65411548 0.6474303 0.79301678]
Explanation: Let's test it on the first 10 points in the HSC catalog and the first point of the HST catalog
End of explanation
def your_function(coord1, coord2): # Choose an adequate name for your function
''' Returns the distance between points in two sets of coordinates.
Parameters
coord1: array
array of size Nx2 that contains the [x, y] positions of a catalog
coord2: array
array of size Mx2 that contains the [x, y] positions of a catalog
Returns
dist: array
array of size NxM that contains the euclidean distances between points in the two datasets
'''
return
Explanation: Now let's get to work. We would like to associate stars in one survey to their conterpart (if it exists) in the other survey. We will start by comparing the positions between each point of one survey to the position of each point in the other survey.
First, write a function that takes two sets of coordinates (hsc and hst) and returns the distance from each point of one survey to each point of the other, such that the output should be an array of size (n_hst x n_hsc) or (n_hsc x n_hst).
PS: if you have several (different) ideas about how to implement this, feel free to code them!
End of explanation
# In order not to spend the whole evening here, let us reduce the dataset size:
#Select stars in hsc in the frame: 150.0<x<150.1 and 2.0<y<2.1
loc_hsc = #please fill these
x_hsc_exp = x_hsc[loc_hsc]
y_hsc_exp = y_hsc[loc_hsc]
loc_hst = #And that
x_hst_exp = x_hst[loc_hst]
y_hst_exp = y_hst[loc_hst]
#Once you are done with the exercise, feel free to try with larger selections to see how it impacts computation time
import distances as dt
# Insert the names of your functions in the following array:
methods = [your_function, dt.double_loop, dt.with_indices, dt.one_loop, dt.one_loop_reverse, dt.scipy_version, dt.newaxis_magic]
#An empty variable to store computation time
timers = []
# Making sets of coordinates of size Nx2 to feed your functions with the right format
c2 = np.concatenate((x_hst_exp[:,None], y_hst_exp[:,None]), axis = 1)
c1 = np.concatenate((x_hsc_exp[:,None], y_hsc_exp[:,None]), axis = 1)
for f in methods:
print(f.__name__)
r = %timeit -o f(c1, c2)
timers.append(r)
#View the results:
plt.figure(figsize=(10,6))
plt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=True) # Set log to True for logarithmic scale
plt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)
plt.xlabel('Method')
plt.ylabel('Time (ms)')
plt.yscale('log')
plt.show()
Explanation: Now, let us take a look at the computation times:
End of explanation
#Let us compute the distances as we did before, but this time, with the whole dataset.
#Of course, a fast method is to be prefered
c1 = #Please fill these. Same as before but with all the dataset
c2 = #
def get_match(coord_ref, coord2, rad):
'''
matches coordinates of stars between two datasets and computes the distance between the position of the stars in the 2 datasets
Parameters
coord_ref: numpy array (Nx2)
coordinates (ra, dec) of stars in a FoV from a given dataset
coord2: numpy array (Mx2)
coordinates (ra dec) of stars in the same FoV in an other dataset
rad: float
radius (deg) around stars in coord_ref where to find a corresponding star in coord2
Returns
modulus:numpy array (N')
containing the distance between matching stars
v_coord: numpy array(N',2)
coordinates in the coord_ref set of matching stars
'''
#Declare two empty arrays to store the coordinates and distances.
#...
s = np.size(coord_ref[:,0])#This is just for representation
print('number of points in reference catalog: {0}'.format(s))
#for each star in coord_ref
for i,c in enumerate(coord_ref):
#This is just here to keep track of the algorithm's progression
if i % 3000 == 0:
print('point number {0} out of {1}'.format(i, s))
#compute the distance from c to all stars in coord2
r = #...
#Find the closest star from coord 2 to c
loc = #...
#Make sure that there is only one star matching (it can happen that 2 match)
#Here I just arbitrarily pick one, but you can find a way to discard these stars
if np.size(loc) > 1:
loc = loc[0]
#record the distance between matching stars
rmin = #...
#Check whether the closest distance is smaller than rad
if #...:
#if yes, place the coordinates and the distance in an array
#... tip: use append()
return #...
# Use your function
coord, r = get_match(c1, c2, 0.3/3600.)
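# For reference, a hedged sketch of how the blanks inside get_match might be
# filled (one approach among several):
# r = np.sqrt(((coord2 - c) ** 2).sum(axis=1))   # distance from c to every star in coord2
# loc = np.where(r == r.min())[0]                # index (or indices) of the closest star
# rmin = r[loc]                                  # the minimum distance itself
# if rmin < rad: append c and rmin to the output lists; finally return them as arrays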
Explanation: Identifying matching stars (optional)
Now that we know all the distances, let us find the stars in each datasets that correspond to one another.
This is done by finding, for each star, the minimum distance recorded between the two datasets.
One problem with computing the full distance array at once is that we end up with a very LARGE array, which becomes impractical for fast computations. Instead, we will modify one of the previous functions so that it returns the coordinates of stars that have a match in both datasets, along with their distance.
Because not all stars in a given set have a counterpart in the other, we will only accept a match if the minimum distance between two points is smaller than 0.17 arcseconds (the size of an HSC pixel).
In other words, for each star in one dataset, find the star in the other dataset that is the closest (minimum distance), check whether that star is closer than 0.17 arcseconds, and if so, store its coordinates along with the computed distance. At the end of the function, return arrays with the matching star coordinates and their distance to their match in the other dataset.
End of explanation
#Spatial distribution of distances
plt.title('distribution of distances accross the FoV')
#...
#Global representation
#...
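# A possible sketch for both views (assumes coord is an (N,2) array of matched
# positions and r holds the matched distances in degrees):
# plt.scatter(coord[:, 0], coord[:, 1], c=r * 3600., cmap='viridis')
# plt.colorbar(label='offset (arcsec)')
# plt.figure(); plt.hist(r * 3600., bins=50); plt.xlabel('offset (arcsec)')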
Explanation: Now I would like a representation of the work we have done that tells us what is in the datasets. Namely, what is the error on star positions between the two surveys? We would like a global view of this error, but also an impression of how it varies with position on the field. For the latter, I suggest you use the 'scatter' function from matplotlib.
End of explanation
<END_TASK> |
15,792 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
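# For illustration only (hypothetical values, not real authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")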
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
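# For illustration only (a hypothetical choice from the list above):
# DOC.set_value("OASIS3-MCT")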
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
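# For illustration only (hypothetical):
# DOC.set_value(False)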
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
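# For illustration only (hypothetical metrics; for 0.N properties, presumably
# one set_value call per item):
# DOC.set_value("global mean surface air temperature")
# DOC.set_value("top-of-atmosphere radiative balance")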
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
<END_TASK> |
15,793 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Step2: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
Step3: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
Step4: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
Step5: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples | <ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
'''returns relative error'''
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
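# For reference, a minimal sketch of inverted dropout under the common
# convention that p is the keep probability (check your assignment's
# definition; the real implementation belongs in cs231n/layers.py, and the
# name _dropout_forward_sketch is ours):
def _dropout_forward_sketch(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        # Zero out units with probability (1 - p), rescale survivors by 1/p
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        # Test time: identity, thanks to the train-time 1/p rescaling
        mask, out = None, x
    return out, (dropout_param, mask)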
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
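# A matching sketch for the backward pass (same caveats as above):
def _dropout_backward_sketch(dout, cache):
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        return dout * mask  # gradient flows only through the kept units
    return dout             # test-time dropout is the identity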
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
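Schematically, the change to loss() amounts to interleaving a dropout layer after each ReLU (a hedged sketch — the helper names follow the assignment's usual conventions and may differ in your code):
# inside FullyConnectedNet.loss(), per hidden layer:
h, fc_cache = affine_relu_forward(h, W, b)
if self.use_dropout:
    h, do_cache = dropout_forward(h, self.dropout_param)
# ...and the backward pass applies dropout_backward before affine_relu_backward.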
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation
<END_TASK> |
15,794 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | <ASSISTANT_TASK:>
Python Code:
def list_of_chars(list_chars):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a function to reverse a string (a list of characters), in-place.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Can I assume the string is ASCII?
Yes
Note: Unicode strings could require special handling depending on your language
Since we need to do this in-place, it seems we cannot use the slice operator or the reversed function?
Correct
Since Python strings are immutable, can I use a list of characters instead?
Yes
Test Cases
None -> None
[''] -> ['']
['f', 'o', 'o', ' ', 'b', 'a', 'r'] -> ['r', 'a', 'b', ' ', 'o', 'o', 'f']
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
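For comparison once you have attempted it yourself, one possible in-place solution uses two indices that walk inward and swap symmetric pairs (a sketch, not necessarily the solution notebook's version):
def list_of_chars_sketch(list_chars):
    if list_chars is None:
        return None
    left, right = 0, len(list_chars) - 1
    while left < right:
        # Swap the symmetric pair, then move both indices inward.
        list_chars[left], list_chars[right] = list_chars[right], list_chars[left]
        left += 1
        right -= 1
    return list_chars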
# %load test_reverse_string.py
from nose.tools import assert_equal
class TestReverse(object):
def test_reverse(self):
assert_equal(list_of_chars(None), None)
assert_equal(list_of_chars(['']), [''])
assert_equal(list_of_chars(
['f', 'o', 'o', ' ', 'b', 'a', 'r']),
['r', 'a', 'b', ' ', 'o', 'o', 'f'])
print('Success: test_reverse')
def main():
test = TestReverse()
test.test_reverse()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
<END_TASK> |
15,795 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to SQLite &
Selecting Sources from the Sloan Digital Sky Survey
Version 0.1
By AA Miller 2019 Mar 25
As noted earlier, there will be full lectures on databases over the remainder of this week.
This notebook provides a quick introduction to SQLite, a lightweight implementation of a Structured Query Language (SQL) database. One of the incredibly nice things about SQLite is the low overhead needed to set up a database (as you will see in a minute). We will take advantage of this low overhead to build a database later in the week.
Step1: At the most basic level - databases store your bytes, and later return those bytes (or a subset of them) when queried.
They provide a highly efficient means for filtering your bytes (there are many different strategies that the user can employ).
The backend for most databases is the Structured Query Language or SQL, which is a standard declarative language.
There are many different libraries that implement SQL
Step2: Without diving too much into the weeds (we'll investigate this further later this week), we need to establish a connection to the database. From the connection we create a cursor, which allows us to actually interact with the database.
Step4: And just like that - we have now created a new database intro.db, with which we can "store bytes" or later "retrieve bytes" once we have added some data to the database.
Aside - note that unlike many SQL libraries, SQLite does not require a server and creates an actual database file on your hard drive. This improves portability, but also creates some downsides as well.
Now we need to create a table and insert some data. We will interact with the database via the execute() method for the cursor object.
Recall that creating a table requires a specification of the table name, the columns in the table, and the data type for each column. Here's an example where I create a table to store info on my pets
Step8: Once a table is created, we can use the database to store bytes. If I were to populate my PetInfo table I would do the following
Step12: Now that we have bytes in the database, we can retrieve those bytes with one (or several) queries. There are 3 basic building blocks to a query
Step13: In closing this brief introduction to databases, note that good databases follow the 4 ACID properties
Step15: astroquery enables seemless connections to the SDSS database via the Python shell.
Problem 2a
Select 20 random sources from the PhotoObjAll table and return all columns in the table.
Hint - while this would normally be accomplished by starting the query select limit 20 ..., SDSS CasJobs uses Microsoft's SQL Server, which adopts select top 20 ... to accomplish an identical result.
Step17: That's more columns than we will likely ever need. Instead, let's focus on objID, a unique identifier, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, and cModelMag_z, the source magnitude in $u', g', r', i', z'$, respectively.
We will now introduce the concept of joining two tables.
The most common operation is known as an inner join (which is often referred to as just join). An inner join returns records that have matching sources in both tables in the join.
Less, but nevertheless still powerful, is the outer join. An outer join returns all records in either table, with NULL values for columns in a table in which the record does not exist.
Specialized versions of the outer join include the left join and right join, whereby all records in either the left or right table, respectively, are returned along with their counterparts.
Problem 2b
Select objid and $u'g'r'i'z'$ from PhotoObjAll and the corresponding class from specObjAll for 20 random sources.
There are multiple columns you could use to join the tables; in this case, match objid to bestobjid from specObjAll and use an inner join.
Step19: Problem 2c
Perform an identical query to the one above, but this time use a left outer join (or left join).
How do your results compare to the previous query?
Step21: Problem 2d
This time use a right outer join (or right join).
How do your results compare to the previous query?
Step23: Challenge Problem
To close the notebook we will perform a nested query. In brief, the idea is to join the results of one query with a separate query.
Here, we are going to attempt to identify bright AGN that don't have SDSS spectra. To do so we will need the photoObjAll table, the specObjAll table, and the rosat table, which includes all cross matches between SDSS sources and X-ray sources detected by the Rosat satellite.
Create a nested query that selects all Rosat sources that don't have SDSS spectra with cModelFlux_u + cModelFlux_g + cModelFlux_r + cModelFlux_i + cModelFlux_z > 10000 (this flux constraint ensures the source is bright without making any cuts on color) and type = 3; this last constraint means the source is extended in SDSS images.
Hint - you may run into timeout issues, in which case you should run the query on CasJobs.
Python Code:
import matplotlib.pyplot as plt
%matplotlib notebook
Explanation: Introduction to SQLite &
Selecting Sources from the Sloan Digital Sky Survey
Version 0.1
By AA Miller 2019 Mar 25
As noted earlier, there will be full lectures on databases over the remainder of this week.
This notebook provides a quick introduction to SQLite, a lightweight implementation of a Structured Query Language (SQL) database. One of the incredibly nice things about SQLite is the low overhead needed to set up a database (as you will see in a minute). We will take advantage of this low overhead to build a database later in the week.
End of explanation
import sqlite3
Explanation: At the most basic level - databases store your bytes, and later return those bytes (or a subset of them) when queried.
They provide a highly efficient means for filtering your bytes (there are many different strategies that the user can employ).
The backend for most databases is the Structured Query Language or SQL, which is a standard declarative language.
There are many different libraries that implement SQL: MySQL, PostgreSQL, Greenplum, Microsoft SQL server, IBM DB2, Oracle Database, etc.
Problem 1) Basic SQL Operations with SQLite
The most basic implementation is SQLite a self-contained, SQL database engine. We will discuss SQLite further later in the week, but in brief - it is a nice stand alone package that works really well for small problems (such as the example that we are about to encounter).
End of explanation
conn = sqlite3.connect("intro.db")
cur = conn.cursor()
Explanation: Without diving too much into the weeds (we'll investigate this further later this week), we need to establish a connection to the database. From the connection we create a cursor, which allows us to actually interact with the database.
End of explanation
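One practical aside: SQLite wraps writes in a transaction, so changes made through the cursor should be committed (and the connection eventually closed) to be durable on disk — for example:
# after making changes:
conn.commit()
# ...and when completely finished with the database:
# conn.close()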
cur.execute( # complete
cur.execute("""create table DSFPstudents(
               Name text,
               Institution text,
               Year tinyint
               )""")
Explanation: And just like that - we have now created a new database intro.db, with which we can "store bytes" or later "retrieve bytes" once we have added some data to the database.
Aside - note that unlike many SQL libraries, SQLite does not require a server and creates an actual database file on your hard drive. This improves portability, but also creates some downsides as well.
Now we need to create a table and insert some data. We will interact with the database via the execute() method for the cursor object.
Recall that creating a table requires a specification of the table name, the columns in the table, and the data type for each column. Here's an example where I create a table to store info on my pets:
cur.execute("""create table PetInfo(
Name text,
Species text,
Age tinyint,
FavoriteFood text
)""")
Problem 1a
Create a new table in the database called DSFPstudents with columns Name, Institution, and Year, where Year is the year in graduate school.
End of explanation
cur.execute( # complete
cur.execute("""insert into DSFPstudents(Name, Institution, Year)
               values ("Adam Miller", "Northwestern", 13)""")
cur.execute("""insert into DSFPstudents(Name, Institution, Year)
               values ("Lucianne Walkowicz", "Adler", 14)""")
Explanation: Once a table is created, we can use the database to store bytes. If I were to populate my PetInfo table I would do the following:
cur.execute("""insert into PetInfo(Name, Species, Age, FavoriteFood)
values ("Rocky", "Dog", 12, "Bo-Nana")""")
cur.execute("""insert into PetInfo(Name, Species, Age, FavoriteFood)
values ("100 Emoji-Flames Emoji", "Red Panda", 2, "bamboo leaves")""")
Note - column names do not need to be explicitly specified, but for clarity this is always preferred.
Problem 1b
Insert data for yourself and the two people sitting next to you into the database.
End of explanation
cur.execute( # complete
cur.fetchall()
cur.execute("select Institution from DSFPstudents where year > 2")
cur.fetchall()
Explanation: Now that we have bytes in the database, we can retrieve those bytes with one (or several) queries. There are 3 basic building blocks to a query:
SELECT...
FROM...
WHERE...
Where SELECT specifies the information we want to retrieve from the database, FROM specifies the tables being queried in the database, and WHERE specifies the conditions for the query.
Problem 1c
Select the institutions for all students in the DSFPstudents table who have been in grad school for more than 2 years.
Hint - to display the results of your query run cur.fetchall().
End of explanation
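As an aside, when a query depends on a user-supplied value it is safer to use ? placeholders than string formatting (this avoids SQL injection); a minimal sketch:
min_year = 2  # hypothetical threshold
cur.execute("select Name, Institution from DSFPstudents where Year > ?",
            (min_year,))
print(cur.fetchall())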
# you may need to run conda install -c astropy astroquery
from astroquery.sdss import SDSS
Explanation: In closing this brief introduction to databases, note that good databases follow the 4 ACID properties:
Atomicity - all parts of transaction succeed, or rollback state of database
Consistency - data always meets validation rules
Isolation - no interference across transactions (even if concurrent)
Durability - a committed transaction remains committed (even if there's a power outage, etc)
Problem 2) Complex Queries with SDSS
Above we looked at the most basic operations possible with a database (recall - databases are unnecessary, and possibly cumbersome, with small data sets). A typical database consists of many tables, and these tables may be joined together to unlock complex questions for the data.
As a reminder on (some of) this functionality, we are now going to go through some problems using the SDSS database. The full SDSS schema explains all of the tables, columns, views and functions for querying the database. We will keep things relatively simple in that regard.
End of explanation
SDSS.query_sql( # complete
SDSS.query_sql("select top 20 * from PhotoObjAll")
Explanation: astroquery enables seamless connections to the SDSS database via the Python shell.
Problem 2a
Select 20 random sources from the PhotoObjAll table and return all columns in the table.
Hint - while this would normally be accomplished by starting the query select limit 20 ..., SDSS CasJobs uses Microsoft's SQL Server, which adopts select top 20 ... to accomplish an identical result.
End of explanation
SDSS.query_sql( # complete
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
                  class
                  from photoobjall p
                  inner join specobjall s on p.objid = s.bestobjid""")
Explanation: That's more columns than we will likely ever need. Instead, let's focus on objID, a unique identifier, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, and cModelMag_z, the source magnitude in $u', g', r', i', z'$, respectively.
We will now introduce the concept of joining two tables.
The most common operation is known as an inner join (which is often referred to as just join). An inner join returns records that have matching sources in both tables in the join.
Less, but nevertheless still powerful, is the outer join. An outer join returns all records in either table, with NULL values for columns in a table in which the record does not exist.
Specialized versions of the outer join include the left join and right join, whereby all records in either the left or right table, respectively, are returned along with their counterparts.
Problem 2b
Select objid and $u'g'r'i'z'$ from PhotoObjAll and the corresponding class from specObjAll for 20 random sources.
There are multiple columns you could use to join the tables; in this case, match objid to bestobjid from specObjAll and use an inner join.
End of explanation
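A common use of the outer joins explored next is finding unmatched records: photometric sources with no spectrum come back with NULL spectroscopic columns, which can be filtered with is null. A hedged sketch (an unconstrained query like this may hit the CasJobs timeout):
SDSS.query_sql("""select top 10 p.objid, s.class
                  from photoobjall p
                  left join specobjall s on p.objid = s.bestobjid
                  where s.bestobjid is null""")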
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
                  class
                  from photoobjall p
                  left outer join specobjall s on p.objid = s.bestobjid""")
Explanation: Problem 2c
Perform an identical query to the one above, but this time use a left outer join (or left join).
How do your results compare to the previous query?
End of explanation
SDSS.query_sql("""select top 20 objid, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z,
                  class
                  from photoobjall p
                  right outer join specobjall s on s.bestobjid = p.objid""")
Explanation: Problem 2d
This time use a right outer join (or right join).
How do your results compare to the previous query?
End of explanation
SDSS.query_sql("""select rm.*
                  from
                  (select r.objid, r.sourcename, r.ra, r.dec, r.cps, r.hr1, r.hr2, cModelMag_u, cModelMag_g, cModelMag_r, cModelMag_i, cModelMag_z
                   from photoobjall p join rosat r on p.objid = r.objid
                   where (cModelFlux_u + cModelFlux_g + cModelFlux_r + cModelFlux_i + cModelFlux_z > 10000)
                   and p.type = 3) as rm
                  left join specobjall p on rm.objid = p.bestobjid
                  where p.bestobjid is null""")
Explanation: Challenge Problem
To close the notebook we will perform a nested query. In brief, the idea is to join the results of one query with a separate query.
Here, we are going to attempt to identify bright AGN that don't have SDSS spectra. To do so we will need the photoObjAll table, the specObjAll table, and the rosat table, which includes all cross matches between SDSS sources and X-ray sources detected by the Rosat satellite.
Create a nested query that selects all Rosat sources that don't have SDSS spectra with cModelFlux_u + cModelFlux_g + cModelFlux_r + cModelFlux_i + cModelFlux_z > 10000 (this flux contraint ensures the source is bright without making any cuts on color) and type = 3, this last constraint means the source is extended in SDSS images.
Hint - you may run into timeout issues in which case you should run the query on CasJobs.
End of explanation
<END_TASK> |
15,796 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
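For example (hypothetical name and email, shown commented out to match the TODO convention above):
# DOC.set_author("Jane Doe", "jane.doe@example.org")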
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
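As an illustration, a 1.N enum property is filled by passing one of the valid choices listed above to set_value (a hypothetical selection, commented out):
# DOC.set_value("whole atmosphere")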
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
<END_TASK> |
15,797 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SC-4-5 Feature Engineering and Classification
Step1: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able ot deal with lists of NetworkX objects.
Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
Step2: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
Step3: Finding Optimal Hyperparameters | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
import sklearn_mmadsen.graphs as skmg
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-sc-4-5-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-sc-4-5-cont-labels.pkl",'r'))
Explanation: SC-4-5 Feature Engineering and Classification
End of explanation
train_graphs, train_labels, test_graphs, test_labels = skmg.graph_train_test_split(all_graphs, all_labels, test_fraction=0.10)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
Explanation: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross-validation is still of course possible). But we need a train_test_split function which is able to deal with lists of NetworkX objects.
Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
End of explanation
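# NOTE: `sklearn_mmadsen` is a custom helper package. In case it is
# unavailable, here is a minimal sketch of hypothetical stand-ins for the two
# helpers used in this notebook (assumed behavior, not the package's actual
# implementations). Assumes every graph has at least `num_eigenvalues` vertices.
def graph_train_test_split_sketch(graphs, labels, test_fraction=0.10):
    # Shuffle graphs and labels together, then slice off the test fraction.
    graphs, labels = shuffle(graphs, labels, random_state=0)
    n_test = int(round(test_fraction * len(graphs)))
    return graphs[n_test:], labels[n_test:], graphs[:n_test], labels[:n_test]

def graphs_to_eigenvalue_matrix_sketch(graphs, num_eigenvalues):
    # One row per graph: its Laplacian eigenvalues sorted largest-first,
    # truncated to the first num_eigenvalues values.
    rows = [np.sort(nx.laplacian_spectrum(g))[::-1][:num_eigenvalues]
            for g in graphs]
    return np.vstack(rows)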
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = skmg.graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=10)
test_matrix = skmg.graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=10)
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = ['predicted {}'.format(x) for x in cmdf.columns]
cmdf.index = ['actual {}'.format(x) for x in cmdf.index]
print(cmdf)
print(classification_report(test_labels, pred_label))
print("Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label))
Explanation: First Classifier
We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher for it to be worth the effort of switching to a more complex model.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = ['predicted {}'.format(x) for x in cmdf.columns]
cmdf.index = ['actual {}'.format(x) for x in cmdf.index]
print(cmdf)
print(classification_report(test_labels, pred_label))
print("Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label))
Explanation: Finding Optimal Hyperparameters
End of explanation
<END_TASK> |
15,798 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic polymer simulations in ESPResSo
In this tutorial we are going to investigate the diffusion of a dissolved polymer using ESPResSo. For this tutorial, you should have fundamental knowledge of the lattice-Boltzmann method and Langevin dynamics. If you are unfamiliar with those, you can go through the respective tutorials in the lattice_boltzmann and langevin_dynamics folders.
Introduction
In these exercises we want to reproduce a classic result of polymer physics
Step1: 2. Simulating the polymer
Step2: 3. Data analysis
We will calculate the means of time series with error bars obtained from
the correlation-corrected standard error of the mean [<a href='#[5]'>5</a>,<a href='#[6]'>6</a>].
Step3: 3.1 Distance-based macromolecular properties
How do $R_h$, $R_g$, $R_F$ and the diffusion coefficient $D$ depend on the number of monomers?
You can refer to the Flory theory of polymers, and assume you are simulating a real polymer in a
good solvent, with Flory exponent $\nu \approx 0.588$.
Plot the end-to-end distance $R_F$ of the polymer as a function of the number of monomers. What relation do you observe?
The end-to-end distance follows the law $R_F = c_F N^\nu$ with $c_F$ a constant and $\nu$ the Flory exponent.
Step4: Plot the radius of gyration $R_g$ of the polymer as a function of the number of monomers. What relation do you observe?
The radius of gyration follows the law $R_g = c_g N^\nu$ with $c_g$ a constant and $\nu$ the Flory exponent.
Step5: For an ideal polymer
Step6: Plot the hydrodynamic radius $R_h$ of the polymers as a function of the number of monomers. What relation do you observe?
The hydrodynamic radius can be calculated via the Stokes radius, i.e. the radius of a sphere that
diffuses at the same rate as the polymer. An approximative formula is $R_h \approx c_h N^{1/3}$
with $c_h$ a constant.
Step7: 3.2 Diffusion coefficient using the MSD method
Calculate the diffusion coefficient of the polymers using the mean-squared displacement.
Recalling that for large $t$ the diffusion coefficient can be expressed as
Step8: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
Recalling the formula for the diffusion coefficient of a short polymer in the Kirkwood–Zimm model
Step9: 3.3 Diffusion coefficient using the Green–Kubo method
Plot the autocorrelation function and check that the decay is roughly exponential.
Hint
Step10: The Green–Kubo integral for the diffusion coefficient takes the following form
Step11: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
Step12: Let us compare the values of the diffusion coefficients calculated with the MSD and Green–Kubo methods
Python Code:
def solvent_langevin(system, kT, gamma):
'''
Implicit solvation model based on Langevin dynamics (Rouse model).
'''
system.thermostat.set_langevin(kT=kT, gamma=gamma, seed=42)
def solvent_lbm(system, kT, gamma):
'''
Lattice-based solvation model based on the LBM (Zimm model).
'''
lbf = espressomd.lb.LBFluidGPU(kT=kT, seed=42, agrid=1, dens=1,
visc=5, tau=system.time_step)
system.actors.add(lbf)
system.thermostat.set_lb(LB_fluid=lbf, gamma=gamma, seed=42)
Explanation: Basic polymer simulations in ESPResSo
In this tutorial we are going to investigate the diffusion of a dissolved polymer using ESPResSo. For this tutorial, you should have fundamental knowledge of the lattice-Boltzmann method and Langevin dynamics. If you are unfamiliar with those, you can go through the respective tutorials in the lattice_boltzmann and langevin_dynamics folders.
Introduction
In these exercises we want to reproduce a classic result of polymer physics: the dependence
of the diffusion coefficient of a polymer on its chain length. If no hydrodynamic interactions
are present, one expects a scaling law $D \propto N ^{- 1}$ and if they are present, a scaling law
$D \propto N^{- \nu}$ is expected. Here $\nu$ is the Flory exponent that plays a very prominent
role in polymer physics. It has a value of $\sim 3/5$ in good solvent conditions in 3D.
Discussions on these scaling laws can be found in polymer physics textbooks like <a href='#[1]'>[1]</a>, <a href='#[2]'>[2]</a>, and <a href='#[3]'>[3, chapter 8]</a>.
The reason for the different scaling law is the following: when being transported, every monomer
creates a flow field that follows the direction of its motion. This flow field makes it easier for
other monomers to follow its motion. This makes a polymer (given it is sufficiently long) diffuse
more like a compact object including the fluid inside it, although it does not have clear boundaries.
It can be shown that its motion can be described by its hydrodynamic radius. It is defined as:
\begin{equation}
\left\langle \frac{1}{R_h} \right\rangle = \left\langle \frac{1}{N^2}\sum_{i\neq j} \frac{1}{\left| r_i - r_j \right|} \right\rangle
\end{equation}
This hydrodynamic radius exhibits the scaling law $R_h \propto N^{\nu}$
and the diffusion coefficient of a long polymer is proportional to the inverse of $R_h$.
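In code, the configuration average reduces to a double sum over inverse pair distances. A minimal NumPy sketch for a single configuration (pos is an assumed (N, 3) array of monomer positions):
python
import numpy as np
def inverse_hydrodynamic_radius(pos):
    # <1/R_h> for one configuration: (1/N^2) * sum over i != j of 1/|r_i - r_j|
    n = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    off_diagonal = ~np.eye(n, dtype=bool)  # exclude the i == j terms
    return np.sum(1.0 / dist[off_diagonal]) / n**2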
Polymer models
The diffusion coefficient $D$ of a spherical particle in a carrier fluid experiencing drag is
related to the friction coefficient $\zeta$ via the Einstein relation:
\begin{equation}
D = \frac{k_{\mathrm{B}}T}{\zeta},
\end{equation}
with $k_{\mathrm{B}}$ the Boltzmann constant and $T$ the temperature. For a sphere of radius $R$
moving in a fluid of viscosity $\eta$, the friction coefficient is obtained via the Stokes law:
\begin{equation}
\zeta = 6\pi\eta R.
\end{equation}
Combining both equations yields the Stokes–Einstein relation:
\begin{equation}
D = \frac{k_{\mathrm{B}}T}{6\pi\eta R}.
\end{equation}
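As a quick numerical sanity check of this relation (a minimal sketch; the values are illustrative reduced Lennard-Jones units, not parameters prescribed by this tutorial):
python
import numpy as np
kT, eta, R = 1.0, 5.0, 1.5       # illustrative reduced units
D = kT / (6 * np.pi * eta * R)   # Stokes-Einstein relation
print(f"D = {D:.4f}")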
The simplest description of a polymer is the Rouse model, where beads are connected by springs.
All beads experience a drag from the solvent, and the friction coefficient $\gamma$ is identical
for all beads. The solvent flows freely between beads and hydrodynamic interactions are neglected.
The diffusion coefficient takes the following form:
\begin{equation}
D_{\mathrm{R}} = \frac{D_0}{N} = \frac{k_{\mathrm{B}}T}{\gamma N},
\end{equation}
where $D_0$ is the diffusion coefficient of a single bead.
To account for hydrodynamic interactions mediated by the solvent, i.e. the transport of solvent
in contact with the beads and the correlation in the motion of beads due to the carried solvent,
the Zimm model was created. For an ideal chain, it takes the following form:
\begin{equation}
D_{\mathrm{Z}} = \frac{8}{3\sqrt{6\pi^3}}\frac{k_B T}{\eta R} \simeq 0.196\frac{k_B T}{\eta b N^{\nu}},
\end{equation}
with $R$ the radius of the polymer and $b$ the length of the spring connecting the beads.
For shorter polymers there is a transition region. It can be described
by the Kirkwood–Zimm model:
\begin{equation}
D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle
\end{equation}
Here $D_0$ is the monomer diffusion coefficient and $\eta$ the
viscosity of the fluid. For a finite system size the second part of the
diffusion is subject to a $1/L$ finite size effect, because
hydrodynamic interactions are proportional to the inverse
distance and thus long ranged. It can be taken into account
by a correction:
\begin{equation}
D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle \left( 1- \left\langle\frac{R_h}{L} \right\rangle \right)
\end{equation}
It is quite difficult to fit this analytical expression to simulation data with good accuracy.
It will need a LB fluid, long simulation times and a careful analysis. For this tutorial, we
will use an implicit solvent and short polymer lengths to keep the runtime short. If you want
to collect data suitable for the Zimm model, simply set the global variable POLYMER_MODEL to
'Zimm', and increase the box size and number of beads in the polymer.
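For reference, the corrected Kirkwood–Zimm expression above evaluates directly as follows (a sketch with hypothetical inputs; in practice $D_0$, $\left\langle 1/R_h \right\rangle$ and $\left\langle R_h \right\rangle$ come from the simulation averages):
python
import numpy as np
def kirkwood_zimm_corrected(D0, N, kT, eta, inv_rh_mean, rh_mean, box_l):
    # D = D0/N + kT/(6 pi eta) * <1/R_h> * (1 - <R_h>/L)
    return D0 / N + kT / (6 * np.pi * eta) * inv_rh_mean * (1 - rh_mean / box_l)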
We want to determine the long-time self-diffusion coefficient from the mean square
displacement of the center-of-mass of a single polymer. For large $t$ the mean square displacement is
proportional to the time and the diffusion coefficient occurs as a
prefactor:
\begin{equation}
D = \lim_{t\to\infty}\left[ \frac{1}{6t} \left\langle \left(\vec{r}(t) - \vec{r}(0)\right)^2 \right\rangle \right].
\end{equation}
This equation can be found in virtually any simulation textbook, like <a href='#[4]'>[4]</a>. We will set up a
polymer in an implicit solvent, simulate for an appropriate amount of time, calculate the mean square
displacement as a function of time and obtain the diffusion coefficient from a linear
fit. However, we will have a couple of steps in between and divide the full problem into
subproblems that (hopefully) allow us to fully understand the process.
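The extraction step at the end is just a linear fit of the MSD in the diffusive regime (a minimal sketch with synthetic data; in the tutorial, tau and msd come from the correlator):
python
import numpy as np
tau = np.linspace(10.0, 100.0, 50)   # assumed lag times in the diffusive regime
msd = 6 * 0.02 * tau                 # illustrative MSD data with slope 6*D
slope, intercept = np.polyfit(tau, msd, 1)
D = slope / 6.0                      # recovers the assumed D = 0.02
print(f"D = {D:.4f}")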
Diffusion of a polymer
One of the typical applications of ESPResSo is the simulation of polymer chains with a bead-spring-model. For this we need a repulsive interaction between all beads, for which one usually takes a shifted and truncated Lennard-Jones (so-called Weeks–Chandler–Andersen or WCA) interaction, and additionally a bonded interaction between adjacent beads to hold the polymer together. You have already learned that the command
python
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=1.0, sigma=1.0, shift=0.25, cutoff=1.1225)
creates a Lennard-Jones interaction with $\varepsilon=1.$, $\sigma=1.$,
$r_\text{cut} = 1.1225$ and $\varepsilon_\text{shift}=0.25$ between particles
of type 0, which is the desired repulsive interaction. The command
python
fene = espressomd.interactions.FeneBond(k=7, r_0=1, d_r_max=2)
creates a <tt>FeneBond</tt> object (see ESPResSo manual for the details). What is left to be done is to add this bonded interaction to the system via
python
system.bonded_inter.add(fene)
and to apply the bonded interaction to all monomer pairs of the polymer as shown in the script below.
ESPResSo provides a function that tries to find monomer positions that minimize the overlap between
monomers of a chain, e.g.:
python
positions = espressomd.polymer.linear_polymer_positions(n_polymers=1,
beads_per_chain=10,
bond_length=1, seed=42,
min_distance=0.9)
which would create positions for a single polymer with 10 monomers. Please check the documentation for a more detailed description.
1. Setting up the polymer and observables
The first task is to compute the average hydrodynamic radius $R_h$, end-to-end distance $R_F$
and radius of gyration $R_g$ for different polymer lengths. This will be achieved with the
corresponding observables described in the user guide under
Analysis / Direct analysis routines / Chains.
The second task is to estimate the polymer diffusion coefficient for different polymer lengths
using two methods:
* the center of mass mean squared displacement method (introduced in a previous part of this tutorial)
* the center of mass velocity autocorrelation method (also known as Green–Kubo method)
For this purpose we can again use the multiple tau correlator.
Write a function with signature build_polymer(system, n_monomers, polymer_params, fene) that creates
a linear polymer made of n_monomers particles, with parameters polymer_params. The particles need
to be created and linked together with the fene bond.
python
def build_polymer(system, n_monomers, polymer_params, fene):
positions = espressomd.polymer.linear_polymer_positions(
beads_per_chain=n_monomers, **polymer_params)
p_previous = None
for i, pos in enumerate(positions[0]):
p = system.part.add(pos=pos)
if p_previous is not None:
p.add_bond((fene, p_previous))
p_previous = p
Write a function with signature correlator_msd(pids_monomers, tau_max) that returns a center-of-mass
mean-squared displacement correlator that is updated every time step, and a function with signature
correlator_gk(pids_monomers, tau_max) that returns a center-of-mass velocity correlator that is updated
every 10 time steps. You can find examples in the user guide section
calculating a particle's diffusion coefficient.
```python
def correlator_msd(pids_monomers, tau_max):
com_pos = espressomd.observables.ComPosition(ids=pids_monomers)
com_pos_cor = espressomd.accumulators.Correlator(
obs1=com_pos, tau_lin=16, tau_max=tau_max, delta_N=5,
corr_operation="square_distance_componentwise", compress1="discard1")
return com_pos_cor
def correlator_gk(pids_monomers, tau_max):
com_vel = espressomd.observables.ComVelocity(ids=pids_monomers)
com_vel_cor = espressomd.accumulators.Correlator(
obs1=com_vel, tau_lin=16, tau_max=tau_max, delta_N=10,
corr_operation="scalar_product", compress1="discard1")
return com_vel_cor
```
You can simulate a polymer in the Rouse regime using an implicit solvent model, e.g. Langevin dynamics,
or in the Zimm regime using a lattice-Boltzmann fluid.
End of explanation
import logging
import sys
import numpy as np
import scipy.optimize
import espressomd
import espressomd.analyze
import espressomd.accumulators
import espressomd.observables
import espressomd.polymer
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
espressomd.assert_features(['LENNARD_JONES'])
# Setup constants
BOX_L = 12.0
TIME_STEP = 0.01
LOOPS = 4000
STEPS = 100
KT = 1.0
GAMMA = 5.0
POLYMER_PARAMS = {'n_polymers': 1, 'bond_length': 1, 'seed': 42, 'min_distance': 0.9}
POLYMER_MODEL = 'Rouse'
assert POLYMER_MODEL in ('Rouse', 'Zimm')
if POLYMER_MODEL == 'Zimm':
espressomd.assert_features(['CUDA'])
import espressomd.lb
# System setup
system = espressomd.System(box_l=3 * [BOX_L])
system.cell_system.skin = 0.4
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=1.0, sigma=1.0, shift="auto", cutoff=2.0**(1.0 / 6.0))
# Fene interaction
fene = espressomd.interactions.FeneBond(k=7, r_0=1, d_r_max=2)
system.bonded_inter.add(fene)
N_MONOMERS = np.array([6, 8, 10])
com_pos_tau_results = []
com_pos_msd_results = []
com_vel_tau_results = []
com_vel_acf_results = []
rh_results = []
rf_results = []
rg_results = []
for index, N in enumerate(N_MONOMERS):
logging.info(f"Polymer size: {N}")
build_polymer(system, N, POLYMER_PARAMS, fene)
logging.info("Warming up the polymer chain.")
system.time_step = 0.002
system.integrator.set_steepest_descent(
f_max=1.0,
gamma=10,
max_displacement=0.01)
system.integrator.run(2000)
system.integrator.set_vv()
logging.info("Warmup finished.")
logging.info("Equilibration.")
system.time_step = TIME_STEP
system.thermostat.set_langevin(kT=1.0, gamma=50, seed=42)
system.integrator.run(2000)
logging.info("Equilibration finished.")
system.thermostat.turn_off()
if POLYMER_MODEL == 'Rouse':
solvent_langevin(system, KT, GAMMA)
elif POLYMER_MODEL == 'Zimm':
solvent_lbm(system, KT, GAMMA)
logging.info("Warming up the system with the fluid.")
system.integrator.run(1000)
logging.info("Warming up the system with the fluid finished.")
# configure MSD correlator
com_pos_cor = correlator_msd(np.arange(N), LOOPS * STEPS)
system.auto_update_accumulators.add(com_pos_cor)
# configure Green-Kubo correlator
com_vel_cor = correlator_gk(np.arange(N), LOOPS * STEPS)
system.auto_update_accumulators.add(com_vel_cor)
logging.info("Sampling started.")
rhs = np.zeros(LOOPS)
rfs = np.zeros(LOOPS)
rgs = np.zeros(LOOPS)
for i in range(LOOPS):
system.integrator.run(STEPS)
rhs[i] = system.analysis.calc_rh(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
rfs[i] = system.analysis.calc_re(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
rgs[i] = system.analysis.calc_rg(
chain_start=0,
number_of_chains=1,
chain_length=N)[0]
logging.info("Sampling finished.")
# store results
com_pos_cor.finalize()
com_pos_tau_results.append(com_pos_cor.lag_times())
com_pos_msd_results.append(np.sum(com_pos_cor.result(), axis=1))
com_vel_cor.finalize()
com_vel_tau_results.append(com_vel_cor.lag_times())
com_vel_acf_results.append(com_vel_cor.result())
rh_results.append(rhs)
rf_results.append(rfs)
rg_results.append(rgs)
# reset system
system.part.clear()
system.thermostat.turn_off()
system.actors.clear()
system.auto_update_accumulators.clear()
rh_results = np.array(rh_results)
rf_results = np.array(rf_results)
rg_results = np.array(rg_results)
com_pos_tau_results = np.array(com_pos_tau_results)
com_pos_msd_results = np.reshape(com_pos_msd_results, [len(N_MONOMERS), -1])
com_vel_tau_results = np.array(com_vel_tau_results)
com_vel_acf_results = np.reshape(com_vel_acf_results, [len(N_MONOMERS), -1])
Explanation: 2. Simulating the polymer
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.rcParams.update({'font.size': 18})
def standard_error_mean_autocorrelation(time_series, variable_label):
'''
Calculate the mean and the correlation-corrected standard error
of the mean of time series by integrating the autocorrelation
function. See Janke 2002 [5] and Weigel, Janke 2010 [6].
Due to the short simulation length, it is not possible to fit an
exponential to the long-time tail. Instead, return a percentile.
'''
summary = []
fig = plt.figure(figsize=(10, 6))
for signal, N in zip(time_series, N_MONOMERS):
acf = espressomd.analyze.autocorrelation(signal - np.mean(signal))
# the acf cannot be integrated beyond tau=N/2
integral = np.array([acf[0] + 2 * np.sum(acf[1:j]) for j in np.arange(1, len(acf) // 2)])
# remove the noisy part of the integral
negative_number_list = np.nonzero(integral < 0)
if negative_number_list[0].size:
integral = integral[:int(0.95 * negative_number_list[0][0])]
# compute the standard error of the mean
std_err = np.sqrt(integral / acf.size)
# due to the small sample size, the long-time tail is not
# well resolved and cannot be fitted, so we use a percentile
asymptote = np.percentile(std_err, 75)
# plot the integral and asymptote
p = plt.plot([0, len(std_err)], 2 * [asymptote], '--')
plt.plot(np.arange(len(std_err)) + 1, std_err,
'-', color=p[0].get_color(),
label=rf'$\int {variable_label}$ for N={N}')
summary.append((np.mean(signal), asymptote))
plt.xlabel(r'Lag time $\tau / \Delta t$')
plt.ylabel(rf'$\int_{{-\tau}}^{{+\tau}} {variable_label}$')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
return np.array(summary)
def fitting_polymer_theory(polymer_model, n_monomers, diffusion, rh_exponent):
'''
Fit the appropriate polymer diffusion coefficient equation (Rouse or
Kirkwood-Zimm).
'''
def rouse(x, a):
return a / x
def kirkwood_zimm(x, a, b, exponent):
return a / x + b / x**exponent
x = np.linspace(min(n_monomers) - 0.5, max(n_monomers) + 0.5, 20)
if polymer_model == 'Rouse':
popt, _ = scipy.optimize.curve_fit(rouse, n_monomers, diffusion)
label = rf'$D^{{\mathrm{{fit}}}} = \frac{{{popt[0]:.2f}}}{{N}}$'
y = rouse(x, popt[0])
elif polymer_model == 'Zimm':
popt, _ = scipy.optimize.curve_fit(
lambda x, a, b: kirkwood_zimm(x, a, b, rh_exponent), n_monomers, diffusion)
y = kirkwood_zimm(x, popt[0], popt[1], rh_exponent)
label = f'''\
$D^{{\\mathrm{{fit}}}} = \
\\frac{{{popt[0]:.2f}}}{{N}} + \
\\frac{{{popt[1] * 6 * np.pi:.3f} }}{{6\\pi}} \\cdot \
\\frac{{{1}}}{{N^{{{rh_exponent:.2f}}}}}$ \
'''
return x, y, label, popt
Explanation: 3. Data analysis
We will calculate the means of time series with error bars obtained from
the correlation-corrected standard error of the mean [<a href='#[5]'>5</a>,<a href='#[6]'>6</a>].
End of explanation
rf_summary = standard_error_mean_autocorrelation(rf_results, r'\operatorname{acf}(R_F)')
rf_exponent, rf_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rf_summary[:, 0]), 1)
rf_prefactor = np.exp(rf_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rf_prefactor * x**rf_exponent, '-',
label=rf'$R_F^{{\mathrm{{fit}}}} = {rf_prefactor:.2f} N^{{{rf_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rf_summary[:, 0],
yerr=rf_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_F^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'End-to-end distance [$\sigma$]')
plt.legend()
plt.show()
Explanation: 3.1 Distance-based macromolecular properties
How do $R_h$, $R_g$, $R_F$ and the diffusion coefficient $D$ depend on the number of monomers?
You can refer to the Flory theory of polymers, and assume you are simulating a real polymer in a
good solvent, with Flory exponent $\nu \approx 0.588$.
Plot the end-to-end distance $R_F$ of the polymer as a function of the number of monomers. What relation do you observe?
The end-to-end distance follows the law $R_F = c_F N^\nu$ with $c_F$ a constant and $\nu$ the Flory exponent.
End of explanation
rg_summary = standard_error_mean_autocorrelation(rg_results, r'\operatorname{acf}(R_g)')
rg_exponent, rg_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rg_summary[:, 0]), 1)
rg_prefactor = np.exp(rg_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rg_prefactor * x**rg_exponent, '-',
label=rf'$R_g^{{\mathrm{{fit}}}} = {rg_prefactor:.2f} N^{{{rg_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rg_summary[:, 0],
yerr=rg_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_g^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Radius of gyration [$\sigma$]')
plt.legend()
plt.show()
Explanation: Plot the radius of gyration $R_g$ of the polymer as a function of the number of monomers. What relation do you observe?
The radius of gyration follows the law $R_g = c_g N^\nu$ with $c_g$ a constant and $\nu$ the Flory exponent.
End of explanation
rf2_rg2_ratio = rf_summary[:, 0]**2 / rg_summary[:, 0]**2
print(np.around(rf2_rg2_ratio, 1))
Explanation: For an ideal polymer:
$$\frac{R_F^2}{R_g^2} = 6$$
End of explanation
rh_summary = standard_error_mean_autocorrelation(rh_results, r'\operatorname{acf}(R_h)')
rh_exponent, rh_prefactor = np.polyfit(np.log(N_MONOMERS), np.log(rh_summary[:, 0]), 1)
rh_prefactor = np.exp(rh_prefactor)
fig = plt.figure(figsize=(10, 6))
x = np.linspace(min(N_MONOMERS) - 0.5, max(N_MONOMERS) + 0.5, 20)
plt.plot(x, rh_prefactor * x**rh_exponent, '-',
label=rf'$R_h^{{\mathrm{{fit}}}} = {rh_prefactor:.2f} N^{{{rh_exponent:.2f}}}$')
plt.errorbar(N_MONOMERS, rh_summary[:, 0],
yerr=rh_summary[:, 1],
ls='', marker='o', capsize=5, capthick=1,
label=r'$R_h^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Hydrodynamic radius [$\sigma$]')
plt.legend()
plt.show()
Explanation: Plot the hydrodynamic radius $R_h$ of the polymers as a function of the number of monomers. What relation do you observe?
The hydrodynamic radius can be calculated via the Stokes radius, i.e. the radius of a sphere that
diffuses at the same rate as the polymer. An approximative formula is $R_h \approx c_h N^{1/3}$
with $c_h$ a constant.
End of explanation
# cutoff for the diffusive regime (approximative)
tau_f_index = 40
# cutoff for the data series (larger lag times have larger variance due to undersampling)
tau_max_index = 70
plt.figure(figsize=(10, 10))
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
for index, (tau, msd) in enumerate(zip(com_pos_tau_results, com_pos_msd_results)):
plt.loglog(tau[1:120], msd[1:120], label=f'N={N_MONOMERS[index]}')
plt.loglog(2 * [tau[tau_f_index]], [0, np.max(com_pos_msd_results)], '-', color='black')
plt.text(tau[tau_f_index], np.max(com_pos_msd_results), r'$\tau_{f}$')
plt.loglog(2 * [tau[tau_max_index]], [0, np.max(com_pos_msd_results)], '-', color='black')
plt.text(tau[tau_max_index], np.max(com_pos_msd_results), r'$\tau_{max}$')
plt.legend()
plt.show()
diffusion_msd = np.zeros(len(N_MONOMERS))
plt.figure(figsize=(10, 8))
weights = com_pos_cor.sample_sizes()
for index, (tau, msd) in enumerate(zip(com_pos_tau_results, com_pos_msd_results)):
a, b = np.polyfit(tau[tau_f_index:tau_max_index], msd[tau_f_index:tau_max_index],
1, w=weights[tau_f_index:tau_max_index])
x = np.array([tau[1], tau[tau_max_index - 1]])
p = plt.plot(x, a * x + b, '-')
plt.plot(tau[1:tau_max_index], msd[1:tau_max_index], 'o', color=p[0].get_color(),
label=rf'$N=${N_MONOMERS[index]}')
diffusion_msd[index] = a / 6
plt.xlabel(r'$\tau$ [$\Delta t$]')
plt.ylabel(r'MSD [$\sigma^2$]')
plt.legend()
plt.show()
Explanation: 3.2 Diffusion coefficient using the MSD method
Calculate the diffusion coefficient of the polymers using the mean-squared displacement.
Recalling that for large $t$ the diffusion coefficient can be expressed as:
$$6D = \lim_{t\to\infty} \frac{\partial \operatorname{MSD}(t)}{\partial t}$$
which is simply the slope of the MSD in the diffusive regime.
End of explanation
fig = plt.figure(figsize=(10, 6))
x, y, label, popt_msd = fitting_polymer_theory(POLYMER_MODEL, N_MONOMERS, diffusion_msd, rh_exponent)
plt.plot(x, y, '-', label=label)
plt.plot(N_MONOMERS, diffusion_msd, 'o', label=r'$D^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Diffusion coefficient [$\sigma^2/t$]')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
Explanation: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
Recalling the formula for the diffusion coefficient of a short polymer in the Kirkwood–Zimm model:
$$D = \frac{D_0}{N} + \frac{k_B T}{6 \pi \eta} \left\langle \frac{1}{R_h} \right\rangle$$
where $\eta$ is the fluid viscosity and $D_0 = k_BT\gamma^{-1}$ the monomer diffusion coefficient,
with $\gamma$ the fluid friction coefficient. For the Rouse regime (implicit solvent),
the second term disappears.
Hint:
for the Rouse regime, use $D = \alpha N^{-1}$ and solve for $\alpha$
for the Zimm regime, use $D = \alpha_1 N^{-1} + \alpha_2 N^{-\beta}$
with rh_exponent for $\beta$ and solve for $\alpha_1, \alpha_2$
End of explanation
def exponential(x, a, b):
return a * np.exp(-b * x)
fig = plt.figure(figsize=(10, 8))
for N, tau, acf in zip(N_MONOMERS, com_vel_tau_results, com_vel_acf_results):
popt, _ = scipy.optimize.curve_fit(exponential, tau[:20], acf[:20])
x = np.linspace(tau[0], tau[20 - 1], 100)
p = plt.plot(x, exponential(x, *popt), '-')
plt.plot(tau[:20], acf[:20], 'o',
color=p[0].get_color(), label=rf'$R(\tau)$ for N = {N}')
plt.xlabel(r'$\tau$')
plt.ylabel('Autocorrelation function')
plt.legend()
plt.show()
Explanation: 3.3 Diffusion coefficient using the Green–Kubo method
Plot the autocorrelation function and check that the decay is roughly exponential.
Hint: use $D = \alpha e^{-\beta \tau}$ and solve for $\alpha, \beta$. You can leave out
the first data point in the ACF if necessary, and limit the fit to the stable region
in the first 20 data points.
End of explanation
diffusion_gk = []
fig = plt.figure(figsize=(10, 6))
for N, tau, acf in zip(N_MONOMERS, com_vel_tau_results, com_vel_acf_results):
x = np.arange(2, 28)
y = [1 / 3 * np.trapz(acf[:j], tau[:j]) for j in x]
plt.plot(tau[x], y, label=rf'$D(\tau_{{\mathrm{{int}}}})$ for $N = {N}$')
diffusion_gk.append(np.mean(y[10:]))
plt.xlabel(r'$\tau_{\mathrm{int}}$')
plt.ylabel(r'$\frac{1}{3} \int_{\tau=0}^{\tau_{\mathrm{int}}} \left<\vec{v_c}(\tau)\cdot\vec{v_c}(0)\right>\, \mathrm{d}\tau$')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
Explanation: The Green–Kubo integral for the diffusion coefficient takes the following form:
$$D = \frac{1}{3} \int_0^{+\infty} \left<\vec{v_c}(\tau)\cdot\vec{v_c}(0)\right>\, \mathrm{d}\tau$$
Since our simulation is finite in time, we need to integrate up until $\tau_{\mathrm{int}}$. To find
the optimal value of $\tau_{\mathrm{int}}$, plot the integral as a function of $\tau_{\mathrm{int}}$
until you see a plateau. This plateau is usually followed by strong oscillations due to low
statistics in the long time tail of the autocorrelation function.
End of explanation
fig = plt.figure(figsize=(10, 8))
x, y, label, popt_gk = fitting_polymer_theory(POLYMER_MODEL, N_MONOMERS, diffusion_gk, rh_exponent)
plt.plot(x, y, '-', label=label)
plt.plot(N_MONOMERS, diffusion_gk, 'o', label=r'$D^{\mathrm{simulation}}$')
plt.xlabel('Number of monomers $N$')
plt.ylabel(r'Diffusion coefficient [$\sigma^2/t$]')
plt.ticklabel_format(axis='y', style='sci', scilimits=(0, 0))
plt.legend()
plt.show()
Explanation: Plot the dependence of the diffusion coefficient on the hydrodynamic radius.
End of explanation
print('N\tMSD\t\tGK\t\tdifference')
for N, d_msd, d_gk in zip(N_MONOMERS, diffusion_msd, diffusion_gk):
print(f'{N}\t{d_msd:.2e}\t{d_gk:.2e}\t{np.ceil(np.abs(d_msd-d_gk) * 100 / d_msd):.0f}%')
Explanation: Let us compare the values of the diffusion coefficients calculated with the MSD and Green–Kubo methods:
End of explanation
<END_TASK> |
15,799 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
pypdb demos
This is a set of basic examples of the usage and outputs of the various individual functions included in pypdb. There are generally three types of functions.
Preamble
Step1: Search functions that return lists of PDB IDs
Get a list of PDBs for a specific search term
Step2: Search by PubMed ID Number
Step3: Search by source organism using NCBI TaxId
Step4: Search by a specific experimental method
Step5: Search by protein structure similarity
Step6: Search by Author
Step7: Search by organism
Step8: Information Search functions
While the basic functions described in the previous section are useful for looking up and manipulating individual unique entries, these functions are intended to be more user-facing
Step9: Functions that return information about single PDB IDs
Get the full PDB file
Step10: Get a general description of the entry's metadata
Step11: Run a Sequence search
Formerly using BLAST, this method now uses MMseqs2
Step12: Search by PFAM number
Step13: New API for advanced search
The old API will gradually migrate to use these functions
Step14: Search for all entries that mention the word 'ribosome'
Step15: Search for polymers from 'Mus musculus'
Step16: Search for non-polymers from 'Mus musculus' or 'Homo sapiens'
Step17: Search for polymer instances whose titles contain "actin" or "binding" or "protein"
Step18: Search for assemblies that contain the words "actin binding protein"
(must be in that order).
For example, "actin-binding protein" and "actin binding protein" will match,
but "protein binding actin" will not.
Step19: Search for entries released in 2019 or later
Step20: Search for entries released only in 2019
Step21: Search by cell length
Step22: Search for structures under 4 angstroms of resolution
Step23: Search for structures with a given attribute.
(Admittedly every structure has a release date, but the same logic would
apply for a more sparse RCSB attribute).
Step24: Search for 'Mus musculus' or 'Homo sapiens' structures under 4 angstroms of resolution using graph search | <ASSISTANT_TASK:>
Python Code:
%pylab inline
from IPython.display import HTML
# Import from local directory
# import sys
# sys.path.insert(0, '../pypdb')
# from pypdb import *
# Import from installed package
from pypdb import *
%load_ext autoreload
%autoreload 2
Explanation: pypdb demos
This is a set of basic examples of the usage and outputs of the various individual functions included in pypdb. There are generally three types of functions.
Preamble
End of explanation
found_pdbs = Query("ribosome").search()
print(found_pdbs[:10])
Explanation: Search functions that return lists of PDB IDs
Get a list of PDBs for a specific search term
End of explanation
found_pdbs = Query(27499440, "PubmedIdQuery").search()
print(found_pdbs[:10])
Explanation: Search by PubMed ID Number
End of explanation
found_pdbs = Query('6239', 'TreeEntityQuery').search() #TaxID for C elegans
print(found_pdbs[:5])
Explanation: Search by source organism using NCBI TaxId
End of explanation
found_pdbs = Query('SOLID-STATE NMR', query_type='ExpTypeQuery').search()
print(found_pdbs[:10])
Explanation: Search by a specific experimental method
End of explanation
found_pdbs = Query('2E8D', query_type="structure").search()
print(found_pdbs[:10])
Explanation: Search by protein structure similarity
End of explanation
found_pdbs = Query('Perutz, M.F.', query_type='AdvancedAuthorQuery').search()
print(found_pdbs)
Explanation: Search by Author
End of explanation
q = Query("Dictyostelium", query_type="OrganismQuery")
print(q.search()[:10])
Explanation: Search by organism
End of explanation
matching_papers = find_papers('crispr', max_results=10)
print(list(matching_papers)[:10])
Explanation: Information Search functions
While the basic functions described in the previous section are useful for looking up and manipulating individual unique entries, these functions are intended to be more user-facing: they take search keywords and return lists of authors or dates
Find papers for a given keyword
End of explanation
pdb_file = get_pdb_file('4lza', filetype='cif', compression=False)
print(pdb_file[:400])
Explanation: Functions that return information about single PDB IDs
Get the full PDB file
End of explanation
all_info = get_info('4LZA')
print(list(all_info.keys()))
Explanation: Get a general description of the entry's metadata
End of explanation
q = Query("VLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHGKKVADALTAVAHVDDMPNAL",
query_type="sequence",
return_type="polymer_entity")
print(q.search())
Explanation: Run a Sequence search
Formerly using BLAST, this method now uses MMseqs2
End of explanation
pfam_info = Query("PF00008", query_type="pfam").search()
print(pfam_info[:5])
Explanation: Search by PFAM number
End of explanation
from pypdb.clients.search.search_client import perform_search
from pypdb.clients.search.search_client import ReturnType
from pypdb.clients.search.operators import text_operators
Explanation: New API for advanced search
The old API will gradually migrate to use these functions
End of explanation
search_operator = text_operators.DefaultOperator(value="ribosome")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:10])
Explanation: Search for all entries that mention the word 'ribosome'
End of explanation
search_operator = text_operators.ExactMatchOperator(value="Mus musculus",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
return_type = ReturnType.POLYMER_ENTITY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for polymers from 'Mus musculus'
End of explanation
search_operator = text_operators.InOperator(values=["Mus musculus", "Homo sapiens"],
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
return_type = ReturnType.NON_POLYMER_ENTITY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for non-polymers from 'Mus musculus' or 'Homo sapiens'
End of explanation
search_operator = text_operators.ContainsWordsOperator(value="actin-binding protein",
attribute="struct.title")
return_type = ReturnType.POLYMER_INSTANCE
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for polymer instances whose titles contain "actin" or "binding" or "protein"
End of explanation
search_operator = text_operators.ContainsPhraseOperator(value="actin-binding protein",
attribute="struct.title")
return_type = ReturnType.ASSEMBLY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for assemblies that contain the words "actin binding protein"
(must be in that order).
For example, "actin-binding protein" and "actin binding protein" will match,
but "protein binding actin" will not.
End of explanation
search_operator = text_operators.ComparisonOperator(
value="2019-01-01T00:00:00Z",
attribute="rcsb_accession_info.initial_release_date",
comparison_type=text_operators.ComparisonType.GREATER)
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for entries released in 2019 or later
End of explanation
search_operator = text_operators.RangeOperator(
from_value="2019-01-01T00:00:00Z",
to_value="2020-01-01T00:00:00Z",
include_lower=True,
include_upper=False,
attribute="rcsb_accession_info.initial_release_date")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for entries released only in 2019
End of explanation
from pypdb.clients.search.search_client import perform_search_with_graph, SearchService, ReturnType
from pypdb.clients.search.operators import text_operators
cell_a_operator = text_operators.RangeOperator(
attribute='cell.length_a',
from_value=80,
to_value=84,
include_upper=True
)
results = perform_search_with_graph(
query_object=cell_a_operator,
return_type=ReturnType.ENTRY
)
print(results[:5])
Explanation: Search by cell length
End of explanation
search_operator = text_operators.ComparisonOperator(
value=4,
attribute="rcsb_entry_info.resolution_combined",
comparison_type=text_operators.ComparisonType.LESS)
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for structures under 4 angstroms of resolution
End of explanation
search_operator = text_operators.ExistsOperator(
attribute="rcsb_accession_info.initial_release_date")
return_type = ReturnType.ENTRY
results = perform_search(search_operator, return_type)
print(results[:5])
Explanation: Search for structures with a given attribute.
(Admittedly every structure has a release date, but the same logic would
apply for a more sparse RCSB attribute).
End of explanation
from pypdb.clients.search.search_client import perform_search_with_graph
from pypdb.clients.search.search_client import ReturnType
from pypdb.clients.search.search_client import QueryGroup, LogicalOperator
from pypdb.clients.search.operators import text_operators
# SearchOperator associated with structures with under 4 Angstroms of resolution
under_4A_resolution_operator = text_operators.ComparisonOperator(
value=4,
attribute="rcsb_entry_info.resolution_combined",
    comparison_type=text_operators.ComparisonType.LESS)
# SearchOperator associated with entities containing 'Mus musculus' lineage
is_mus_operator = text_operators.ExactMatchOperator(
value="Mus musculus",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
# SearchOperator associated with entities containing 'Homo sapiens' lineage
is_human_operator = text_operators.ExactMatchOperator(
value="Homo sapiens",
attribute="rcsb_entity_source_organism.taxonomy_lineage.name")
# QueryGroup associated with being either human or `Mus musculus`
is_human_or_mus_group = QueryGroup(
queries = [is_mus_operator, is_human_operator],
logical_operator = LogicalOperator.OR
)
# QueryGroup associated with being ((Human OR Mus) AND (Under 4 Angstroms))
is_under_4A_and_human_or_mus_group = QueryGroup(
queries = [is_human_or_mus_group, under_4A_resolution_operator],
logical_operator = LogicalOperator.AND
)
return_type = ReturnType.ENTRY
results = perform_search_with_graph(
query_object=is_under_4A_and_human_or_mus_group,
return_type=return_type)
print("\n", results[:10]) # Huzzah
Explanation: Search for 'Mus musculus' or 'Homo sapiens' structures under 4 angstroms of resolution using graph search
End of explanation
<END_TASK> |