| text_prompt | code_prompt |
|---|---|
| stringlengths 168 to 30.3k | stringlengths 67 to 124k |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parallel Programming in Python
Step2: We then initialize our thread with the Thread class from the threading module.
Step3: When part of a program sleeps for a few seconds, we would normally have to wait for that portion to wake up before continuing with the rest of the program, but the concurrency of threads bypasses this behavior. If we treat the main program as the main thread and our thread as its own separate thread, the code chunk below demonstrates this concurrency: we don't have to wait for the spawned thread to finish before running the rest of our program.
Step4: Sometimes, we don't want Python to switch back to the main thread until the thread we defined has finished executing its function. To do this, we can use the .join method; this is what people call a blocking call, because it blocks the main program from continuing until the thread finishes its task.
Step5: The following code chunk showcases how to initialize and utilize multiple threads.
Step7: From the result above, the elapsed time shows that it doesn't take n_threads * (the time we told the sleep function to sleep) to finish all the tasks, which is pretty neat!
Step11: This quick introduction should do it for now. In the next section, we'll define both an I/O task (reading a file, API calls, scraping URLs) and a CPU-intensive task; after that we'll benchmark the two tasks by running them serially, using multithreading and using multiprocessing.
Step12: Here are some observations
Step17: The main difference here is in the result of the multithreaded approach. This time it performs very similarly to the serial approach, and here's why (a brief note on this follows the benchmark code below)
Step18: Next we need some actual websites to try our system out. Create your own list, or use the one in the next code chunk for experimentation. Normally, we'd keep this list in a database along with owner contact information so that we can contact them. Since this is not the main topic of this tutorial, and for the sake of simplicity, we're just going to use this Python list.
Step19: First, we'll try the serial approach and use this as our baseline.
Step20: Another library for performing parallel programming is joblib. This is my personal favorite and it is also used by the machine learning package scikit-learn to perform hyperparameter search.
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
import math
import time
import logging
import requests
import threading
import multiprocessing
import concurrent.futures
from joblib import Parallel, delayed
%watermark -a 'Ethen' -d -t -v -p joblib,requests
def sleeper(n_time):
name = threading.current_thread().name
print('I am {}. Going to sleep for {} seconds'.format(name, n_time))
time.sleep(n_time)
print('{} has woken up from sleep'.format(name))
# we call .start to start executing the function from the thread
n_time = 2
thread = threading.Thread(target = sleeper, name = 'thread1', args = (n_time,))
thread.start()
# hello is printed "before" the wake up message from the function
thread = threading.Thread(target = sleeper, name = 'thread2', args = (n_time,))
thread.start()
print()
print('hello')
# hello is printed "after" the wake up message from the function
thread = threading.Thread(target = sleeper, name = 'thread3', args = (n_time,))
thread.start()
thread.join()
print()
print('hello')
n_time = 2
n_threads = 5
start = time.time()
# create n_threads number of threads and store them in a list
threads = []
for i in range(n_threads):
name = 'thread{}'.format(i)
thread = threading.Thread(target = sleeper, name = name, args = (n_time,))
threads.append(thread)
# we can start the thread while we're creating it, or move
# this to its own loop (as shown below)
thread.start()
# we could instead start the thread in a separate loop
# for thread in threads:
# thread.start()
# ensure all threads have finished before executing main program
for thread in threads:
thread.join()
elapse = time.time() - start
print()
print('Elapse time: ', elapse)
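# A sketch of the same workload using concurrent.futures (imported above): a
# ThreadPoolExecutor handles the start/join bookkeeping for us, and the elapsed
# time should again be close to a single sleep rather than n_threads * n_time seconds.
start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as executor:
    executor.map(sleeper, [n_time] * n_threads)
print('Elapse time with ThreadPoolExecutor: ', time.time() - start)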
# example from the documentation page
# https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor-example
def is_prime(n):
"""
References
----------
https://math.stackexchange.com/questions/1343171/why-only-square-root-approach-to-check-number-is-prime
"""
if n < 2:
return False
if n == 2:
return True
if n % 2 == 0:
return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
with concurrent.futures.ProcessPoolExecutor() as executor:
for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
print('{} is prime: {}'.format(number, prime))
def only_sleep():
"""Wait for a timer to expire."""
process_name = multiprocessing.current_process().name
thread_name = threading.current_thread().name
print('Process Name: {}, Thread Name: {}'.format(
process_name, thread_name))
time.sleep(4)
def crunch_numbers():
"""Do some computations."""
process_name = multiprocessing.current_process().name
thread_name = threading.current_thread().name
print('Process Name: {}, Thread Name: {}'.format(
process_name, thread_name))
x = 0
while x < 10000000:
x += 1
def experiment(target, n_workers):
"""Run the target function serially, using threads and using processes, and report the run times."""
# Run tasks serially
start_time = time.time()
for _ in range(n_workers):
target()
end_time = time.time()
print("Serial time=", end_time - start_time)
print()
# Run tasks using processes
start_time = time.time()
processes = [multiprocessing.Process(target = target) for _ in range(n_workers)]
for process in processes:
process.start()
for process in processes:
process.join()
end_time = time.time()
print("Parallel time=", end_time - start_time)
print()
# Run tasks using threads
start_time = time.time()
threads = [threading.Thread(target = target) for _ in range(n_workers)]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
end_time = time.time()
print("Threads time=", end_time - start_time)
n_workers = 4
experiment(target = only_sleep, n_workers = n_workers)
n_workers = 4
experiment(target = crunch_numbers, n_workers = n_workers)
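# A brief note on the results above (the original explanation text is elided in
# this excerpt): for crunch_numbers, the threaded version performs about the same
# as the serial one because CPython's Global Interpreter Lock (GIL) lets only one
# thread execute Python bytecode at a time, so CPU-bound threads take turns
# instead of running in parallel. only_sleep spends its time blocked in
# time.sleep, which releases the GIL, so threads (and processes) both give a
# near-n_workers speed-up there; for CPU-bound work, only separate processes,
# each with its own interpreter and GIL, provide a real speed-up.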
def check_website(address):
"""Utility function: check if a website is down; if so, notify the owner."""
try:
ping_website(address)
except WebsiteDownException:
notify_owner(address)
class WebsiteDownException(Exception):
"""Exception if the website is down."""
pass
def ping_website(address, timeout = 20):
"""
Check if a website is down. A website is considered down
if either the status_code >= 400 or if the timeout expires.
Throw a WebsiteDownException if any of the website down conditions are met.
"""
try:
response = requests.head(address, timeout = timeout)
if response.status_code >= 400:
logging.warning('Website {} returned status code={}'.format(address, response.status_code))
raise WebsiteDownException()
except requests.exceptions.RequestException:
logging.warning('Timeout expired for website {}'.format(address))
raise WebsiteDownException()
def notify_owner(address):
"""
Send the owner of the address a notification that their website is down.
For now, we're just going to sleep for 0.5 seconds but this is where
you would send an email, push notification or text-message.
"""
logging.info('Notifying the owner of {} website'.format(address))
time.sleep(0.5)
WEBSITE_LIST = [
'http://envato.com',
'http://amazon.co.uk',
'http://amazon.com',
'http://facebook.com',
'http://google.com',
'http://google.fr',
'http://google.es',
'http://google.co.uk',
'http://internet.org',
'http://gmail.com',
'http://stackoverflow.com',
'http://github.com',
'http://heroku.com',
'http://really-cool-available-domain.com',
'http://djangoproject.com',
'http://rubyonrails.org',
'http://basecamp.com',
'http://trello.com',
'http://yiiframework.com',
'http://shopify.com',
'http://another-really-interesting-domain.co',
'http://airbnb.com',
'http://instagram.com',
'http://snapchat.com',
'http://youtube.com',
'http://baidu.com',
'http://yahoo.com',
'http://live.com',
'http://linkedin.com',
'http://yandex.ru',
'http://netflix.com',
'http://wordpress.com',
'http://bing.com']
start_time = time.time()
for address in WEBSITE_LIST:
check_website(address)
end_time = time.time()
print('Time for serial: {} secs'.format(end_time - start_time))
n_workers = 4
start_time = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers = n_workers) as executor:
futures = {executor.submit(check_website, address) for address in WEBSITE_LIST}
# more detailed explanation of the wait command
# https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.wait
_ = concurrent.futures.wait(futures)
end_time = time.time()
print('Time for multithreading: {} secs'.format(end_time - start_time))
# processes do not give the same performance gain as threads for this I/O-bound task
start_time = time.time()
with concurrent.futures.ProcessPoolExecutor(max_workers = n_workers) as executor:
futures = {executor.submit(check_website, address) for address in WEBSITE_LIST}
_ = concurrent.futures.wait(futures)
end_time = time.time()
print('Time for multiprocessing: {} secs'.format(end_time - start_time))
start_time = time.time()
# we start off by creating a Parallel object;
# the backend is process-based by default,
# here we change it to threading as the task is I/O bound;
# setting n_jobs to -1 would use all the available cores
parallel = Parallel(n_jobs = n_workers, backend = 'threading')
# we wrap our function with delayed and pass in our list of parameters
result = parallel(delayed(check_website)(address) for address in WEBSITE_LIST)
end_time = time.time()
print('Time for joblib threading: {} secs'.format(end_time - start_time))
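# A possible follow-up sketch (not part of the original notebook): for a
# CPU-bound function such as crunch_numbers defined earlier, joblib's default
# process-based backend is usually the better fit, since threads are held back
# by the GIL.
start_time = time.time()
parallel = Parallel(n_jobs=n_workers)
result = parallel(delayed(crunch_numbers)() for _ in range(n_workers))
end_time = time.time()
print('Time for joblib processes: {} secs'.format(end_time - start_time))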
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Classic
Step2: We don't actually have to use the full namespace call, that is nx.generators.classic.complete_graph; everything is under the nx module.
Step3: Exercises
Step4: Others are included under nx.generators.social. Explore the davis_southern_women_graph, what is special about it?
Step5: Random Graphs
Step6: A slight variant allows you to give it the total number of edges to be placed randomly
Step7: Small World Graphs
Step8: Power Law Random Graphs
Step9: We'll plot degree sequences in the next lesson on graph analysis.
Step10: A trick for writing a fast BA implementation is to maintain a list of nodes in which each node appears once for every connection it has (a possible completed version of the exercise is sketched after the code below).
|
<ASSISTANT_TASK:>
Python Code:
import networkx as nx
C = nx.generators.classic.complete_graph(5)
C.edges()
C = nx.complete_graph(5)
KC = nx.karate_club_graph()
KC.nodes(data=True)
DSW = nx.davis_southern_women_graph()
DSW.nodes()
ER = nx.gnp_random_graph(100,1.0/100)
ER.size()
ER2 =nx.gnm_random_graph(100,50)
ER2.size()
WS = nx.watts_strogatz_graph(10,4,.2)
WS.edges()
BA = nx.barabasi_albert_graph(10000,1)
deg = BA.degree().values()
(min(deg),max(deg))
import numpy as np
np.random.poisson(3.2)
def poisson_BA(n,l):
start = max(2,np.random.poisson(l)) # start with at least two nodes
G = nx.complete_graph(start)
#an easy list to grab nodes from
repeated_nodes = []
# For each node at it's label to the list for as many times as it's degree
u = start
while u < n:
# Using the Poisson random number generator, generate a number of connections
# Make sure it's at least 1 (the Poisson draw can be 0)
# Store that variable in `connections`
for _ in range(connections):
# For each of those connections, select a node at random from repeated_nodes
# Hint: numpy has a np.random.choice function
# Be sure to update repeated_nodes!
u += 1
return G
G = poisson_BA(1000,np.pi)
deg = G.degree().values()
min(deg),max(deg)
np.mean(G.degree().values())
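# One possible completion of the exercise above (a hedged sketch, not the only
# valid answer): each new node u draws a Poisson(l) number of connections (at
# least 1) and attaches to targets drawn uniformly from repeated_nodes, which
# holds every node once per edge endpoint, so the draw is proportional to
# degree (preferential attachment).
def poisson_BA_solution(n, l):
    start = max(2, np.random.poisson(l))
    G = nx.complete_graph(start)
    repeated_nodes = []
    for a, b in G.edges():          # seed the degree-weighted node list
        repeated_nodes.extend([a, b])
    u = start
    while u < n:
        connections = max(1, np.random.poisson(l))
        targets = np.random.choice(repeated_nodes, size=connections)
        for v in targets:
            G.add_edge(u, v)
            repeated_nodes.extend([u, v])
        u += 1
    return G

G2 = poisson_BA_solution(1000, np.pi)
print(np.mean(list(dict(G2.degree()).values())))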
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dependence of runtime and number of iterations on accuracy
Step2: Example 2
Step3: Dependence of runtime and number of iterations on accuracy
|
<ASSISTANT_TASK:>
Python Code:
import liboptpy.constr_solvers as cs
import liboptpy.step_size as ss
import numpy as np
from tqdm import tqdm
n = 200
m = 100
A = np.random.randn(m, n)
x_true = np.random.rand(n)
b = A.dot(x_true) + 0.01 * np.random.randn(m)
eigvals = np.linalg.eigvalsh(A.T @ A)
L = np.max(eigvals)
import jax
import jax.numpy as jnp
import numpy as np
from jax.config import config
config.update("jax_enable_x64", True)
@jax.jit
def func(x, A, b):
return 0.5 * jnp.linalg.norm(A.dot(x) - b)**2
f = lambda x: func(x, A, b)
grad = jax.jit(jax.grad(f))
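# A note on the two building blocks defined below (added for clarity):
# - linsolver is the linear minimization oracle used by Frank-Wolfe: it returns
#   argmin over s in [0, 1]^n of <gradient, s>, which separates per coordinate,
#   so we take 0 where the gradient is positive and 1 where it is negative.
# - projection is the Euclidean projection onto the box [0, 1]^n used by
#   projected gradient descent, which is simply coordinate-wise clipping.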
def linsolver(gradient):
x = np.zeros(gradient.shape[0])
pos_grad = gradient > 0
neg_grad = gradient < 0
x[pos_grad] = np.zeros(np.sum(pos_grad == True))
x[neg_grad] = np.ones(np.sum(neg_grad == True))
return x
def projection(y):
return np.clip(y, 0, 1)
def myplot(x, y, xlab, ylab, xscale="linear", yscale="log"):
plt.figure(figsize=(10, 8))
plt.xscale(xscale)
plt.yscale(yscale)
for key in y:
plt.plot(x[key], y[key], label=key)
plt.xticks(fontsize=24)
plt.yticks(fontsize=24)
plt.legend(loc="best", fontsize=24)
plt.xlabel(xlab, fontsize=24)
plt.ylabel(ylab, fontsize=24)
x0 = np.random.rand(n)
# fw = cs.FrankWolfe(f, grad, linsolver, ss.ConstantStepSize(1e-3))
fw = cs.FrankWolfe(f, grad, linsolver, ss.Backtracking("Armijo", init_alpha=1, rho=0.5, beta=0.1))
x_fw = fw.solve(x0=x0, max_iter=20000, tol=1e-5, disp=1)
print("Optimal value FW =", f(x_fw))
pg = cs.ProjectedGD(f, lambda x: grad(x), projection, ss.Backtracking("Armijo", init_alpha=1, beta=0.01, rho=0.5))
x_pg = pg.solve(x0=x0, max_iter=200, tol=1e-5, disp=1)
print("Optimal value PG =", f(x_pg))
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc("text", usetex=True)
y_hist_f_fw = [f(x) for x in fw.get_convergence()]
y_hist_f_pg = [f(x) for x in pg.get_convergence()]
myplot({"FW": range(1, len(y_hist_f_fw) + 1), "PG": range(1, len(y_hist_f_pg) + 1)},
{"FW": y_hist_f_fw, "PG": y_hist_f_pg}, "Number of iteration",
r"Objective function, $\frac{1}{2}\|Ax - b\|^2_2$")
import cvxpy as cvx
x = cvx.Variable(n)
obj = cvx.Minimize(0.5 * cvx.norm(A * x - b, 2)**2)
constr = [x >= 0, x <= 1]
problem = cvx.Problem(objective=obj, constraints=constr)
value = problem.solve(solver=cvx.SCS)
x_cvx = np.array(x.value).ravel()
print("CVX optimal value =", value)
eps = [10**(-i) for i in range(8)]
time_pg = np.zeros(len(eps))
time_cg = np.zeros(len(eps))
iter_pg = np.zeros(len(eps))
iter_cg = np.zeros(len(eps))
pg = cs.ProjectedGD(f, grad, projection, ss.ConstantStepSize(1 / L))
cg = cs.FrankWolfe(f, grad, linsolver, ss.ConstantStepSize(1 / L))
for i, tol in tqdm(enumerate(eps)):
res = %timeit -o -q pg.solve(x0=x0, tol=tol, max_iter=100000)
time_pg[i] = res.average
iter_pg[i] = len(pg.get_convergence())
res = %timeit -o -q cg.solve(x0=x0, tol=tol, max_iter=100000)
time_cg[i] = res.average
iter_cg[i] = len(cg.get_convergence())
myplot({"FW":eps, "PG": eps}, {"FW": time_cg, "PG": time_pg}, r"Accuracy, $\varepsilon$", "Time, s", xscale="log")
myplot({"FW":eps, "PG": eps}, {"FW": iter_cg, "PG": iter_pg}, r"Accuracy, $\varepsilon$", "Number of iterations", xscale="log")
def linsolver(gradient):
x = np.zeros(gradient.shape[0])
idx_min = np.argmin(gradient)
if gradient[idx_min] > 0:
x[idx_min] = 0
else:
x[idx_min] = 1
return x
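# The projection below is the standard sort-based Euclidean projection onto
# {x : x >= 0, sum(x) <= 1}: if clipping to the non-negative orthant already
# lands inside the set we are done; otherwise a threshold t_hat is computed
# from the sorted coordinates and subtracted before clipping at zero.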
def projection(y):
x = y.copy()
if np.all(x >= 0) and np.sum(x) <= 1:
return x
x = np.clip(x, 0, np.max(x))
if np.sum(x) <= 1:
return x
n = x.shape[0]
bget = False
x.sort()
x = x[::-1]
temp_sum = 0
t_hat = 0
for i in range(n - 1):
temp_sum += x[i]
t_hat = (temp_sum - 1.0) / (i + 1)
if t_hat >= x[i + 1]:
bget = True
break
if not bget:
t_hat = (temp_sum + x[n - 1] - 1.0) / n
return np.maximum(y - t_hat, 0)
x0 = np.random.rand(n) * 10
x0 = x0 / x0.sum()
cg = cs.FrankWolfe(f, grad, linsolver, ss.Backtracking(rule_type="Armijo", rho=0.5, beta=0.1, init_alpha=1.))
x_cg = cg.solve(x0=x0, max_iter=200, tol=1e-3)
print("Optimal value FW =", f(x_cg))
pg = cs.ProjectedGD(f, grad, projection, ss.Backtracking(rule_type="Armijo", rho=0.5, beta=0.1, init_alpha=1.))
x_pg = pg.solve(x0=x0, max_iter=200, tol=1e-3)
print("Optimal value PG =", f(x_pg))
print(x_pg.sum(), x_cg.sum())
y_hist_f_cg = [f(x) for x in cg.get_convergence()]
y_hist_f_pg = [f(x) for x in pg.get_convergence()]
myplot({"FW": range(1, len(y_hist_f_cg) + 1), "PG": range(1, len(y_hist_f_pg) + 1)},
{"FW": y_hist_f_cg, "PG": y_hist_f_pg}, "Number of iteration",
r"Objective function, $\frac{1}{2}\|Ax - b\|^2_2$")
eps = [10**(-i) for i in range(8)]
time_pg = np.zeros(len(eps))
time_cg = np.zeros(len(eps))
iter_pg = np.zeros(len(eps))
iter_cg = np.zeros(len(eps))
pg = cs.ProjectedGD(f, grad, projection, ss.Backtracking(rule_type="Armijo", rho=0.5, beta=0.1, init_alpha=1.))
cg = cs.FrankWolfe(f, grad, linsolver, ss.Backtracking(rule_type="Armijo", rho=0.5, beta=0.1, init_alpha=1.))
for i, tol in tqdm(enumerate(eps)):
res = %timeit -o -q pg.solve(x0=x0, tol=tol, max_iter=100000)
time_pg[i] = res.average
iter_pg[i] = len(pg.get_convergence())
res = %timeit -o -q cg.solve(x0=x0, tol=tol, max_iter=100000)
time_cg[i] = res.average
iter_cg[i] = len(cg.get_convergence())
myplot({"FW":eps, "PG": eps}, {"FW": time_cg, "PG": time_pg},
r"Accuracy, $\varepsilon$", "Time, s", xscale="log")
myplot({"FW": eps, "PG": eps}, {"FW": iter_cg, "PG": iter_pg},
r"Accuracy, $\varepsilon$", "Number of iterations", xscale="log")
x = cvx.Variable(n)
obj = cvx.Minimize(0.5 * cvx.norm2(A * x - b)**2)
constr = [cvx.norm(x, 1) <= 1, x >= 0]
problem = cvx.Problem(objective=obj, constraints=constr)
value = problem.solve(solver=cvx.SCS, verbose=True)
x_cvx = np.array(x.value).ravel()
print("CVX optimal value =", value)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build your model
Step2: Download the Data
Step3: Loading the Data
Step4: Now, let's take a look at the data to have a better understanding of it
Step5: First, let's separate the features and the target and convert them to numpy objects
Step6: Data Preprocessing
Step7: To simplify things a little, we create a pipeline object that only uses the following features
Step8: Our dataset is ready for training the model now
Step9: Once we train the model, we can simply save it
Step10: Predictions
Step11: First, we need to preprocess the instances
Step12: Then we'll pass the processed data to the model for classification
|
<ASSISTANT_TASK:>
Python Code:
%pip install xgboost
import datetime
import os
import pandas as pd
import xgboost as xgb
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import FeatureUnion, make_pipeline
import warnings
warnings.filterwarnings(action='ignore', category=DeprecationWarning)
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data
census_data_filename = './adult.data'
# These are the column labels from the census data files
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Load the training census dataset
with open(census_data_filename, 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
raw_training_data.head()
raw_features = raw_training_data.drop('income-level', axis=1).values
# Create training labels list
train_labels = (raw_training_data['income-level'] == ' >50K').values
class PositionalSelector(BaseEstimator, TransformerMixin):
def __init__(self, positions):
self.positions = positions
def fit(self, X, y=None):
return self
def transform(self, X):
return np.array(X)[:, self.positions]
class StripString(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
return self
def transform(self, X):
strip = np.vectorize(str.strip)
return strip(np.array(X))
class SimpleOneHotEncoder(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.values = []
for c in range(X.shape[1]):
Y = X[:, c]
values = {v: i for i, v in enumerate(np.unique(Y))}
self.values.append(values)
return self
def transform(self, X):
X = np.array(X)
matrices = []
for c in range(X.shape[1]):
Y = X[:, c]
matrix = np.zeros(shape=(len(Y), len(self.values[c])), dtype=np.int8)
for i, x in enumerate(Y):
if x in self.values[c]:
matrix[i][self.values[c][x]] = 1
matrices.append(matrix)
res = np.concatenate(matrices, axis=1)
return res
# Numerical features: age and hours-per-week
# Categorical features: workclass, education, marital-status, and relationship
numerical_indices = [0, 12] # age and hours-per-week
categorical_indices = [1, 3, 5, 7] # workclass, education, marital-status, and relationship
p1 = make_pipeline(PositionalSelector(categorical_indices),
StripString(),
SimpleOneHotEncoder())
p2 = make_pipeline(PositionalSelector(numerical_indices),
StandardScaler())
pipeline = FeatureUnion([
('categoricals', p1),
('numericals', p2),
])
train_features = pipeline.fit_transform(raw_features)
# train the model
model = xgb.XGBClassifier(max_depth=4)
model.fit(train_features, train_labels)
# save the model
model.save_model('model.bst')
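# A sketch of how the saved model could be loaded back later for serving (the
# notebook itself does not do this here): the Booster API reads the .bst file,
# and predictions then go through a DMatrix.
booster = xgb.Booster()
booster.load_model('model.bst')
probs = booster.predict(xgb.DMatrix(train_features[:2]))
print(probs)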
instances = [[
42, ' State-gov', 77516, ' Bachelors', 13, ' Never-married',
' Adm-clerical', ' Not-in-family', ' White', ' Male', 2174, 0, 40,
' United-States'
],
[
50, ' Self-emp-not-inc', 83311, ' Bachelors', 13,
' Married-civ-spouse', ' Exec-managerial', ' Husband',
' White', ' Male', 0, 0, 10, ' United-States'
]]
processed_instances = pipeline.transform(instances)
model.predict(processed_instances)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To specify the experiments, define
Step2: Other parameters of the experiment
Step3: Load all data
Step4: Initialise the experiment
Step5: Select the trainig and testing data according to the selected fold. We split all images in 10 approximately equal parts and each fold includes these images together with all classes present in them.
Step6: Batch 1
Step7: Batch 2
Step8: Now is the time to retrain the detector and obtain new box_proposal_features. This is not done in this notebook.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from __future__ import division
from __future__ import print_function
import math
import gym
import pandas as pd
from gym import spaces
from sklearn import neural_network, model_selection
from sklearn.neural_network import MLPClassifier
from third_party import np_box_ops
import annotator, detector, dialog, environment
# desired quality: high (min_iou=0.7) and low (min_iou=0.5)
min_iou = 0.7 # @param ["0.5", "0.7"]
# drawing speed: high (time_draw=7) and low (time_draw=25)
time_draw = 7 # @param ["7", "25"]
random_seed = 80590 # global variable that fixes the random seed everywhere for replroducibility of results
# what kind of features will be used to represent the state
# numerical values 1-20 correspond to one hot encoding of class
predictive_fields = ['prediction_score', 'relative_size', 'avg_score', 'dif_avg_score', 'dif_max_score', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
time_verify = 1.8 # @param
# Download GT:
# wget wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_gt_for_iad.h5
# Download detections with features
# wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_proposals_plus_features_for_iad.h5
download_dir = ''
ground_truth = pd.read_hdf(download_dir + 'pascal_gt_for_iad.h5', 'ground_truth')
box_proposal_features = pd.read_hdf(download_dir + 'pascal_proposals_plus_features_for_iad.h5', 'box_proposal_features')
annotator_real = annotator.AnnotatorSimple(ground_truth, random_seed, time_verify, time_draw, min_iou)
# better call it image_class_pairs later
image_class = ground_truth[['image_id', 'class_id']]
image_class = image_class.drop_duplicates()
unique_image = image_class['image_id'].drop_duplicates()
# divide the images into exponentially growing groups
im1 = unique_image.iloc[157]
im2 = unique_image.iloc[157+157]
im3 = unique_image.iloc[157+157+314]
im4 = unique_image.iloc[157+157+314+625]
im5 = unique_image.iloc[157+157+314+625+1253]
# image_class pairs groups are determined by the images in them
image_class_array = image_class.values[:,0]
in1 = np.searchsorted(image_class_array, im1, side='right')
in2 = np.searchsorted(image_class_array, im2, side='right')
in3 = np.searchsorted(image_class_array, im3, side='right')
in4 = np.searchsorted(image_class_array, im4, side='right')
in5 = np.searchsorted(image_class_array, im5, side='right')
the_detector = detector.Detector(box_proposal_features, predictive_fields)
image_class_current = image_class.iloc[0:in1]
%output_height 300
env = environment.AnnotatingDataset(annotator_real, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy X')
total_reward = 0
new_ground_truth_all = []
all_annotations = dict()
for i in range(len(env.image_class)):
print('Episode ', i, end = ': ')
state = env.reset(current_index=i)
agent = dialog.FixedDialog(0)
done = False
while not(done):
action = agent.get_next_action(state)
if action==0:
print('V', end='')
elif action==1:
print('D', end='')
next_state, reward, done, coordinates = env.step(action)
state = next_state
total_reward += reward
dataset_id = env.current_image
# ground truth with which we will initialise the new user
new_ground_truth = {}
new_ground_truth['image_id'] = dataset_id
new_ground_truth['class_id'] = env.current_class
new_ground_truth['xmax'] = coordinates['xmax']
new_ground_truth['xmin'] = coordinates['xmin']
new_ground_truth['ymax'] = coordinates['ymax']
new_ground_truth['ymin'] = coordinates['ymin']
new_ground_truth_all.append(new_ground_truth)
if dataset_id not in all_annotations:
current_annotation = dict()
current_annotation['boxes'] = np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32)
current_annotation['box_labels'] = np.array([env.current_class])
all_annotations[dataset_id] = current_annotation
else:
all_annotations[dataset_id]['boxes'] = np.append(all_annotations[dataset_id]['boxes'], np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32), axis=0)
all_annotations[dataset_id]['box_labels'] = np.append(all_annotations[dataset_id]['box_labels'], np.array([env.current_class]))
print()
print('total_reward = ', total_reward)
print('average episode reward = ', total_reward/len(env.image_class))
new_ground_truth_all = pd.DataFrame(new_ground_truth_all)
ground_truth_new = pd.DataFrame(new_ground_truth_all)
annotator_new = annotator.AnnotatorSimple(ground_truth_new, random_seed, time_verify, time_draw, min_iou)
# @title Collect data for classifier
env = environment.AnnotatingDataset(annotator_new, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy V3X')
%output_height 300
total_reward = 0
data_for_classifier = []
for i in range(len(env.image_class)):
print(i, end = ': ')
agent = dialog.FixedDialog(3)
state = env.reset(current_index=i)
done = False
while not(done):
action = agent.get_next_action(state)
next_state, reward, done, _ = env.step(action)
if action==0:
state_dict = dict(state)
state_dict['is_accepted'] = done
data_for_classifier.append(state_dict)
print('V', end='')
elif action==1:
print('D', end='')
state = next_state
total_reward += reward
print()
print('Average episode reward = ', total_reward/len(env.image_class))
data_for_classifier = pd.DataFrame(data_for_classifier)
# @title Train classification model (might take some time)
#model_mlp = neural_network.MLPClassifier(alpha = 0.0001, activation = 'relu', hidden_layer_sizes = (50, 50, 50, 50, 50), random_state=602)
#model_for_agent = model_mlp.fit(data_from_Vx3X[predictive_fields], data_from_Vx3X['is_accepted'])
np.random.seed(random_seed) # for reproducibility of fitting the classifier and cross-validation
print('Cross-validating parameters\' values... This might take some time.')
# possible parameter values
parameters = {'hidden_layer_sizes': ((20, 20, 20), (50, 50, 50), (80, 80, 80), (20, 20, 20, 20), (50, 50, 50, 50), (80, 80, 80, 80), (20, 20, 20, 20, 20), (50, 50, 50, 50, 50), (80, 80, 80, 80, 80)), 'activation': ('logistic', 'relu'), 'alpha': [0.0001, 0.001, 0.01]}
model_mlp = neural_network.MLPClassifier()
# cross-validate parameters
grid_search = model_selection.GridSearchCV(model_mlp, parameters, scoring='neg_log_loss', refit=True)
grid_search.fit(data_for_classifier[predictive_fields], data_for_classifier['is_accepted'])
print('best score = ', grid_search.best_score_)
print('best parameters = ', grid_search.best_params_)
# use the model with the best parameters
model_for_agent = grid_search.best_estimator_
image_class_current = image_class.iloc[in1:in2]
the_detector = detector.Detector(box_proposal_features, predictive_fields)
agent = dialog.DialogProb(model_for_agent, annotator_real)
# @title Annotating data with intelligent dialog
env = environment.AnnotatingDataset(annotator_real, the_detector, image_class_current)
print('Running ', len(env.image_class), 'episodes with strategy IAD-Prob')
%output_height 300
print('intelligent dialog strategy')
total_reward = 0
# reset the ground truth because the user only needs to annotate the last 10% of data using the detector from the rest of the data
new_ground_truth_all = []
for i in range(len(env.image_class)):
print(i, end = ': ')
state = env.reset(current_index=i)
done = False
while not(done):
action = agent.get_next_action(state)
if action==0:
print('V', end='')
elif action==1:
print('D', end='')
next_state, reward, done, coordinates = env.step(action)
state = next_state
total_reward += reward
dataset_id = env.current_image
# ground truth with which we will initialise the new user
new_ground_truth = {}
new_ground_truth['image_id'] = dataset_id
new_ground_truth['class_id'] = env.current_class
new_ground_truth['xmax'] = coordinates['xmax']
new_ground_truth['xmin'] = coordinates['xmin']
new_ground_truth['ymax'] = coordinates['ymax']
new_ground_truth['ymin'] = coordinates['ymin']
new_ground_truth_all.append(new_ground_truth)
if dataset_id not in all_annotations:
current_annotation = dict()
current_annotation['boxes'] = np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32)
current_annotation['box_labels'] = np.array([env.current_class])
all_annotations[dataset_id] = current_annotation
else:
all_annotations[dataset_id]['boxes'] = np.append(all_annotations[dataset_id]['boxes'], np.array([[coordinates['ymin'], coordinates['xmin'], coordinates['ymax'], coordinates['xmax']]], dtype=np.int32), axis=0)
all_annotations[dataset_id]['box_labels'] = np.append(all_annotations[dataset_id]['box_labels'], np.array([env.current_class]))
print()
print('total_reward = ', total_reward)
print('average episode reward = ', total_reward/len(env.image_class))
new_ground_truth_all = pd.DataFrame(new_ground_truth_all)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'username': ['john', 'john', 'john', 'john', 'jane', 'jane', 'jane', 'jane'],
'post_id': [1, 2, 3, 4, 7, 8, 9, 10],
'views': [3, 23, 44, 82, 5, 25,46, 56]})
bins = [1, 10, 25, 50, 100]
def g(df, bins):
groups = df.groupby(['username', pd.cut(df.views, bins)])
return groups.size().unstack()
result = g(df.copy(),bins.copy())
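# `result` is a username-by-views-bin table of post counts. With the sample data
# above each user happens to have exactly one post per bin, so every cell equals 1
# (columns are the intervals (1, 10], (10, 25], (25, 50], (50, 100]).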
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts as 28x28 images. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set has 19000. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
Step5: Problem 1
Step6: Problem 2
Step7: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step8: Problem 4
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
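# The problem statement above is elided in this excerpt; a plausible sanity
# check at this point (a sketch, not necessarily the assignment's exact ask)
# is to reload one pickled letter set and display a few of its samples.
with open(train_datasets[0], 'rb') as f:
    sample_set = pickle.load(f)
plt.figure(figsize=(6, 2))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.imshow(sample_set[i], cmap='gray')
    plt.axis('off')
plt.show()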
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
np.random.seed(133)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import libraries
Step7: User-defined functions
Step8: Load dataset
Step9: Pre-processing data
Step10: Fix missing value
Step11: Standardize country code
Step12: Extract serving_size into gram value
Step13: Parse additives
Step14: Organic or Not
Step15: Visualize Food features
Step16: Top countries
Step17: Nutrition grade
Step18: Nutrition score
Step19: Serving size
Step20: Energy, fat, ...
Step21: Carbohydrates, protein, fiber
Step22: Sugar, Vitamins
Step23: Minerals
Step24: Explore food label
Step25: Who eats less sweet food?
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
def remove_na_rows(df, cols=None):
    """remove rows with NaN in any column"""
if cols is None:
cols = df.columns
return df[np.logical_not(np.any(df[cols].isnull().values, axis=1))]
def trans_country_name(x):
    """translate country name to code (2-char)"""
try:
country_name = x.split(',')[0]
if country_name in dictCountryName2Code:
return dictCountryName2Code[country_name]
except:
return None
def parse_additives(x):
    """parse additives column values into a list"""
try:
dict = {}
for item in x.split(']'):
token = item.split('->')[0].replace("[", "").strip()
if token: dict[token] = 1
return [len(dict.keys()), sorted(dict.keys())]
except:
return None
def trans_serving_size(x):
    """pick up gram value from serving_size column"""
try:
serving_g = float((x.split('(')[0]).replace("g", "").strip())
return serving_g
except:
return 0.0
def distplot2x2(cols):
    """make dist. plot on 2x2 grid for up to 4 features"""
sb.set(style="white", palette="muted")
f, axes = plt.subplots(2, 2, figsize=(7, 7), sharex=False)
b, g, r, p = sb.color_palette("muted", 4)
colors = [b, g, r, p]
axis = [axes[0,0],axes[0,1],axes[1,0],axes[1,1]]
for n,col in enumerate(cols):
sb.distplot(food[col].dropna(), hist=True, rug=False, color=colors[n], ax=axis[n])
food = pd.read_excel("data/openfoodfacts_5k.xlsx")
food.shape
food.columns
food.head()
# columns_to_keep = ['code','product_name','created_datetime','brands','categories','origins','manufacturing_places','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition_grade_uk','nutrition-score-fr_100g','nutrition-score-uk_100g','url','image_url','image_small_url']
columns_to_keep = ['code','product_name','created_datetime','brands','energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','vitamin-a_100g','vitamin-c_100g','calcium_100g','iron_100g','ingredients_text','countries','countries_en','serving_size','additives','nutrition_grade_fr','nutrition-score-fr_100g','url']
food = food[columns_to_keep]
columns_numeric_all = ['energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g','cholesterol_100g','carbohydrates_100g','sugars_100g','omega-3-fat_100g','omega-6-fat_100g','fiber_100g','proteins_100g','salt_100g','sodium_100g','alcohol_100g','vitamin-a_100g','vitamin-c_100g','potassium_100g','chloride_100g','calcium_100g','phosphorus_100g','iron_100g','magnesium_100g','zinc_100g','copper_100g','manganese_100g','fluoride_100g','nutrition-score-fr_100g','nutrition-score-uk_100g']
columns_numeric = set(columns_numeric_all) & set(columns_to_keep)
columns_categoric = set(columns_to_keep) - set(columns_numeric)
# turn off
if False:
for col in columns_numeric:
if not col in ['nutrition-score-fr_100g', 'nutrition-score-uk_100g']:
food[col] = food[col].fillna(0)
for col in columns_categoric:
if col in ['nutrition_grade_fr', 'nutrition_grade_uk']:
food[col] = food[col].fillna('-')
else:
food[col] = food[col].fillna('')
# list column names: categoric vs numeric
columns_categoric, columns_numeric
food.head(3)
# standardize country
country_lov = pd.read_excel("../../0.0-Datasets/country_cd.xlsx")
# country_lov.shape
# country_lov.head()
# country_lov[country_lov['GEOGRAPHY_NAME'].str.startswith('United')].head()
# country_lov['GEOGRAPHY_CODE'].tolist()
# country_lov.ix[0,'GEOGRAPHY_CODE'], country_lov.ix[0,'GEOGRAPHY_NAME']
# create 2 dictionaries
dictCountryCode2Name = {}
dictCountryName2Code = {}
for i in country_lov.index:
dictCountryCode2Name[country_lov.ix[i,'GEOGRAPHY_CODE']] = country_lov.ix[i,'GEOGRAPHY_NAME']
dictCountryName2Code[country_lov.ix[i,'GEOGRAPHY_NAME']] = country_lov.ix[i,'GEOGRAPHY_CODE']
# add Country_Code column - pick 1st country from list
food['countries_en'] = food['countries_en'].fillna('')
food['country_code'] = food['countries_en'].apply(str).apply(lambda x: trans_country_name(x))
# add country_code to columns_categoric set
columns_categoric.add('country_code')
# verify bad country
food[food['country_code'] != food['countries']][['country_code', 'countries']].head(20)
food['ingredients_text'].head() # leave as is
# add serving_size in gram column
food['serving_size'].head(10)
food['serving_size'] = food['serving_size'].fillna('')
food['serving_size_gram'] = food['serving_size'].apply(lambda x: trans_serving_size(x))
# add serving_size_gram
columns_numeric.add('serving_size_gram')
food[['serving_size_gram', 'serving_size']].head()
food['additives'].head(10)
food['additives'] = food['additives'].fillna('')
food['additive_list'] = food['additives'].apply(lambda x: parse_additives(x))
# add additive_list
columns_categoric.add('additive_list')
food[['additive_list', 'additives']].head()
food["creation_date"] = food["created_datetime"].apply(str).apply(lambda x: x[:x.find("T")])
food["year_added"] = food["created_datetime"].dropna().apply(str).apply(lambda x: int(x[:x.find("-")]))
# add creation_date
columns_categoric.add('creation_date')
columns_numeric.add('year_added')
food[['created_datetime', 'creation_date', 'year_added']].head()
# food['product_name']
food.head(3)
columns_numeric
year_added = food['year_added'].value_counts().sort_index()
#year_added
year_i = [int(x) for x in year_added.index]
x_pos = np.arange(len(year_i))
year_added.plot.bar()
plt.xticks(x_pos, year_i)
plt.title("Food labels added per year")
TOP_N = 10
dist_country = food['country_code'].value_counts()
top_country = dist_country[:TOP_N][::-1]
country_s = [dictCountryCode2Name[x] for x in top_country.index]
y_pos = np.arange(len(country_s))
top_country.plot.barh()
plt.yticks(y_pos, country_s)
plt.title("Top {} Country Distribution".format(TOP_N))
# dist_nutri_grade = food['nutrition_grade_uk'].value_counts()
# no value
dist_nutri_grade = food['nutrition_grade_fr'].value_counts()
dist_nutri_grade.sort_index(ascending=False).plot.barh()
plt.title("Nutrition Grade Dist")
food['nutrition-score-fr_100g'].dropna().plot.hist()
plt.title("{} Dist.".format("Nutri-Score"))
food['serving_size_gram'].dropna().plot.hist()
plt.title("{} Dist.".format("Serving Size (g)"))
distplot2x2([ 'energy_100g','fat_100g','saturated-fat_100g','trans-fat_100g'])
distplot2x2(['carbohydrates_100g', 'cholesterol_100g', 'proteins_100g', 'fiber_100g'])
distplot2x2([ 'sugars_100g', 'salt_100g', 'vitamin-a_100g', 'vitamin-c_100g'])
distplot2x2(['calcium_100g', 'iron_100g', 'sodium_100g'])
df = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'carbohydrates_100g', 'sugars_100g','country_code']]
df = remove_na_rows(df)
df.head()
sb.pairplot(df, hue="country_code", size=2.5)
# prepare a small dataframe for ['US', 'FR']
df2 = food[food["country_code"].isin(['US','FR'])][['energy_100g', 'sugars_100g','country_code','nutrition_grade_fr']]
df2 = df2[df2["nutrition_grade_fr"].isin(['a','b','c','d','e'])]
df2 = df2.sort_values(by="nutrition_grade_fr")
# df2.head()
# create a grid of scatter plot
g = sb.FacetGrid(df2, row="nutrition_grade_fr", col="country_code", margin_titles=True)
g.map(plt.scatter, "sugars_100g", "energy_100g", color="steelblue")
g.set(xlim=(0, 100), ylim=(0, 3000))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Super resolution with TensorFlow Lite
Step2: Import dependencies.
Step3: Download and convert the ESRGAN model
Step4: Download a test image (insect head).
Step5: Generate a super resolution image using TensorFlow Lite
Step6: Visualize the result
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install matplotlib tensorflow tensorflow-hub
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
print(tf.__version__)
model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
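# fix the input to a single 50x50 RGB image so the converter can emit a static-shaped TFLite model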
concrete_func.inputs[0].set_shape([1, 50, 50, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('ESRGAN.tflite', 'wb') as f:
f.write(tflite_model)
esrgan_model_path = './ESRGAN.tflite'
test_img_path = tf.keras.utils.get_file('lr.jpg', 'https://raw.githubusercontent.com/tensorflow/examples/master/lite/examples/super_resolution/android/app/src/main/assets/lr-1.jpg')
lr = tf.io.read_file(test_img_path)
lr = tf.image.decode_jpeg(lr)
lr = tf.expand_dims(lr, axis=0)
lr = tf.cast(lr, tf.float32)
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=esrgan_model_path)
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Run the model
interpreter.set_tensor(input_details[0]['index'], lr)
interpreter.invoke()
# Extract the output and postprocess it
output_data = interpreter.get_tensor(output_details[0]['index'])
sr = tf.squeeze(output_data, axis=0)
sr = tf.clip_by_value(sr, 0, 255)
sr = tf.round(sr)
sr = tf.cast(sr, tf.uint8)
lr = tf.cast(tf.squeeze(lr, axis=0), tf.uint8)
plt.figure(figsize = (1, 1))
plt.title('LR')
plt.imshow(lr.numpy());
plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.title(f'ESRGAN (x4)')
plt.imshow(sr.numpy());
bicubic = tf.image.resize(lr, [200, 200], tf.image.ResizeMethod.BICUBIC)
bicubic = tf.cast(bicubic, tf.uint8)
plt.subplot(1, 2, 2)
plt.title('Bicubic')
plt.imshow(bicubic.numpy());
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This failed when we provided input that could not be converted to an integer.
Step2: You can handle errors in any way that might be appropriate for your program.
|
<ASSISTANT_TASK:>
Python Code:
n = int(input("Enter an integer: "))
print("Hello " * n)
try:
n = int(input("Enter an integer: "))
print("Hello " * n)
except ValueError:
print("That wasn't an integer!")
while True:
try:
n = int(input("Enter an integer: "))
print("Hello " * n)
break
except ValueError:
print("That wasn't an integer! Try again...")
except KeyboardInterrupt:
print('bye')
break
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook presents binary tries.
Step2: The function make_string is a helper function that is used to simplify the implementation of __str__.
Step3: The method $t.\texttt{toDot}()$ takes a binary trie $t$ and returns a graph that depicts the tree $t$.
Step4: The method $t.\texttt{collectIDs}(d)$ takes a binary trie $t$ and a dictionary $d$ and updates the dictionary so that the following holds: $d[n.\texttt{getID}()] = n$ for every node $n$ occurring in $t$.
Step5: The class Nil represents an empty binary trie. It has no member variables of its own.
Step6: The class Bin represents a binary trie of the form $\texttt{Bin}(v,l,r)$.
Step7: Implementing the Method find
Step8: Implementing the Method insert
Step9: Implementing the Method delete
Step10: Your equations here!
Step11: Testing
Step12: Let us compute the prime numbers next.
|
<ASSISTANT_TASK:>
Python Code:
import graphviz as gv
class BinaryTrie:
sNodeCount = 0
def __init__(self):
BinaryTrie.sNodeCount += 1
self.mID = BinaryTrie.sNodeCount
def getID(self):
return self.mID # used only by graphviz
def _make_string(self, attributes):
# map the function __str__ to all attributes and join them with a comma
name = self.__class__.__name__
return f"{name}({', '.join(map(str, [getattr(self, at) for at in attributes]))})"
BinaryTrie._make_string = _make_string
del _make_string
def toDot(self):
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
nodeDict = {}
self._collectIDs(nodeDict)
for n, t in nodeDict.items():
if isinstance(t, Nil):
dot.node(str(n), label='', shape='point')
elif isinstance(t, Bin):
if t.mValue != None:
dot.node(str(n), label='{' + str(t.mDigit) + '|' + str(t.mValue) + '}')
else:
dot.node(str(n), label='{' + str(t.mDigit) + '|' + '}')
else:
assert False, f'Unknown node {t}'
for n, t in nodeDict.items():
if isinstance(t, Bin):
dot.edge(str(n), str(t.mLeft .getID()))
dot.edge(str(n), str(t.mRight.getID()))
return dot
BinaryTrie.toDot = toDot
del toDot
def _collectIDs(self, nodeDict):
nodeDict[self.getID()] = self
if isinstance(self, Bin):
self.mLeft ._collectIDs(nodeDict)
self.mRight._collectIDs(nodeDict)
self.mLeft .mDigit = '0'
self.mRight.mDigit = '1'
BinaryTrie._collectIDs = _collectIDs
del _collectIDs
class Nil(BinaryTrie):
def __init__(self):
BinaryTrie.__init__(self)
def __str__(self):
return 'Nil()'
class Bin(BinaryTrie):
def __init__(self, value, left, right):
BinaryTrie.__init__(self)
self.mValue = value
self.mLeft = left
self.mRight = right
self.mDigit = '' # only used by graphviz
def __str__(self):
        return self._make_string(['mValue', 'mLeft', 'mRight'])
def find(self, n):
"your code here"
Nil.find = find
del find
def find(self, n):
"your code here"
Bin.find = find
del find
def insert(self, n, v):
"your code here"
Nil.insert = insert
del insert
def insert(self, n, v):
"your code here"
Bin.insert = insert
del insert
def simplify(self):
"your code here"
Bin.simplify = simplify
del simplify
def delete(self, n):
"your code here"
Nil.delete = delete
del delete
def delete(self, n):
"your code here"
Bin.delete = delete
del delete
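# A possible solution sketch for the exercise methods above (one of many ways to
# fill in the stubs, not the author's official solution): the key n is consumed
# one binary digit at a time, least significant digit first, with digit 0 going
# into the left subtree and digit 1 into the right subtree.
def find(self, n):
    return None
Nil.find = find
del find

def find(self, n):
    if n == 0:
        return self.mValue
    if n % 2 == 0:
        return self.mLeft.find(n // 2)
    return self.mRight.find(n // 2)
Bin.find = find
del find

def insert(self, n, v):
    if n == 0:
        return Bin(v, Nil(), Nil())
    if n % 2 == 0:
        return Bin(None, Nil().insert(n // 2, v), Nil())
    return Bin(None, Nil(), Nil().insert(n // 2, v))
Nil.insert = insert
del insert

def insert(self, n, v):
    if n == 0:
        return Bin(v, self.mLeft, self.mRight)
    if n % 2 == 0:
        return Bin(self.mValue, self.mLeft.insert(n // 2, v), self.mRight)
    return Bin(self.mValue, self.mLeft, self.mRight.insert(n // 2, v))
Bin.insert = insert
del insert

def simplify(self):
    # a node that carries no value and has two empty children collapses to Nil
    if self.mValue is None and isinstance(self.mLeft, Nil) and isinstance(self.mRight, Nil):
        return Nil()
    return self
Bin.simplify = simplify
del simplify

def delete(self, n):
    return self
Nil.delete = delete
del delete

def delete(self, n):
    if n == 0:
        return Bin(None, self.mLeft, self.mRight).simplify()
    if n % 2 == 0:
        return Bin(self.mValue, self.mLeft.delete(n // 2), self.mRight).simplify()
    return Bin(self.mValue, self.mLeft, self.mRight.delete(n // 2)).simplify()
Bin.delete = delete
del delete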
b = Nil()
b.toDot()
b = b.insert(0, 'a')
b.toDot()
b = b.insert(1, 'b')
b.toDot()
b = b.insert(2, 'c')
b.toDot()
b = b.delete(0)
b.toDot()
b = b.delete(1)
b.toDot()
b = b.delete(2)
b.toDot()
Primes = Nil()
for i in range(2, 101):
Primes = Primes.insert(i, True)
Primes.toDot()
for i in range(2, 51):
for j in range(i, 100 // i + 1):
Primes = Primes.delete(i * j)
display(Primes.toDot())
for i in range(2, 101):
if Primes.find(i):
print(i, end=' ')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
## Your code here
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts}
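# each word w is kept with probability sqrt(threshold / freq(w)), so very frequent words are aggressively subsampled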
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
R = np.random.randint(1, window_size + 1)
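    # a random window radius R in [1, window_size] means nearby words are sampled as targets more often than distant ones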
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start: idx] + words[idx + 1: stop + 1])
return list(target_words)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name = 'inputs')
labels = tf.placeholder(tf.int32, [None, None], name = 'labels')
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev = 0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions in the module accept the following arguments.
Step2: The first value (left) is the probability that the first classifier (the left column of x) has a higher score than the second (or that the differences are negative, if x is given as a vector).
Step3: The posterior distribution can be plotted out
Step4: Checking sensitivity to the prior
Step5: ... and on the right
Step6: The prior with a strength of 1 has negligible effect. Only a much stronger prior on the left would shift the probabilities toward NBC
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
scores = np.loadtxt('Data/accuracy_nbc_aode.csv', delimiter=',', skiprows=1, usecols=(1, 2))
names = ("NBC", "AODE")
import bayesiantests as bt
left, within, right = bt.signtest(scores, rope=0.01)
print(left, within, right)
left, within, right = bt.signtest(scores, rope=0.01, verbose=True, names=names)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.facecolor'] = 'black'
samples = bt.signtest_MC(scores, rope=0.01)
fig = bt.plot_posterior(samples,names)
plt.savefig('triangle.png',facecolor="black")
plt.show()
samples = bt.signtest_MC(scores, rope=0.01, prior_strength=1, prior_place=bt.LEFT)
fig = bt.plot_posterior(samples,names)
plt.show()
samples = bt.signtest_MC(scores, rope=0.01, prior_strength=1, prior_place=bt.RIGHT)
fig = bt.plot_posterior(samples,names)
plt.show()
samples = bt.signtest_MC(scores, rope=0.01, prior_strength=10, prior_place=bt.LEFT)
fig = bt.plot_posterior(samples,names)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to import the matplotlib library.
Step2: Matplotlib can output graphs using various backend graphics libraries, such as Tk, wxPython, etc. When running python using the command line, the graphs are typically shown in a separate window. In a Jupyter notebook, we can simply output the graphs within the notebook itself by running the %matplotlib inline magic command.
Step3: Now let's plot our first graph!
Step4: Yep, it's as simple as calling the plot function with some data, and then calling the show function!
Step5: The axes automatically match the extent of the data. We would like to give the graph a bit more room, so let's call the axis function to change the extent of each axis [xmin, xmax, ymin, ymax].
Step6: Now, let's plot a mathematical function. We use NumPy's linspace function to create an array x containing 500 floats ranging from -2 to 2, then we create a second array y computed as the square of x (to learn about NumPy, read the NumPy tutorial).
Step7: That's a bit dry, let's add a title, and x and y labels, and draw a grid.
Step8: Line style and color
Step9: You can pass a 3rd argument to change the line's style and color.
Step10: You can plot multiple lines on one graph very simply
Step11: Or simply call plot multiple times before calling show.
Step12: You can also draw simple points instead of lines. Here's an example with green dashes, red dotted line and blue triangles.
Step13: The plot function returns a list of Line2D objects (one for each line). You can set extra attributes on these lines, such as the line width, the dash style or the alpha level. See the full list of attributes in the documentation.
Step14: Saving a figure
Step15: Subplots
Step16: Note that subplot(223) is a shorthand for subplot(2, 2, 3).
Step17: If you need more complex subplot positionning, you can use subplot2grid instead of subplot. You specify the number of rows and columns in the grid, then your subplot's position in that grid (top-left = (0,0)), and optionally how many rows and/or columns it spans. For example
Step18: If you need even more flexibility in subplot positioning, check out the GridSpec documentation
Step19: Pyplot's state machine
Step20: Fortunately, Pyplot allows you to ignore the state machine entirely, so you can write beautifully explicit code. Simply call the subplots function and use the figure object and the list of axes objects that are returned. No more magic! For example
Step21: For consistency, we will continue to use pyplot's state machine in the rest of this tutorial, but we recommend using the object-oriented interface in your programs.
Step22: Note
Step23: You can also add a bounding box around your text by using the bbox attribute
Step24: Just for fun, if you want an xkcd-style plot, just draw within a with plt.xkcd() section
Step25: Legends
Step26: Non linear scales
Step27: Ticks and tickers
Step28: Polar projection
Step30: 3D projection
Step32: Another way to display this same data is via a contour plot.
Step33: Scatter plot
Step34: You may also optionally provide the scale of each point.
Step35: And as usual there are a number of other attributes you can set, such as the fill and edge colors and the alpha level.
Step36: Lines
Step37: Histograms
Step38: Images
Step39: We have loaded a 288x432 image. Each pixel is represented by a 4-element array
Step40: Tadaaa! You may want to hide the axes when you are displaying an image
Step41: It's just as easy to generate your own image
Step42: As we did not provide RGB levels, the imshow function automatically maps values to a color gradient. By default, the color gradient goes from blue (for low values) to red (for high values), but you can select another color map. For example
Step43: You can also generate an RGB image directly
Step44: Since the img array is quite small (20x30), when the imshow function displays it, it grows the image to the figure's size. By default it uses bilinear interpolation to fill the added pixels. This is why the edges look blurry.
Step45: Animations
Step46: In this example, we start by creating data points, then we create an empty plot, we define the update function that will be called at every iteration of the animation, and finally we add an animation to the plot by creating a FuncAnimation instance.
Step47: Saving animations to video files
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function, unicode_literals
import matplotlib
%matplotlib inline
# matplotlib.use("TKAgg") # use this instead in your program if you want to use Tk as your graphics backend.
import matplotlib.pyplot as plt
plt.plot([1, 2, 4, 9, 5, 3])
plt.show()
plt.plot([-3, -2, 5, 0], [1, 6, 4, 3])
plt.show()
plt.plot([-3, -2, 5, 0], [1, 6, 4, 3])
plt.axis([-4, 6, 0, 7])
plt.show()
import numpy as np
x = np.linspace(-2, 2, 500)
y = x**2
plt.plot(x, y)
plt.show()
plt.plot(x, y)
plt.title("Square function")
plt.xlabel("x")
plt.ylabel("y = x**2")
plt.grid(True)
plt.show()
plt.plot([0, 100, 100, 0, 0, 100, 50, 0, 100], [0, 0, 100, 100, 0, 100, 130, 100, 0])
plt.axis([-10, 110, -10, 140])
plt.show()
plt.plot([0, 100, 100, 0, 0, 100, 50, 0, 100], [0, 0, 100, 100, 0, 100, 130, 100, 0], "g--")
plt.axis([-10, 110, -10, 140])
plt.show()
plt.plot([0, 100, 100, 0, 0], [0, 0, 100, 100, 0], "r-", [0, 100, 50, 0, 100], [0, 100, 130, 100, 0], "g--")
plt.axis([-10, 110, -10, 140])
plt.show()
plt.plot([0, 100, 100, 0, 0], [0, 0, 100, 100, 0], "r-")
plt.plot([0, 100, 50, 0, 100], [0, 100, 130, 100, 0], "g--")
plt.axis([-10, 110, -10, 140])
plt.show()
x = np.linspace(-1.4, 1.4, 30)
plt.plot(x, x, 'g--', x, x**2, 'r:', x, x**3, 'b^')
plt.show()
x = np.linspace(-1.4, 1.4, 30)
line1, line2, line3 = plt.plot(x, x, 'g--', x, x**2, 'r:', x, x**3, 'b^')
line1.set_linewidth(3.0)
line1.set_dash_capstyle("round")
line3.set_alpha(0.2)
plt.show()
x = np.linspace(-1.4, 1.4, 30)
plt.plot(x, x**2)
plt.savefig("my_square_function.png", transparent=True)
x = np.linspace(-1.4, 1.4, 30)
plt.subplot(2, 2, 1) # 2 rows, 2 columns, 1st subplot = top left
plt.plot(x, x)
plt.subplot(2, 2, 2) # 2 rows, 2 columns, 2nd subplot = top right
plt.plot(x, x**2)
plt.subplot(2, 2, 3) # 2 rows, 2 columns, 3rd subplot = bottow left
plt.plot(x, x**3)
plt.subplot(2, 2, 4) # 2 rows, 2 columns, 4th subplot = bottom right
plt.plot(x, x**4)
plt.show()
plt.subplot(2, 2, 1) # 2 rows, 2 columns, 1st subplot = top left
plt.plot(x, x)
plt.subplot(2, 2, 2) # 2 rows, 2 columns, 2nd subplot = top right
plt.plot(x, x**2)
plt.subplot(2, 1, 2) # 2 rows, *1* column, 2nd subplot = bottom
plt.plot(x, x**3)
plt.show()
plt.subplot2grid((3,3), (0, 0), rowspan=2, colspan=2)
plt.plot(x, x**2)
plt.subplot2grid((3,3), (0, 2))
plt.plot(x, x**3)
plt.subplot2grid((3,3), (1, 2), rowspan=2)
plt.plot(x, x**4)
plt.subplot2grid((3,3), (2, 0), colspan=2)
plt.plot(x, x**5)
plt.show()
x = np.linspace(-1.4, 1.4, 30)
plt.figure(1)
plt.subplot(211)
plt.plot(x, x**2)
plt.title("Square and Cube")
plt.subplot(212)
plt.plot(x, x**3)
plt.figure(2, figsize=(10, 5))
plt.subplot(121)
plt.plot(x, x**4)
plt.title("y = x**4")
plt.subplot(122)
plt.plot(x, x**5)
plt.title("y = x**5")
plt.figure(1) # back to figure 1, current subplot is 212 (bottom)
plt.plot(x, -x**3, "r:")
plt.show()
import this
x = np.linspace(-2, 2, 200)
fig1, (ax_top, ax_bottom) = plt.subplots(2, 1, sharex=True)
fig1.set_size_inches(10,5)
line1, line2 = ax_top.plot(x, np.sin(3*x**2), "r-", x, np.cos(5*x**2), "b-")
line3, = ax_bottom.plot(x, np.sin(3*x), "r-")
ax_top.grid(True)
fig2, ax = plt.subplots(1, 1)
ax.plot(x, x**2)
plt.show()
x = np.linspace(-1.5, 1.5, 30)
px = 0.8
py = px**2
plt.plot(x, x**2, "b-", px, py, "ro")
plt.text(0, 1.5, "Square function\n$y = x^2$", fontsize=20, color='blue', horizontalalignment="center")
plt.text(px - 0.08, py, "Beautiful point", ha="right", weight="heavy")
plt.text(px, py, "x = %0.2f\ny = %0.2f"%(px, py), rotation=50, color='gray')
plt.show()
plt.plot(x, x**2, px, py, "ro")
plt.annotate("Beautiful point", xy=(px, py), xytext=(px-1.3,py+0.5),
color="green", weight="heavy", fontsize=14,
arrowprops={"facecolor": "lightgreen"})
plt.show()
plt.plot(x, x**2, px, py, "ro")
bbox_props = dict(boxstyle="rarrow,pad=0.3", ec="b", lw=2, fc="lightblue")
plt.text(px-0.2, py, "Beautiful point", bbox=bbox_props, ha="right")
bbox_props = dict(boxstyle="round4,pad=1,rounding_size=0.2", ec="black", fc="#EEEEFF", lw=5)
plt.text(0, 1.5, "Square function\n$y = x^2$", fontsize=20, color='black', ha="center", bbox=bbox_props)
plt.show()
with plt.xkcd():
plt.plot(x, x**2, px, py, "ro")
bbox_props = dict(boxstyle="rarrow,pad=0.3", ec="b", lw=2, fc="lightblue")
plt.text(px-0.2, py, "Beautiful point", bbox=bbox_props, ha="right")
bbox_props = dict(boxstyle="round4,pad=1,rounding_size=0.2", ec="black", fc="#EEEEFF", lw=5)
plt.text(0, 1.5, "Square function\n$y = x^2$", fontsize=20, color='black', ha="center", bbox=bbox_props)
plt.show()
x = np.linspace(-1.4, 1.4, 50)
plt.plot(x, x**2, "r--", label="Square function")
plt.plot(x, x**3, "g-", label="Cube function")
plt.legend(loc="best")
plt.grid(True)
plt.show()
x = np.linspace(0.1, 15, 500)
y = x**3/np.exp(2*x)
plt.figure(1)
plt.plot(x, y)
plt.yscale('linear')
plt.title('linear')
plt.grid(True)
plt.figure(2)
plt.plot(x, y)
plt.yscale('log')
plt.title('log')
plt.grid(True)
plt.figure(3)
plt.plot(x, y)
plt.yscale('logit')
plt.title('logit')
plt.grid(True)
plt.figure(4)
plt.plot(x, y - y.mean())
plt.yscale('symlog', linthreshy=0.05)
plt.title('symlog')
plt.grid(True)
plt.show()
x = np.linspace(-2, 2, 100)
plt.figure(1, figsize=(15,10))
plt.subplot(131)
plt.plot(x, x**3)
plt.grid(True)
plt.title("Default ticks")
ax = plt.subplot(132)
plt.plot(x, x**3)
ax.xaxis.set_ticks(np.arange(-2, 2, 1))
plt.grid(True)
plt.title("Manual ticks on the x-axis")
ax = plt.subplot(133)
plt.plot(x, x**3)
plt.minorticks_on()
ax.tick_params(axis='x', which='minor', bottom='off')
ax.xaxis.set_ticks([-2, 0, 1, 2])
ax.yaxis.set_ticks(np.arange(-5, 5, 1))
ax.yaxis.set_ticklabels(["min", -4, -3, -2, -1, 0, 1, 2, 3, "max"])
plt.title("Manual ticks and tick labels\n(plus minor ticks) on the y-axis")
plt.grid(True)
plt.show()
radius = 1
theta = np.linspace(0, 2*np.pi*radius, 1000)
plt.subplot(111, projection='polar')
plt.plot(theta, np.sin(5*theta), "g-")
plt.plot(theta, 0.5*np.cos(20*theta), "b-")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)
X, Y = np.meshgrid(x, y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
figure = plt.figure(1, figsize = (12, 4))
subplot3d = plt.subplot(111, projection='3d')
surface = subplot3d.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0.1)
plt.show()
plt.contourf(X, Y, Z, cmap=matplotlib.cm.coolwarm)
plt.colorbar()
plt.show()
from numpy.random import rand
x, y = rand(2, 100)
plt.scatter(x, y)
plt.show()
x, y, scale = rand(3, 100)
scale = 500 * scale ** 5
plt.scatter(x, y, s=scale)
plt.show()
for color in ['red', 'green', 'blue']:
n = 100
x, y = rand(2, n)
scale = 500.0 * rand(n) ** 5
plt.scatter(x, y, s=scale, c=color, alpha=0.3, edgecolors='blue')
plt.grid(True)
plt.show()
from numpy.random import randn
def plot_line(axis, slope, intercept, **kargs):
xmin, xmax = axis.get_xlim()
plt.plot([xmin, xmax], [xmin*slope+intercept, xmax*slope+intercept], **kargs)
x = randn(1000)
y = 0.5*x + 5 + randn(1000)*2
plt.axis([-2.5, 2.5, -5, 15])
plt.scatter(x, y, alpha=0.2)
plt.plot(1, 0, "ro")
plt.vlines(1, -5, 0, color="red")
plt.hlines(0, -2.5, 1, color="red")
plot_line(axis=plt.gca(), slope=0.5, intercept=5, color="magenta")
plt.grid(True)
plt.show()
data = [1, 1.1, 1.8, 2, 2.1, 3.2, 3, 3, 3, 3]
plt.subplot(211)
plt.hist(data, bins = 10, rwidth=0.8)
plt.subplot(212)
plt.hist(data, bins = [1, 1.5, 2, 2.5, 3], rwidth=0.95)
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.show()
data1 = np.random.randn(400)
data2 = np.random.randn(500) + 3
data3 = np.random.randn(450) + 6
data4a = np.random.randn(200) + 9
data4b = np.random.randn(100) + 10
plt.hist(data1, bins=5, color='g', alpha=0.75, label='bar hist') # default histtype='bar'
plt.hist(data2, color='b', alpha=0.65, histtype='stepfilled', label='stepfilled hist')
plt.hist(data3, color='r', histtype='step', label='step hist')
plt.hist((data4a, data4b), color=('r','m'), alpha=0.55, histtype='barstacked', label=('barstacked a', 'barstacked b'))
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.legend()
plt.grid(True)
plt.show()
import matplotlib.image as mpimg
img = mpimg.imread('my_square_function.png')
print(img.shape, img.dtype)
plt.imshow(img)
plt.show()
plt.imshow(img)
plt.axis('off')
plt.show()
img = np.arange(100*100).reshape(100, 100)
print(img)
plt.imshow(img)
plt.show()
plt.imshow(img, cmap="hot")
plt.show()
img = np.empty((20,30,3))
img[:, :10] = [0, 0, 0.6]
img[:, 10:20] = [1, 1, 1]
img[:, 20:] = [0.6, 0, 0]
plt.imshow(img)
plt.show()
plt.imshow(img, interpolation="nearest")
plt.show()
%matplotlib nbagg
import matplotlib.animation as animation
x = np.linspace(-1, 1, 100)
y = np.sin(x**2*25)
data = np.array([x, y])
fig = plt.figure()
line, = plt.plot([], [], "r-") # start with an empty plot
plt.axis([-1.1, 1.1, -1.1, 1.1])
plt.plot([-0.5, 0.5], [0, 0], "b-", [0, 0], [-0.5, 0.5], "b-", 0, 0, "ro")
plt.grid(True)
plt.title("Marvelous animation")
# this function will be called at every iteration
def update_line(num, data, line):
line.set_data(data[..., :num] + np.random.rand(2, num) / 25) # we only plot the first `num` data points.
return line,
line_ani = animation.FuncAnimation(fig, update_line, frames=100, fargs=(data, line), interval=67)
plt.show()
Writer = animation.writers['ffmpeg']
writer = Writer(fps=15, metadata=dict(artist='Me'), bitrate=1800)
line_ani.save('my_wiggly_animation.mp4', writer=writer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Illustrate playing a pre-defined melody
Step2: 2. Play a piece of music
Step3: 3. Generate a tone of a desired period for a desired number of cycles
Step4: 4. Controlling the tone
|
<ASSISTANT_TASK:>
Python Code:
from pynq import Overlay
Overlay("base.bit").download()
from pynq.iop import Grove_Buzzer
from pynq.iop import PMODB
from pynq.iop import PMOD_GROVE_G1
grove_buzzer = Grove_Buzzer(PMODB, PMOD_GROVE_G1)
grove_buzzer.play_melody()
# Play a tone
tone_period = 1200
num_cycles = 500
grove_buzzer.play_tone(tone_period,num_cycles)
from pynq.iop import ARDUINO
from pynq.iop import Arduino_Analog
from pynq.iop import ARDUINO_GROVE_A1
analog1 = Arduino_Analog(ARDUINO, ARDUINO_GROVE_A1)
rounds = 200
for i in range(rounds):
tone_period = int(analog1.read_raw()[0]/5)
num_cycles = 500
grove_buzzer.play_tone(tone_period,50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Portable Greymap (.pgm) Format
Step2: Task breakdown
Step3: The following cell has some code left out where indicated - fill it in!
Step4: Extension
Step5: Now use the add_image method of the empty image to add on the contents of all the other images in the list of imgs.
|
<ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import sys
%matplotlib inline
Nx = 72
Ny = 72
img_x = np.linspace(1, 0, Nx)
img_y = np.linspace(1, 0, Ny)
X, Y = np.meshgrid(img_x, img_y)
# Generate the gradient image - this could be stored in .pgm format!
img_z = (X+Y) * 255*0.5
print(img_z)
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.contourf(img_x, img_y, img_z, 20, cmap=cm.Greys_r)
ax.set_xlabel('x')
ax.set_ylabel('y')
# Implement the SquareImage class here:
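# A minimal sketch of one possible SquareImage implementation (an assumption, not
# the official solution). It assumes the data files are plain-text P2 .pgm files
# and that an "empty" image defaults to a 72x72 array of zeros, matching the
# gradient example above; the method names (add_image, plot) follow the exercise text.
class SquareImage:
    def __init__(self, filename=None, size=72):
        self.filename = filename
        if filename is None:
            self.z = np.zeros((size, size))
        else:
            with open(filename) as f:
                # collect all whitespace-separated tokens, skipping comment lines
                tokens = []
                for line in f:
                    if not line.startswith('#'):
                        tokens.extend(line.split())
            assert tokens[0] == 'P2', 'only plain-text P2 .pgm files are handled here'
            width, height = int(tokens[1]), int(tokens[2])
            # tokens[3] is the maximum grey value; the pixel data follows it
            self.z = np.array(tokens[4:], dtype=float).reshape(height, width)

    def __str__(self):
        return 'SquareImage({}, shape={})'.format(self.filename, self.z.shape)

    def add_image(self, other):
        # accumulate another image's grey values into this one (shapes must match)
        self.z = self.z + other.z

    def plot(self):
        fig = plt.figure()
        ax = fig.add_subplot(111, aspect='equal')
        ax.imshow(self.z, cmap=cm.Greys_r)
        ax.set_xlabel('x')
        ax.set_ylabel('y')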
# The image file names
names = ['img1.pgm', 'img2.pgm', 'img3.pgm', 'img4.pgm']
files = [os.path.join('data', name) for name in names]
# Instantiate the class and plot each picture.
imgs = []
for f in files:
# Produce an instance of SquareImage, passing 'f' as an argument
# image = ...
print(image)
imgs.append(image) # objects are first class instances: add to a list
# Use the 'plot' method to plot the instance
# ...
# Create an 'empty' SquareImage
combined = SquareImage()
print(combined.z)
# EXERCISE: Loop over the list of images and add on each image's z values to `combined':
# ...
# ...
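# One possible completion (assuming the SquareImage sketch above):
# for img in imgs:
#     combined.add_image(img)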
# Plot
combined.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize the components with cfg files that, for simplicity, use the same time step and run duration
Step2: Store initial values of time, snow depth, and air temperature
Step3: Run the coupled models to completion. In each time step, perform the following actions
Step4: Finalize the components
Step5: Plot snow depth versus time.
|
<ASSISTANT_TASK:>
Python Code:
from cmt.components import Meteorology, SnowDegreeDay
met, sno = Meteorology(), SnowDegreeDay()
met.initialize('./input/meteorology-2.cfg')
sno.initialize('./input/snow_degree_day-2.cfg')
time = [met.get_current_time()]
snow_depth = [sno.get_value('snowpack__depth').max()]
air_temp = [met.get_value('atmosphere_bottom_air__temperature').max()]
count = 1
while met.get_current_time() < met.get_end_time():
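    # two-way coupling: pass meteorology outputs to the snow component, advance it,
    # then feed the snow outputs back to meteorology before advancing that component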
T_air = met.get_value('atmosphere_bottom_air__temperature')
P_snow = met.get_value('atmosphere_water__snowfall_leq-volume_flux')
T_surf = met.get_value('land_surface__temperature')
rho_H2O = met.get_value('water-liquid__mass-per-volume_density')
sno.set_value('atmosphere_bottom_air__temperature', T_air)
sno.set_value('atmosphere_water__snowfall_leq-volume_flux', P_snow)
sno.set_value('land_surface__temperature', T_surf)
sno.set_value('water-liquid__mass-per-volume_density', rho_H2O)
sno.update(sno.get_time_step()*count)
rho_snow = sno.get_value('snowpack__z_mean_of_mass-per-volume_density')
h_snow = sno.get_value('snowpack__depth')
h_swe = sno.get_value('snowpack__liquid-equivalent_depth')
SM = sno.get_value('snowpack__melt_volume_flux')
met.set_value('snowpack__z_mean_of_mass-per-volume_density', rho_snow)
met.set_value('snowpack__depth', h_snow)
met.set_value('snowpack__liquid-equivalent_depth', h_swe)
met.set_value('snowpack__melt_volume_flux', SM)
met.update(met.get_time_step()*count)
time.append(met.get_current_time())
snow_depth.append(sno.get_value('snowpack__depth').max())
air_temp.append(met.get_value('atmosphere_bottom_air__temperature').max())
count += 1
print time
print snow_depth
print air_temp
met.finalize(), sno.finalize()
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(time[1:], snow_depth[1:])
plt.title('Snow depth versus time')
plt.xlabel('Time [s]')
plt.ylabel('Snow depth [m]')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
    """Compute the 2d quantum well wave function."""
    # YOUR CODE HERE
    s = 2 / L * np.sin((nx * np.pi * x) / L) * np.sin((ny * np.pi * y) / L)
return s
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
# YOUR CODE HERE
x = np.linspace(0.0,1.0,100)
y = np.linspace(0.0,1.0,100)
m,n = np.meshgrid(x,y)
plt.contour(well2d(m,n,3,2))
plt.title('Wave Function')
plt.xlabel('x')
plt.ylabel('y')
plt.box(False)
assert True # use this cell for grading the contour plot
# YOUR CODE HERE
plt.pcolormesh(well2d(m,n,3,2))
plt.title('Wave Function')
plt.xlabel('x')
plt.ylabel('y')
plt.box(False)
assert True # use this cell for grading the pcolor plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Mining
Step2: Building Features
Step3: Adding New Columns for Features in Matches DataFrame
Step4: Visualizations for Features vs. Response
Step5: Predictions
Step6: Training and Testing on Entire Data
Step7: Splitting train and test using train_test_split
Step8: Splitting Training Set (2008-2013) and Test Set (2013-2015) based on Seasons
Step9: Support Vector Machines
Step10: Random Forests
Step11: Naive Bayes Classifier
Step12: Cross Validation
Step13: Gradient Boosting
Step14: Get Prediction for Web App
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np # imports a fast numerical programming library
import matplotlib.pyplot as plt #sets up plotting under plt
import pandas as pd #lets us handle data as dataframes
#sets up pandas table display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
from __future__ import division
# Reading in the data
allmatches = pd.read_csv("../data/matches.csv")
alldeliveries = pd.read_csv("../data/deliveries.csv")
allmatches.head(10)
# Selecting Seasons 2008 - 2015
matches_seasons = allmatches.loc[allmatches['season'] != 2016]
deliveries_seasons = alldeliveries.loc[alldeliveries['match_id'] < 518]
# Selecting teams DD, KKR, MI, RCB, KXIP, RR, CSK
matches_teams = matches_seasons.loc[(matches_seasons['team1'].isin(['Kolkata Knight Riders', \
'Royal Challengers Bangalore', 'Delhi Daredevils', 'Chennai Super Kings', 'Rajasthan Royals', \
'Mumbai Indians', 'Kings XI Punjab'])) & (matches_seasons['team2'].isin(['Kolkata Knight Riders', \
'Royal Challengers Bangalore', 'Delhi Daredevils', 'Chennai Super Kings', 'Rajasthan Royals', \
'Mumbai Indians', 'Kings XI Punjab']))]
matches_team_matchids = matches_teams.id.unique()
deliveries_teams = deliveries_seasons.loc[deliveries_seasons['match_id'].isin(matches_team_matchids)]
print "Teams selected:\n"
for team in matches_teams.team1.unique():
print team
# Neglect matches with inconsistencies like 'No Result' or 'D/L Applied'
matches = matches_teams.loc[(matches_teams['result'] == 'normal') & (matches_teams['dl_applied'] == 0)]
matches_matchids = matches.id.unique()
deliveries = deliveries_teams.loc[deliveries_teams['match_id'].isin(matches_matchids)]
# Verifying consistency between datasets
(matches.id.unique() == deliveries.match_id.unique()).all()
# Batsman Strike Rate Calculation (Top 5 Batsmen)
# Team 1: Batting First; Team 2: Fielding First
def getMatchDeliveriesDF(match_id):
return deliveries.loc[deliveries['match_id'] == match_id]
def getInningsOneBatsmen(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 1].batsman.unique()[0:5]
def getInningsTwoBatsmen(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 2].batsman.unique()[0:5]
def getBatsmanStrikeRate(batsman, match_id):
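    # use only deliveries from earlier matches (match_id smaller than the current one)
    # so that no information from the match being predicted leaks into the feature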
onstrikedeliveries = deliveries.loc[(deliveries['match_id'] < match_id) & (deliveries['batsman'] == batsman)]
total_runs = onstrikedeliveries['batsman_runs'].sum()
total_balls = onstrikedeliveries.shape[0]
if total_balls != 0:
return (total_runs/total_balls) * 100
else:
return None
def getTeamStrikeRate(batsmen, match_id):
strike_rates = []
for batsman in batsmen:
bsr = getBatsmanStrikeRate(batsman, match_id)
if bsr != None:
strike_rates.append(bsr)
return np.mean(strike_rates)
def getAverageStrikeRates(match_id):
match_deliveries = getMatchDeliveriesDF(match_id)
innOneBatsmen = getInningsOneBatsmen(match_deliveries)
innTwoBatsmen = getInningsTwoBatsmen(match_deliveries)
teamOneSR = getTeamStrikeRate(innOneBatsmen, match_id)
teamTwoSR = getTeamStrikeRate(innTwoBatsmen, match_id)
return teamOneSR, teamTwoSR
# testing functionality
getAverageStrikeRates(517)
# Bowler Rating : Wickets/Run (Higher the Better)
# Team 1: Batting First; Team 2: Fielding First
def getInningsOneBowlers(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 1].bowler.unique()[0:4]
def getInningsTwoBowlers(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 2].bowler.unique()[0:4]
def getBowlerWPR(bowler, match_id):
balls = deliveries.loc[(deliveries['match_id'] < match_id) & (deliveries['bowler'] == bowler)]
total_runs = balls['total_runs'].sum()
total_wickets = balls.loc[balls['dismissal_kind'].isin(['caught', 'bowled', 'lbw', \
'caught and bowled', 'stumped'])].shape[0]
if balls.shape[0] > 0:
return (total_wickets/total_runs) * 100
else:
return None
def getTeamWPR(bowlers, match_id):
WPRs = []
for bowler in bowlers:
bwpr = getBowlerWPR(bowler, match_id)
if bwpr != None:
WPRs.append(bwpr)
return np.mean(WPRs)
def getAverageWPR(match_id):
match_deliveries = getMatchDeliveriesDF(match_id)
innOneBowlers = getInningsOneBowlers(match_deliveries)
innTwoBowlers = getInningsTwoBowlers(match_deliveries)
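    # Team 1 bats first, so Team 1's bowlers bowl in innings 2 (and Team 2's bowlers in innings 1)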
teamOneWPR = getTeamWPR(innTwoBowlers, match_id)
teamTwoWPR = getTeamWPR(innOneBowlers, match_id)
return teamOneWPR, teamTwoWPR
# testing functionality
getAverageWPR(517)
# MVP Score (Total number of Player of the Match awards in a squad)
# Team 1: Batting First; Team 2: Fielding First
def getAllInningsOneBatsmen(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 1].batsman.unique()
def getAllInningsTwoBatsmen(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 2].batsman.unique()
def getAllInningsOneBowlers(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 1].bowler.unique()
def getAllInningsTwoBowlers(match_deliveries):
return match_deliveries.loc[match_deliveries['inning'] == 2].bowler.unique()
def makeSquad(batsmen, bowlers):
p = []
p = np.append(p, batsmen)
for i in bowlers:
if i not in batsmen:
p = np.append(p, i)
return p
def getPlayerMVPAwards(player, match_id):
return matches.loc[(matches['player_of_match'] == player) & (matches['id'] < match_id)].shape[0]
def getTeamMVPAwards(squad, match_id):
num_awards = 0
for player in squad:
num_awards += getPlayerMVPAwards(player, match_id)
return num_awards
def compareMVPAwards(match_id):
match_deliveries = getMatchDeliveriesDF(match_id)
innOneBatsmen = getAllInningsOneBatsmen(match_deliveries)
innTwoBatsmen = getAllInningsTwoBatsmen(match_deliveries)
innOneBowlers = getAllInningsOneBowlers(match_deliveries)
innTwoBowlers = getAllInningsTwoBowlers(match_deliveries)
teamOneSquad = makeSquad(innOneBatsmen, innTwoBowlers)
teamTwoSquad = makeSquad(innTwoBatsmen, innOneBowlers)
teamOneAwards = getTeamMVPAwards(teamOneSquad, match_id)
teamTwoAwards = getTeamMVPAwards(teamTwoSquad, match_id)
return teamOneAwards, teamTwoAwards
compareMVPAwards(517)
# Prints a comparison between two teams based on squad attributes
def generateSquadRating(match_id):
gameday_teams = deliveries.loc[(deliveries['match_id'] == match_id)].batting_team.unique()
teamOne = gameday_teams[0]
teamTwo = gameday_teams[1]
teamOneSR, teamTwoSR = getAverageStrikeRates(match_id)
teamOneWPR, teamTwoWPR = getAverageWPR(match_id)
teamOneMVPs, teamTwoMVPs = compareMVPAwards(match_id)
print "Comparing squads for " + teamOne + " vs " + teamTwo
print "\nAverage Strike Rate for Batsmen in " + str(teamOne) + " : " + str(teamOneSR)
print "\nAverage Strike Rate for Batsmen in " + str(teamTwo) + " : " + str(teamTwoSR)
print "\nBowler Rating for " + str(teamOne) + " : " + str(teamOneWPR)
print "\nBowler Rating for " + str(teamTwo) + " : " + str(teamTwoWPR)
print "\nNumber of MVP Awards in " + str(teamOne) + " : " + str(teamOneMVPs)
print "\nNumber of MVP Awards in " + str(teamTwo) + " : " + str(teamTwoMVPs)
generateSquadRating(517)
# Previous Encounters (All games played in previous matches)
# Win % for Team 1 against Team 2
def getTeam1(match_id):
return matches.loc[matches["id"] == match_id].team1.unique()
def getTeam2(match_id):
return matches.loc[matches["id"] == match_id].team2.unique()
def getPreviousEncDF(match_id):
team1 = getTeam1(match_id)
team2 = getTeam2(match_id)
return matches.loc[(matches["id"] < match_id) & (((matches["team1"].isin(team1)) & (matches["team2"].isin(team2))) | ((matches["team1"].isin(team2)) & (matches["team2"].isin(team1))))]
def getTeamWBR(match_id, team):
WBR = 0
DF = getPreviousEncDF(match_id)
winnerDF = DF.loc[DF["winner"] == team]
WBR = winnerDF['win_by_runs'].sum()
return WBR
def getTeamWBW(match_id, team):
WBW = 0
DF = getPreviousEncDF(match_id)
winnerDF = DF.loc[DF["winner"] == team]
WBW = winnerDF['win_by_wickets'].sum()
return WBW
def getTeamWinPerc(match_id):
dF = getPreviousEncDF(match_id)
timesPlayed = dF.shape[0]
team1 = getTeam1(match_id)[0].strip("[]")
timesWon = dF.loc[dF["winner"] == team1].shape[0]
if timesPlayed != 0:
winPerc = (timesWon/timesPlayed) * 100
else:
winPerc = 0
return winPerc
def getBothTeamStats(match_id):
DF = getPreviousEncDF(match_id)
team1 = getTeam1(match_id)[0].strip("[]")
team2 = getTeam2(match_id)[0].strip("[]")
timesPlayed = DF.shape[0]
timesWon = DF.loc[DF["winner"] == team1].shape[0]
WBRTeam1 = getTeamWBR(match_id, team1)
WBRTeam2 = getTeamWBR(match_id, team2)
WBWTeam1 = getTeamWBW(match_id, team1)
WBWTeam2 = getTeamWBW(match_id, team2)
print "Out of {} times in the past {} have won {} times({}%) from {}".format(timesPlayed, team1, timesWon, getTeamWinPerc(match_id), team2)
print "{} won by {} total runs and {} total wickets.".format(team1, WBRTeam1, WBWTeam1)
print "{} won by {} total runs and {} total wickets.".format(team2, WBRTeam2, WBWTeam2)
#Testing functionality
getBothTeamStats(517)
# Recent Form (Win Percentage of the 3 previous matches of a team in the same season)
# Higher the better
def getMatchYear(match_id):
return matches.loc[matches["id"] == match_id].season.unique()
def getTeam1DF(match_id, year):
team1 = getTeam1(match_id)
return matches.loc[(matches["id"] < match_id) & (matches["season"] == year) & ((matches["team1"].isin(team1)) | (matches["team2"].isin(team1)))].tail(3)
def getTeam2DF(match_id, year):
team2 = getTeam2(match_id)
return matches.loc[(matches["id"] < match_id) & (matches["season"] == year) & ((matches["team1"].isin(team2)) | (matches["team2"].isin(team2)))].tail(3)
def getTeamWinPercentage(match_id):
year = int(getMatchYear(match_id))
team1 = getTeam1(match_id)[0].strip("[]")
team2 = getTeam2(match_id)[0].strip("[]")
team1DF = getTeam1DF(match_id, year)
team2DF = getTeam2DF(match_id, year)
team1TotalMatches = team1DF.shape[0]
team1WinMatches = team1DF.loc[team1DF["winner"] == team1].shape[0]
team2TotalMatches = team2DF.shape[0]
team2WinMatches = team2DF.loc[team2DF["winner"] == team2].shape[0]
if (team1TotalMatches != 0) and (team2TotalMatches !=0):
winPercTeam1 = ((team1WinMatches / team1TotalMatches) * 100)
winPercTeam2 = ((team2WinMatches / team2TotalMatches) * 100)
elif (team1TotalMatches != 0) and (team2TotalMatches ==0):
winPercTeam1 = ((team1WinMatches / team1TotalMatches) * 100)
winPercTeam2 = 0
elif (team1TotalMatches == 0) and (team2TotalMatches !=0):
winPercTeam1 = 0
winPercTeam2 = ((team2WinMatches / team2TotalMatches) * 100)
else:
winPercTeam1 = 0
winPercTeam2 = 0
return winPercTeam1, winPercTeam2
getTeamWinPercentage(517)
#Function to implement all features
def getAllFeatures(match_id):
generateSquadRating(match_id)
print ("\n")
getBothTeamStats(match_id)
print("\n")
getTeamWinPercentage(match_id)
#Testing Functionality
getAllFeatures(517)
# New Column for Difference of Average Strike rates (First Team SR - Second Team SR)
# [Negative value means Second team is better]
firstTeamSR = []
secondTeamSR = []
for i in matches['id'].unique():
P, Q = getAverageStrikeRates(i)
firstTeamSR.append(P), secondTeamSR.append(Q)
firstSRSeries = pd.Series(firstTeamSR)
secondSRSeries = pd.Series(secondTeamSR)
matches["Avg_SR_Difference"] = firstSRSeries.values - secondSRSeries.values
# New Column for Difference of Wickets Per Run (First Team WPR - Second Team WPR)
# [Negative value means Second team is better]
firstTeamWPR = []
secondTeamWPR = []
for i in matches['id'].unique():
R, S = getAverageWPR(i)
firstTeamWPR.append(R), secondTeamWPR.append(S)
firstWPRSeries = pd.Series(firstTeamWPR)
secondWPRSeries = pd.Series(secondTeamWPR)
matches["Avg_WPR_Difference"] = firstWPRSeries.values - secondWPRSeries.values
# New column for difference of MVP Awards
# (Negative value means Second team is better)
firstTeamMVP = []
secondTeamMVP = []
for i in matches['id'].unique():
T, U = compareMVPAwards(i)
firstTeamMVP.append(T), secondTeamMVP.append(U)
firstMVPSeries = pd.Series(firstTeamMVP)
secondMVPSeries = pd.Series(secondTeamMVP)
matches["Total_MVP_Difference"] = firstMVPSeries.values - secondMVPSeries.values
# New column for Win Percentage of Team 1 in previous encounters
firstTeamWP = []
for i in matches['id'].unique():
WP = getTeamWinPerc(i)
firstTeamWP.append(WP)
firstWPSeries = pd.Series(firstTeamWP)
matches["Prev_Enc_Team1_WinPerc"] = firstWPSeries.values
# New column for Recent form(Win Percentage in the current season) of 1st Team compared to 2nd Team
# (Negative means 2nd team has higher win percentage)
firstTeamRF = []
secondTeamRF = []
for i in matches['id'].unique():
K, L = getTeamWinPercentage(i)
firstTeamRF.append(K), secondTeamRF.append(L)
firstRFSeries = pd.Series(firstTeamRF)
secondRFSeries = pd.Series(secondTeamRF)
matches["Total_RF_Difference"] = firstRFSeries.values - secondRFSeries.values
#Create Column for Team 1 Winning Status (1 = Won, 0 = Lost)
matches['team1Winning'] = np.where(matches['team1'] == matches['winner'], 1, 0)
#Testing
matches
matches.boxplot(column = 'Avg_SR_Difference', by='team1Winning', showfliers= False)
matches.boxplot(column = 'Avg_WPR_Difference', by='team1Winning', showfliers= False)
matches.boxplot(column = 'Total_MVP_Difference', by='team1Winning', showfliers= False)
matches.boxplot(column = 'Prev_Enc_Team1_WinPerc', by='team1Winning', showfliers= False)
matches.boxplot(column = 'Total_RF_Difference', by='team1Winning', showfliers= False)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from patsy import dmatrices
y, X = dmatrices('team1Winning ~ 0 + Avg_SR_Difference + Avg_WPR_Difference + Total_MVP_Difference + Prev_Enc_Team1_WinPerc + \
Total_RF_Difference', matches, return_type="dataframe")
y_arr = np.ravel(y)
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y_arr)
# check the accuracy on the training set
print "Accuracy is", model.score(X, y_arr)*100, "%"
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y_arr, random_state = 0)
# Logistic Regression on train_test_split
model2 = LogisticRegression()
model2.fit(X_train, y_train)
# predict class labels for the test set
predicted = model2.predict(X_test)
# generate evaluation metrics
print "Accuracy is ", metrics.accuracy_score(y_test, predicted)*100, "%"
# KNN Classification on train_test_split
k_range = list(range(1, 61))
k_score = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
k_score.append(metrics.accuracy_score(y_test, y_pred))
plt.plot(k_range, k_score)
# Best values of k in train_test_split
knn = KNeighborsClassifier(n_neighbors = 50)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print "Accuracy is ", metrics.accuracy_score(y_test, y_pred)*100, "%"
X_timetrain = X.loc[X.index < 398]
Y_timetrain = y.loc[y.index < 398]
Y_timetrain_arr = np.ravel(Y_timetrain)
X_timetest = X.loc[X.index >= 398]
Y_timetest = y.loc[y.index >= 398]
Y_timetest_arr = np.ravel(Y_timetest)
X_timetest
# Logistic Regression on time-based split sets
model3 = LogisticRegression()
model3.fit(X_timetrain, Y_timetrain_arr)
timepredicted = model3.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, timepredicted)*100, "%"
# KNN Classification on time-based split sets
k_range = list(range(1, 32))
k_score = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_timetrain, Y_timetrain_arr)
y_pred = knn.predict(X_timetest)
k_score.append(metrics.accuracy_score(Y_timetest_arr, y_pred))
plt.plot(k_range, k_score)
# Best values of k in time-based split data
knn1 = KNeighborsClassifier(n_neighbors = 31)
knn1.fit(X_timetrain, Y_timetrain_arr)
y_pred = knn1.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, y_pred)*100, "%"
clf = svm.SVC(gamma=0.001, C=10)
clf.fit(X_timetrain, Y_timetrain_arr)
clf_pred = clf.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, clf_pred)*100, "%"
rfc = RandomForestClassifier(n_jobs = -1, random_state = 1)
rfc.fit(X_timetrain, Y_timetrain_arr)
rfc_pred = rfc.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, rfc_pred)*100, "%"
fi = zip(X.columns, rfc.feature_importances_)
print "Feature Importance according to Random Forests Model\n"
for i in fi:
print i[0], ":", i[1]
gclf = GaussianNB()
gclf.fit(X_timetrain, Y_timetrain_arr)
gclf_pred = gclf.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, gclf_pred) *100, "%"
from sklearn.cross_validation import cross_val_score
rfc = LogisticRegression()
scores = cross_val_score(rfc, X, y_arr, cv=10, scoring='accuracy')
scores
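# (Added) Summarise the 10-fold cross-validation scores above with their mean accuracy
print "Mean 10-fold CV accuracy is", scores.mean()*100, "%"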
k_range = list(range(1, 61))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y_arr, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
plt.plot(k_range, k_scores)
from xgboost import XGBClassifier
xgbtest = XGBClassifier(
learning_rate =1,
n_estimators=2,
max_depth=6,
min_child_weight=8,
gamma=0.1,
subsample=0.9,
colsample_bytree=0.8,
objective= 'binary:logistic',
scale_pos_weight=1,
seed=27)
xgbtest.fit(X_timetrain, Y_timetrain_arr)
xgbtest_pred = xgbtest.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, xgbtest_pred) *100, "%"
def getPrediction(match_id):
'''Returns the prediction for the given match
Args: match_id (int): Match ID for the required game
Returns: String: Predicted winner of the game and probability of victory
'''
results = {}
match_row = matches.loc[matches['id'] == match_id]
team1name = match_row.team1.unique()[0]
team2name = match_row.team2.unique()[0]
toPredict = X_timetest.loc[X_timetest.index == match_id-1].values
prediction_prob = knn1.predict_proba(toPredict)
prediction = knn1.predict(toPredict)
if prediction[0] > 0:
results['name'] = str(team1name)
results['prob'] = float(prediction_prob[0][1])*100
else:
results['name'] = str(team2name)
results['prob'] = float(prediction_prob[0][0])*100
return results
getPrediction(517)
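# (Added) A small, hypothetical usage sketch: loop over a few match ids from the time-based
# test split (ids 399 and above) and report the predicted winner. The ids below are
# assumptions chosen for illustration only.
for sample_id in [450, 500, 517]:
    res = getPrediction(sample_id)
    print "Match {}: predicted winner is {} ({:.1f}% probability)".format(sample_id, res['name'], res['prob'])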
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Foreword
Step2: There is a factor of 10 in execution time between the "user defined" method and the "builtin" method.
Step3: The following code will read several gigabytes of data, yet the maximum memory consumption of the process will only grow by a few MB. Moreover, this code manipulates dictionaries that would be awkward to fit into a pandas DataFrame.
Step4: For today's lab session, the data we are going to use easily fits in memory, and in general, when developing code to handle large volumes of data, we test it on data volumes that do fit in memory.
Step5: NoSQL / JSON therefore offers an alternative to the following classical schema
Step6: mongodb (pymongo), on the other hand, has no notion of columns, only documents, whose format is analogous to a JSON object.
Step7: However, some common SQL constructs, here the groupby, are noticeably more complex to write in mongodb.
Step8: My feedback
Step9: And pandas?
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import pyensae.datasource
pyensae.datasource.download_data("twitter_for_network_100000.db.zip")
import numpy as np
def my_sum(l):
res = 0
for it in l:
res += it
return res
l = list(range(100000))
a = np.arange(100000)
print("User defined method or cross-method")
%timeit my_sum(a) # user defined with numpy array
%timeit sum(a) # built-in with numpy array
%timeit np.sum(l) # numpy function with list
%timeit my_sum(l) # user defined with list
print("Builtin function")
%timeit sum(l) # built-in with list
print("Numpy function")
%timeit np.sum(a) # numpy function
%timeit a.sum() # numpy method
import os, psutil, gc, sys
if not sys.platform.startswith("win"):
import resource
def memory_usage_psutil():
gc.collect()
process = psutil.Process(os.getpid())
mem = process.memory_info()[0] / float(2 ** 20)
print( "Memory used : %i MB" % mem )
if not sys.platform.startswith("win"):
print( "Max memory usage : %i MB" % (resource.getrusage(resource.RUSAGE_SELF).ru_maxrss//1024) )
memory_usage_psutil()
import cytoolz as ct # import groupby, valmap, compose
import cytoolz.curried as ctc ## pipe, map, filter, get
import sqlite3
import pprint
try:
import ujson as json
except:
print("ujson not available")
import json
conn_sqlite = sqlite3.connect("twitter_for_network_100000.db")
cursor_sqlite = conn_sqlite.cursor()
cursor_sqlite.execute('SELECT content FROM tw_users' )
object_to_sum = ctc.pluck( "followers_count", ctc.map( json.loads, ctc.pluck( 0, cursor_sqlite ) ) )
print(sum(object_to_sum))
memory_usage_psutil()
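# (Added) For comparison, a minimal sketch of the same streaming sum written with a plain
# generator expression instead of cytoolz; memory consumption should stay just as flat.
cursor_sqlite.execute('SELECT content FROM tw_users')
print(sum(json.loads(row[0])["followers_count"] for row in cursor_sqlite))
memory_usage_psutil()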
import pprint
cursor_sqlite.execute('SELECT content FROM tw_users LIMIT 1')
user = cursor_sqlite.fetchone()[0]
print("#"*15 + " user raw json " + "#"*15)
print( user )
print("#"*15 + " user as python dict " + "#"*15)
pprint.pprint( json.loads( user ) )
cursor_sqlite.execute('SELECT content FROM tw_status LIMIT 1')
print("#"*15 + " status as python dict " + "#"*15)
pprint.pprint( json.loads( cursor_sqlite.fetchone()[0] ) )
try:
import psycopg2
from psycopg2.extras import Json
postgre_ok = True
except ImportError:
postgre_ok = False
if postgre_ok:
db_name = 'cours_ensae'
conn_string = "host='localhost' dbname='{0}' user='python' password='kyojin'".format( db_name )
try:
conn_psql = psycopg2.connect(conn_string)
cursor_psql = conn_psql.cursor()
postgre_ok = True
except psycopg2.OperationalError:
postgre_ok = False
if postgre_ok:
conn_psql.server_version
if postgre_ok:
conn_psql.rollback()
if postgre_ok:
def get_data_sql(doc_id):
cursor_psql.execute("SELECT id, company FROM document WHERE id = %s", (doc_id,))
res_1 = cursor_psql.fetchone()
cursor_psql.execute("SELECT id FROM ticket WHERE document_id = %s ORDER BY id", (doc_id,))
res_2 = cursor_psql.fetchall()
tickets_id = [it[0] for it in res_2 ]
cursor_psql.execute("SELECT id FROM coupon WHERE ticket_id = ANY( %s ) ORDER BY id", (tickets_id,))
res_3 = cursor_psql.fetchall()
return res_1 + (res_2,) + (res_3,)
%timeit get_data_sql(10000)
get_data_sql(10000)
if postgre_ok:
def get_data_sql_join(doc_id):
cursor_psql.execute("SELECT d.id, d.company, t.id, c.id FROM document as d \
JOIN ticket as t on d.id = t.document_id \
JOIN coupon as c on t.id = c.ticket_id \
WHERE d.id = %s", (doc_id,))
return cursor_psql.fetchall()
%timeit get_data_sql_join(10000)
get_data_sql_join(10000)
if postgre_ok:
def get_data_nosql(doc_id):
cursor_psql.execute("SELECT id, company, content FROM document_nosql WHERE id = %s", (doc_id,))
return cursor_psql.fetchone()
%timeit get_data_nosql(10000)
get_data_nosql(10000)
mongo = False
if mongo:
import pymongo
mongo_client = pymongo.MongoClient( 'localhost', 27017 )
mongo_db = mongo_client.ensae_db
mongo_db.table_for_ensae.delete_many( {} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Martin', 'prenom' : 'Nicolas', 'grades': [20,18,7,12]} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Dupont', 'prenom' : 'Jean', 'grades': [11,5,7,12]} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Martin', 'prenom' : 'Gilles', 'grades': [10,10,10,10]} )
user = mongo_db.table_for_ensae.find_one( {'nom' : 'Dupont'} )
user_list = mongo_db.table_for_ensae.find( {} )
_ = list(map( pprint.pprint, user_list ))
if mongo:
result = mongo_db.table_for_ensae.group(['nom'],
None,
{'list': []}, # initial
'function(obj, prev) {prev.list.push(obj)}')
pprint.pprint( result )
cursor_sqlite.execute("SELECT content FROM tw_users LIMIT 10000" )
with open("tw_users.json", 'w') as f:
for it_user in cursor_sqlite:
f.write(it_user[0])
f.write("\n")
with open("tw_users.json", 'r') as f:
nb_total_followers = 0
for it_user in f:
nb_total_followers += json.loads( it_user )["followers_count"]
print( nb_total_followers )
import pandas as pd
df = pd.read_sql( "SELECT id, screen_name from tw_users", conn_sqlite )
print( df.head() )
print( df.shape )
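# (Added) A small sketch of the pandas counterpart to the mongodb group-by shown earlier:
# once the rows are in a DataFrame, aggregations become one-liners. Grouping by the first
# letter of the screen name is purely illustrative and not part of the original lab.
print( df.groupby(df['screen_name'].str[0].str.lower()).size().head() )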
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I am repeating myself here! Wouldn't it be better if we can write something once and use it multiple times?
Step2: Now I can write some code which does the same thing 3 times without repeating myself
Step3: This is good! But there is one more important feature of functions. The return statement is used with a function to give back something to the code which asked the function to run.
Step4: Wouldn't it be better if we could ask the function to check if the argument was a multiple of anything?
Step5: Functions can have many arguments or none at all, in which case they might do the same thing every time
Step6: Question - print a, b and c. What do you see?
Step7: You will see that this function does the same thing every time!
Step8: Functions can also have default parameters, meaning you can leave an argument blank when you call the function if you need to
Step9: Question - Rewrite my_function(a,b) to check if a==b first then evaluate the same as before
Step10: Using modules and import
Step11: random is a library for random number generation and other randomness based tools. To use the random library you have to tell Python that you need it with the import statement.
Step12: Remember when I said that everything is an object in Python? Python modules have an ID and a type just like variables.
Step13: How do I use modules?
Step14: Random module functions with parameters
Step15: You can use functions from modules inside other functions
Step16: Final task - Write a function which randomly selects at least 3 integers, and returns a different string for each integer. You can select the range of random integers to choose between, but it will be more work if you choose a lot!
|
<ASSISTANT_TASK:>
Python Code:
a = 4
b = 8
c = 9
if a%2 == 0:
print('Multiple of 2')
else:
print('Not a multiple of 2')
if b%2 == 0:
print('Multiple of 2')
else:
print('Not a multiple of 2')
if c%2 == 0:
print('Multiple of 2')
else:
print('Not a multiple of 2')
# The def keyword is used to define a function
# Num is a "parameter" of the function, the input you provide to evaluate
def is_multiple_of_2(num):
if num%2==0: # Remember this is the modulus operator!
print('Multiple of 2')
else:
print('Not a multiple of 2')
# I can "call" the function I just wrote 3 times without repeating code
is_multiple_of_2(a)
is_multiple_of_2(b)
is_multiple_of_2(c)
def is_multiple_of_2_better(num):
if num%2==0:
return_string = 'Multiple of 2'
else:
return_string = 'Not a multiple of 2'
return return_string
print(is_multiple_of_2_better(a))
print(is_multiple_of_2_better(b))
print(is_multiple_of_2_better(c))
def is_multiple(num,multiple):
if num%multiple==0:
return_string = 'Multiple'
else:
return_string = 'Not a multiple'
return return_string
# Question - How could I make this even better?
multiple = 2 # What happens to the output if you change this to 3?
print(is_multiple(a,multiple))
print(is_multiple(b,multiple))
print(is_multiple(c,multiple))
def best_programming_language_ever():
lang = 'Python'
return lang
a = best_programming_language_ever() # a is equal to whatever the function returns
b = best_programming_language_ever()
c = best_programming_language_ever()
# TO DO
# END TODO
# TO DO
# I started you off with the first line (called the Function signature)
def my_function(a, b):
# END TODO
# Now write some code which uses this function called my_function with these variable pairs
a = 100
b = 1000
d = 'a'
e = 'b'
# END TO DO
def is_multiple_with_default(num,multiple=2):
if num%multiple==0:
return 'Multiple'
else:
return 'Not a multiple'
# Now if I dont specify what multiple is then it automatically checks for multiple of 2
is_multiple_with_default(1)
# Question - Can you break this function?
# We can provide an additional condition to make sure that the function doesn't break
def is_multiple_with_default_better(num,multiple=2):
if multiple == 0:
return 'Zero argument for multiple not allowed'
elif num%multiple==0:
return 'Multiple'
else:
return 'Not a multiple'
# Now the function is safe to use with 0 as multiple.
is_multiple_with_default_better(2,0)
# TODO
def my_function(a,b):
# END TODO
# Run this block to check if my_function() is working correctly
print(my_function(1,2))
print(my_function(2,1))
print(my_function('a','b'))
print(my_function(50,50))
# Remember this from the Give It a Go Session?
import random
# To import random but call it something shorter (like rand) you can run
import random as rand
# Everything is a first class object!
print(id(rand))
print(type(rand))
# Random() is a function. You can tell by the parentheses ()
rand.random()
# randint(a,b) is a function in the random which selects a random integer x from a <= x <= b
rand.randint(1,10)
# Can you call randint() without using the rand. before it?
# randint(1,10)
# Question - Is there a way to make this line work?
def weather():
num = rand.random()
if num > 0.5:
return "It's going to be sunny today!"
else:
return "It's going to rain today :("
# Now the weather function uses the random module to guess the weather
print(weather())
# We used this function before
uni = rand.choice(['LSE','UCL','Imperial'])
print(uni)
# We can use a different function to select 2 or more random universities
multi_uni = rand.sample(['LSE','UCL','Imperial','Kings','Queen Mary','SOAS','UAL'],2)
print(multi_uni)
# The [] notation denotes a list, a data structure we will look at next week
# TODO
def random_string():
# Now test your random_string function in this block
# END TODO
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Analysis
Step2: We assign a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size.
Step3: The input images x consists of a 2D tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size. The target output classes y_ also consists of a 2D tensor, where each row is a one-hot 10-dimensional vector indicating which digit class (zero through nine) the corresponding MNIST image belongs to.
Step4: To create this model, we're going to need to create a lot of weights and biases.
Step5: Our convolutions use a stride of one and are zero padded so that the output is the same size as the input.
Step6: Build the Deep Learning architecture.
Step7: We know that every image in MNIST is of a handwritten digit between zero and nine, so we know there are only ten possible values that a given image can be.
Step8: Our loss function is the cross-entropy between the target and the softmax activation function applied to the model's prediction. In some rough sense, the cross-entropy measures how inefficient our predictions are at describing the truth, that is, how large the discrepancy between the two probability distributions is.
Step9: How well does our model do? We have to figure out where we predicted the correct label. tf.argmax() is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y_,1) is the correct label. We can use tf.equal to check if our prediction matches the truth.
Step10: That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1] which would become 0.75.
Step11: Let's launch the graph!
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1,28,28,1])
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.add(tf.matmul(h_fc1_drop, W_fc2), b_fc2))
cross_entropy = - tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.initialize_all_variables()
sess = tf.Session()
with sess.as_default():
sess.run(init)
for i in range(20000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g" % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
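# (Added) Not part of the original notebook: evaluating the whole test set in a single
# feed_dict can exhaust memory on small GPUs, so an average over batches is a common
# workaround. The batch size of 500 is an arbitrary assumption.
batch_accs = []
for start in range(0, len(mnist.test.images), 500):
    batch_accs.append(accuracy.eval(session=sess, feed_dict={
        x: mnist.test.images[start:start+500],
        y_: mnist.test.labels[start:start+500],
        keep_prob: 1.0}))
print("batched test accuracy %g" % (sum(batch_accs) / len(batch_accs)))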
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting Methods
Step2: scikit-rf includes a convenient command to make nicer figures quickly
Step3: Another common option is to draw admittance contours, instead of impedance. This is controlled through the chart_type argument.
Step4: See skrf.plotting.smith() for more info on customizing the Smith Chart.
Step5: Log-Magnitude
Step6: When no arguments are passed to the plotting methods, all parameters are plotted. Single parameters can be plotted by passing indices m and n to the plotting commands (indexing starts from 0). Comparing the simulated reflection coefficient off the ring slot to a measurement,
Step7: Phase
Step8: Or unwrapped phase,
Step9: Phase in radians (rad) is also available
Step10: Impedance, Admittance
Step11: Customizing Plots
Step12: The frequency unit used on the x-axis is automatically filled in from
Step13: Other keyword arguments given to the plotting methods are passed through to the matplotlib.pyplot.plot function.
Step14: All components of the plots can be customized through matplotlib functions, and styles can be used with a context manager.
Step15: Saving Plots
Step16: Adding Markers Post Plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf as rf
from skrf import Network
ring_slot = Network('data/ring slot.s2p')
ring_slot.plot_s_smith()
rf.stylely() # nicer looking. Can be configured with different styles
ring_slot.plot_s_smith()
ring_slot.plot_s_smith(draw_labels=True)
ring_slot.plot_s_smith(chart_type='y')
ring_slot.plot_s_complex()
from matplotlib import pyplot as plt
plt.axis('equal') # otherwise circles wont be circles
ring_slot.plot_s_db()
from skrf.data import ring_slot_meas
ring_slot.plot_s_db(m=0,n=0, label='Theory')
ring_slot_meas.plot_s_db(m=0,n=0, label='Measurement')
ring_slot.plot_s_deg()
ring_slot.plot_s_deg_unwrap()
gd = abs(ring_slot.s21.group_delay) *1e9 # in ns
ring_slot.plot(gd)
plt.ylabel('Group Delay (ns)')
plt.title('Group Delay of Ring Slot S21')
ring_slot.plot_z_im()
ring_slot.plot_y_im()
ring_slot.plot_s_db(m=0,n=0, label = 'Simulation')
ring_slot.frequency.unit = 'mhz'
ring_slot.plot_s_db(0,0)
ring_slot.frequency.unit='ghz'
ring_slot.plot_s_db(m=0,n=0, linewidth = 3, linestyle = '--', label = 'Simulation')
ring_slot_meas.plot_s_db(m=0,n=0, marker = 'o', markevery = 10,label = 'Measured')
from matplotlib import pyplot as plt
from matplotlib import style
with style.context('seaborn-ticks'):
ring_slot.plot_s_smith()
plt.xlabel('Real Part');
plt.ylabel('Imaginary Part');
plt.title('Smith Chart With Legend Room');
plt.axis([-1.1,2.1,-1.1,1.1])
plt.legend(loc=5)
from skrf.plotting import save_all_figs
save_all_figs('data/', format=['png','eps','pdf'])
from skrf import plotting
with plt.style.context('grayscale'):
ring_slot.plot_s_deg()
plotting.add_markers_to_lines()
plt.legend() # have to re-generate legend
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initiate experiment with this input file
Step2: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possible with the freeze() method, which stores the current state of the model as the "base-state".
Step3: We now initialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential here, but it would be for a definition in a script function).
Step4: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events
Step5: Next, we define the probability distributions for the uncertain input parameters
Step6: This example shows how the base module for reproducible experiments with kinematics can be used. For further specification, child classes of Experiment can be defined, and we show examples of this type of extension in the next sections.
Step7: Idea
Step8: All in one function
Step9: Generate training set for normal faults
Step10: Generate reverse faults
Step11: Generate simple layer structure
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# here the usual imports. If any of the imports fails,
# make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop'
# or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment
reload(pynoddy.experiment)
rcParams.update({'font.size': 15})
# From notebook 4/ Training Set example 1:
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 3,
'layer_names' : ['layer 1', 'layer 2', 'layer 3'],
'layer_thickness' : [1500, 500, 1500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (4000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.change_cube_size(100)
ue.plot_section('y')
ue.freeze()
ue.set_random_seed(12345)
ue.info(events_only = True)
ev2 = ue.events[2]
ev2.properties
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 300.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
resolution = 100
ue.change_cube_size(resolution)
tmp = ue.get_section('y')
prob_2 = np.zeros_like(tmp.block[:,:,:])
n_draws = 100
for i in range(n_draws):
ue.random_draw()
tmp = ue.get_section('y', resolution = resolution)
prob_2 += (tmp.block[:,:,:] == 2)
# Normalise
prob_2 = prob_2 / float(n_draws)
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(prob_2.transpose()[:,0,:],
origin = 'lower left',
interpolation = 'none')
plt.title("Estimated probability of unit 4")
plt.xlabel("x (E-W)")
plt.ylabel("z")
ue.random_draw()
s1 = ue.get_section('y')
s1.block.shape
s1.block[np.where(s1.block == 3)] = 1
s1.plot_section('y', cmap='Greys')
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 8
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
ue.plot_section('y')
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_featuers: number of "features" -> gray values in image
n_features = 5
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n] = np.random.choice(vals)
f1.shape
plt.imshow(f1.T, origin='lower_left', cmap='Greys', interpolation='nearest')
# blur image
from scipy import ndimage
f2 = ndimage.filters.gaussian_filter(f1, 1, mode='nearest')
plt.imshow(f2.T, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
plt.imshow(f2.T, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
n_train = 10000
F_train = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_featuers: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train[i] = f2.flatten().T
plt.imshow(f2, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
import pickle
pickle.dump(F_train, open("f_train_normal.pkl", 'w'))
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : -500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': -100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
n_train = 10000
F_train_rev = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_featuers: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train_rev[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_rev[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_rev, open("f_train_reverse.pkl", 'w'))
l1 = np.empty_like(s1.block[:,0,:])
n_layers = 18
for i in range(l1.shape[0]):
l1[:,i] = i
l1_ori = np.floor(l1*n_layers/l1.shape[0])
F_train_line = np.empty((n_train, 28*28))
for i in range(n_train):
n_features = 4
vals = np.random.randint(0,255,size=n_features)
l1 = l1_ori.copy()
for n in range(n_layers):
l1[l1 == n+1] = np.random.choice(vals)
f1 = l1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
F_train_line[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_line[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_line, open("f_train_line.pkl", 'w'))
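# (Added) A minimal sketch of how the three generated sets could be stacked into one labelled
# training array (0 = normal fault, 1 = reverse fault, 2 = undisturbed layers); the label
# encoding is an assumption for illustration only.
X_all = np.vstack((F_train, F_train_rev, F_train_line))
y_all = np.concatenate((np.zeros(len(F_train)), np.ones(len(F_train_rev)), np.full(len(F_train_line), 2)))
print(X_all.shape)
print(y_all.shape)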
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We just defined a JulianDay class that takes one argument of type string or binary, and returns a float.
Step 2
Step3: Interlude
Step4: Step 3
Step5: Step 4
Step6: Create and execute a julianday expression
Step7: Because we've defined our operation on StringValue, and not just on StringColumn we get operations on both string scalars and string columns for free
|
<ASSISTANT_TASK:>
Python Code:
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import ValueOp
from ibis.expr.signature import Argument as Arg
class JulianDay(ValueOp):
arg = Arg(rlz.string)
output_type = rlz.shape_like('arg', 'float')
from ibis.expr.types import StringValue, BinaryValue
def julianday(string_value):
return JulianDay(string_value).to_expr()
StringValue.julianday = julianday
import ibis
t = ibis.table([('string_col', 'string')], name='t')
t.string_col.julianday()
import sqlalchemy as sa
@ibis.sqlite.add_operation(JulianDay)
def _julianday(translator, expr):
# pull out the arguments to the expression
arg, = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# return a SQLAlchemy expression that calls into the SQLite julianday function
return sa.func.julianday(compiled_arg)
import pathlib
import ibis
db_fname = str(pathlib.Path().resolve().parent.parent / 'tutorial' / 'data' / 'geography.db')
con = ibis.sqlite.connect(db_fname)
independence = con.table('independence')
independence
day = independence.independence_date.cast('string')
day
julianday_expr = day.julianday()
julianday_expr
sql_expr = julianday_expr.compile()
print(sql_expr)
result = julianday_expr.execute()
result.head()
scalar = ibis.literal('2010-03-14')
scalar
julianday_scalar = scalar.julianday()
con.execute(julianday_scalar)
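# (Added) A small follow-up sketch: because the new operation is defined on both columns and
# scalars, the two expressions above can be combined, e.g. the number of days between each
# independence date and the literal date. This is an illustration, not part of the tutorial.
days_since = julianday_expr - julianday_scalar
days_since.execute().head()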
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Parametrized Model
Step2: Search for the Critical Boron Concentration
Step3: Finally, the openmc.search_for_keff function also provided us with Lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.
|
<ASSISTANT_TASK:>
Python Code:
# Initialize third-party libraries and the OpenMC Python API
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.model
%matplotlib inline
# Create the model. `ppm_Boron` will be the parametric variable.
def build_model(ppm_Boron):
# Create the pin materials
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.741)
water.add_element('H', 2.)
water.add_element('O', 1.)
# Include the amount of boron in the water based on the ppm,
# neglecting the other constituents of boric acid
water.add_element('B', ppm_Boron * 1E-6)
# Instantiate a Materials object
materials = openmc.Materials([fuel, zircaloy, water])
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Finish with the settings file
settings = openmc.Settings()
settings.batches = 300
settings.inactive = 20
settings.particles = 1000
settings.run_mode = 'eigenvalue'
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.source.Source(space=uniform_dist)
# We dont need a tallies file so dont waste the disk input/output time
settings.output = {'tallies': False}
model = openmc.model.Model(geometry, materials, settings)
return model
# Perform the search
crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],
tol=1.E-2, bracketed_method='bisect',
print_iterations=True)
print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))
plt.figure(figsize=(8, 4.5))
plt.title('Eigenvalue versus Boron Concentration')
# Create a scatter plot using the mean value of keff
plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show()
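# (Added) An optional variant of the plot above: each keff carries a 1-sigma uncertainty
# (keffs[i].std_dev), so the same data can be shown with error bars and a k_eff = 1 guide line.
plt.figure(figsize=(8, 4.5))
plt.errorbar(guesses, [k.nominal_value for k in keffs],
             yerr=[k.std_dev for k in keffs], fmt='o')
plt.axhline(1.0, color='gray', linestyle='--')
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show()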
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Setup source space and compute forward
Step3: From here on, standard inverse imaging methods can be used!
Step4: Get an infant MRI template
Step5: It comes with several helpful built-in files, including a 10-20 montage
Step6: There are also BEM and source spaces
Step7: You can ensure everything is as expected by plotting the result
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
# Read and set the EEG electrode locations
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes is correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
# Use fwd to compute the sensitivity map for illustration purposes
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
brain = eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[5, 50, 100]))
ch_names = \
'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()
data = np.random.RandomState(0).randn(len(ch_names), 1000)
info = mne.create_info(ch_names, 1000., 'eeg')
raw = mne.io.RawArray(data, info)
subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)
fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')
mon = mne.channels.read_dig_fif(fname_1020)
mon.rename_channels(
{f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})
trans = mne.channels.compute_native_head_t(mon)
raw.set_montage(mon)
print(trans)
bem_dir = op.join(subjects_dir, subject, 'bem')
fname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')
src = mne.read_source_spaces(fname_src)
print(src)
fname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')
bem = mne.read_bem_solution(fname_bem)
fig = mne.viz.plot_alignment(
raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,
src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,
surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))
mne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])
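# (Added) A minimal sketch, mirroring the fsaverage example above, of computing a forward
# solution for the infant template using the raw/trans/src/bem objects just defined.
fwd_infant = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem,
                                       eeg=True, mindist=5.0, n_jobs=1)
print(fwd_infant)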
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The numpy.random module supplements the standard built-in Python random functions with tools for efficiently generating whole arrays of sample values from many kinds of probability distributions.
Step2: Advantages? Python's built-in random module only samples one value at a time, which is significantly less efficient.
Step3: Write the equivalent code using the np.random.normal() function and time it! Keep in mind that the NumPy function is vectorized!
Step4: Now think of a possible alternative code using NumPy. Keep in mind that
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
samples = np.random.normal(size=(4,4))
samples
import random
N = 10000000
%timeit samples = [random.normalvariate(0,1) for i in range(N)]
import matplotlib.pyplot as plt
%matplotlib inline
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
#plt.plot(INSERT THE NAME OF THE VARIABLE CONTAINING THE PATH)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Contour plots of 2d wavefunctions
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def well2d(x, y, nx, ny, L=1.0):
"""Compute the 2d quantum well wave function."""
return (2/L)*np.sin(nx * np.pi * x/L)*np.sin(ny * np.pi * y/L)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
nx = 3
ny = 2
L = 1
x = np.linspace(0,L,1000)
y = np.linspace(0,L,1000)
XX,YY = np.meshgrid(x,y)
plt.figure(figsize=(9,6))
f = plt.contourf(XX,YY,well2d(XX, YY, nx, ny, L),cmap=('seismic'))
plt.xlabel('x')
plt.ylabel('y')
plt.title('Contour plot of two dimensional wave function in an infinite well')
plt.tick_params(axis='x',top='off',direction='out')
plt.tick_params(axis='y',right='off',direction='out')
plt.colorbar(shrink=.8)
assert True # use this cell for grading the contour plot
plt.figure(figsize=(9,6))
plt.pcolormesh(XX,YY,well2d(XX, YY, nx, ny, L),cmap=('spectral'))
plt.colorbar(shrink=.8)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Contour plot of two dimensional wave function in an infinite well')
plt.xlabel('x')
plt.ylabel('y')
plt.tick_params(axis='x',top='off',direction='out')
plt.tick_params(axis='y',right='off',direction='out')
assert True # use this cell for grading the pcolor plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivation and Discretizing
Step2: Running the Program
Step3: Checking the Forcing
Step4: The velocity field over time
Step5: Variations on the Problem - Linear Diffusivity
Step6: The plot below illustrates the effects of the linear diffusivity profile; the most striking adjustment is the asymmetry of the solution due to the initial condition. I'm not sure why the initial profile provides such an extreme starting point for the time iteration scheme, but plotting over longer periods of time (i.e. a full tidal cycle) reveals that the stable surface velocity maxima are between -1.5 and 1.5 $u^{\ast}$. Focussing on the smaller deviations, you can barely see the effects of the linear diffusivity near -1.8 $z^{\ast}$, where the grid cells closer to the bathymetry adjust to the forcing at a slightly faster rate than the cells near the surface.
Step7: Variations on the Problem - Higher temporal resolution
Step8: As shown in the figure below, the code is able to resolve temporal variability quite well, including the adjustment of the flow from the very strong negative velocities to slightly weaker positive ones. The realistic problem could be further improved with a logarithmic diffusivity, but it is nice to see that the velocity profile is responding as it ought to with a linear $\nu$ profile.
Step9: To briefly check the output of the model, I am showing the surface velocity pattern in the figure below. It's quite clear that the initial conditions are an over-estimate of the velocities produced at the peak of the tide, but I'm not sure what has caused this error to pop up, as the initial conditions are independent of the time-step; I'll be coming back to this in the next few days to find the root of the issue, as I do not have the time to do so now.
|
<ASSISTANT_TASK:>
Python Code:
# Import clusters
from scipy import sparse #Allows me to create sparse matrices (i.e. not store all of the zeros in the 'A' matrix)
import scipy.sparse.linalg as spla #To solve each time iteration, as my SOR code (below) grows exponentially when attempting to solve with negative values on the right-hand side.
from numpy import * #To make matrices and do matrix manipulation
import matplotlib.pyplot as plt #for plotting purposes
%matplotlib inline
# Define Successive Over-Relaxation scheme
def sor(A,x,B,tolerance,w):
'''This iteration scheme relaxes Gauss-Seidel estimates toward the converging value
INPUTS:
A = Relation Matrix
x = initial estimated field
B = boundary conditions
tolerance = allowable difference between iterations
w = SOR factor - usually 1.5, but seems to work best with 1.85
OUTPUT:
T = final estimated field'''
# Initialize scheme
resid = x[:,0].dot(100.)
m = 0
T=x[:,0]
while mean(divide(resid,abs(B).max())*100)>tolerance:
T=append(T,T[:,0].dot(0.),axis=1)
for i in range(0,size(A,1)):
#Calculate individual estimates
cleaned_list1 = [ x for x in range(0,size(A,1)) if x < i ]#Previous Solutions
cleaned_list2 = [ x for x in range(0,size(A,1)) if x > i ]#Future Solutions
#Use estimates of T for next timestep
T_hat=(B[i]-(A[i,cleaned_list1]*T[cleaned_list1,m+1])-(A[i,cleaned_list2]*T[cleaned_list2,m]))/A[i,i]
T[i,m+1]=(w*T_hat)+((1-w)*T[i,m]) #Adjust based on relaxation factor
resid=abs((A*T[:,m])-B)
#print(str(mean(divide(resid,abs(B).max())*100))+'%')
print('.'),
m=m+1
print('')
return T[:,m-1]
def cranknicolson(T,T_del,Timesteps,n,eta,u0,U,Fr,Re,K):
'''This iteration scheme moves a Gauss-Seidel estimate forward at a given time interval using a Crank-Nicolson scheme
INPUTS:
T = Period of forcing fluctuations
T_del = timestep
Timesteps = number of timesteps to move forward
n = number of depth bins
eta = amplitude of sinusoidal forcing
u0 = bottom boundary condition
U = mean barotropic velocity
Fr = Froude number of flow
Re = Reynolds number of flow
K = diffusivity matrix
OUTPUT:
Time = time field
Depth = depth field (grid center)
Q = final estimated field over time'''
#K=ones(n+1)
#K=array(range(1,n+2))
#K=array(range(n+2,1,-1))
K=K/float(max(K)) #Nondimensionalize K
# Define distances at Face (F) and Center (C) = ensure nondimensionalized
Z_f=range(-n,0,1);Z_f=[x / float(n) for x in Z_f];Z_f=append(append([-1-(1/float(n))],Z_f),[0]);Z_f=Z_f[0:size(Z_f)-1]+(diff(Z_f))/2.0;
Z_c=range(-n,0,1);Z_c=[x / float(n) for x in Z_c];Z_c=append(append([-1-(1/float(n))],Z_c),[0]);
#Begin stepping forward in time
for time in range(0,Timesteps):
#Solve for initial state using boundary conditions
#Construct A matrix - dependent on time iteration scheme
if time == 0: #Solving for initial conditions
print(time),
# Construct 'A' Matrix
A=zeros((n,n))
for item in range(1,n+1): #Start from bed and work to surface
if item>1:
A[item-1,item-2]=-(K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1])) )
A[item-1,item-1]=+( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + (K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item]))) )
if item == n: #Sets free-slip boundary condition at the surface
A[item-1,item-1]=+( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) )
if item != n:
A[item-1,item]=-(K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item])) )
# Construct Boundary Condition Matrix = using non-dimensional parameter
b=ones(size(A,1))*(Re/(Fr*Fr))*((eta/(U*U))*cos(2*pi*(float(time)/T)))
b[0]=b[0] + (u0* (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) ) #Because u0 is zero, this line does nothing.
# Define + Apply guess + boundary conditions
x=matrix(b[:]).T
b=matrix(b).T
# Solve Problem using hard-wired iterative scheme
#T = jacobi(A,x,b,0.05)
#T = gaussseidel(A,x,b,0.05)
Q = sor(A,x,b,0.05,1.85)
#Iterate forward in time using the Crank-Nicolson scheme
else:
print(', '+str(time)),
Q=append(Q,Q[:,0].dot(0.),axis=1) #increase size to match time dimension
# Construct 'A' Matrix
A=zeros((3,n)) # For solving for 'n+1' solution
B=zeros((3,n)) # For using 'n' solution
for item in range(1,n+1): #Start from bed and work to surface
#j-1
if item>1:
A[0,item-2]=-(K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1])) )
B[0,item-2]=+(K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1])) )
#j
A[1,item-1]=+(2/T_del)+( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + (K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item]))) )
B[1,item-1]=+(2/T_del)-( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) + (K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item]))) )
if item == n: #Sets free-slip boundary condition at the surface
A[1,item-1]=+(2/T_del)+( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) )
B[1,item-1]=+(2/T_del)-( (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) )
#j+1
if item != n:
A[2,item]=-(K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item])) )
B[2,item]=+(K[item]/((Z_f[item]-Z_f[item-1])*(Z_c[item+1]-Z_c[item])) )
A = sparse.spdiags(A,array([-1,0,1]),n,n)
B = sparse.spdiags(B,array([-1,0,1]),n,n)
RHS = B.dot(Q[:,time-1])
RHS[0] = RHS[0] + (u0* (K[item-1]/((Z_f[item]-Z_f[item-1])*(Z_c[item]-Z_c[item-1]))) ) #Because u0 is zero, this line does nothing.
RHS = RHS + (Re/(Fr*Fr))*((eta/(U*U))*cos(2*pi*((float(time-1)*T_del)/T))) + (Re/(Fr*Fr))*((eta/(U*U))*cos(2*pi*((float(time-1)*T_del)/T)))
#To show the matrix A and boundary conditions for the first time-step
if time == 1:
print(A)
print(RHS)
Q[:,time] = matrix(spla.spsolve( sparse.csr_matrix(A.toarray()),RHS)).T
#Q[:,time] = sor(matrix(A.toarray()),RHS,RHS,0.05,1.85) #There is an issue with my SOR code in which it does not like to solve equations with negative values on the right-hand side.
Time = matrix(range(0,Timesteps))*T_del
Depth = Z_c
return [Time,Depth,Q]
## Begin Program
eta = 0.001 #m of sea level anomaly
U=1 #mean velocity
H=1.0 #depth of the water column
K=1e-2
Fr = U/sqrt(9.81*H) #Froude number from given conditions
Re = (H*U)/K #Reynolds number from a specified maximum diffusivity
# Set number of cells = similar to resolution, as cells are evenly placed between -1 and 0.
n = 25
# Set bottom boundary condition
u0 = 0
# Create K matrix
K=ones(n+1)*K
print('The dimensionless constant = '+str((Re/(Fr*Fr))*((eta/(U*U))))) #The dimensionless constant
## Run the Crank-Nicolson scheme, having initialized the fields
[Time,Depth,Q]=cranknicolson(12.44,0.5,40,n,eta,u0,U,Fr,Re,K)
# Plot of Surface Velocity
plt.style.use('fivethirtyeight')
plt.plot(Time.T,Q[n-1,:].T)#,linewidth=2,marker='o')
plt.title('Surface Velocity')
plt.ylabel(r'$u^{\ast}$', fontsize=20)
plt.xlabel(r'$Time$', fontsize=20)
#plt.savefig('SurfaceVel.pdf', format='pdf', dpi=1200)
#Plot of changes over time
plt.figure()
plt.style.use('fivethirtyeight')
for int in range(0,16):
#lines=plt.plot(append(Q[:,int],Q[19,int]),(matrix([range(0,20)]).T/20.0))
lines=plt.plot(-append(append(Q[0,0]*0,Q[:,int*1]),Q[size(Q,0)-1,int*1]),matrix(Depth).T) #Append boundary conditions
plt.text(-Q[size(Q,0)-1,int*1]-0.025, 0.03, (Time[0,int]*1), fontproperties='serif', fontsize=12)
#plt.setp(lines, linewidth=2.0)
plt.title('$u$ over Time (hours)')
plt.ylim([Depth.min(),0.1]);#plt.xlim([-5e6,5e6])
plt.ylabel(r'$z^{\ast}$', fontsize=20)
plt.xlabel(r'$u^{\ast}$', fontsize=20)
plt.grid('on')
#plt.legend(['0','20','40','60','80','100','120'])
#plt.xscale('log')
plt.show()
# Create K matrix
K=ones(n+1)*(10**-2)
new=list(arange(1,n+1))
for l in new:
K[l]=(K[l]*float(l)) #New varying diffusivity, with higher values closer to the surface
# Plot diffusivity profile
plt.plot(append(K,K[n]),Depth)
plt.title('Linear Diffusivity Profile')
plt.ylim([Depth.min(),0.1])
plt.ylabel(r'$z^{\ast}$', fontsize=20)
plt.xlabel(r'$K$', fontsize=20)
plt.show()
## Re-Run the Crank-Nicolson scheme, having initialized the fields
[Time,Depth,Q]=cranknicolson(12.44,0.5,40,n,eta,u0,U,Fr,Re,K)
#Plot of changes over time
plt.figure()
plt.style.use('fivethirtyeight')
for int in range(0,20):
lines=plt.plot(-append(append(Q[0,0]*0,Q[:,int*1]),Q[size(Q,0)-1,int*1]),matrix(Depth).T) #Append boundary conditions
plt.text(-Q[size(Q,0)-1,int*1]-0.05, 0.03, (Time[0,int]*1), fontproperties='serif', fontsize=12)
#plt.setp(lines, linewidth=2.0)
plt.title('$u$ over Time (hours)')
plt.ylim([Depth.min(),0.1]);#plt.xlim([-5e6,5e6])
plt.ylabel(r'$z^{\ast}$', fontsize=20)
plt.xlabel(r'$u^{\ast}$', fontsize=20)
plt.grid('on')
## Re-Run the Crank-Nicolson scheme, having initialized the fields
[Time,Depth,Q]=cranknicolson(12.44,0.1,180,n,eta,u0,U,Fr,Re,K)
#Plot of changes over time
plt.figure()
plt.style.use('fivethirtyeight')
for int in range(53,65):
lines=plt.plot(-append(append(Q[0,0]*0,Q[:,int*1]),Q[size(Q,0)-1,int*1]),matrix(Depth).T) #Append boundary conditions
plt.text(-Q[size(Q,0)-1,int*1]-0.05, 0.03, (Time[0,int]*1), fontproperties='serif', fontsize=12)
#plt.setp(lines, linewidth=2.0)
plt.title('$u$ over Time (hours)')
plt.ylim([Depth.min(),0.1]);#plt.xlim([-5e6,5e6])
plt.ylabel(r'$z^{\ast}$', fontsize=20)
plt.xlabel(r'$u^{\ast}$', fontsize=20)
plt.grid('on')
# Plot of Surface Velocity
plt.style.use('fivethirtyeight')
plt.plot(Time.T,Q[n-1,:].T)#,linewidth=2,marker='o')
plt.title('Surface Velocity')
plt.ylabel(r'$u^{\ast}$', fontsize=20)
plt.xlabel(r'$Time$', fontsize=20)
#plt.savefig('SurfaceVel.pdf', format='pdf', dpi=1200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can see it already has various tags identifying its structure (indeed enough to uniquely identify each gate)
Step2: The core object describing how similar two unitaries are is their overlap, which we form by contracting the conjugated circuit against the target unitary.
Step3: For our loss function we'll normalize this and negate it (since the optimizer minimizes).
Step4: We could call optimize for pure gradient based optimization, but since unitary circuits can be tricky we'll use optimize_basinhopping which combines gradient descent with 'hopping' to escape local minima
Step5: The optimized tensor network still contains PTensor instances but now with optimized parameters.
Step6: We can see the parameters have been updated by the training
Step7: We can see what gate these parameters would generate
Step8: A final sanity check we can perform is to try evolving a random state with the target unitary and trained circuit and check the fidelity between the resulting states.
Step9: Next we create a random initial state, and evolve it with the dense target unitary as well as the dense form of the trained circuit.
Step10: The (in)fidelity should broadly match our training loss
|
<ASSISTANT_TASK:>
Python Code:
V = circ.uni
V.graph(color=['U3', gate2], show_inds=True)
V.graph(color=[f'ROUND_{i}' for i in range(depth)], show_inds=True)
V.graph(color=[f'I{i}' for i in range(n)], show_inds=True)
# the hamiltonian
H = qu.ham_ising(n, jz=1.0, bx=0.7, cyclic=False)
# the propagator for the hamiltonian
t = 2
U_dense = qu.expm(-1j * t * H)
# 'tensorized' version of the unitary propagator
U = qtn.Tensor(
data=U_dense.reshape([2] * (2 * n)),
inds=[f'k{i}' for i in range(n)] + [f'b{i}' for i in range(n)],
tags={'U_TARGET'}
)
U.graph(color=['U3', gate2, 'U_TARGET'])
(V.H & U).graph(color=['U3', gate2, 'U_TARGET'])
def loss(V, U):
return 1 - abs((V.H & U).contract(all, optimize='auto-hq')) / 2**n
# check our current unitary 'infidelity':
loss(V, U)
# use the autograd/jax based optimizer
import quimb.tensor.optimize_autograd as qto
tnopt = qto.TNOptimizer(
V, # the tensor network we want to optimize
loss, # the function we want to minimize
loss_constants={'U': U}, # supply U to the loss function as a constant TN
constant_tags=[gate2], # within V we also want to keep all the CZ gates constant
autograd_backend='jax', # use 'autograd' for non-compiled optimization
optimizer='L-BFGS-B', # the optimization algorithm
)
# allow 10 hops with 500 steps in each 'basin'
V_opt = tnopt.optimize_basinhopping(n=500, nhop=10)
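# Optional extra check (not part of the original walkthrough): re-evaluate the infidelity
# of the optimized circuit directly with the same loss function defined above.
loss(V_opt, U)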
V_opt['U3', 'I2', 'ROUND_4']
# the initial values
V['U3', 'ROUND_4', 'I2'].params
# the optimized values
V_opt['U3', 'ROUND_4', 'I2'].params
qu.U_gate(*V_opt['U3', 'ROUND_4', 'I2'].params)
V_opt_dense = V_opt.to_dense([f'k{i}' for i in range(n)], [f'b{i}' for i in range(n)])
psi0 = qu.rand_ket(2**n)
# this is the exact state we want
psif_exact = U_dense @ psi0
# this is the state our circuit will produce if fed `psi0`
psif_apprx = V_opt_dense @ psi0
f"Fidelity: {100 * qu.fidelity(psif_apprx, psif_exact):.2f} %"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SQL statements are written in a way that resembles ordinary English sentences. This deliberate resemblance is meant to make them easier to learn and to read. It is nevertheless important to respect the order of the different clauses.
Step3: The cursor method is a little unusual
Step4: Viewing the table
Step5: Moving to pandas
Step6: Comparing SQL and pandas
Step7: In pandas, columns are selected by passing a list
Step8: WHERE
Step9: With pandas, there are several ways to do this
Step10: To combine several conditions, we use
Step11: GROUP BY
Step12: Careful: in pandas, the count function does not do the same thing as in SQL. count applies to every column and counts all non-null observations.
Step13: To do the same thing as in SQL, you have to use the size method.
Step14: We can also apply more sophisticated functions within a groupby
Step15: With pandas, we can call the usual numpy functions
Step16: Or use lambda functions.
Step17: Saving a SQL table in another format
Step18: We can also go through a pandas DataFrame and use .to_csv()
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import sqlite3
# connect to an empty SQL database
# SQLite stores the database in a single file
filepath = "./DataBase.db"
open(filepath, 'w').close() #create an empty file
CreateDataBase = sqlite3.connect(filepath)
QueryCurs = CreateDataBase.cursor()
# Define a function that creates a table
def CreateTable(nom_bdd):
QueryCurs.execute('''CREATE TABLE IF NOT EXISTS ''' + nom_bdd + '''
(id INTEGER PRIMARY KEY, Name TEXT,City TEXT, Country TEXT, Price REAL)''')
# Define a function that adds records to the table
def AddEntry(nom_bdd, Nom,Ville,Pays,Prix):
QueryCurs.execute('''INSERT INTO ''' + nom_bdd + '''
(Name,City,Country,Price) VALUES (?,?,?,?)''',(Nom,Ville,Pays,Prix))
def AddEntries(nom_bdd, data):
data : list with (Name,City,Country,Price) tuples to insert
QueryCurs.executemany('''INSERT INTO ''' + nom_bdd + '''
(Name,City,Country,Price) VALUES (?,?,?,?)''',data)
### Create the Clients table
CreateTable('Clients')
AddEntry('Clients','Toto','Munich','Germany',5.2)
AddEntries('Clients',
[('Bill','Berlin','Germany',2.3),
('Tom','Paris','France',7.8),
('Marvin','Miami','USA',15.2),
('Anna','Paris','USA',7.8)])
# "commit", i.e. validate the transaction:
# > push our local modifications to the central store - the SQL database
CreateDataBase.commit()
QueryCurs.execute('SELECT * FROM Clients')
Values = QueryCurs.fetchall()
print(Values)
import pandas as pd
# SQL query method
df1 = pd.read_sql_query('SELECT * FROM Clients', CreateDataBase)
print("Using the read_sql_query method \n", df1.head(), "\n")
# DataFrame method, using the list returned by .fetchall()
df2 = pd.DataFrame(Values, columns=['ID','Name','City','Country','Price'])
print("Going through a DataFrame \n", df2.head())
# in SQL
QueryCurs.execute('SELECT ID,City FROM Clients LIMIT 2')
Values = QueryCurs.fetchall()
print(Values)
# on the DataFrame
df2[['ID','City']].head(2)
QueryCurs.execute('SELECT * FROM Clients WHERE City=="Paris"')
print(QueryCurs.fetchall())
df2[df2['City'] == "Paris"]
df2.query('City == "Paris"')
QueryCurs.execute('SELECT * FROM Clients WHERE City=="Paris" AND Country == "USA"')
print(QueryCurs.fetchall())
df2.query('City == "Paris" & Country == "USA"')
df2[(df2['City'] == "Paris") & (df2['Country'] == "USA")]
QueryCurs.execute('SELECT Country, count(*) FROM Clients GROUP BY Country')
print(QueryCurs.fetchall())
df2.groupby('Country').count()
df2.groupby('Country').size()
QueryCurs.execute('SELECT Country, AVG(Price), count(*) FROM Clients GROUP BY Country')
print(QueryCurs.fetchall())
import numpy as np
df2.groupby('Country').agg({'Price': np.mean, 'Country': np.size})
# for example, compute the average price and multiply it by 2
df2.groupby('Country')['Price'].apply(lambda x: 2*x.mean())
QueryCurs.execute('SELECT Country, 2*AVG(Price) FROM Clients GROUP BY Country').fetchall()
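# Hypothetical extra comparison (not in the original notebook): ordering results,
# ORDER BY in SQL versus sort_values in pandas.
QueryCurs.execute('SELECT Name, Price FROM Clients ORDER BY Price DESC').fetchall()
# pandas equivalent: df2.sort_values('Price', ascending=False)[['Name', 'Price']]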
QueryCurs.execute('SELECT * FROM Clients WHERE Country == "Germany"')
print(QueryCurs.fetchall())
QueryCurs.execute('SELECT * FROM Clients WHERE City=="Berlin" AND Country == "Germany"')
print(QueryCurs.fetchall())
QueryCurs.execute('SELECT * FROM Clients WHERE Price BETWEEN 7 AND 20')
print(QueryCurs.fetchall())
data = QueryCurs.execute('SELECT * FROM Clients')
import csv
with open('./output.csv', 'w') as file:
writer = csv.writer(file)
writer.writerow(['id','Name','City','Country','Price'])
writer.writerows(data)
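# As mentioned above, we can also go through a pandas DataFrame and use .to_csv()
# (illustrative sketch; the file name 'output_pandas.csv' is arbitrary):
pd.read_sql_query('SELECT * FROM Clients', CreateDataBase).to_csv('./output_pandas.csv', index=False)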
QueryCurs.execute('''DROP TABLE Clients''')
QueryCurs.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The functions below take a short segment of data, do a zero-padded FFT (to do some extra smoothing), then find the largest peak above and below 120 Hz.
Step2: Take the FFT and find peaks every quarter of a second.
Step3: To make the spectrogram look nice, we'll do some bandpassing. This is left over from my previous attempt to do a Hilbert transform and get the phase directly, but I like the results.
Step4: Now we look at the scattered light arches in a related degree of freedom, SRCL, to see if they're related to the raygun.
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from the notebook's earlier setup cells (not shown in this excerpt)
from gwpy.timeseries import TimeSeries
from numpy import zeros, arange, array
from numpy.fft import rfft, irfft
import scipy.signal as sig
import matplotlib.pyplot as plt
chan='L1:LSC-POP_A_RF9_I_ERR_DQ'
st=1162024217
dur=600
data=TimeSeries.fetch(chan,st,st+dur)
srate=data.sample_rate.value
nyq=srate/2. # Nyquist frequency (assumed definition, mirroring nyq2 for the SRCL data below)
twid=int(srate) # Length of short spectra
zpadsec=8 # Length (sec) for oversampled FFT
zpadwid=int(srate*zpadsec)
win1=sig.hann(twid)
def myfft(data,tt):
idx=int(tt*srate)-twid/2
tmp=zeros(int(zpadsec*srate))
tmp[:twid]=win1*data.value[idx:idx+twid]
tmp=rfft(tmp)
return tmp[100*zpadsec:140*zpadsec]
def track_lines(data,tt):
fft1=myfft(data,tt)
i1=abs(fft1[:20*zpadsec+1]).argmax()
i2=abs(fft1[20*zpadsec:]).argmax()+20*zpadsec
return (tt,i1/float(zpadsec)+100.,i2/float(zpadsec)+100) #,fft1[i1],fft1[i2])
tst=array([track_lines(data,tt) for tt in arange(1,dur-1,0.25)])
filt=sig.firwin(int(8*srate),[105./nyq,135./nyq],window='hann',pass_zero=False)
filt.resize(len(data))
ffilt=abs(rfft(filt))
fdata=ffilt*rfft(data.detrend().value)
fdata[0]=0.
newdata=TimeSeries(irfft(fdata)[int(4*srate):-int(4*srate)],sample_rate=srate)
p1=newdata[:int(srate*120)].spectrogram2(1,0.98).plot()
plt.plot(tst[:,0]-4,tst[:,1],c='r')
plt.plot(tst[:,0]-4,tst[:,2],c='r')
#p1.gca().set_yscale('log')
p1.gca().set_ylim(100,140)
data2=TimeSeries.fetch('L1:LSC-SRCL_IN1_DQ',st,st+dur)
srate2=data2.sample_rate.value
nyq2=srate2/2.
filt2=sig.firwin(int(8*srate2),[4./nyq2,30./nyq2],window='hann',pass_zero=False)
filt2.resize(len(data2))
ffilt2=abs(rfft(filt2))
fdata2=ffilt2*rfft(data2.detrend().value)
fdata2[0]=0.
newdata2=TimeSeries(irfft(fdata2[:1+512*int(data2.duration.value)])[4*1024:-4*1024],sample_rate=1024)
sgram2=newdata2.spectrogram2(1,0.95)
p1=sgram2.plot()
plt.plot(tst[:,0]-4,1.15*(tst[:,2]-120),c='r')
#p1.gca().set_yscale('log')
p1.gca().set_ylim(5,15)
p1.gca().set_xlim(0,120)
p1=sgram2.plot()
plt.plot(tst[:,0]-4,1.15*(tst[:,2]-120),c='r')
#p1.gca().set_yscale('log')
p1.gca().set_ylim(5,15)
p1.gca().set_xlim(240,360)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These are the imports from the Keras API. Note the long format which can hopefully be shortened in the future to e.g. from tf.keras.models import Model.
Step2: Helper Functions
Step3: Helper-function for plotting images
Step4: Helper-function for printing confusion matrix
Step5: Helper-function for plotting example errors
Step6: Function for calculating the predicted classes of the entire test-set and calling the above function to plot a few examples of mis-classified images.
Step7: Helper-function for loading images
Step8: Helper-function for plotting training history
Step9: Dataset
Step10: Download and extract the dataset if it hasn't already been done. It is about 22 MB.
Step11: This dataset has another directory structure than the Keras API requires, so copy the files into separate directories for the training- and test-sets.
Step12: The directories where the images are now stored.
Step13: Pre-Trained Model
Step14: Input Pipeline
Step15: Keras uses a so-called data-generator for inputting data into the neural network, which will loop over the data for eternity.
Step16: We also need a data-generator for the test-set, but this should not do any transformations to the images because we want to know the exact classification accuracy on those specific images. So we just rescale the pixel-values so they are between 0.0 and 1.0 because this is expected by the VGG16 model.
Step17: The data-generators will return batches of images. Because the VGG16 model is so large, the batch-size cannot be too large, otherwise you will run out of RAM on the GPU.
Step18: We can save the randomly transformed images during training, so as to inspect whether they have been overly distorted, so we have to adjust the parameters for the data-generator above.
Step19: Now we create the actual data-generator that will read files from disk, resize the images and return a random batch.
Step20: The data-generator for the test-set should not transform and shuffle the images.
Step21: Because the data-generators will loop for eternity, we need to specify the number of steps to perform during evaluation and prediction on the test-set. Because our test-set contains 530 images and the batch-size is set to 20, the number of steps is 26.5 for one full processing of the test-set. This is why we need to reset the data-generator's counter in the example_errors() function above, so it always starts processing from the beginning of the test-set.
Step22: Get the file-paths for all the images in the training- and test-sets.
Step23: Get the class-numbers for all the images in the training- and test-sets.
Step24: Get the class-names for the dataset.
Step25: Get the number of classes for the dataset.
Step26: Plot a few images to see if data is correct
Step27: Class Weights
Step28: Note how the weight is about 1.398 for the forky-class and only 0.707 for the spoony-class. This is because there are fewer images for the forky-class so the gradient should be amplified for those images, while the gradient should be lowered for spoony-images.
Step29: Example Predictions
Step30: We can then use the VGG16 model on a picture of a parrot which is classified as a macaw (a parrot species) with a fairly high score of 79%.
Step31: We can then use the VGG16 model to predict the class of one of the images in our new training-set. The VGG16 model is very confused about this image and cannot make a good classification.
Step32: We can try it for another image in our new training-set and the VGG16 model is still confused.
Step33: We can also try an image from our new test-set, and again the VGG16 model is very confused.
Step34: Transfer Learning
Step35: We can see that the last convolutional layer is called 'block5_pool' so we use Keras to get a reference to that layer.
Step36: We refer to this layer as the Transfer Layer because its output will be re-routed to our new fully-connected neural network which will do the classification for the Knifey-Spoony dataset.
Step37: Using the Keras API it is very simple to create a new model. First we take the part of the VGG16 model from its input-layer to the output of the transfer-layer. We may call this the convolutional model, because it consists of all the convolutional layers from the VGG16 model.
Step38: We can then use Keras to build a new model on top of this.
Step39: We use the Adam optimizer with a fairly low learning-rate. The learning-rate could perhaps be larger. But if you try and train more layers of the original VGG16 model, then the learning-rate should be quite low otherwise the pre-trained weights of the VGG16 model will be distorted and it will be unable to learn.
Step40: We have 3 classes in the Knifey-Spoony dataset so Keras needs to use this loss-function.
Step41: The only performance metric we are interested in is the classification accuracy.
Step42: Helper-function for printing whether a layer in the VGG16 model should be trained.
Step43: By default all the layers of the VGG16 model are trainable.
Step44: In Transfer Learning we are initially only interested in reusing the pre-trained VGG16 model as it is, so we will disable training for all its layers.
Step45: Once we have changed whether the model's layers are trainable, we need to compile the model for the changes to take effect.
Step46: An epoch normally means one full processing of the training-set. But the data-generator that we created above, will produce batches of training-data for eternity. So we need to define the number of steps we want to run for each "epoch" and this number gets multiplied by the batch-size defined above. In this case we have 100 steps per epoch and a batch-size of 20, so the "epoch" consists of 2000 random images from the training-set. We run 20 such "epochs".
Step47: Training the new model is just a single function call in the Keras API. This takes about 6-7 minutes on a GTX 1070 GPU.
Step48: Keras records the performance metrics at the end of each "epoch" so they can be plotted later. This shows that the loss-value for the training-set generally decreased during training, but the loss-values for the test-set were a bit more erratic. Similarly, the classification accuracy generally improved on the training-set while it was a bit more erratic on the test-set.
Step49: After training we can also evaluate the new model's performance on the test-set using a single function call in the Keras API.
Step50: We can plot some examples of mis-classified images from the test-set. Some of these images are also difficult for a human to classify.
Step51: Fine-Tuning
Step52: We want to train the last two convolutional layers whose names contain 'block5' or 'block4'.
Step53: We can check that this has updated the trainable boolean for the relevant layers.
Step54: We will use a lower learning-rate for the fine-tuning so the weights of the original VGG16 model only get changed slowly.
Step55: Because we have defined a new optimizer and have changed the trainable boolean for many of the layers in the model, we need to recompile the model so the changes can take effect before we continue training.
Step56: The training can then be continued so as to fine-tune the VGG16 model along with the new classifier.
Step57: We can then plot the loss-values and classification accuracy from the training. Depending on the dataset, the original model, the new classifier, and hyper-parameters such as the learning-rate, this may improve the classification accuracies on both training- and test-set, or it may improve on the training-set but worsen it for the test-set in case of overfitting. It may require some experimentation with the parameters to get this right.
Step58: We can plot some examples of mis-classified images again, and we can also see from the confusion matrix that the model is still having problems classifying forks correctly.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import PIL
import tensorflow as tf
import numpy as np
import os
from tensorflow.python.keras.models import Model, Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Dropout
from tensorflow.python.keras.applications import VGG16
from tensorflow.python.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.optimizers import Adam, RMSprop
def path_join(dirname, filenames):
return [os.path.join(dirname, filename) for filename in filenames]
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def print_confusion_matrix(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
print("Confusion matrix:")
# Print the confusion matrix as text.
print(cm)
# Print the class-names for easy reference.
for i, class_name in enumerate(class_names):
print("({0}) {1}".format(i, class_name))
def plot_example_errors(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Boolean array whether the predicted class is incorrect.
incorrect = (cls_pred != cls_test)
# Get the file-paths for images that were incorrectly classified.
image_paths = np.array(image_paths_test)[incorrect]
# Load the first 9 images.
images = load_images(image_paths=image_paths[0:9])
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
# Plot the 9 images we have loaded and their corresponding classes.
# We have only loaded 9 images so there is no need to slice those again.
plot_images(images=images,
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def example_errors():
# The Keras data-generator for the test-set must be reset
# before processing. This is because the generator will loop
# infinitely and keep an internal index into the dataset.
# So it might start in the middle of the test-set if we do
# not reset it first. This makes it impossible to match the
# predicted classes with the input images.
# If we reset the generator, then it always starts at the
# beginning so we know exactly which input-images were used.
generator_test.reset()
# Predict the classes for all images in the test-set.
y_pred = new_model.predict_generator(generator_test,
steps=steps_test)
# Convert the predicted classes from arrays to integers.
cls_pred = np.argmax(y_pred,axis=1)
# Plot examples of mis-classified images.
plot_example_errors(cls_pred)
# Print the confusion matrix.
print_confusion_matrix(cls_pred)
def load_images(image_paths):
# Load the images from disk.
images = [plt.imread(path) for path in image_paths]
# Convert to a numpy array and return it.
return np.asarray(images)
def plot_training_history(history):
# Get the classification accuracy and loss-value
# for the training-set.
acc = history.history['categorical_accuracy']
loss = history.history['loss']
# Get it for the validation-set (we only use the test-set).
val_acc = history.history['val_categorical_accuracy']
val_loss = history.history['val_loss']
# Plot the accuracy and loss-values for the training-set.
plt.plot(acc, linestyle='-', color='b', label='Training Acc.')
plt.plot(loss, 'o', color='b', label='Training Loss')
# Plot it for the test-set.
plt.plot(val_acc, linestyle='--', color='r', label='Test Acc.')
plt.plot(val_loss, 'o', color='r', label='Test Loss')
# Plot title and legend.
plt.title('Training and Test Accuracy')
plt.legend()
# Ensure the plot shows correctly.
plt.show()
import knifey
knifey.maybe_download_and_extract()
knifey.copy_files()
train_dir = knifey.train_dir
test_dir = knifey.test_dir
model = VGG16(include_top=True, weights='imagenet')
input_shape = model.layers[0].output_shape[1:3]
input_shape
datagen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=180,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=[0.9, 1.5],
horizontal_flip=True,
vertical_flip=True,
fill_mode='nearest')
datagen_test = ImageDataGenerator(rescale=1./255)
batch_size = 20
if True:
save_to_dir = None
else:
save_to_dir='augmented_images/'
generator_train = datagen_train.flow_from_directory(directory=train_dir,
target_size=input_shape,
batch_size=batch_size,
shuffle=True,
save_to_dir=save_to_dir)
generator_test = datagen_test.flow_from_directory(directory=test_dir,
target_size=input_shape,
batch_size=batch_size,
shuffle=False)
steps_test = generator_test.n / batch_size
steps_test
image_paths_train = path_join(train_dir, generator_train.filenames)
image_paths_test = path_join(test_dir, generator_test.filenames)
cls_train = generator_train.classes
cls_test = generator_test.classes
class_names = list(generator_train.class_indices.keys())
class_names
num_classes = generator_train.num_class
num_classes
# Load the first images from the train-set.
images = load_images(image_paths=image_paths_train[0:9])
# Get the true classes for those images.
cls_true = cls_train[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=True)
from sklearn.utils.class_weight import compute_class_weight
class_weight = compute_class_weight(class_weight='balanced',
classes=np.unique(cls_train),
y=cls_train)
class_weight
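# If a mapping is preferred (some Keras versions expect a dict rather than an array),
# the same weights could be passed as:
# class_weight_dict = dict(enumerate(class_weight))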
class_names
def predict(image_path):
# Load and resize the image using PIL.
img = PIL.Image.open(image_path)
img_resized = img.resize(input_shape, PIL.Image.LANCZOS)
# Plot the image.
plt.imshow(img_resized)
plt.show()
# Convert the PIL image to a numpy-array with the proper shape.
img_array = np.expand_dims(np.array(img_resized), axis=0)
# Use the VGG16 model to make a prediction.
# This outputs an array with 1000 numbers corresponding to
# the classes of the ImageNet-dataset.
pred = model.predict(img_array)
# Decode the output of the VGG16 model.
pred_decoded = decode_predictions(pred)[0]
# Print the predictions.
for code, name, score in pred_decoded:
print("{0:>6.2%} : {1}".format(score, name))
predict(image_path='images/parrot_cropped1.jpg')
predict(image_path=image_paths_train[0])
predict(image_path=image_paths_train[1])
predict(image_path=image_paths_test[0])
model.summary()
transfer_layer = model.get_layer('block5_pool')
transfer_layer.output
conv_model = Model(inputs=model.input,
outputs=transfer_layer.output)
# Start a new Keras Sequential model.
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(conv_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1024, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add the final layer for the actual classification.
new_model.add(Dense(num_classes, activation='softmax'))
optimizer = Adam(lr=1e-5)
loss = 'categorical_crossentropy'
metrics = ['categorical_accuracy']
def print_layer_trainable():
for layer in conv_model.layers:
print("{0}:\t{1}".format(layer.trainable, layer.name))
print_layer_trainable()
conv_model.trainable = False
for layer in conv_model.layers:
layer.trainable = False
print_layer_trainable()
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
epochs = 20
steps_per_epoch = 100
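# Each "epoch" therefore draws steps_per_epoch * batch_size = 100 * 20 = 2000 augmented training images.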
history = new_model.fit_generator(generator=generator_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
class_weight=class_weight,
validation_data=generator_test,
validation_steps=steps_test)
plot_training_history(history)
result = new_model.evaluate_generator(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
example_errors()
conv_model.trainable = True
for layer in conv_model.layers:
# Boolean whether this layer is trainable.
trainable = ('block5' in layer.name or 'block4' in layer.name)
# Set the layer's bool.
layer.trainable = trainable
print_layer_trainable()
optimizer_fine = Adam(lr=1e-7)
new_model.compile(optimizer=optimizer_fine, loss=loss, metrics=metrics)
history = new_model.fit_generator(generator=generator_train,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
class_weight=class_weight,
validation_data=generator_test,
validation_steps=steps_test)
plot_training_history(history)
result = new_model.evaluate_generator(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
example_errors()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Grab the <li> tags
Step2: Handling special dates with multiple accidents
Step3: But then there are also separate entries for each <li> in the nested list inside that bullet
Step4: Extracting data from the HTML
Step5: Iterating over all <li> elements and extracting information
Step6: Let's write a function that will take each <li> and extract the crash details (month, day, link, and text description) from each element. We should also make sure that we handle the cases where there are several crashes per <li>.
Step7: Sanity check the lengths of each list
Step8: Looks good! Time to make the DataFrame, which we can do by passing a Python dict
Step9: Sanity check again
Step10: Let's check that we did everything right for the weird cases by checking one of the bullets that had multiple crashes
Step11: This looks like exactly what we expected. Now let's proceed with clicking the links so we can add in the year. We have to click those links anyway to extract the additional crash details, so let's just grab the years from there.
Step12: Question b
Step13: Extracting elements from the summary page
Step15: Upon inspection of the text of each summary, we can see that there are some cases where in addition to (or instead of) an integer, there is some extraneous text or just a string like "all", "unknown", or "none". Let's handle these special cases and extract numbers in a function
Step16: Scraping each page
Step17: Let's sanity check the lengths of these lists.
Step18: Clean up dates and format them as datetimes
Step19: Let's remove commas.
Step20: Some dates have month first. Some dates have date first. Let's make them consistent while also getting rid of extraneous information appended to the end of the date (like links to references). We'll write our own function to parse dates because we like to do fun and cool things like that.
Step21: We can see now that our dates are nicely formatted and can create them as datetime objects
Step22: Optional
Step23: Let's test with the two URLs from earlier
Step24: Looks like we can correctly extract a summary table with multiple aircraft in it.
Step25: Question c
Step26: So the top 5 crashes, the number of fatalities and the flight origin was
Step27: Let's see the description
Step28: Question d
Step29: Without de-duplication, Bergen Airport, Ninoy Aquino International Airport, and Domodedovo International Airport in Moscow had the highest number of aviation incidents.
Step30: London Heathrow and LAX (entered twice as two slightly different strings) come out on top, which is not unexpected given the number of flights these airports have.
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from the notebook's earlier setup cells (not shown in this excerpt);
# on Python 2 the urlopen import would come from urllib2 instead.
import re
import time
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
base_url = "https://en.wikipedia.org"
index_ref = "/wiki/List_of_accidents_and_incidents_involving_commercial_aircraft"
index_html = urlopen(base_url + index_ref)
index = BeautifulSoup(index_html, "lxml")
result = index.find_all('li')
result[829]
result[830:834]
result[0].find('a').get('href')
result[0].text
result[0].text.split(' – ')
result[0].text.split(' – ')[0]
result[0].text.split(' – ')[1]
def get_date_separator(html_fragment):
# Date separator changes throughout the document, so let's handle both
if ' – ' in html_fragment.text:
return '–'
elif ' - ' in html_fragment.text:
return '-'
else:
return None
def extract_details(html_fragment):
# these lists may have one or more elements when returned
bdates, blinks, bdescrips = [], [], []
if html_fragment.find_all('li') == []:
# Then there is only one crash for this bullet
separator = get_date_separator(html_fragment)
blinks.append(html_fragment.find('a').get('href'))
bdates.append(html_fragment.text.split(separator)[0].strip())
bdescrips.append(html_fragment.text.split(separator)[1].strip())
else:
# Then there are multiple crashes for this bullet
for bullet in html_fragment.find_all('li'):
# Dates might appear in current or parent <li>
separator = get_date_separator(bullet)
if separator != None:
bdates.append(bullet.text.split(separator)[0].strip())
bdescrips.append(bullet.text.split(separator)[1].strip())
else:
parent_separator = get_date_separator(html_fragment)
bdates.append(html_fragment.text.split(parent_separator)[0].strip())
bdescrips.append(bullet.text.strip())
# Relevant link might appear in current or parent <li>
if bullet.find('a') == None:
blinks.append(html_fragment.find('a').get('href'))
else:
blinks.append(bullet.find('a').get('href'))
return bdates, blinks, bdescrips
dates_month_day, links, descriptions = [], [], []
for each_li in result:
if (' – ' in each_li.text or ' - ' in each_li.text) and each_li.find('a') != None:
lis_dates, lis_links, lis_descrips = extract_details(each_li)
dates_month_day += lis_dates
links += lis_links
descriptions += lis_descrips
else:
# If neither condition is true, then we hit duplicate or extra links
# elsewhere in the page so we can skip these and throw them away
continue
len(dates_month_day), len(links), len(descriptions)
df = pd.DataFrame({'date': dates_month_day, 'link': links, 'description': descriptions})
df.head()
df[df.date == 'September 11']
df.to_csv('crashes_question_starter.csv')
df[['description', 'link']].to_csv('crashes_no_extra_credit.csv')
def try_request(url):
html = urlopen(url)
time.sleep(1)
return BeautifulSoup(html, "lxml")
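# A more defensive variant (illustrative sketch, not used below): retry a few times with a
# growing pause before giving up, which helps with transient network errors while scraping.
def try_request_with_retries(url, retries=3, pause=1):
    for attempt in range(retries):
        try:
            html = urlopen(url)
            time.sleep(pause)
            return BeautifulSoup(html, "lxml")
        except Exception:
            time.sleep(pause * (attempt + 1))
    return None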
def extract_summary(trs):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = 'No data', 'No data', 'No data'
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration = each_tr.td.text
except:
pass
else:
pass
return {'destination': destination,
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins,
'registration': registration}
def extract_numbers(td_text, passengers):
Function that handles table data rows to extract numbers.
Handles special cases where there are strings like all, none, etc. in the text
number_regex = re.compile('\d+')
all_regex = re.compile('ll')
none_regex = re.compile('one')
unknown_regex = re.compile('nknown')
try:
data_element = int(number_regex.findall(td_text)[0])
except:
if len(all_regex.findall(td_text)) >= 1:
data_element = passengers
elif len(none_regex.findall(td_text)) >= 1:
data_element = 0
elif len(unknown_regex.findall(td_text)) >= 1:
data_element = 0
else:
data_element = 0
return data_element
# Define lists we use to store our results
dates_w_year, passengers, crew, fatalities, survivors = [], [], [], [], []
registration, origins, destination = [], [], []
for row in links:
# Get HTML of detail page
summary_html = try_request(base_url + row)
trs = summary_html.find_all('tr')
# Extract data from summary HTML
summary = extract_summary(trs)
# Save the data for this page in our lists
dates_w_year.append(summary['date'])
passengers.append(summary['passengers'])
crew.append(summary['crew'])
fatalities.append(summary['fatalities'])
survivors.append(summary['survivors'])
origins.append(summary['origins'])
registration.append(summary['registration'])
destination.append(summary['destination'])
len(destination), len(origins), len(registration), len(dates_w_year), len(passengers), len(crew), len(fatalities), len(survivors)
df_full = pd.DataFrame({'date': dates_w_year, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_full.to_csv('all_data_rescraped.csv')
df_full = pd.read_csv('all_data_rescraped.csv')
dates_w_year = df_full['date']
df_full.columns
df_full.drop(['Unnamed: 0'], axis=1, inplace=True)
dates_w_year[0:10]
cleaned_dates = [str(d).replace(',', '') for d in dates_w_year]
import calendar
months = list(calendar.month_name)
days = list(calendar.day_name)
dates = [str(d) for d in list(range(1, 32))]
years = [str(y) for y in list(range(1900, 2017))]
def parse_date_strings(text):
split_row = text.split()
month, day, year, date = '', '', '', ''
for each in split_row[0:4]:
if each in months:
month = each
elif each in days:
day = each
elif each in years:
year = each
elif each in dates:
date = each
else:
pass
return {'month': month,
'day': day,
'year': year,
'date': date}
def fix_dates(datecol):
correctedcol = []
for row in datecol:
parsed_date = parse_date_strings(row)
correctedcol.append('{} {} {}'.format(parsed_date['date'],
parsed_date['month'],
parsed_date['year']))
return correctedcol
datescol = fix_dates(cleaned_dates)
datescol[0:5]
dates_datetime = pd.to_datetime(datescol, format='%d %B %Y', errors='coerce')
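# Optional sanity check: count any dates that failed to parse (these become NaT with errors='coerce')
print('Unparsed dates:', pd.isnull(dates_datetime).sum())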
df_full['date'] = dates_datetime
df_full.head()
df_full = pd.DataFrame({'date': dates_datetime, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_full.to_csv('final_dataframe.csv')
%pdb
def extract_summaries(tables, relevant_date=None):
if len(tables) == 1:
result = extract_single_table_summary(tables[0])
else:
result = extract_relevant_table_summary(tables, relevant_date)
return {'destination': result['destination'],
'date': result['date'],
'passengers': result['passengers'],
'crew': result['crew'],
'fatalities': result['fatalities'],
'survivors': result['survivors'],
'origins': result['origins'],
'registration': result['registration']}
def pick_out_table(tables, relevant_date):
for table in tables:
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Date')) != None:
# Clean and parse date
date = each_tr.td.text.replace(',', '')
parsed_date = parse_date_strings(date)
if (parsed_date['month'] == relevant_date.split()[0]
and parsed_date['date'] == relevant_date.split()[1]):
return table
return tables[0]
def extract_relevant_table_summary(tables, relevant_date):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = '', '', ''
table = pick_out_table(tables, relevant_date)
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors = extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration = each_tr.td.text
except:
pass
else:
continue
return {'destination': destination.strip(),
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins.strip(),
'registration': registration.strip()}
def extract_single_table_summary(table):
date_w_year, passengers, crew, fatalities, survivors = '', 0, 0, 0, 0
registration, origins, destination = '', '', ''
trs = table.find_all('tr')
for each_tr in trs:
if each_tr.find('th', text = re.compile('Destination')) != None:
try:
destination += ' ' + each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Date')) != None:
try:
date_w_year = each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Passengers')) != None:
try:
passengers += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Crew')) != None:
try:
crew += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Fatalities')) != None:
try:
fatalities += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Survivors')) != None:
try:
survivors += extract_numbers(each_tr.td.text, passengers)
except:
pass
elif each_tr.find('th', text = re.compile('Flight origin')) != None:
try:
origins += ' ' + each_tr.td.text
except:
pass
elif each_tr.find('th', text = re.compile('Registration')) != None:
try:
registration += ' ' + each_tr.td.text
except:
pass
else:
continue
return {'destination': destination.strip(),
'date': date_w_year,
'passengers': passengers,
'crew': crew,
'fatalities': fatalities,
'survivors': survivors,
'origins': origins.strip(),
'registration': registration.strip()}
test_collision_url = 'https://en.wikipedia.org/wiki/1922_Picardie_mid-air_collision'
summary_html = try_request(test_collision_url)
tables = summary_html.find_all('table', {"class" : "infobox vcard vevent"})
result_updated = extract_summaries(tables)
result_updated
test_multiple_dates_url = 'https://en.wikipedia.org/wiki/1950_Air_France_multiple_Douglas_DC-4_accidents'
summary_html = try_request(test_multiple_dates_url)
first_crash = 'June 12'
second_crash = 'June 14'
tables = summary_html.find_all('table', {"class" : "infobox vcard vevent"})
result_updated = extract_summaries(tables, first_crash)
result_updated
result_updated = extract_summaries(tables, second_crash)
result_updated
dates_w_year, passengers, crew, fatalities, survivors = [], [], [], [], []
registration, origins, destination = [], [], []
for num_row in range(len(links)):
# Get HTML of detail page
summary_html = try_request(base_url + links[num_row])
# Get tables that are in these sidebars (mostly one, but sometimes multiple)
tables = summary_html.find_all('table', {"class" : ["infobox", "vcard"]})
# Extract data from summary HTML
summary = extract_summaries(tables, dates_month_day[num_row])
# Save the data for this page in our lists
dates_w_year.append(summary['date'])
passengers.append(summary['passengers'])
crew.append(summary['crew'])
fatalities.append(summary['fatalities'])
survivors.append(summary['survivors'])
origins.append(summary['origins'])
registration.append(summary['registration'])
destination.append(summary['destination'])
# Clean dates
cleaned_dates = [str(d).replace(',', '') for d in dates_w_year]
datescol = fix_dates(cleaned_dates)
dates_datetime = pd.to_datetime(datescol, format='%d %B %Y', errors='coerce')
# Save!
df_summary = pd.DataFrame({'date': dates_datetime, 'link': links, 'description': descriptions, 'passengers': passengers,
'crew': crew, 'fatalities': fatalities, 'survivors': survivors,
'registration': registration, 'flight origin': origins, 'destination': destination})
# save all this scraped stuff!
df_summary.to_csv('final_dataframe_summary.csv')
top_5_crashes = df_full.sort_values('fatalities', ascending=False)[0:5]
top_5_crashes[['fatalities', 'flight origin']]
top_5_crashes['description']
df_full.date[672]
recent_incidents = df_full[673:]
recent_incidents['flight origin'].value_counts()[0:5]
df_full['flight origin'].value_counts()[0:10]
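# Rough de-duplication sketch (assumes the near-duplicate airport strings differ only by
# case or whitespace, which is not always true, e.g. for the two LAX spellings noted above):
df_full['flight origin'].str.strip().str.lower().value_counts()[0:10]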
df_full.to_json('crashes.json')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check and set your environment
Step2: Let's reassign the $SPECPROD environment to something other than dailytest so that we don't conflict with the outputs of the standard DESI integration test. In addition, we need to make raw data input $DESI_SPECTO_DATA match $DESI_SPECTRO_SIM/$PIXPROD where the simulated data will be written.
Step3: Specify the parameters of the simulation.
Step4: Generating noiseless spectra.
Step5: Reading the fibermap and spectral metadata
Step6: Let's go a step further and read the fibermap and simspec files from on-disk.
Step7: Make a simple plot
Step8: Simulating spectra using quickgen.
Step9: Inspect the output cframe files
Step10: Let's make a quick plot of the zeroth spectrum.
Step11: Regrouping the spectra
Step12: Inspect the output (combined and regrouped) spectra
Step13: As a quick example, let's plot up the zeroth spectrum in healpix pixel 19435.
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.table import Table
import desispec.io
import desisim.io
from desisim.obs import new_exposure
from desisim.scripts import quickgen
from desispec.scripts import group_spectra
%pylab inline
def check_env():
for env in ('DESIMODEL', 'DESI_ROOT', 'DESI_SPECTRO_SIM', 'DESI_SPECTRO_DATA',
'DESI_SPECTRO_REDUX', 'SPECPROD', 'PIXPROD'):
if env in os.environ:
print('{} environment set to {}'.format(env, os.getenv(env)))
else:
print('Required environment variable {} not set!'.format(env))
check_env()
%set_env SPECPROD=example
%set_env PIXPROD=example
rawdata_dir = desisim.io.simdir()
%set_env DESI_SPECTRO_DATA=$rawdata_dir
print('Simulated raw data will be written to {}'.format(desisim.io.simdir()))
print('Pipeline will read raw data from {}'.format(desispec.io.rawdata_root()))
print(' (without knowing that it was simulated)')
print('Pipeline will write processed data to {}'.format(desispec.io.specprod_root()))
nspec = 100
seed = 555
flavor = 'dark'
night = '20170615'
expid = 0
fibermap, truth = new_exposure(flavor=flavor, nspec=nspec, seed=seed, night=night,
expid=expid, tileid=None, exptime=None)
rawdata_dir = desispec.io.rawdata_root()
!find $rawdata_dir | sort
fiberfile = desispec.io.findfile('fibermap', night=night, expid=expid)
simspecfile = desisim.io.findfile('simspec', night=night, expid=expid)
print('Reading fibermap file {}'.format(fiberfile))
hdu = fits.open(fiberfile)
hdu.info()
fibermap = Table(hdu['FIBERMAP'].data)
hdu.close()
fibermap[:3]
print('Reading simspec file {}.'.format(simspecfile))
hdu = fits.open(simspecfile)
hdu.info()
meta = Table(hdu['METADATA'].data)
hdu.close()
meta[:3]
allobjtype = meta['OBJTYPE']
redlim = (-0.2, 1.1*meta['REDSHIFT'].max())
fig, ax = plt.subplots()
for objtype in sorted(set(allobjtype)):
indx = objtype == allobjtype
hh = ax.hist(meta['REDSHIFT'][indx], bins=nspec//3,
label=objtype, alpha=0.5, range=redlim)
ax.set_xlabel('Redshift')
ax.set_ylabel('Number of Simulated Spectra')
ax.legend(loc='upper right', ncol=3)
ax.margins(0.2)
ax.set_xlim(redlim)
args = quickgen.parse([
'--simspec', simspecfile,
'--fibermap', fiberfile
])
quickgen.main(args)
cframefile = desispec.io.findfile('cframe', night=night, expid=expid, camera='b0')
print('Reading {}'.format(cframefile))
cframe = desispec.io.frame.read_frame(cframefile)
dir(cframe)
print(cframe.wave.shape, cframe.flux.shape)
fig, ax = plt.subplots()
ax.errorbar(cframe.wave, cframe.flux[0, :], 1/np.sqrt(cframe.ivar[0, :]))
ax.set_xlabel('Wavelength (A)')
ax.set_ylabel('Flux ($10^{-17}$ erg/s/cm$^2$)')
nside = 64
args = group_spectra.parse(['--hpxnside', '{}'.format(nside)])
group_spectra.main(args)
reduxdir = desispec.io.specprod_root()
!find $reduxdir | sort
specfilename = desispec.io.findfile('spectra', groupname=19435, nside=nside)
print('Reading {}'.format(specfilename))
specobj = desispec.io.read_spectra(specfilename)
dir(specobj)
specobj.wave.keys(), specobj.flux.keys()
thisone = 0
fig, ax = plt.subplots()
for camera, color in zip( ('b', 'r', 'z'), ('blue', 'red', 'magenta') ):
ax.plot(specobj.wave[camera], specobj.flux[camera][thisone], color=color)
ax.set_xlabel('Wavelength (A)')
ax.set_ylabel('Flux ($10^{-17}$ erg/s/cm$^2$)')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: # Creating and Manipulating Tensors
Step2: ## Vector Addition
Step3: ### Tensor Shapes
Step4: ### Broadcasting
Step5: ## Matrix Multiplication
Step6: ## Tensor Reshaping
Step7: You can also use tf.reshape to change the number of dimensions (the 'rank') of a tensor.
Step8: ### Exercise #1
Step9: ### Solution
Step10: ## Variables, Initialization and Assignment
Step11: One peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block raises an error.
Step12: The easiest way to initialize a variable is to call global_variables_initializer. Note the use of Session.run(), which is roughly similar to eval().
Step13: Initialized variables keep their values within the same session. However, when starting a new session, they must be initialized again.
Step14: To change the value of a variable, use the assign op. Simply creating the assign op does not run it. As with initialization, you have to run the assignment op for the variable value to be updated.
Step15: There are many more topics about variables that are not covered here, such as loading and saving. To learn more, see the TensorFlow documentation.
Step16: ### Solution
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import tensorflow as tf
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
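# (Editor's aside -- a hedged extra example, not part of the original exercise.)
# Broadcasting also works between tensors of different ranks: here a 3-element
# vector is stretched across both rows of a 2x3 matrix before the addition.
with tf.Graph().as_default():
  matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)  # shape [2, 3]
  row = tf.constant([10, 20, 30], dtype=tf.int32)               # shape [3]
  broadcast_sum = tf.add(matrix, row)                           # row is broadcast to [2, 3]
  with tf.Session() as sess:
    print(broadcast_sum.eval())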
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
# Write your code for Task 1 here.
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
# can matrix mutiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
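# (Editor's aside -- a hedged extra example.) tf.assign_add and tf.assign_sub
# increment or decrement a variable in place; as with tf.assign, the op only
# takes effect once it is run.
with g.as_default():
  with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    increment = tf.assign_add(v, [10])
    sess.run(increment)
    # v started at [3], so this prints [13].
    print(v.eval())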
# Write your code for Task 2 here.
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: a. histogram
Step2: Different features have different distributions.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
from sklearn.manifold import TSNE
from common import Data
lld=Data('lld')
lld.load_training_data()
print 'training feature shape: ', lld.feature.shape
print 'training label shape: ', lld.label.shape
#lld.load_test_data()
#print 'test feature shape: ',lld.feature_test.shape
#print 'test label shape: ',lld.label_test.shape
import matplotlib.pyplot as plt
%matplotlib inline
feature_table=[1,10,100,300]
for ind,fea in enumerate(feature_table):
f= lld.feature[:,fea]
plt.subplot(2,2,ind+1)
plt.hist(f)
#plt.title("Histogram of feature "+str(ind))
plt.axis('tight')
model=TSNE(n_components=2,random_state=0) # reduct the dimention to 2 for visualization
np.set_printoptions(suppress=True)
Y=model.fit_transform(lld.feature,lld.label) # the reducted data
plt.scatter(Y[:, 0], Y[:, 1],c=lld.label[:,0],cmap=plt.cm.Spectral)
plt.title('training data')
plt.axis('tight')
print Y.shape
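# (Editor's aside -- a hedged sketch, not in the original notebook.) t-SNE output is
# sensitive to the perplexity hyper-parameter; re-running the embedding with a
# different value can change the apparent cluster structure. This reuses the
# lld.feature / lld.label arrays loaded above.
model_p50 = TSNE(n_components=2, perplexity=50, random_state=0)
Y_p50 = model_p50.fit_transform(lld.feature)
plt.scatter(Y_p50[:, 0], Y_p50[:, 1], c=lld.label[:, 0], cmap=plt.cm.Spectral)
plt.title('training data (perplexity=50)')
plt.axis('tight')
print Y_p50.shape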
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <pre>
Step2: Example 2 - Number of commits pushed per repository
Step3: The Split-Apply-Combine Pattern
Step4: <img src="http
Step5: This
Step6: Example 1 (using Pandas) - Number of Repositories Created
Step7: Example 2 (using Pandas) - Number of commits pushed per repo
Step8: Example 1 - revisited
Step9: Great for interactive work
Step10: Would prefer to write
Step11: PyToolz
Step12: Example 2 - pipelined with PyToolz
Step13: The Point of Learning Patterns
Step14: New tools
Step15: You can run the same computation on different backends!
Step16: Dask and Castra
|
<ASSISTANT_TASK:>
Python Code:
import os
import gzip
import ujson as json
directory = 'data/github_archive'
filename = '2015-01-29-16.json.gz'
path = os.path.join(directory, filename)
with gzip.open(path) as f:
events = [json.loads(line) for line in f]
#print json.dumps(events[0], indent=4)
new_repo_count = 0
for event in events:
new_repo_count += \
1 if event['type']=="CreateEvent" else 0
print new_repo_count
repo_commits = {}
for event in events:
if event['type']=="PushEvent":
repo = event['repo']['name']
commits = event['payload']['size']
repo_commits[repo] = \
repo_commits.get(repo, 0) + commits
def print_top_items(dct, N=5):
sorted_items = sorted(
dct.iteritems(), key=lambda t: t[1], reverse=True)
for key, value in sorted_items[:N]:
print "{:40} {}".format(key, value)
print_top_items(repo_commits)
from IPython.display import HTML
HTML('<iframe src="http://www.jstatsoft.org/v40/i01" width=800 height=400></iframe>')
repo_commits = {}
for event in events:
if event['type']=="PushEvent":
repo = event['repo']['name']
commits = event['payload']['size']
repo_commits[repo] = \
repo_commits.get(repo, 0) + commits
print_top_items(repo_commits)
import numpy as np
import pandas as pd
from collections import namedtuple
GithubEvent = namedtuple('GithubEvent', ['type_', 'user', 'repo', 'created_at', 'commits'])
def make_record(event):
return GithubEvent(
event['type'], event['actor']['login'],
event['repo']['name'], pd.Timestamp(event['created_at']),
event['payload']['size'] if event['type']=='PushEvent' else np.nan
)
df = pd.DataFrame.from_records(
(make_record(ev) for ev in events),
columns=GithubEvent._fields, index='created_at')
df.head()
df[df.type_=='CreateEvent'].head()
len(df[df.type_=='CreateEvent'])
repo_commits = {}
for event in events:
if event['type']=="PushEvent":
repo = event['repo']['name']
commits = event['payload']['size']
repo_commits[repo] = \
repo_commits.get(repo, 0) + commits
print_top_items(repo_commits)
repo_commits = df[df.type_=='PushEvent'].groupby('repo').commits.sum()
repo_commits.sort(ascending=False)
repo_commits.head(5)
event_counts = df.groupby('type_').repo.count()
event_counts.sort(ascending=False)
event_counts.head()
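# (Editor's aside -- a hedged extra example.) The same split-apply-combine idea
# extends to several aggregations at once, e.g. total and mean commits per repo
# for push events (uses the older pandas .sort() API that this talk relies on):
push_stats = df[df.type_=='PushEvent'].groupby('repo').commits.agg(['sum', 'mean'])
push_stats.sort('sum', ascending=False).head()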
new_repo_count = 0
for event in events:
new_repo_count += \
1 if event['type']=="CreateEvent" else 0
print new_repo_count
reduce(lambda x,y: x+y,
map(lambda ev: 1 if ev['type']=='CreateEvent' else 0,
events))
def datapipe(data, *transforms):
for transform in transforms:
data = transform(data)
return data
datapipe(
events,
lambda events: map(lambda ev: 1 if ev['type']=='CreateEvent' else 0, events),
lambda counts: reduce(lambda x,y: x+y, counts)
)
from toolz.curried import pipe, map, reduce
pipe(events,
map(lambda ev: 1 if ev['type']=='CreateEvent' else 0),
reduce(lambda x,y: x+y)
)
repo_commits = {}
for event in events:
if event['type']=="PushEvent":
repo = event['repo']['name']
commits = event['payload']['size']
repo_commits[repo] = \
repo_commits.get(repo, 0) + commits
print_top_items(repo_commits)
from toolz.curried import filter, reduceby
pipe(events,
filter(lambda ev: ev['type']=='PushEvent'),
reduceby(lambda ev: ev['repo']['name'],
lambda commits, ev: commits+ev['payload']['size'],
init=0),
print_top_items
)
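# (Editor's aside -- a hedged extra example, not from the original talk.) toolz also
# provides countby, which covers the simpler "count items per group" case in a
# single call -- here, the number of events of each type:
from toolz.curried import countby
pipe(events,
     countby(lambda ev: ev['type']),
     print_top_items
)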
def count_commits(filename):
import gzip
import json
from toolz.curried import pipe, filter, reduceby
with gzip.open(filename) as f:
repo_commits = pipe(
map(json.loads, f),
filter(lambda ev: ev['type']=='PushEvent'),
reduceby(lambda ev: ev['repo']['name'],
lambda commits, e: commits+e['payload']['size'],
init=0)
)
return repo_commits
print_top_items(count_commits(path))
import glob
files = glob.glob('C:/ARGO/talks/split-apply-combine/data/github_archive/2015-01-*')
print len(files)
N = 24 #len(files) # 10
%%time
from toolz.curried import reduceby
from __builtin__ import map as pmap
repo_commits = \
pipe(pmap(count_commits, files[:N]),
lambda lst: reduce(lambda out, dct: out + dct.items(), lst, []),
reduceby(lambda t: t[0], lambda s,t: s+t[1], init=0)
)
print_top_items(repo_commits)
%%time
# Remember to start the ipcluster!
# ipcluster start -n 4
from IPython.parallel import Client
p = Client()[:]
pmap = p.map_sync
repo_commits = \
pipe(pmap(count_commits, files[:N]),
lambda lst: reduce(lambda out, dct: out + dct.items(), lst, []),
reduceby(lambda t: t[0], lambda s,t: s+t[1], init=0)
)
print_top_items(repo_commits)
repo_commits = df[df.type_=='PushEvent'].groupby('repo').commits.sum()
repo_commits.sort(ascending=False)
repo_commits.head(5)
from blaze import Symbol, by
event = Symbol('event', 'var * {created_at: datetime, type_: string, user: string, repo: string, commits: int}')
push_events = event[event.type_=='PushEvent']
repo_commits = by(push_events.repo, commits=push_events.commits.sum())
top_repos = repo_commits.sort('commits', ascending=False).head(5)
from blaze import compute
print compute(top_repos, df)
from odo import odo
uri = 'sqlite:///data/github_archive.sqlite::event'
odo(df, uri)
from blaze import Data
db = Data(uri)
compute(top_repos, db)
import os
if os.path.exists('data/github_archive.sqlite'):
os.remove('data/github_archive.sqlite')
from castra import Castra
castra = Castra('data/github_archive.castra',
template=df, categories=categories)
castra.extend_sequence(map(to_df, files), freq='1h')
import dask.dataframe as dd
from dask.diagnostics import ProgressBar
pbar = ProgressBar()
pbar.register()
df = dd.from_castra('data/github_archive.castra')
df.head()
df.type.value_counts().nlargest(5).compute()
df[df.type=='PushEvent'].groupby('repo').commits.resample('h', how='count').compute()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Breaking it down...
Step2: For loops
Step3: One confusing aspect of this loop is range(1,4): why does it loop from 1 to 3 and not 1 to 4? It has to do with the fact that computers start counting at zero. An easier way to understand it is to subtract the two numbers: that gives you the number of times it will loop. For example, 4-1 == 3.
Step4: Indefinite loops
Step5: In the above example, the loop will keep on looping until we enter mike. The value mike is called the sentinel value - a value we look out for, and when it appears we stop the loop. For this reason, indefinite loops are also known as sentinel-controlled loops.
Step6: 1.2 You Code
Step7: Multiple exit conditions
Step8: Counting Characters in Text
Step9: Next, we surround the code we wrote in 1.4 with a sentinel-controlled indefinite loop. The sentinel (the part that exits the loop) is when the text is empty (text==""). The algorithm is
Step10: Metacognition
|
<ASSISTANT_TASK:>
Python Code:
i = 1
while i <= 3:
print(i,"Mississippi...")
i=i+1
print("Blitz!")
## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO STOP OR RESTART THE KERNEL AFTER RUNNING THIS!!!
i = 1
while i <= 3:
print(i,"Mississippi...")
print("Blitz!")
for i in range(1,4):
print(i,"Mississippi...")
print("Blitz!")
# TODO Write code here
name = ""
while name != 'mike':
name = input("Say my name! : ")
print(f"Nope, my name is not {name}!")
while True:
name = input("Say my name!: ")
if name == 'mike':
break
print("Nope, my name is not %s!" %(name))
#TODO Debug this code
nicount=0
while True:
say = input "What say you? ")
if say == 'ni':
break
nicount = 1
print(f"You said 'ni' P {nicount} times.")
times = 0
while True:
name = input("Say my name!: ")
times = times + 1
if name == 'mike': # sentinal 1
print("You got it!")
break
if times == 3: # sentinal 2
print("Game over. Too many tries!")
break
print(f"Nope, my name is not {name}")
# TODO Write code here
# TODO Write Code here:
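# (Editor's sketch -- one possible shape for the character-counting loop described
# in the text above; the TODO cells are left blank so you can write your own version.)
while True:
    text = input("Enter some text (or press ENTER to quit): ")
    if text == "":                 # sentinel: empty text exits the loop
        break
    char = input("Which character should I count? ")
    count = 0
    for ch in text:
        if ch == char:
            count = count + 1
    print(f"'{char}' appears {count} time(s) in your text")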
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Definition of variables
Step2: The Badness Index of each winter
Step3: There you have it! Some candidates for Worst Winter Ever can be determined by the highest peaks. The winter of 2013-14 was pretty bad, but it paled in comparison to the winter of 1978-79.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# Read data, sort by year & month
dateparse = lambda x: pd.datetime.strptime(x, '%Y%m%d')
noaa_monthly = pd.read_csv('chicago-midway-noaa.csv', index_col=2,
parse_dates=True, date_parser=dateparse, na_values=-9999)
noaa_monthly = noaa_monthly.groupby([noaa_monthly.index.year, noaa_monthly.index.month]).sum()
# Fix "suspicious" entry in January 1930, based on a NOAA source
noaa_monthly.loc[(1930, 1), 'MXSD'] = 268 # conversion: 268 mm == 11 in
# Sum seasonal totals
winter_vars = ['MNTM','EMNT','DT00','DX32','MXSD','EMXP','TSNW','DP10']
year_start = 1928
year_end = 2014
season_start = 11 #November
season_end = 3 #March
noaa_winters = pd.concat(
[noaa_monthly.loc[(year, season_start):(year+1, season_end), winter_vars].sum(axis=0)
for year in range(year_start, year_end+1)], axis=1).transpose()
noaa_winters.index = range(year_start, year_end+1)
# Fix variables that should have been handled differently
noaa_winters['TSNW'] /= 24.4
for year in noaa_winters.index:
noaa_winters.loc[year, 'MNTM'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'MNTM'].mean() * 0.18 + 32
noaa_winters.loc[year, 'EMNT'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'EMNT'].min() * 0.18 + 32
noaa_winters.loc[year, 'MXSD'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'MXSD'].max() / 24.4
noaa_winters.loc[year, 'EMXP'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'EMXP'].max() / 24.4
acronym = { 'DP10': 'Number of days with greater than or equal to 1.0 inch of precipitation',
'MXSD': 'Maximum snow depth, inches',
'EMXP': 'Extreme maximum daily precipitation, inches',
'DT00': 'Number days with minimum temperature less than or equal to 0.0 F',
'DX32': 'Number days with maximum temperature less than or equal to 32.0 F',
'EMNT': 'Extreme minimum daily temperature',
'TSNW': 'Total snow fall, inches',
'MNTM': 'Mean temperature'}
# Plot variables
import matplotlib.pyplot as plt
%matplotlib inline
for v in noaa_winters.columns:
noaa_winters[v].plot(figsize=(13,3), color='skyblue');
pd.rolling_mean(noaa_winters[v], 20).plot(color='blue')
plt.title(acronym[v])
plt.legend(["observed data", "20-year rolling average"], loc='best')
plt.show()
# Find the best & worst for each variable
winter_coldest = pd.Series(index=noaa_winters.columns)
winter_warmest = pd.Series(index=noaa_winters.columns)
# For these variables, big is bad
for v in ['MXSD','EMXP','DT00','DX32','TSNW','DP10']:
winter_coldest[v] = noaa_winters[v].max()
winter_warmest[v] = noaa_winters[v].min()
# For these variables, small (or negative) is bad
for v in ['MNTM','EMNT']:
winter_coldest[v] = noaa_winters[v].min()
winter_warmest[v] = noaa_winters[v].max()
# Assign scores to each year
winter_score = 100 * (noaa_winters-winter_warmest).abs() / (winter_coldest-winter_warmest).abs()
badness = winter_score.mean(axis=1)
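# (Editor's aside.) The five worst winters by this index, as a cross-check against
# the peaks in the plot below (Series.order() matches the older pandas API used here):
print badness.order(ascending=False).head()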
# Plot the Badness Index
badness.plot(figsize=(13,6), marker='s', color='skyblue', xticks=badness.index[2::5])
pd.rolling_mean(badness, 20).plot(color='blue')
plt.title("Badness Index of each Chicago winter")
plt.ylabel("Badness index")
plt.xlabel("Year (start of winter)")
plt.legend(["Computed Badness", "20-year rolling average"])
plt.show()
z = (noaa_winters - noaa_winters.mean()) / noaa_winters.std()
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
pca.fit(z)
pca_components = pd.DataFrame(pca.components_, index=['PC'+str(i) for i in range(1,pca.n_components_+1)], \
columns=z.columns)
pca_scores = pd.DataFrame(pca.transform(z), index=z.index, columns=pca_components.index )
print "Explained variance ratios:", pca.explained_variance_ratio_
pca_scores.plot(figsize=(13,8))
plt.legend(loc='best')
plt.title('Principal component scores')
plt.show()
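# (Editor's aside -- a small optional check on the fitted `pca` object above.) The
# cumulative explained-variance ratio shows how much of the total variance the four
# retained components capture together.
print "Cumulative explained variance:", pca.explained_variance_ratio_.cumsum()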
# Cluster analysis
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram
dissimilarity = 1 - noaa_winters.corr().abs()
row_distance = np.clip(squareform(dissimilarity),0,1)
L = linkage(row_distance, method='average')
plt.figure(figsize=(13,9), dpi=100)
plt.subplot(212)
R = dendrogram(L, orientation='bottom')
plt.ylabel('Cluster distance (UPGMA)')
# Matched up with PC loadings (scaled by corresponding PC variances)
leaves = [pca_components.columns[i] for i in R['leaves']]
plt.subplot(211)
(pca_components[leaves].iloc[0] * pca.explained_variance_[0]).plot(kind='bar', color='blue')
(pca_components[leaves].iloc[1] * pca.explained_variance_[1]).plot(kind='bar', color='green')
(pca_components[leaves].iloc[2] * pca.explained_variance_[2]).plot(kind='bar', color='red')
(pca_components[leaves].iloc[3] * pca.explained_variance_[3]).plot(kind='bar', color='cyan')
plt.ylabel('PC loadings times PC variance')
plt.legend(loc='best')
plt.title('Components of each variable: PC loadings scaled by corresponding PC variances')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Searching the database
Step2: We can get the number of returned records by looking at record["Count"]
Step3: But what are our records? We can see their GenBank identifiers by looking at record["IdList"]
Step4: But this isn't immediately informative. We're going to have to look at the assemblies associated with these identifiers in GenBank. We do this with Entrez.elink(), searching for associations between the genome database and the assembly database, compiling all the resulting Link UIDs in a single list.
Step5: Now we can recover links to the nucleotide database for each of these UIDs. There may be several such links, but as we are looking for the full assembly, we care only about the assembly_nuccore_insdc sequences, which are the contigs.
Step6: Once we have these nucleotide database identifiers, we can grab all the sequences and write them out as multi-FASTA files, with Entrez.efetch(). The assembly records themselves though, have to be obtained with Entrez.esummary(), and then a byzantine set of keywords navigated, to get the information we're interested in.
|
<ASSISTANT_TASK:>
Python Code:
import os
from Bio import Entrez, SeqIO
Entrez.email = "" # Use your own real email
Entrez.tool = "Biopython_get_GenBank_genomes.ipynb"
genus = "Pectobacterium"
query_text = "{0} AND bacteria[Organism]".format(genus)
handle = Entrez.esearch(db='genome', term=query_text)
record = Entrez.read(handle)
record["Count"]
record["IdList"]
asm_links = []
for uid in record["IdList"]:
links = Entrez.read(Entrez.elink(dbfrom="genome", db="assembly", retmode="text", from_uid=uid))
[asm_links.append(d.values()[0]) for d in links[0]['LinkSetDb'][0]['Link']]
print("We find {0} genome entries: {1}".format(len(asm_links), asm_links))
sequid_links = {}
for uid in asm_links:
links = Entrez.read(Entrez.elink(dbfrom="assembly", db="nucleotide", retmode="gb", from_uid=uid))
contigs = [l for l in links[0]['LinkSetDb'] if l['LinkName'] == 'assembly_nuccore_insdc'][0]
sequid_links[uid] = [e['Id'] for e in contigs['Link']]
expected_contigs = {}
print("There are {0} genomes identified for {1}:".format(len(sequid_links), genus))
for k, v in sorted(sequid_links.items()):
print("Assembly UID {0}: {1} contigs".format(k, len(v)))
expected_contigs[k] = len(v)
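# (Editor's aside -- a quick sanity check using only the dictionaries built above.)
total_contigs = sum(len(v) for v in sequid_links.values())
print("Total contigs across {0} assemblies: {1}".format(len(sequid_links), total_contigs))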
# Make sure there's a relevant output directory
if not os.path.exists(genus):
os.mkdir(genus)
if not os.path.exists("failures"):
os.mkdir("failures")
# Write output
with open(os.path.join(genus, 'labels.txt'), 'w') as lfh:
with open(os.path.join(genus, 'classes.txt'), 'w') as cfh:
for asm_uid, contigs in sorted(sequid_links.items()):
# Get assembly record information
asm_record = Entrez.read(Entrez.esummary(db='assembly', id=asm_uid, rettype='text'))
asm_organism = asm_record['DocumentSummarySet']['DocumentSummary'][0]['SpeciesName']
try:
asm_strain = asm_record['DocumentSummarySet']['DocumentSummary'][0]['Biosource']['InfraspeciesList'][0]['Sub_value']
except:
asm_strain = ""
gname = asm_record['DocumentSummarySet']['DocumentSummary'][0]['AssemblyAccession'].split('.')[0]
filestem = os.path.join(genus, gname)
# Write a labels.txt and a classes.txt file suitable for pyani
glab, species = asm_organism.split(' ', 1)
glab = glab[0]
labelstr = "{0}\t{1}. {2} {3}".format(gname, glab, species, asm_strain)
print >> lfh, labelstr
print >> cfh, "{0}\t{1}".format(gname, asm_organism)
print(labelstr)
# Get FASTA records for each of the contigs (we could do this with GenBank instead,
# but sometimes these are not formatted correctly with sequences)
query_uids = ','.join(contigs)
tries, success = 0, False
while success == False and tries < 20:
# Also check for total sequence length errors?
try:
print("UID:{0} download attempt {1}".format(asm_uid, tries + 1))
records = list(SeqIO.parse(Entrez.efetch(db='nucleotide', id=query_uids,
rettype="fasta", retmode='text'),
'fasta'))
if len(records) == expected_contigs[asm_uid]: # No exceptions, num records = expected
success = True
else: # No exceptions, but not all contigs
print('{0} records downloaded, expected {1}'.format(len(records),
expected_contigs[asm_uid]))
SeqIO.write(records, os.path.join("failures",
"{0}_{1}_failed.fasta".format(asm_uid, tries)),
'fasta')
tries += 1
except: # Catch any errors, incl. from SeqIO.parse and Entrez.efetch
tries += 1
if tries >= 10:
print("Download failed for {0}\n".format(labelstr))
print("UID:{0} has {1} records, total length {2}\n".format(asm_uid, len(records),
sum([len(r) for r in records])))
SeqIO.write(records, "{0}.fasta".format(filestem), 'fasta')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'cams-csm1-0', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step2: Load Data
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: One-Hot Encoding
Step5: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step6: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step7: Data dimensions
Step8: Helper-function for plotting images
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step13: Variables to be optimized
Step14: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
Step15: Model
Step16: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
Step17: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step18: Cost-function to be optimized
Step19: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step20: Optimization method
Step21: Performance measures
Step22: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step23: TensorFlow Run
Step24: Initialize variables
Step25: Helper-function to perform optimization iterations
Step26: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
Step27: Helper-functions to show performance
Step28: Function for printing the classification accuracy on the test-set.
Step29: Function for printing and plotting the confusion matrix using scikit-learn.
Step30: Function for plotting examples of images from the test-set that have been mis-classified.
Step31: Helper-function to plot the model weights
Step32: Performance before any optimization
Step33: Performance after 1 optimization iteration
Step34: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
Step35: Performance after 10 optimization iterations
Step36: Performance after 1000 optimization iterations
Step37: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
Step38: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
Step39: We are now done using TensorFlow, so we close the session to release its resources.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
tf.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.labels[0:5, :]
data.test.cls = np.array([label.argmax() for label in data.test.labels])
data.test.cls[0:5]
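# (Editor's aside.) The same conversion can be done in one vectorized call;
# np.argmax over axis 1 yields an identical array of class numbers.
print(np.all(np.argmax(data.test.labels, axis=1) == data.test.cls))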
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, [None, img_size_flat])
y_true = tf.placeholder(tf.float32, [None, num_classes])
y_true_cls = tf.placeholder(tf.int64, [None])
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
biases = tf.Variable(tf.zeros([num_classes]))
logits = tf.matmul(x, weights) + biases
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred, dimension=1)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.global_variables_initializer())
batch_size = 100
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred = session.run([correct_prediction, y_pred_cls],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
print_accuracy()
plot_example_errors()
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
plot_weights()
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
plot_weights()
print_confusion_matrix()
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
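# Optional editor-added sketch (not part of the original tutorial): per-class accuracy on the
# test set, reusing the session, y_pred_cls, feed_dict_test and data.test.cls defined above.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
cls_true = data.test.cls
for c in range(num_classes):
    mask = (cls_true == c)
    print("Class {0}: accuracy {1:.1%}".format(c, np.mean(cls_pred[mask] == c)))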
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Corpus After Preprocessing ...
Step2: Basic & Advanced machine learning tools
Step4: What are the features?
Step5: Classification accuracy
Step6: Null accuracy
Step7: Comparing the true and predicted response values
Step8: Conclusion
Step9: Basic terminology
Step10: Metrics computed from a confusion matrix
Step11: Classification Error
Step12: Specificity
Step13: False Positive Rate
Step14: Precision
Step15: Many other metrics can be computed
Step16: Classifier comparison
Step17: Decision Tree Regressor
Step18: Random Forests
Step19: Multiple Linear Regression
Step20: Polynomial Regression
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import random
#Training Corpus
trn_corpus_after_preprocessing = pd.read_csv("output/trn_corpus_after_preprocessing.csv")
#Testing Corpus
tst_corpus_after_preprocessing = pd.read_csv("output/tst_corpus_after_preprocessing.csv")
#tst_corpus_after_preprocessing[tst_corpus_after_preprocessing["Fare"].isnull()]
trn_corpus_after_preprocessing.info()
print("-"*36)
tst_corpus_after_preprocessing.info()
trn_corpus_after_preprocessing.columns
list_of_non_preditor_variables = ['Survived','PassengerId']
#Method 1
#x_train = trn_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived']
#y_train = trn_corpus_after_preprocessing.ix[:,"Survived"]
#Method 2
x_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy()
y_train = trn_corpus_after_preprocessing['Survived'].copy()
#y_train = trn_corpus_after_preprocessing.iloc[:,-1]
#y_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns[-1]]
#x_train
#y_train
x_train.columns
# check the types of the features and response
#print(type(x_train))
#print(type(x_test))
#Method 1
#x_test = tst_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived']
#y_test = tst_corpus_after_preprocessing.ix[:,"Survived"]
#Method 2
x_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy()
y_test = tst_corpus_after_preprocessing['Survived'].copy()
#y_test = tst_corpus_after_preprocessing.iloc[:,-1]
#y_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns[-1]]
#x_test
#y_test
# display the first 5 rows
x_train.head()
# display the last 5 rows
x_train.tail()
# check the shape of the DataFrame (rows, columns)
x_train.shape
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(x_train, y_train)
#Once trained, we can export the tree in Graphviz format using the export_graphviz exporter.
#Below is an example export of a tree trained on the entire iris dataset:
with open("output/titanic.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f)
#Then we can use Graphviz’s dot tool to create a PDF file (or any other supported file type):
#dot -Tpdf titanic.dot -o titanic.pdf.
import os
os.unlink('output/titanic.dot')
#Alternatively, if we have Python module pydotplus installed, we can generate a PDF file
#(or any other supported file type) directly in Python:
import pydotplus
dot_data = tree.export_graphviz(clf, out_file=None)
graph = pydotplus.graph_from_dot_data(dot_data)
graph.write_pdf("output/titanic.pdf")
#The export_graphviz exporter also supports a variety of aesthetic options,
#including coloring nodes by their class (or value for regression)
#and using explicit variable and class names if desired.
#IPython notebooks can also render these plots inline using the Image() function:
from IPython.display import Image
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names= list(x_train.columns), #one name per feature column the tree was fitted on
class_names= ["Died", "Survived"], #one name per class (0 = died, 1 = survived)
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
print("accuracy score: ", clf.score(x_test,y_test))
#After being fitted, the model can then be used to predict the class of samples:
y_pred_class = clf.predict(x_test);
#Alternatively, the probability of each class can be predicted,
#which is the fraction of training samples of the same class in a leaf:
clf.predict_proba(x_test);
# calculate accuracy
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
# examine the class distribution of the testing set (using a Pandas Series method)
y_test.value_counts()
# calculate the percentage of ones
y_test.mean()
# calculate the percentage of zeros
1 - y_test.mean()
# calculate null accuracy (for binary classification problems coded as 0/1)
max(y_test.mean(), 1 - y_test.mean())
# calculate null accuracy (for multi-class classification problems)
y_test.value_counts().head(1) / len(y_test)
# print the first 25 true and predicted responses
from __future__ import print_function
print('True:', y_test.values[0:25])
print('Pred:', y_pred_class[0:25])
# IMPORTANT: first argument is true values, second argument is predicted values
print(metrics.confusion_matrix(y_test, y_pred_class))
# save confusion matrix and slice into four pieces
confusion = metrics.confusion_matrix(y_test, y_pred_class)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
print(TP, TN, FP, FN)
print((TP + TN) / float(TP + TN + FP + FN))
print(metrics.accuracy_score(y_test, y_pred_class))
print((FP + FN) / float(TP + TN + FP + FN))
print(1 - metrics.accuracy_score(y_test, y_pred_class))
print(TN / float(TN + FP))
print(FP / float(TN + FP))
print(TP / float(TP + FP))
print(metrics.precision_score(y_test, y_pred_class))
print("Presicion: ", metrics.precision_score(y_test, y_pred_class))
print("Recall: ", metrics.recall_score(y_test, y_pred_class))
print("F1 score: ", metrics.f1_score(y_test, y_pred_class))
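# Optional sketch (added for illustration): sklearn's classification_report bundles precision,
# recall and F1 per class in one call, using the same y_test / y_pred_class as above.
print(metrics.classification_report(y_test, y_pred_class))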
from sklearn import svm
model = svm.LinearSVC()
model.fit(x_train, y_train)
acc_score = model.score(x_test, y_test)
print("Accuracy score: ", acc_score)
y_pred_class = model.predict(x_test)
from sklearn import metrics
confusion_matrix = metrics.confusion_matrix(y_test, y_pred_class)
print(confusion_matrix)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from matplotlib.colors import ListedColormap
#classifiers
#x_train
#sns.pairplot(x_train)
x_train_scaled = StandardScaler().fit_transform(x_train)
x_test_scaled = StandardScaler().fit_transform(x_test)
x_train_scaled[0]
len(x_train_scaled[0])
df_x_train_scaled = pd.DataFrame(columns=x_train.columns, data=x_train_scaled)
df_x_train_scaled.head()
#sns.pairplot(df_x_train_scaled)
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA", "Gaussian Process"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()
#, GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True), # Take too long...
]
# iterate over classifiers
for name, model in zip(names, classifiers):
model.fit(x_train_scaled, y_train)
acc_score = model.score(x_test_scaled, y_test)
print(name, " - accuracy score: ", acc_score)
#end for
from sklearn import tree
clf = tree.DecisionTreeRegressor()
clf = clf.fit(x_train, y_train)
clf.score(x_test,y_test)
#clf.predict(x_test)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train, y_train)
r_squared = model.score(x_test, y_test)
print("R-squared: %.4f" %r_squared)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train, y_train)
predictions = model.predict(x_test)
#for i in range(predictions.size):
# print("Predicted: %.2f, Target: %.2f" %(predictions[i], y_test[i]))
r_squared = model.score(x_test, y_test)
print("R-squared: %.4f" %r_squared)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
model = LinearRegression()
model.fit(x_train, y_train)
xx = np.linspace(0, 26, 100)
#yy = np.linspace(0, 26, 100)
#yy = model.predict(xx.reshape(xx.shape[0],1))
#plt.plot(xx, yy)
quadratic_featurizer = PolynomialFeatures(degree=2)
x_train_quadratic = quadratic_featurizer.fit_transform(x_train)
x_test_quadratic = quadratic_featurizer.transform(x_test)
x_train.head()
model_quadratic = LinearRegression()
model_quadratic.fit(x_train_quadratic, y_train)
#predictions = model_quadratic.predict(x_test_quadratic)
#r_squared = model_quadratic.score(x_test_quadratic, y_test)
#r_squared
#print("R-squared: %.4f" %r_squared)
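# Optional sketch (added for illustration): once the test features are transformed with the
# already-fitted featurizer (transform, not fit), the quadratic model can be scored like the others.
x_test_quadratic = quadratic_featurizer.transform(x_test)
r_squared_quadratic = model_quadratic.score(x_test_quadratic, y_test)
print("R-squared (quadratic): %.4f" % r_squared_quadratic)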
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will now ask the pygame library to load a .wav file into memory and make it into a sound, ready to play.
Step2: The play() method of the drum sound can be put in a selfmade function, so we can add stuff and reuse it later.
Step3: Let's run our new play() function
Step4: Time to connect a button I would say. We'll use the BCM numbering, the way they are engraved on the case we're using and on the GPIO pinout document.
Step5: After this preparation, we can ask Python to "register an event".
Step6: You can test it out by pressing the button.
Step7: Ok, this was so much fun, let's do another one
Step8: This may be a good moment to show how to make our function more generic and reusable.
Step9: So we can now use (read
Step10: Which means that, instead of making two separate functions that we register one by one, we can now create one function that uses the pin number as input to play the appropriate tune
Step11: Play the cymbal
Step12: And the drums
Step13: Let's build the schema below to get our first, very rudimentary, music box
Step14: Music time!
|
<ASSISTANT_TASK:>
Python Code:
import pygame.mixer
pygame.mixer.init()
drum = pygame.mixer.Sound("/opt/sonic-pi/etc/samples/drum_tom_mid_hard.wav")
drum.play()
def play():
print("Drums !")
drum.play()
play()
#load GPIO library
import RPi.GPIO as GPIO
#Set BCM (Broadcom) mode for the pin numbering
GPIO.setmode(GPIO.BCM)
#activate pin 17 as input and set the built-in pull-up/pull-down resistor to pull-up
GPIO.setup(17, GPIO.IN, GPIO.PUD_UP)
def play(pin_number):
print("Drum roll !")
drum.play()
GPIO.add_event_detect(17, GPIO.FALLING, play, 200)
GPIO.remove_event_detect(17)
cymbal = pygame.mixer.Sound("/opt/sonic-pi/etc/samples/drum_cymbal_open.wav")
cymbal.play()
sound_pins = {
17: drum,
27: cymbal,
}
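# Optional sketch (added for illustration, not in the original): more sounds can be mapped to
# further GPIO pins in the same way. The pin number and sample file below are assumptions -
# uncomment and adapt them to your wiring and to a .wav file that exists on your Pi.
# snare = pygame.mixer.Sound("/opt/sonic-pi/etc/samples/drum_snare_hard.wav")
# sound_pins[22] = snare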
sound_pins[17].play()
def play(pin):
sound = sound_pins[pin]
print("Playing sound for pin %s" % pin)
sound.play()
play(27)
play(17)
for pin in sound_pins:
GPIO.setup(pin, GPIO.IN, GPIO.PUD_UP)
GPIO.add_event_detect(pin, GPIO.FALLING, play, 200)
for pin in sound_pins:
GPIO.remove_event_detect(pin)
#clean up the GPIO settings again
GPIO.cleanup()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we create a training set of size n_samples containing n_features each.
Step2: Next we run a performance test on the created data set. Therefore we train a random forest classifier with n_est estimators multiple times and save the best time for later comparison. Each time we use a different number of jobs to train the classifier
Step3: Finally we plot our results.
|
<ASSISTANT_TASK:>
Python Code:
# imports
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from timeit import default_timer as timer
import matplotlib.pyplot as plt
n_samples = 200000
n_features = 20
X, y = make_classification(n_samples=n_samples, n_features=n_features)
# parameter for performance test
n_est = 16
max_jobs = 8
best_in = 3
# performance test
measurements = []
i = 1
while i <= max_jobs:
min_t = float("inf")
for j in range(best_in):
rnd_forest_clf = RandomForestClassifier(n_estimators=n_est, n_jobs=i)
start = timer()
rnd_forest_clf.fit(X,y)
stop = timer()
min_t = min(min_t, stop - start)
measurements.append(min_t)
i += 1
fig = plt.figure()
fig.suptitle('Visualization of the runtime depending on the number of used jobs.')
ax = fig.add_subplot(111)
ax.set_xlabel('used jobs')
ax.set_ylabel('runtime in seconds')
plt.xticks(range(1, max_jobs + 1), range(1, max_jobs + 1))
ax.plot(range(1, max_jobs + 1), measurements, 'ro')
plt.show()
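# Optional summary (added for illustration): report the speed-up of the best parallel run
# relative to the single-job baseline, using the measurements list gathered above.
best_jobs = measurements.index(min(measurements)) + 1
print("Best run: %d job(s), %.2fs (%.2fx speed-up over 1 job)"
      % (best_jobs, min(measurements), measurements[0] / min(measurements)))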
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example 1
Step2: Passing datetime.datetime
Step3: Example 2
Step4: Therefore, according to the Brazilian epiweek system, it should fall on the last epiweek of year 2015
Step5: Example 3
Step6: Comparing with isocalendar
|
<ASSISTANT_TASK:>
Python Code:
from episem import episem
d = '2010-10-01'
episem(d)
episem(d,out='W')
import datetime
datetime.datetime.strptime(d, '%Y-%m-%d')
dt = datetime.datetime.strptime(d, '%Y-%m-%d')
episem(dt)
dt2 = datetime.datetime.strptime('2016-01-01', '%Y-%m-%d')
dt2.isoweekday()
episem(dt2)
dt3 = datetime.datetime.strptime('2017-01-01', '%Y-%m-%d')
dt3.isoweekday()
episem(dt3)
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt.date(), dt.isocalendar(), episem(dt)))
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt2.date(), dt2.isocalendar(), episem(dt2)))
print('Date: %s\nISO-calendar: %s\nBR-epiweek: %s\n' % (dt3.date(), dt3.isocalendar(), episem(dt3)))
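# Optional sketch (added for illustration): the same comparison for a few more year boundaries,
# reusing only the episem(date_string) call demonstrated above.
for d in ['2014-12-31', '2015-01-01', '2016-12-31', '2017-01-02']:
    print(d, '->', episem(d))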
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: as compounds
Step2: and as mixtures of compounds
Step3: These materials can be used in a particular geometry (currently only flat multi-layered samples)
Step4: Cross sections
Step5: Partial cross sections
Step6: Differential cross sections
Step7: X-ray spectra
|
<ASSISTANT_TASK:>
Python Code:
from spectrocrunch.materials import element
element1 = element.Element("Ca")
print(element1)
from spectrocrunch.materials import compoundfromformula
from spectrocrunch.materials import compoundfromname
from spectrocrunch.materials import compoundfromlist
from spectrocrunch.materials import types
compound1 = compoundfromformula.CompoundFromFormula("PbSO4",density=6.29)
compound2 = compoundfromformula.CompoundFromFormula("CaSO4",density=2.32)
compound3 = compoundfromname.compoundfromname("linseed oil")
compound4 = compoundfromlist.CompoundFromList(["Ca","C","O"],
[1,1,3],
types.fraction.mole,
2.71,
name="calcite")
print(compound1,compound2,compound3,compound4)
from spectrocrunch.materials import mixture
mixture1 = mixture.Mixture([compound1,compound2],[0.5,0.5],\
types.fraction.mass,name="My mixture")
print(mixture1)
from spectrocrunch.materials import multilayer
from spectrocrunch.detectors import xrf as xrfdetectors
from spectrocrunch.geometries import xrf as xrfgeometries
from spectrocrunch.sources import xray as xraysources
source = xraysources.factory("synchrotron")
detector = xrfdetectors.factory("leia")
geometry = xrfgeometries.factory("sxm120",
detectorposition=-15.,
detector=detector,
source=source)
sample = multilayer.Multilayer(material=[element1,compound1,mixture1],
thickness=[1e-4,1e-4,1e-4],
geometry = geometry)
print(sample)
print("")
print(geometry)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
energy = np.linspace(2,60,100)
material = mixture1
mu = material.mass_att_coeff(energy)
plt.plot(energy,mu,label='Total')
mu = material.mass_abs_coeff(energy)
plt.plot(energy,mu,label='Absorption')
mu = material.rayleigh_cross_section(energy)
plt.plot(energy,mu,label='Coherent')
mu = material.compton_cross_section(energy)
plt.plot(energy,mu,label='Incoherent')
plt.title(str(material))
plt.xlabel('Energy (keV)')
plt.ylabel('Cross-section (cm$^2$/g)')
plt.legend(loc='best')
plt.gca().set_yscale('log', basey=10)
plt.tight_layout()
plt.show()
material = mixture1
material.unmarkabsorber()
print("Partial cross-section: no selection")
print('\n'.join(list(material.markinfo())))
energy = np.linspace(3,5,100)
mu = material.mass_abs_coeff(energy)
plt.plot(energy,mu,label='Absorption')
material.unmarkabsorber()
material.markabsorber("Ca",shells=['K'])
print("\nPartial cross-section: selected Ca-K")
print('\n'.join(list(material.markinfo())))
mu = material.partial_mass_abs_coeff(energy)
plt.plot(energy,mu,label='Ca-K')
mu = material.fluorescence_cross_section(energy)
plt.plot(energy,mu,label='Fluo (Ca-K)')
material.unmarkabsorber()
material.markabsorber("Ca",shells=['K'],fluolines=['KA'])
print("\nPartial cross-section: selected Ca-Ka")
print('\n'.join(list(material.markinfo())))
mu = material.fluorescence_cross_section(energy)
plt.plot(energy,mu,label='Fluo (Ca-Ka)')
plt.title(str(material))
plt.xlabel('Energy (keV)')
plt.ylabel('Cross-section (cm$^2$/g)')
plt.legend(loc='best')
plt.gca().set_yscale('log', basey=10)
plt.tight_layout()
plt.show()
material.unmarkabsorber()
azimuth = np.linspace(0,360,100)
polar = np.linspace(0,180,50)
extent = [azimuth[0],azimuth[-1],polar[0],polar[-1]]
azimuth,polar = np.meshgrid(np.radians(azimuth),np.radians(polar))
energy = 5
material = mixture1
print(material)
def plotcs(cs):
plt.imshow(cs,origin="lower",extent=extent)
plt.axhline(y=90)
for x in [90,180,270]:
plt.axvline(x=x)
plt.xlabel("Azimuth (deg)")
plt.ylabel("Polar (deg)")
plt.colorbar(label="cm$^2$/g/sr")
cs = material.diff_rayleigh_cross_section(energy,source=source)
plotcs(cs(azimuth,polar))
plt.title("Elastic")
plt.show()
cs = material.diff_compton_cross_section(energy,source=source)
plotcs(cs(azimuth,polar))
plt.title("Inelastic")
plt.show()
material.markabsorber(energybounds=[1,energy])
cs = material.diff_fluorescence_cross_section(energy,source=source)
cs = np.full_like(azimuth,sum(cs.values()))
plotcs(np.full_like(azimuth,cs))
plt.title("Fluorescence")
plt.show()
fig,axs = plt.subplots(2,2,figsize=(10,10))
energy = [5,5.1]
weights = [2,1]
emin = 2
emax = 5.5
azimuth = 0
polar = np.pi/2
plt.sca(axs[0][0])
spectrum = element1.xrayspectrum(energy,weights=weights,emax=emax,emin=emin,\
source=source,azimuth=azimuth,polar=polar)
spectrum.plot()
plt.sca(axs[0][1])
spectrum = compound1.xrayspectrum(energy,weights=weights,emax=emax,emin=emin,\
source=source,azimuth=azimuth,polar=polar)
spectrum.plot()
plt.sca(axs[1][0])
spectrum = mixture1.xrayspectrum(energy,weights=weights,emax=emax,emin=emin,\
source=source,azimuth=azimuth,polar=polar)
spectrum.plot()
plt.sca(axs[1][1])
spectrum = sample.xrayspectrum(energy,weights=weights,emax=emax,emin=emin)
spectrum.plot()
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Damped, driven nonlinear pendulum
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Step7: Damped pendulum
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
#I worked with James A and Hunter T.
def derivs(y, t, a, b, omega0):
"""Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derivatives at t[i]: [dtheta[i],domega[i]].
"""
# YOUR CODE HERE
#raise NotImplementedError()
theta = y[0]
omega = y[1]
dtheta =omega
dw = -(g/l)*np.sin(theta)-a*omega-b*np.sin(omega0*t)
return [dtheta, dw]
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
"""Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
"""
# YOUR CODE HERE
#raise NotImplementedError()
if y.ndim==1:
theta = y[0]
omega = y[1]
if y.ndim==2:
theta = y[:,0]
omega = y[:,1]
E = g*l*(1-np.cos(theta))+0.5*l**2*omega**2
return (E)
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
# YOUR CODE HERE
#raise NotImplementedError()
y0 = [np.pi,0]
solution = odeint(derivs, y0, t, args = (0,0,0), atol = 1e-5, rtol = 1e-4)
# YOUR CODE HERE
#raise NotImplementedError()
plt.plot(t,energy(solution), label="$Energy/mass$")
plt.title('Simple Pendulum Engery')
plt.xlabel('time')
plt.ylabel('$Engery/Mass$')
plt.ylim(9.2,10.2);
# YOUR CODE HERE
#raise NotImplementedError()
theta= solution[:,0]
omega = solution[:,1]
plt.plot(t ,theta, label = "$\Theta (t)$")
plt.plot(t, omega, label = "$\omega (t)$")
plt.ylim(-0.5,5)
plt.legend()
plt.title('Simple Pendulum $\Theta (t)$ and $\omega (t)$')
plt.xlabel('Time');
assert True # leave this to grade the two plots and their tuning of atol, rtol.
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
"""Integrate the damped, driven pendulum and make a phase plot of the solution."""
# YOUR CODE HERE
#raise NotImplementedError()
y0 =[-np.pi+0.1,0]
solution = odeint(derivs, y0, t, args = (a,b,omega0), atol = 1e-5, rtol = 1e-4)
theta=solution[:,0]
omega=solution[:,1]
plt.plot(theta, omega, color="k")
plt.title('Damped and Driven Pendulum Motion')
plt.xlabel('$\Theta (t)$')
plt.ylabel('$\omega (t)$')
plt.xlim(-2*np.pi, 2*np.pi)
plt.ylim(-10,10);
plot_pendulum(0.5, 0.0, 0.0)
# YOUR CODE HERE
#raise NotImplementedError()
interact(plot_pendulum, a=(0.0,1.0,0.1), b=(0.0,10.0,0.1), omega0 = (0.0,10.0,0.1));
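# Optional sketch (added for illustration): energy per mass over time for one damped case,
# reusing derivs, energy, odeint and t defined above (a, b, omega0 below are arbitrary choices).
y0 = [-np.pi + 0.1, 0.0]
sol = odeint(derivs, y0, t, args=(0.5, 0.0, 0.0), atol=1e-5, rtol=1e-4)
plt.plot(t, energy(sol))
plt.xlabel('time')
plt.ylabel('$E/m$')
plt.title('Energy decay of the damped pendulum');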
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Check out the Data
Step2: EDA-Exploratory Data Analysis
Step3: Training a Linear Regression Model
Step4: Train Test Split
Step5: Creating and Training the Model
Step6: Model Evaluation
Step7: Predictions from our Model
Step8: Notice that y_test and the predictions fall almost on a straight line, which indicates that the model's predictions are quite accurate!
Step9: Regression Evaluation Metrics
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
USAhousing = pd.read_csv('USA_Housing.csv')
USAhousing.head()
USAhousing.info()
USAhousing.describe()
USAhousing.columns
sns.set_style('darkgrid')
sns.pairplot(USAhousing)
sns.distplot(USAhousing['Price'])
sns.heatmap(USAhousing.corr())
X = USAhousing[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]
y = USAhousing['Price']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X_train,y_train)
# print the intercept
print(lm.intercept_)
coeff_df = pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
coeff_df
predictions = lm.predict(X_test)
plt.scatter(y_test,predictions)
sns.distplot((y_test-predictions),bins=50);
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
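# Optional addition (for illustration): R^2 is another common regression metric; both lines
# below use only objects already defined in this notebook.
print('R^2:', metrics.r2_score(y_test, predictions))
print('R^2 (via model.score):', lm.score(X_test, y_test))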
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mesh generation by Transfinite Interpolation applied to the sea dike problem
Step2: Unfortunately, the TFI is defined on the unit square, so we have to normalize the sea dike topography, before applying the TFI.
Step3: OK, now we have the normalized dike topography on a unit square, so we can define the parametric curve for the topography.
Step4: No error so far. Before plotting the generated mesh, we have to unnormalize the spatial coordinates.
|
<ASSISTANT_TASK:>
Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams
# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore")
# Define number of grid points in x-direction and spatial vectors
NXtopo = 100
x_dike = np.linspace(0.0, 61.465, num=NXtopo)
z_dike = np.zeros(NXtopo)
# calculate dike topography
def dike_topo(x_dike, z_dike, NX1):
for i in range(NX1):
if(x_dike[i]<4.0):
z_dike[i] = 0.0
if(x_dike[i]>=4.0 and x_dike[i]<18.5):
z_dike[i] = (x_dike[i]-4) * 6.76/14.5
if(x_dike[i]>=18.5 and x_dike[i]<22.5):
z_dike[i] = 6.76
if(x_dike[i]>=22.5 and x_dike[i]<x_dike[-1]):
z_dike[i] = -(x_dike[i]-22.5) * 3.82/21.67 + 6.76
return x_dike, z_dike
# Define figure size
rcParams['figure.figsize'] = 10, 7
# Plot sea dike topography
dike_topo(x_dike,z_dike,NXtopo)
plt.plot(x_dike,z_dike)
plt.title("Sea dike topography" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
# Normalize sea dike topography
xmax_dike = np.max(x_dike)
zmax_dike = np.max(z_dike)
x_dike_norm = x_dike / xmax_dike
z_dike_norm = z_dike / zmax_dike + 1
# Plot normalized sea dike topography
plt.plot(x_dike_norm,z_dike_norm)
plt.title("Normalized sea dike topography" )
plt.xlabel("x []")
plt.ylabel("z []")
plt.axes().set_aspect('equal')
# Define parameters for deformed Cartesian mesh
NX = 80
NZ = 20
# Define parametric curves at model boundaries ...
# ... bottom boundary
def Xb(s):
x = s
z = 0.0
xzb = [x,z]
return xzb
# ... top boundary
def Xt(s):
x = s
# normalized x-coordinate s -> unnormalized x-coordinate x_d
x_d = xmax_dike * s
z_d = 0.0
if(x_d<4.0):
z_d = 0.0
if(x_d>=4.0 and x_d<18.5):
z_d = (x_d-4) * 6.76/14.5
if(x_d>=18.5 and x_d<22.5):
z_d = 6.76
if(x_d>=22.5 and x_d<xmax_dike):
z_d = -(x_d-22.5) * 3.82/21.67 + 6.76
# unnormalized z-coordinate z_d -> normalized z-coordinate z
z = z_d / zmax_dike + 1
xzt = [x,z]
return xzt
# ... left boundary
def Xl(s):
x = 0.0
z = s
xzl = [x,z]
return xzl
# ... right boundary
def Xr(s):
x = 1
z = s
xzr = [x,z]
return xzr
# Transfinite interpolation
# Discretize along xi and eta axis
xi = np.linspace(0.0, 1.0, num=NX)
eta = np.linspace(0.0, 1.0, num=NZ)
xi1, eta1 = np.meshgrid(xi, eta)
# Intialize matrices for x and z axis
X = np.zeros((NX,NZ))
Z = np.zeros((NX,NZ))
# loop over cells
for i in range(NX):
Xi = xi[i]
for j in range(NZ):
Eta = eta[j]
xb = Xb(Xi)
xb0 = Xb(0)
xb1 = Xb(1)
xt = Xt(Xi)
xt0 = Xt(0)
xt1 = Xt(1)
xl = Xl(Eta)
xr = Xr(Eta)
# Transfinite Interpolation (Gordon-Hall algorithm)
X[i,j] = (1-Eta) * xb[0] + Eta * xt[0] + (1-Xi) * xl[0] + Xi * xr[0] \
- (Xi * Eta * xt1[0] + Xi * (1-Eta) * xb1[0] + Eta * (1-Xi) * xt0[0] \
+ (1-Xi) * (1-Eta) * xb0[0])
Z[i,j] = (1-Eta) * xb[1] + Eta * xt[1] + (1-Xi) * xl[1] + Xi * xr[1] \
- (Xi * Eta * xt1[1] + Xi * (1-Eta) * xb1[1] + Eta * (1-Xi) * xt0[1] \
+ (1-Xi) * (1-Eta) * xb0[1])
# Unnormalize the mesh
X = X * xmax_dike
Z = Z * zmax_dike
# Plot TFI mesh (physical domain)
plt.plot(X, Z, 'k')
plt.plot(X.T, Z.T, 'k')
plt.title("Sea dike TFI grid (physical domain)" )
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.savefig('sea_dike_TFI.pdf', bbox_inches='tight', format='pdf')
plt.show()
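# Optional sanity check (added for illustration): cell areas of the generated mesh via the
# shoelace formula; strongly varying or near-zero areas would indicate a distorted grid.
x1, x2 = X[:-1, :-1], X[1:, :-1]
x3, x4 = X[1:, 1:], X[:-1, 1:]
z1, z2 = Z[:-1, :-1], Z[1:, :-1]
z3, z4 = Z[1:, 1:], Z[:-1, 1:]
area = 0.5 * np.abs(x1*z2 - x2*z1 + x2*z3 - x3*z2 + x3*z4 - x4*z3 + x4*z1 - x1*z4)
print("cell area [m^2]: min = %.4f, max = %.4f" % (area.min(), area.max()))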
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The second method defined fits a polynomial curve to the data points and then plots it. Here we will try a 1, 2, and 3, degree polynomial.
Step2: And plotted with the data points, we can clearly see that the cubic fit is the most accurate.
Step3: Problem 5.22 Vectorizing the Midpoint Integration
Step4: Next employs the built-in python sum() function to do the same thing.
Step5: Finally, the use of numpy's sum() function
Step6: Problem 5.23 - 5.25 Implementing Lagrange's interpolation formula
Step7: Now we will use the polynomial approximator to guess the already existing points. It should return the exact values (or at least very very close).
Step8: Now we'll try the same thing, but now with graphing (5.24). First we graph sin(x) with its interpolated points
Step9: Finally we investigate our newly created graph function for
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import matplotlib.pyplot as plt
# p1, p2, p3, p4 and Lagrange_poly2b are assumed to be the exercise's own solution modules,
# imported in an earlier (not shown) cell; they are not standard packages.
p_data = p1.retrieve_pendulum_data()
plt.plot(p_data[0],p_data[1],'go')
plt.show()
p_data = p1.retrieve_pendulum_data()
fit1 = p1.pendulum_fit(p_data[0],p_data[1],1)
plt.plot(fit1[0],fit1[1],'y')
fit2 = p1.pendulum_fit(p_data[0],p_data[1],2)
plt.plot(fit2[0],fit2[1],'b')
fit3 = p1.pendulum_fit(p_data[0],p_data[1],3)
plt.plot(fit3[0],fit3[1],'r')
plt.show()
p_data = p1.retrieve_pendulum_data()
plt.plot(p_data[0],p_data[1],'go')
fit1 = p1.pendulum_fit(p_data[0],p_data[1],1)
plt.plot(fit1[0],fit1[1],'y')
fit2 = p1.pendulum_fit(p_data[0],p_data[1],2)
plt.plot(fit2[0],fit2[1],'b')
fit3 = p1.pendulum_fit(p_data[0],p_data[1],3)
plt.plot(fit3[0],fit3[1],'r')
plt.show()
def f(x):
return x**3 - 15*x
print(p2.midpointint(f, 0, 10, 1000))
%timeit p2.midpointint(f, 0, 10, 1000)
print(p2.midpointint_python_sum(f, 0, 10, 1000))
%timeit p2.midpointint_python_sum(f, 0, 10, 1000)
print(p2.midpointint_numpy_sum(f, 0, 10, 1000))
%timeit p2.midpointint_numpy_sum(f, 0, 10, 1000)
xp = np.linspace(0, math.pi, 5)
sin_vectorized = np.vectorize(math.sin)
yp = sin_vectorized(xp)
print xp
print yp
for i in xrange(len(xp)):
print "p_L approximation: " + str(p3.p_L(xp[i], xp, yp))
print "Actual: " + str(yp[i])
p4.graph(math.sin, 5, 0, math.pi)
import Lagrange_poly2b
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an empty dictionary. For each variable whose absolute correlation coefficient with the space heating energy consumption is greater than 0.35, store the variable's column position and its correlation coefficient in the dictionary.
Step2: Sort the dictionary entries by their correlation coefficients.
Step3: I will print the indices and correlation coefficient of variables we will use in our model besides materials.
|
<ASSISTANT_TASK:>
Python Code:
import csv
file = open('public_layout.csv','r')
reader = csv.reader(file, delimiter=',')
fullcsv = list(reader)
import numpy as np
dic_1 = dict()
print(dic_1)
for i in range(801):
data = np.genfromtxt('recs2009_public.csv',delimiter=',',skip_header=1,usecols=(i,908))
coef = np.corrcoef(data[:,0],data[:,1])
if abs(coef[0][1])>=0.35:
dic_1[i]=coef[0][1]
print(dic_1)
import operator
sortedDic=sorted(dic_1.items(), key=operator.itemgetter(1))
sortedDic
variables_chosen=[6, 315, 430, 705]
print(sortedDic[-2])
print(sortedDic[8])
print(sortedDic[6])
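# Optional sketch (added for illustration): load the selected predictor columns (plus the target
# in column 908) into one array for later modelling, reusing the same genfromtxt pattern as above.
cols = tuple(variables_chosen) + (908,)
model_data = np.genfromtxt('recs2009_public.csv', delimiter=',', skip_header=1, usecols=cols)
print(model_data.shape)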
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: <img src="https
Step4: 3 - Launching IJulia
Step5: And now, you should see a IJulia notebook starting next to this IPython Notebook
Step6: 5 - To Un-install / Re-install Julia and Ijulia (or other trouble-shooting)
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import io
# downloading julia (32Mo, may take 1 minute or 2)
try:
import urllib.request as urllib2 # Python 3
except:
import urllib2 # Python 2
if 'amd64' in sys.version.lower():
julia_binary="julia-0.3.7-win64.exe"
julia_url="http://s3.amazonaws.com/julialang/bin/winnt/x64/0.3/julia-0.3.7-win64.exe"
hashes=("90dedbbee5deb569c613543cc709af85", "813f049445421b9531cb6a3a4a2871fdbe1ca5d0" )
else:
julia_binary="julia-0.3.7-win32.exe"
julia_url="http://s3.amazonaws.com/julialang/bin/winnt/x86/0.3/julia-0.3.7-win32.exe"
hashes=("bd2237758fb2a034174249010cf7ba33", "4319c95c38622792b112478d240d2586e08430f0")
julia_installer=os.environ["WINPYDIR"]+"\\..\\tools\\"+julia_binary
os.environ["julia_installer"]=julia_installer
g = urllib2.urlopen(julia_url)
with io.open(julia_installer, 'wb') as f:
f.write(g.read())
g.close
g = None
#checking it's there
!dir %julia_installer%
# checking it's the official julia0.3.2
import hashlib
def give_hash(of_file, with_this):
with io.open(julia_installer, 'rb') as f:
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-1"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*40)
print ("%s %s %s" % (give_hash(julia_installer, hashlib.md5) , give_hash(julia_installer, hashlib.sha1),julia_installer))
assert give_hash(julia_installer, hashlib.md5) == hashes[0]
assert give_hash(julia_installer, hashlib.sha1) == hashes[1]
os.environ["JULIA_HOME"] = os.environ["WINPYDIR"]+"\\..\\tools\\Julia\\bin\\"
os.environ["JULIA_EXE"]="julia.exe"
os.environ["JULIA"]=os.environ["JULIA_HOME"]+os.environ["JULIA_EXE"]
# for installation we need this
os.environ["JULIAROOT"]=os.path.join(os.path.split(os.environ["WINPYDIR"])[0] , 'tools','julia' )
# let's install it (add a /S before /D if you want silence mode installation)
#nullsoft installers don't accept . or .. conventions
# If you are "USB life style", or multi-winpython
# ==> UN-CLICK the OPTION 'CREATE a StartMenuFolder and Shortcut' <== (when it will show up)
!start cmd /C %julia_installer% /D=%JULIAROOT%
# Writing a julia initial run script, for convenience
bat_text = r"""
@echo off
set WINPYDIR=%~dp0..\blablaPYTHON
set WINPYVER=blablaWINPYVER
set HOME=%WINPYDIR%\..\settings
set PATH=%WINPYDIR%\Lib\site-packages\PyQt4;%WINPYDIR%\;%WINPYDIR%\DLLs;%WINPYDIR%\Scripts;%WINPYDIR%\..\tools;%WINPYDIR%\..\tools\mingw32\bin;%PATH%;%WINPYDIR%\..\tools\TortoiseHg
set JULIA_HOME=%WINPYDIR%\..\tools\Julia\bin\
if exist "%JULIA_HOME%" goto julia_next
echo --------------------
echo First install Julia in \tools\Julia of winpython
echo suggestion : don't create Julia shortcuts, nor menu, nor desktop icons
echo (they would create a .julia in your home directory rather than here)
echo When it will be done, launch again this .bat
if not exist "%JULIA_HOME%" goto julia_end
:julia_next
set SYS_PATH=%PATH%
set PATH=%JULIA_HOME%;%SYS_PATH%
set JULIA_EXE=julia.exe
set JULIA=%JULIA_HOME%%JULIA_EXE%
set private_libdir=bin
if not exist "%JULIA_HOME%..\lib\julia\sys.ji" ( ^
echo "Preparing Julia for first launch. This may take a while"
echo "You may see two git related errors. This is completely normal"
cd "%JULIA_HOME%..\share\julia\base"
"%JULIA%" --build "%JULIA_HOME%..\lib\julia\sys0" sysimg.jl
"%JULIA%" --build "%JULIA_HOME%..\lib\julia\sys" -J sys0.ji sysimg.jl
popd && pushd "%cd%" )
echo "julia!"
echo --------------------
echo to install Ijulia for Winpython (the first time) :
echo type 'julia'
echo type in Julia prompt 'Pkg.add("IJulia")'
echo type in Julia prompt 'Pkg.add("PyCall")'
echo type in Julia prompt 'Pkg.add("PyPlot")'
echo type in Julia prompt 'Pkg.add("Interact")'
echo type in Julia prompt 'Pkg.add("Compose")'
echo type in Julia prompt 'Pkg.add("SymPy")'
echo type 'Ctrl + 'D' to quit Julia
echo nota : type 'help()' to get help in Julia
echo --------------------
rem 2014-08-23 refinement
echo or let me do it now
pause
echo Pkg.add("IJulia");>this_is_temporary.jl
echo Pkg.add("PyCall");>>this_is_temporary.jl
echo Pkg.add("PyPlot");>>this_is_temporary.jl
echo Pkg.add("Interact");>>this_is_temporary.jl
echo Pkg.add("Compose");>>this_is_temporary.jl
echo Pkg.add("SymPy");>>this_is_temporary.jl
@echo on
julia this_is_temporary.jl
@echo off
echo Julia installed
echo use the "ijulia_launcher.bat" script to launch Ijulia directly
pause
:julia_end
rem cmd.exe /k
"""
bat_text = bat_text.replace("blablaPYTHON",os.path.split(os.environ["WINPYDIR"])[1])
bat_text = bat_text.replace("blablaWINPYVER",os.environ["WINPYVER"])
julia_initializer_bat=os.environ["WINPYDIR"]+"\\..\\scripts\\initialize_julia_once.bat"
if sys.version_info[0] == 3:
with io.open(julia_initializer_bat, 'w', encoding = sys.getdefaultencoding() ) as f:
for line in bat_text.splitlines():
f.write('%s\n' % line )
else:
with io.open(julia_initializer_bat, 'wb' ) as f:
for line in bat_text.splitlines():
f.write('%s\r\n' % line.encode(sys.getdefaultencoding()) )
# let's initialize Julia and install "IJulia", "PyCall", and "PyPlot" Julia modules with this .bat just created
# may take about 10 minutes (Julia pre-compiles itself and download a lot of things)
!start cmd /C %WINPYDIR%\\..\\scripts\\initialize_julia_once.bat
# let's launch Ijulia for Ipython now
# Writing a julia initial run script, for convenience
bat_text = r"""
@echo off
set WINPYDIR=%~dp0..\blablaPYTHON
set WINPYVER=blablaWINPYVER
set HOME=%WINPYDIR%\..\settings
set PATH=%WINPYDIR%\Lib\site-packages\PyQt4;%WINPYDIR%\;%WINPYDIR%\DLLs;%WINPYDIR%\Scripts;%WINPYDIR%\..\tools;%WINPYDIR%\..\tools\mingw32\bin;%PATH%;%WINPYDIR%\..\tools\TortoiseHg
set JULIA_HOME=%WINPYDIR%\..\tools\Julia\bin\
set SYS_PATH=%PATH%
set PATH=%JULIA_HOME%;%SYS_PATH%
set JULIA_EXE=julia.exe
set JULIA=%JULIA_HOME%%JULIA_EXE%
Ipython notebook --profile julia
echo to use julia_magic from Ipython, type "Ipython notebook" instead.
:julia_end
cmd.exe /k
"""
bat_text = bat_text.replace("blablaPYTHON",os.path.split(os.environ["WINPYDIR"])[1])
bat_text = bat_text.replace("blablaWINPYVER",os.environ["WINPYVER"])
ijulia_launcher_bat=os.environ["WINPYDIR"]+"\\..\\scripts\\ijulia_launcher.bat"
if sys.version_info[0] == 3:
with io.open(ijulia_launcher_bat, 'w', encoding = sys.getdefaultencoding() ) as f:
for line in bat_text.splitlines():
f.write('%s\n' % line )
else:
with io.open(ijulia_launcher_bat, 'wb' ) as f:
for line in bat_text.splitlines():
f.write('%s\r\n' % line.encode(sys.getdefaultencoding()) )
!start cmd /C %WINPYDIR%\\..\\scripts\\ijulia_launcher.bat
# first, we must patch the path
import os
os.environ["JULIA_HOME"] = os.environ["WINPYDIR"]+"\\..\\tools\\Julia\\bin\\" # \\bin"
os.environ["JULIA_EXE"]="julia.exe"
if "\\julia\\" not in os.environ["PATH"].lower():
os.environ["JULIA"]=os.environ["JULIA_HOME"]+""+os.environ["JULIA_EXE"]
os.environ["PATH"] =os.environ["JULIA_HOME"]+";"+os.environ["PATH"]
#now we can
%load_ext julia.magic
# don't worry if you see "Julia error ? Failed to initialize PyCall package" and continue as usual
%julia @pyimport matplotlib.pyplot as plt
%julia @pyimport numpy as np
%%julia
# Note how we mix numpy and julia:
t = linspace(0, 2*pi, 1000); # use the julia linspace
s = sin(3 * t + 4 * np.cos(2 * t)); # use the numpy cosine and julia sine
fig = plt.gcf() # **** WATCH THIS VARIABLE ****
plt.plot(t, s, color="red", linewidth=2.0, linestyle="--")
import julia
j=julia.Julia()
j.eval("1 +31")
j.eval("sqrt(1 +31)")
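# Optional sketch (added for illustration): j.eval accepts arbitrary Julia expressions, so
# Julia data structures come back as Python objects (the comprehension below is just an example).
j.eval("[i^2 for i in 1:5]")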
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
from scipy.optimize import curve_fit
import numpy as np
s = '''1.000000000000000021e-03,2.794682735905079767e+02
4.000000000000000083e-03,2.757183469104809888e+02
1.400000000000000029e-02,2.791403179603880176e+02
2.099999999999999784e-02,1.781413355804160119e+02
3.300000000000000155e-02,-2.798375517344049968e+02
4.199999999999999567e-02,-2.770513900380149721e+02
5.100000000000000366e-02,-2.713769422793179729e+02
6.900000000000000577e-02,1.280740698304900036e+02
7.799999999999999989e-02,2.800801708984579932e+02
8.999999999999999667e-02,2.790400329037249776e+02'''.replace('\n', ';')
arr = np.matrix(s)
z = np.array(arr[:, 0]).squeeze()
Ua = np.array(arr[:, 1]).squeeze()
tau = 0.045
degree = 15
def fourier(x, *a):
ret = a[0] * np.cos(np.pi / tau * x)
for deg in range(1, len(a)):
ret += a[deg] * np.cos((deg+1) * np.pi / tau * x)
return ret
popt, pcov = curve_fit(fourier, z, Ua, [1.0] * degree)
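# Optional sketch (added for illustration): visualize the fitted cosine series against the data
# points, reusing fourier() and the fitted coefficients popt from above.
import matplotlib.pyplot as plt
z_fine = np.linspace(z.min(), z.max(), 500)
plt.plot(z, Ua, 'o', label='data')
plt.plot(z_fine, fourier(z_fine, *popt), label='fit (%d terms)' % degree)
plt.legend()
plt.show()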
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice the line that starts with %. This is a 'magic command' specific to Jupyter. It ensures that images will be plotted inline, instead of popping up in a window. You can look at all magic commands by entering %quickref. Some are useful, although most of them are not. The magic commands are not part of Python, so calling them in a script will throw an error. Keep this in mind when you copy code from a notebook.
Step2: We imported the module matplotlib.pyplot as plt, and we call its plot function to plot the square function. You always plot discrete points
Step3: The order in which you add the decorations to your figure does not matter. The figure is not actually created until you execute the cell. Actually, the execution of the cell just triggers the call of the function plt.show(), which instructs Matplotlib to draw the figure and display it. In a Python script, you would always call plt.show() manually. Let us plot the cube function too, and call plt.show() manually
Step4: Notice the difference with this case
Step5: The plt.show() resets all settings, so for the second figure, you must set the axes again.
Step6: 2.1.2 Object-oriented paradigm
Step7: Armed with this knowledge, we can do inserts
Step8: You can also do arbitrary grids of subplots. The function plt.subplots conveniently creates the figure object and returns it to you along with the axes
Step9: Matplotlib handles LaTeX reasonably well, just put things between $ signs. For instance, we can add a fancy legend
Step10: You need the leading r in the strings to avoid some nastiness with backslashes.
Step11: Exercise 1. Create a three by three grid. Put Lluis Torner in the center. Surround him with aesthetically pleasing functions in the remaining subplots. Hint
Step12: Yes, importing a package should not have severe side effects. On the other hand, this is Python, not Haskell, so let us rejoice at this sheer opportunism.
Step13: You can access individual columns by indexing with the name of the column
Step14: We will use seaborn for some basic visualization
Step15: Let us define an array with all the names of the features and plot their correlations.
Step16: Exercise 2. Plot the histogram of all four features. First, instantiate a Matplotlib figure in a one by four grid, and then pass the matching axes to Seaborn's distplot function that draws the histograms. A figsize=(14, 4) is a recommended parameter to plt.subplots, otherwise the figure will be too squished. Use zip to iterate over the axes and the features simultaneously.
Step17: We reload the Iris data set
Step18: To avoid overfitting, we split the data set in a training and validation part. This is a static random split, not something you would use in 10x random cross-validation.
Step19: We standardize the distribution of the features of the data set. Some kind of normalization or standardization is usually a good idea. Certain learning models work well with data vectors of norm 1, for instance. Here we choose standardization because the physical size parameters of the iris species actually follows a normal distribution.
Step20: The dumbest model we can possibly train is a neural network of a single neuron, trained by stochastic gradient descent. Even this simple model misses only four instances
Step21: The decision boundary is linear.
Step22: This regularization term makes machine learning very different from statistics, at least as far as structural risk minimization goes. In general, sparser model will have better generalization properties, that is, they are less prone to overfitting. Since there is no explicit way to optimize over the hyperparameter, you typically do something like grid search.
Step23: Next we define the collapse operators
Step24: We calculate the steady state and the particle number in the steady state
Step25: We calculate the time evolution over a hundred points with two methods
Step26: Exercise 4. Improve the Monte Carlo simulation to approximate the master equation closer. Typing mcsolve? will give you a detailed help on the parametrization of the solver.
Step27: Exercise 5. Create and study the maximally mixed state of dimension $N$. Here are three possible ways to do it
Step30: When you try to improve the speed of your code, you must ensure two things
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
import qutip
from skimage import io
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from qutip import destroy, basis, steadystate, expect, mcsolve, mesolve, \
thermal_dm, plot_fock_distribution, matrix_histogram, hinton, tensor
%matplotlib inline
print("Matplotlib:", matplotlib.__version__,
"\nScikit-learn:", sklearn.__version__,
"\nQuTiP:", qutip.__version__)
x = np.linspace(0, 5, 10)
plt.plot(x, x**2);
x = np.linspace(0, 5, 100)
y = x**2
plt.plot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('The most exciting function ever, full stop.');
x = np.linspace(0, 5, 100)
y1 = x**2
y2 = x**3
plt.plot(x, y1)
plt.plot(x, y2)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
plt.plot(x, y1)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
plt.xlabel('x')
plt.ylabel('y')
plt.plot(x, y2)
plt.show()
plt.plot(x, y1)
plt.xlabel('x')
plt.ylabel('y')
plt.savefig("whatever.pdf")
plt.close()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
fig = plt.figure()
axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes
axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # insert axes
# main figure
axes1.plot(x, y1, 'r')
axes1.set_xlabel('x')
axes1.set_ylabel('y')
axes1.set_title('Square function in red')
# insert
axes2.plot(x, y2, 'b')
axes2.set_xlabel('x')
axes2.set_ylabel('y')
axes2.set_title('Cube function in blue')
plt.show()
fig, axes = plt.subplots(ncols=2)
y = [y1, y2]
labels = ["Square function", "Cube function"]
colors = ['r', 'b']
for i, ax in enumerate(axes):
ax.plot(x, y[i], colors[i])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title(labels[i])
fig.tight_layout()
fig, ax = plt.subplots()
ax.plot(x, y1, label=r"$y = x^2$")
ax.plot(x, y2, label=r"$y = x^3$")
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
plt.show()
# Some new data will be necessary
n = np.random.randn(100000)
t = np.linspace(0, 2 * np.pi, 100)
X, Y = np.meshgrid(t, t)
Z = (2.7 - 2 * np.cos(Y) * np.cos(X) - 0.7 * np.cos(np.pi - 2*Y)).T
# The actual plot
fig = plt.figure(figsize=(12, 6))
axes = [[],[]]
axes[0].append(fig.add_subplot(2, 4, 1))
axes[0][0].scatter(x, x + 0.25*np.random.randn(len(x)))
axes[0][0].set_title("Scatter")
axes[0].append(fig.add_subplot(2, 4, 2))
axes[0][1].step(x, y1, lw=2)
axes[0][1].set_title("Step")
axes[0].append(fig.add_subplot(2, 4, 3))
axes[0][2].bar(x, y1, align="center", width=0.5, alpha=0.5)
axes[0][2].set_title("Bar")
axes[0].append(fig.add_subplot(2, 4, 4))
axes[0][3].fill_between(x, y1, y2, color="green", alpha=0.5);
axes[0][3].set_title("Fill between");
axes[1].append(fig.add_subplot(2, 4, 5))
axes[1][0].hist(n, bins=100)
axes[1][0].set_title("Histogram")
axes[1][0].set_xlim((min(n), max(n)))
axes[1].append(fig.add_subplot(2, 4, 6))
p = axes[1][1].pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
axes[1][1].set_title("Color map")
fig.colorbar(p, ax=axes[1][1])
axes[1].append(fig.add_subplot(2, 4, 7, projection='3d'))
axes[1][2].plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)
axes[1][2].set_title("Surface plot")
axes[1].append(fig.add_subplot(2, 4, 8, polar=True))
axes[1][3].plot(t, t, color='blue', lw=3);
axes[1][3].set_title("Polar coordinates")
fig.tight_layout()
plt.show()
plt.plot(x, x**2)
plt.show()
import seaborn as sns
plt.plot(x, x**2)
plt.show()
iris = load_iris()
iris = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
iris.head()
iris["sepal length (cm)"].head()
sns.jointplot(x="sepal length (cm)", y="sepal width (cm)", data=iris, size=5);
features = iris.columns.values[:-1]
sns.pairplot(iris, vars=features, hue="target", size=3);
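# One possible sketch for Exercise 2 (added for illustration): histograms of the four features,
# reusing the iris DataFrame and the features array defined above.
fig, axes = plt.subplots(ncols=4, figsize=(14, 4))
for ax, feature in zip(axes, features):
    sns.distplot(iris[feature], ax=ax)
fig.tight_layout()
plt.show()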
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = matplotlib.colors.ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8,
c=cmap(idx), marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0], X_test[:, 1], c='', alpha=1.0,
linewidths=1, marker='o', s=55, label='test set')
iris = load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
print('Class labels:', np.unique(y))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=lr, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
weights, params = [], []
for c in np.arange(-5, 5):
lr = LogisticRegression(C=10.0**c, random_state=0)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10.0**c)
weights = np.array(weights)
plt.plot(params, weights[:, 0], label='petal length')
plt.plot(params, weights[:, 1], linestyle='--', label='petal width')
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
plt.show()
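# Optional sketch (added for illustration): a grid search over the regularization strength C,
# using GridSearchCV from sklearn.model_selection (the module already used above for train_test_split).
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(LogisticRegression(random_state=0),
                    param_grid={'C': 10.0 ** np.arange(-5, 5)}, cv=5)
grid.fit(X_train_std, y_train)
print(grid.best_params_, grid.best_score_)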
N = 20
a = destroy(N)
H = a.dag() * a
psi0 = basis(N, 10) # initial state
kappa = 0.1 # coupling to oscillator
n_th_a = 2 # temperature with average of 2 excitations
rate = kappa * (1 + n_th_a)
c_op_list = [np.sqrt(rate) * a] # decay operators
rate = kappa * n_th_a
c_op_list.append(np.sqrt(rate) * a.dag()) # excitation operators
final_state = steadystate(H, c_op_list)
fexpt = expect(a.dag() * a, final_state)
tlist = np.linspace(0, 50, 100)
# monte-carlo
mcdata = mcsolve(H, psi0, tlist, c_op_list, [a.dag() * a], ntraj=100)
# master eq.
medata = mesolve(H, psi0, tlist, c_op_list, [a.dag() * a])
plt.plot(tlist, mcdata.expect[0], tlist, medata.expect[0], lw=2)
plt.axhline(y=fexpt, color='r', lw=1.5)
plt.ylim([0, 10])
plt.xlabel('Time', fontsize=14)
plt.ylabel('Number of excitations', fontsize=14)
plt.legend(('Monte Carlo', 'Master Equation', 'Steady State'))
plt.title('Decay of Fock state $\left|10\\rangle\\right.$' + ' in a thermal environment with $\langle n\\rangle=2$')
plt.show()
rho_thermal = thermal_dm(N, 2)
fig, axes = plt.subplots(1, 3, figsize=(12,3))
axes[0].matshow(rho_thermal.data.toarray().real)
axes[0].set_title("Matrix plot")
axes[1].bar(np.arange(0, N)-.5, rho_thermal.diag())
axes[1].set_xlim([-.5, N])
axes[1].set_title("Diagonal")
plot_fock_distribution(rho_thermal, fig=fig, ax=axes[2])
axes[2].set_title("Fock number distribution")
plt.show()
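# One possible starting point for Exercise 5 (added for illustration): two of the three ways to
# build the maximally mixed state (the third is a partial trace of a maximally entangled state).
from qutip import qeye
rho_mixed_a = qeye(N) / N                                                  # identity / N
rho_mixed_b = sum(basis(N, i) * basis(N, i).dag() for i in range(N)) / N   # equal mixture of Fock states
print((rho_mixed_a - rho_mixed_b).norm())  # should be (numerically) zero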
a = [i for i in range(1000)]
b = np.array(a)
%timeit sum(a)
%timeit sum(b)
%timeit np.sum(a)
%timeit np.sum(b)
import numpy as np
def dummy_test_for_slow_function():
"""Super-stupid test for the function"""
np.testing.assert_almost_equal(slow_function(), 157687.67990470183)
def slow_function():
"""This function could see a fair bit of improvement"""
total = 0
for i in range(1000):
a = np.array([np.sqrt(j) for j in range(1000)])
b = a/(i+1)
total += sum(b)
return total
# We run the test
dummy_test_for_slow_function()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data types
Step2: Actions
Step3: Variables, comparisons and assigning variables
Step4: if - else - (elif)
Step5: Lists
Step6: Dictionaries
Step7: Tuples
Step8: Simple functions - len and sort
Step9: For Loop
Step10: For loop with list of dictionaries
|
<ASSISTANT_TASK:>
Python Code:
#Mit einem Hashtag vor einer Zeile können wir Code kommentieren, auch das ist sehr wichtig.
#Immer, wirklich, immer den eigenen Code zu kommentieren. Vor allem am Anfang.
print('hello world')
#Der Printbefehl druckt einfach alles aus. Nicht wirklich wahnsinnig toll.
#Doch er ist später sehr nützlich. Vorallem wenn es darum geht Fehler im eigenn Code zu finden.
#Mit dem Inputbefehl kannst Du mit dem Nutzer interagieren.
input('wie alt bis Du?')
#Strings
'Hallo wie geht es Dir'
"12345"
str(124)
str(1230847120934)
#Integer
567
int('1234')
#Floats
4.542323
float(12)
#Dates, eigentlich Strings
#Dates liest er als Strings, aber es ist eine der wichtigsten Sonderform. Ansonsten sind str, int und float.
'15-11-2019'
type("12")
#strings addieren mit +
print('Hallo' + 'wie' + 'geht' + 'es')
#Strings addieren mit , gibt Abstände!
print('Hallo','wie','geht','es')
#Alle anderen gängigen:
#minus -
#Mal *
#geteilt durch /
#Spezial: Modulo. %, geteilt durch und der Rest, der übrigbleibt. Also 13 enthält 2x die fünf und dann den Rest = 3. Modulo rechnet alles, was durch (hier) fünf teiltbar ist und sagt, was dann übrig bleibt. Hier: 3
13 % 5
#Grösser und kleiner als:
#< >
#Gleich == (wichtig, doppelte Gleichzeichen) vergleicht etwas.
#Denn das einfach *definiert* eine Variable
"Schweiz" == 'reich'
Schweiz = 'reich'
Schweiz == 'reich'
'Schweiz' = 'reich'
1 = 6
a = b
a = 'b'
a == 'b'
a = a
elem = int(input('Wie alt bist Du?'))
elem
if elem < 0:
print('Das ist unmöglich')
else:
print('Du bist aber alt')
if elem == 12:
print("Gratuliere zum Duzend!")
#elif macht, dass der Code auch weiter ausgeführt wird, auch wenn es zutrifft.
elem = int(input('Wie alt bist Du?'))
if elem < 0:
print('Das ist unmöglich')
elif elem < 25:
print('Du bist aber jung')
else:
print('Du bist aber alt')
#Eckige Klammern
[1,2,"eine String dazwischen",3,4,"nun folgt eine Float:",5.23,6,7]
lst = [1,2,3,4,5,6,7]
lst
#Einzelene Elemente - 0 bedeutet das erste Element.
lst[0]
#Ganze Abschnitte "bis vier" in diesem Bsp.
lst[:4]
#Komplexere Schnitte in diesem Bsp. jedes Zweite Element.
lst[::2]
#Append (hinzufügen), Pop (abschneiden - wenn ich leere Klammern verwende, ist default das letzte Element gemeint), etc.
lst.pop()
lst
lst.append(7)
lst.pop()
lst
#Aufpassen mit Befehl: list weil das macht aus etwas eine Liste. Auch aus Strings:
list('hallo wie geht')
#Elegantester Weg, eine Liste zu schreiben. Und ganz wichtig,
#der Computer beginn immer bei 0.
list(range(10))
list(range(5,-1,-1))
#Komische, geschwungene Klammern
{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct = {'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct
dct['Grösse']
#List of Dictionaires
dct_lst = [{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}, {'Tier': 'Katze', 'Grösse': 130, 'Alter': 8}]
dct_lst[0]
dct_lst[0]['Alter']
dct_lst[1]["Alter"]
tuple(lst)
#Unveränderbar. Also gutes Format, um Sachen abzuspeichern.
#Aber das wirklich nur der Vollständigkeitshalber.
#len mit Strings (len für length) - zählt einfach die Elemente.
len('hallo wie geht es Dir')
#len mit Lists
len([1,2,3,4,4,5])
#len mit dictionaries
len({'Tier': 'Hund', 'Alter': 345})
#len mit Tuples
len((1,1,1,2,2,1))
#sorted für momentane Sortierung
sorted('hallo wie geht es Dir')
a = 'hallo wie geht es Dir'
sorted(a)
a
#Sort funktioniert allerdings "nur" mit lists
lst = [1, 5, 9, 10, 34, 12, 12, 14]
lst.sort()
lst
dic = {'Tier': 'Hund', 'Alter': 345}
dic.sort()
lst
for x in lst:
print(x)
dic = {'Tier': 'Hund', 'Alter': 345}
for key, value in dic.items():
print(key, value)
#for loop to make new lists
lst
#Nehmen wir einmal an, wir wollen nur die geraden Zahlen in der Liste
new_lst = []
for elem in lst:
if elem % 2 == 0:
new_lst.append(elem)
else:
continue
new_lst
lst
dic_lst = [{'Animal': 'Dog', 'Size': 45},
{'Animal': 'Cat', 'Size': 23},
{'Animal': 'Bird', 'Size': 121212}]
for dic in dic_lst:
print(dic)
for dic in dic_lst:
print(dic['Animal'])
for dic in dic_lst:
print(dic['Animal'] + ': '+ str(dic['Size']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's explore the dataset.
Step2: In the training data we have 20 time series, each of 210 data points. Notice that all the lines are compact and follow a similar shape. It is important to remember that when training autoencoders you want to use only VALID data; all the anomalies should be removed.
Step3: Our Neural Network is now able to Encode the time series.
Step4: Now the question is
|
<ASSISTANT_TASK:>
Python Code:
import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
h2o.init()
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import os.path
PATH = os.path.expanduser("~/h2o-3/")
train_ecg = h2o.import_file(PATH + "smalldata/anomaly/ecg_discord_train.csv")
test_ecg = h2o.import_file(PATH + "smalldata/anomaly/ecg_discord_test.csv")
train_ecg.shape
# transpose the frame to have the time serie as a single colum to plot
train_ecg.as_data_frame().T.plot(legend=False, title="ECG Train Data", color='blue'); # don't display the legend
model = H2OAutoEncoderEstimator(
activation="Tanh",
hidden=[50],
l1=1e-5,
score_interval=0,
epochs=100
)
model.train(x=train_ecg.names, training_frame=train_ecg)
model
reconstruction_error = model.anomaly(test_ecg)
df = reconstruction_error.as_data_frame()
df['Rank'] = df['Reconstruction.MSE'].rank(ascending=False)
df_sorted = df.sort_values('Rank')
df_sorted
anomalies = df_sorted[ df_sorted['Reconstruction.MSE'] > 1.0 ]
anomalies
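# Added: visualise the reconstruction error of every test series, marking the
# threshold of 1.0 used above to flag anomalies.
plt.figure(figsize=(8, 4))
plt.plot(df['Reconstruction.MSE'], 'o')
plt.axhline(y=1.0, color='red', linestyle='--')
plt.xlabel('test series index')
plt.ylabel('reconstruction MSE')
plt.show()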
data = test_ecg.as_data_frame()
data.T.plot(legend=False, title="ECG Test Data", color='blue')
ax = data.T.plot(legend=False, color='blue')
data.T[anomalies.index].plot(legend=False, title="ECG Anomalies in the Data", color='red', ax=ax);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparison with "manual" sampling
Step2: Good
Step3: Z is row-major, so we can also write
Step4: mgrid
Step5: Let's check it anyway
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
nx = 11
ny = 5
x1,y1 = 1,2
X,Y = np.meshgrid(np.linspace(0,x1,nx),np.linspace(0,y1,ny))
X.shape
f = lambda X_,Y_:np.sin(X_**2+Y_**2)
Z = f(X,Y)
plt.contourf(X,Y,Z)
# To samo co:
# plt.contourf(X.T,Y.T,Z.T)
plt.imshow(Z,interpolation='nearest',origin='lower')
X
Y
i,j = 2,3
print ("dla x i y", X[i,j],Y[i,j],"jest", Z[i,j],f(X[i,j],Y[i,j]),\
"powinno byc rowne", f(x1/float(nx-1)*i,y1/float(ny-1)*j) )
i,j = 2,3
print ("dla x i y" ,X[j,i],Y[j,i],"jest", Z[j,i],f(X[j,i],Y[j,i]),\
"powinno byc rowne", f(x1/float(nx-1)*i,y1/float(ny-1)*j))
print Z[j,i],Z.flatten()[j*nx+i]
Yn, Xn = np.mgrid[0:ny,0:nx]
Xn.shape
Xn
Yn
Xn/(float(nx-1)*x1)
X1,Y1 = Xn*(x1/float(nx-1)),Yn*(y1/float(ny-1))
np.allclose(X, X1),np.allclose(Y, Y1),np.allclose(Y, Y1)
Z.strides
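# Added: a quick check that Z is C-ordered (row major), which is why the flat index
# j*nx + i above addresses element Z[j, i]: the strides are (itemsize*nx, itemsize).
print(Z.flags['C_CONTIGUOUS'], Z.strides == (Z.itemsize * nx, Z.itemsize))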
np.meshgrid(np.arange(nx),np.arange(ny))
list(reversed(np.mgrid[0:ny,0:nx]))
np.meshgrid(np.arange(ny),np.arange(nx),indexing='ij')
np.mgrid[0:ny,0:nx]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.4 The game of Nim
Step3: We first write a very simple function that displays a configuration in text mode.
Step4: We can define and display two example configurations of a Nim game, taken from Figure 1.
Step5: 1.4.2 The Sprague-Grundy function for the game of Nim
Step6: | Input | Input | Output |
Step7: By Corollary 1, it suffices to apply a xor to each value of the array to compute $\gamma$.
Step8: We can also obtain the same result with the functools.reduce function, which works like Array.fold_left in OCaml.
Step10: 1.4.3 Determining a move to play according to a winning strategy (if there is one)
Step12: 1.4.3.1 Optimal strategy
Step13: We can test this function on our two configurations a and b
Step15: 1.4.4 "Stupid" (random) strategy
Step17: We can then run an example of the start of a game between two "stupid" players
Step18: We can also run the same example of the start of a game between an "optimal" player and a "stupid" player
Step19: 1.4.5 A bonus
Step21: The simule function plays out the game, starting from the given configuration, beginning with player numero and for a given number of moves played (nb_coups).
Step23: We can finally implement a nice function that simulates the game starting from player 0 (as in the real game of Nim) and interprets the raised exception to display the outcome of the game
Step26: 1.4.6 Random configuration
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import numpy.random as random
import matplotlib.pyplot as plt
a = [1, 3, 5]
def print_nim(configuration):
    """Display a configuration, given as a list of integers."""
for i, nb in enumerate(configuration):
print(i, ':', '! ' * nb)
print_nim(a)
b = [1, 3, 2]
print_nim(b)
from itertools import product
for b1, b2 in product([False, True], repeat=2):
print("{!s:>5} XOR {!s:>5} = {!s:>5} ^ {!s:>5} = {!s:>5}".format(b1, b2, b1, b2, b1 ^ b2))
# Ce morceau de code est un peu fancy mais concis et joli, cf. https://pyformat.info/#string_pad_align
3 ^ 5 # 3 xor 5 = 0b011 xor 0b101 = 0b111 = 6
5 ^ 9 # 5 xor 9 = 0b0101 xor 0b1001 = 0b1100 = 12
12 ^ 1 # 12 xor 1 = 0b1100 xor 0b0001 = 0b1101 = 13
12 ^ 2 # 12 xor 2 = 0b1100 xor 0b0010 = 0b1110 = 14
def gamma(configuration):
    """Sprague-Grundy gamma function for the game of Nim."""
resultat = 0
for nb in configuration:
resultat = (resultat ^ nb)
return resultat
print("Gamma(a) =", gamma(a))
print("Gamma(b) =", gamma(b))
from functools import reduce # Comme Array.fold_left ou List.fold_left en OCaml
from operator import xor # Version préfixe de l'opérateur ^ infixe
def gamma(configuration):
    """Sprague-Grundy gamma function for the game of Nim."""
return reduce(xor, configuration)
print("Gamma(a) =", gamma(a))
print("Gamma(b) =", gamma(b))
class PasDeStratGagnante(Exception):
    """Exception raised when there is no winning strategy."""
pass
def optimal(configuration, joueur=0):
    """Try to find a move to play for player 0 or 1, and return the modified configuration."""
g = gamma(configuration)
if g == 0:
print("Il n'y a pas de stratégie gagnante !")
raise PasDeStratGagnante # On quitte
print("Il y a une stratégie gagnante... Trouvons la !")
# On chercher le coup à jouer : il suffit d'explorer tous les coups possibles
colonne = 0
nb = 1
nouvelle_configuration = configuration[:]
for j in range(len(configuration)):
for i in range(1, configuration[j]):
nouvelle_configuration[j] -= i # On tente de jouer ce coup
if gamma(nouvelle_configuration) == 0:
colonne, nb = j, i # On stocke j, i
nouvelle_configuration = configuration[:] # On l'annule
# On devrait avoir trouver un coup qui amène gamma(nouvelle_configuration) = 0
# On applique ce coup
print("Le joueur courant", joueur, "a choisi de retirer", nb, "allumettes à la rangée numéro", colonne)
nouvelle_configuration = configuration[:]
nouvelle_configuration[colonne] -= nb
if gamma(nouvelle_configuration) != 0:
print(" Attention, apparemment on a été contraint de choisir un coup qui n'est pas gagnant (n'amène pas à gamma(c') = 0).")
return nouvelle_configuration
print_nim(a)
print_nim(optimal(a, joueur=0)) # Ça joue
print_nim(b)
print_nim(optimal(b, joueur=0)) # Pas de stratégie gagnante ici !
def stupide(configuration, joueur=0):
    """Choose a uniformly random move for player 0 or 1, and return the modified configuration."""
# On choisit le coup à jouer : ligne random, nb d'allumette(s) random...
lignes_non_vides = [i for i, c in enumerate(configuration) if c > 0]
position_random = random.choice(lignes_non_vides)
print("Le joueur", joueur, "aléatoire uniforme a choisi de regarder la ligne", position_random)
total = configuration[position_random]
a_enlever = random.randint(1, 1 + total)
print("Le joueur", joueur, "aléatoire uniforme a choisi de retirer", a_enlever, "allumettes parmi les", total, "disponibles")
# On applique ce coup
nouvelle_configuration = configuration[:]
nouvelle_configuration[position_random] -= a_enlever
return nouvelle_configuration
random.seed(0) # Assure la reproductibilité des résultats.
a0 = a # Debut du jeu
print_nim(a0)
a1 = stupide(a0, joueur=0)
print_nim(a1)
a2 = stupide(a1, joueur=1)
print_nim(a2)
a3 = stupide(a2, joueur=0)
print_nim(a3)
# ... etc
random.seed(0) # Assure la reproductibilité des résultats.
a0 = a # Debut du jeu
print_nim(a0)
a1 = optimal(a0, joueur=0)
print_nim(a1)
a2 = stupide(a1, joueur=1)
print_nim(a2)
# ... etc
class Perdu(Exception):
    """Represents player number i who has lost."""
def __init__(self, numero):
self.numero = numero
def __str__(self):
return "Le joueur {} a perdu !".format(self.numero)
def simule(configuration, numero=0, nb_coups=None):
    """Simulate the game of Nim, alternating a smart player and a random ("stupid") player."""
config = configuration[:] # On ne change pas la liste donnee en argument !
# Si on n'a pas donne le nb de coups max, on calcule une borne :
if nb_coups is None:
nb_coups = sum(configuration)
print("Début de la simulation pour maximum", nb_coups, "coups.")
# On lance la simulation
for coup in range(1, 1 + nb_coups):
print("\n# Tour numéro", coup)
print_nim(config)
# On perd si on ne peut plus enlever d'allumettes
if not config or sum(config) == 0:
raise Perdu(numero)
else:
if numero == 0: # Joueur malin
config = optimal(config, joueur=numero)
else: # Joueur stupide
config = stupide(config, joueur=numero)
# Joueur suivant
numero = 1 - numero # 0 -> 1, 1 -> 0
# A la fin, la configuration finale est renvoyée.
return config
def nim(configuration):
try:
simule(configuration)
except PasDeStratGagnante:
print("==> Blocage car le joueur 0 n'a pas pu trouver de coup gagnant, il déclare forfait (le pleutre !).")
except Perdu as e:
print("==> Le joueur", e.numero, "a perdu.")
nim(a)
nim(b)
def config_aleatoire(nb_ligne, nb_max_allumette):
    """Random configuration: each row is drawn uniformly from [1, nb_max_allumette] (bounds included)."""
return list(random.randint(1, 1 + nb_max_allumette, nb_ligne))
c = config_aleatoire(4, 4)
print("Configuration random c :")
print_nim(c)
nim(c)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Patterns
Step2: translate
Step3: rotate
Step4: We can choose any point of rotation so let's also do this about the center.
Step5: scale
Step6: skew
Step7: align
Step8: halign
Step9: valign
Step10: to
Step11: Curves and paths
Step12: turn
Step13: taper
Step14: arc
Step15: bezier_sbend
Step16: turn_sbend
Step17: Operations
Step18: segments
Step19: reverse
Step20: interpolated
Step21: symmetrized
Step22: We can apply the symmetrization many times to build funky ring structures.
|
<ASSISTANT_TASK:>
Python Code:
import dphox as dp
import numpy as np
import holoviews as hv
hv.extension('bokeh')
pi = dp.text(r"$\pi$")
pi.port['p'] = dp.Port(3, 1)
pi.hvplot().opts(title='pi')
pi1 = pi.copy.translate() # no translation
pi2 = pi.copy.translate(10) # translation by 10
pi3 = pi.copy.translate(10, 10) # translation by (10, 10)
b = dp.Pattern(pi1, pi2, pi3).bounds
(pi1.hvplot() * pi2.hvplot('blue') * pi3.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='translation')
pi1 = pi.copy.rotate(45) # rotate by 45 degrees about the origin
pi2 = pi.copy.rotate(90) # rotate by 90 degrees about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='rotation')
pi1 = pi.copy.rotate(45, pi.center) # rotate by 45 degrees about the origin
pi2 = pi.copy.rotate(90, pi.center) # rotate by 90 degrees about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='rotation')
pi1 = pi.copy.scale(4, origin=pi.center) # rotate by 45 degrees about the origin
pi2 = pi.copy.scale(2, 2, pi.center) # rotate by 90 degrees about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot() * pi1.hvplot('blue') * pi2.hvplot('red')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='scale')
pi1 = pi.copy.skew(0.5, origin=pi.center) # rotate by 45 degrees about the origin
pi2 = pi.copy.skew(0, -0.5, pi.center) # rotate by 90 degrees about the center of the pattern
b = dp.Pattern(pi, pi1, pi2).bounds
(pi.hvplot().opts(title='no skew') + pi1.hvplot('blue').opts(title='xskew') + pi2.hvplot('red').opts(title='yskew'))
circle = dp.Circle(5)
circle.align(pi)
b = dp.Pattern(circle, pi).bounds
(pi.hvplot() * circle.hvplot('green')).opts(xlim=(b[0], b[2]), ylim=(b[1], b[3]), title='scale')
box = dp.Box((3, 3)) # centered at (0, 0) by default.
aligned_boxes = {
'default': box.copy.halign(circle),
'opposite=True': box.copy.halign(circle, opposite=True),
'left=False': box.copy.halign(circle, left=False),
'left=False,opposite=True': box.copy.halign(circle, left=False, opposite=True),
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(circle, bx, pi).bounds
plots.append(
(pi.hvplot() * circle.hvplot('green') * bx.hvplot('blue', plot_ports=False)).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
box.halign(circle, opposite=True) # to create a wider plot
aligned_boxes = {
'default': box.copy.valign(circle),
'opposite=True': box.copy.valign(circle, opposite=True),
'bottom=False': box.copy.valign(circle, bottom=False),
'bottom=False,opposite=True': box.copy.valign(circle, bottom=False, opposite=True),
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(circle, bx, pi).bounds
plots.append(
(pi.hvplot() * circle.hvplot('green') * bx.hvplot('blue', plot_ports=False)).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
box = dp.Box((3, 3))
box.port = {'n': dp.Port(a=45)} # 45 degree reference port.
aligned_boxes = {
'to n from origin': pi.copy.to(box.port['n']),
'to n from p': pi.copy.to(box.port['n'], from_port='p')
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(bx, box).bounds
plots.append(
(box.hvplot() * bx.hvplot('blue')).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
aligned_boxes = {
'to p from origin': box.copy.to(pi.port['p']),
'to p from n': box.copy.to(pi.port['p'], from_port='n')
}
plots = []
for name, bx in aligned_boxes.items():
b = dp.Pattern(bx, pi).bounds
plots.append(
(bx.hvplot() * pi.hvplot('blue')).opts(
xlim=(b[0], b[2]), ylim=(b[1], b[3]), title=name
)
)
hv.Layout(plots).cols(2).opts(shared_axes=False)
straight_curve = dp.straight(3) # A turn of radius 5.
straight_path = dp.straight(3).path(1) # A turn of radius 5 and width 1
straight_curve.hvplot().opts(title='straight curve', ylim=(-2, 2)) + straight_path.hvplot().opts(title='straight path', ylim=(-2, 2))
hv.DynamicMap(lambda width, length: dp.straight(length).path(width).hvplot().opts(
xlim=(0, 5), ylim=(-2, 2)),
kdims=['width', 'length']).redim.range(
width=(0.1, 0.5), length=(1, 5)).opts(framewise=True)
turn_curve = dp.turn(5, 90) # A turn of radius 5.
turn_path = dp.turn(5, 90).path(1) # A turn of radius 5 and width 1
turn_curve.hvplot().opts(title='turn curve') + turn_path.hvplot().opts(title='turn path')
dmap = hv.DynamicMap(lambda width, radius, angle, euler: dp.turn(radius, angle, euler).path(width).hvplot().opts(
xlim=(-10, 10), ylim=(-10, 10)),
kdims=['width', 'radius', 'angle', 'euler'])
dmap.redim.range(width=(0.3, 0.7), radius=(3., 5.), angle=(-180, 180), euler=(0, 0.5)).redim.step(radius=0.1, euler=0.05).redim.default(angle=90, width=0.5, radius=5)
cubic = dp.taper(5).path(dp.cubic_taper_fn(1, 0.5))
quad = dp.taper(5).path(dp.quad_taper_fn(1, 0.5))
linear = dp.taper(5).path(dp.linear_taper_fn(1, 0.5))
linear_plot = linear.hvplot().opts(title='linear taper (1 to 0.5)', ylim=(-2, 2))
quad_plot = quad.hvplot().opts(title='quadratic taper (1 to 0.5)', ylim=(-2, 2))
cubic_plot = cubic.hvplot().opts(title='cubic taper (1 to 0.5)', ylim=(-2, 2))
linear_plot + quad_plot + cubic_plot
def taper_plot(length, init_w, final_w):
cubic = dp.taper(length).path(dp.cubic_taper_fn(init_w, final_w))
quad = dp.taper(length).path(dp.quad_taper_fn(init_w, final_w))
linear = dp.taper(length).path(dp.linear_taper_fn(init_w, final_w))
linear_plot = linear.hvplot().opts(title=f'linear taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
quad_plot = quad.hvplot().opts(title=f'quadratic taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
cubic_plot = cubic.hvplot().opts(title=f'cubic taper ({init_w} to {final_w})', xlim=(0, 10), ylim=(-5, 5))
return linear_plot + quad_plot + cubic_plot
dmap = hv.DynamicMap(lambda length, init_w, final_w: taper_plot(length, init_w, final_w), kdims=['length', 'init_w', 'final_w'])
dmap.redim.range(length=(5., 10.), init_w=(3., 5.), final_w=(2., 6.)).redim.default(length=10)
curve = dp.arc(120, 5)
path = curve.path(1)
path_taper = curve.path(dp.cubic_taper_fn(0.5, 2))
arc_curve_plot = curve.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc curve')
arc_path_plot = path.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc path')
arc_path_taper_plot = path_taper.hvplot().opts(xlim=(0, 6), ylim=(-5, 5), title='arc path, cubic taper')
arc_curve_plot + arc_path_plot + arc_path_taper_plot
curve = dp.bezier_sbend(bend_x=15, bend_y=10)
path = dp.bezier_sbend(15, 10).path(1)
path_taper = dp.bezier_sbend(15, 10).path(dp.cubic_taper_fn(0.5, 2))
curve.hvplot().opts(title='bezier curve') + path.hvplot().opts(title='bezier path') + path_taper.hvplot().opts(title='bezier path, cubic taper')
curve = dp.turn_sbend(height=5, radius=5)
path = dp.turn_sbend(5, 5).path(1)
path_taper = dp.turn_sbend(5, 5).interpolated.path(dp.cubic_taper_fn(0.5, 2))
curve.hvplot().opts(title='turn_sbend curve') + path.hvplot().opts(title='turn_sbend path') + path_taper.hvplot().opts(title='turn_sbend path, cubic taper')
def racetrack(radius: float, length: float):
return dp.link(dp.left_uturn(radius), length, dp.left_uturn(radius), length)
def trombone(radius: float, length: float):
return dp.link(dp.left_turn(radius), length, dp.right_uturn(radius), length, dp.left_turn(radius))
racetrack_curve = racetrack(5, 10)
trombone_curve = trombone(5, 10)
racetrack_plot = racetrack_curve.path(1).hvplot(alpha=0.2) * racetrack_curve.hvplot(alternate_color='green', line_width=4)
trombone_plot = trombone_curve.path(2).hvplot(alpha=0.2) * trombone_curve.hvplot(alternate_color='green', line_width=4)
(racetrack_plot.opts(title='racetrack') + trombone_plot.opts(title='trombone')).opts(shared_axes=False)
racetrack_segments = racetrack_curve.segments
xmin, ymin, xmax, ymax = racetrack_curve.bounds
hv.Overlay([segment.hvplot() for segment in racetrack_segments]).opts(xlim=(xmin - 2, xmax + 2), ylim=(ymin - 2, ymax + 2))
taper = dp.taper(5).path(dp.cubic_taper_fn(1, 0.5))
reverse_taper = dp.taper(5).reverse().path(dp.cubic_taper_fn(1, 0.5))
(taper.hvplot().opts(title='forward') + reverse_taper.hvplot().opts(title='backward')).opts(shared_axes=False).cols(1)
path_taper = dp.turn_sbend(20, 5).path(dp.cubic_taper_fn(0.5, 2))
path_taper_interp = dp.turn_sbend(20, 5).interpolated.path(dp.cubic_taper_fn(0.5, 2))
path_taper.hvplot().opts(title='noninterpolated', fontsize=10) + path_taper_interp.hvplot().opts(title='interpolated', fontsize=10)
trombone_taper = path_taper_interp.symmetrized()
trombone_taper.hvplot(alpha=0.5) * trombone_taper.curve.hvplot(alternate_color='red', line_width=6)
path1 = dp.link(dp.turn(5, -45).path(0.5), trombone_taper, dp.turn(5, -45).path(0.5)).symmetrized().symmetrized()
path2 = dp.link(dp.turn(5, -45).path(0.5), trombone_taper.symmetrized(), dp.turn(5, -45).path(0.5)).symmetrized().symmetrized()
(path1.hvplot() * path1.curve.hvplot(alternate_color='red') + path2.hvplot() * path2.curve.hvplot(alternate_color='red')).opts(shared_axes=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="ref1"></a>
Step2: define the class with the Tanh activation function
Step3: define the class for the Relu activation function
Step4: define a function to train the model, in this case the function returns a Python dictionary to store the training loss and accuracy on the validation data
Step5: <a id="ref2"></a>
Step6: Load the testing dataset by setting the parameters train <code>False</code> and convert it to a tensor by placing a transform object int the argument <code>transform</code>
Step7: create the criterion function
Step8: create the training-data loader and the validation-data loader object
Step9: <a id="ref3"></a>
Step10: create the model with 100 hidden layers
Step11: print the model parameters
Step12: <a id="ref4"></a>
Step13: train the network using the Tanh activations function
Step14: train the network using the Relu activations function
Step15: <a id="ref5"></a>
Step16: compare the validation loss for each model
|
<ASSISTANT_TASK:>
Python Code:
!conda install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import torch.nn.functional as F
import matplotlib.pylab as plt
import numpy as np
torch.manual_seed(2)
class Net(nn.Module):
def __init__(self,D_in,H1,H2,D_out):
super(Net,self).__init__()
self.linear1=nn.Linear(D_in,H1)
self.linear2=nn.Linear(H1,H2)
self.linear3=nn.Linear(H2,D_out)
def forward(self,x):
x=torch.sigmoid(self.linear1(x))
x=torch.sigmoid(self.linear2(x))
x=self.linear3(x)
return x
class NetTanh(nn.Module):
def __init__(self,D_in,H1,H2,D_out):
super(NetTanh,self).__init__()
self.linear1=nn.Linear(D_in,H1)
self.linear2=nn.Linear(H1,H2)
self.linear3=nn.Linear(H2,D_out)
def forward(self,x):
x=torch.tanh(self.linear1(x))
x=torch.tanh(self.linear2(x))
x=self.linear3(x)
return x
class NetRelu(nn.Module):
def __init__(self,D_in,H1,H2,D_out):
super(NetRelu,self).__init__()
self.linear1=nn.Linear(D_in,H1)
self.linear2=nn.Linear(H1,H2)
self.linear3=nn.Linear(H2,D_out)
def forward(self,x):
x=F.relu(self.linear1(x))
x=F.relu(self.linear2(x))
x=self.linear3(x)
return x
def train(model,criterion, train_loader,validation_loader, optimizer, epochs=100):
i=0
useful_stuff={'training_loss':[],'validation_accuracy':[]}
#n_epochs
for epoch in range(epochs):
for i,(x, y) in enumerate(train_loader):
#clear gradient
optimizer.zero_grad()
#make a prediction logits
z=model(x.view(-1,28*28))
# calculate loss
loss=criterion(z,y)
# calculate gradients of parameters
loss.backward()
# update parameters
optimizer.step()
useful_stuff['training_loss'].append(loss.data.item())
correct=0
for x, y in validation_loader:
#perform a prediction on the validation data
yhat=model(x.view(-1,28*28))
_,lable=torch.max(yhat,1)
correct+=(lable==y).sum().item()
accuracy=100*(correct/len(validation_dataset))
useful_stuff['validation_accuracy'].append(accuracy)
return useful_stuff
train_dataset=dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
validation_dataset=dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
criterion=nn.CrossEntropyLoss()
train_loader=torch.utils.data.DataLoader(dataset=train_dataset,batch_size=2000,shuffle=True)
validation_loader=torch.utils.data.DataLoader(dataset=validation_dataset,batch_size=5000,shuffle=False)
criterion=nn.CrossEntropyLoss()
input_dim=28*28
hidden_dim1=50
hidden_dim2=50
output_dim=10
cust_epochs = 10
model=Net(input_dim,hidden_dim1,hidden_dim2,output_dim)
learning_rate=0.01
optimizer=torch.optim.SGD(model.parameters(),lr=learning_rate)
training_results=train(model,criterion, train_loader,validation_loader, optimizer, epochs=cust_epochs)
model_Tanh=NetTanh(input_dim,hidden_dim1,hidden_dim2,output_dim)
optimizer=torch.optim.SGD(model_Tanh.parameters(),lr=learning_rate)
training_results_tanch=train(model_Tanh,criterion, train_loader,validation_loader, optimizer, epochs=cust_epochs)
modelRelu=NetRelu(input_dim,hidden_dim1,hidden_dim2,output_dim)
optimizer=torch.optim.SGD(modelRelu.parameters(),lr=learning_rate)
training_results_relu=train(modelRelu,criterion, train_loader,validation_loader, optimizer, epochs=cust_epochs)
plt.plot(training_results_tanch['training_loss'],label='tanh')
plt.plot(training_results['training_loss'],label='sigmoid')
plt.plot(training_results_relu['training_loss'],label='relu')
plt.ylabel('loss')
plt.title('training loss iterations')
plt.legend()
plt.plot(training_results_tanch['validation_accuracy'],label='tanh')
plt.plot(training_results['validation_accuracy'],label='sigmoid')
plt.plot(training_results_relu['validation_accuracy'],label='relu')
plt.ylabel('validation accuracy')
plt.xlabel('epochs ')
plt.legend()
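# Added: print the final validation accuracy of each model for a quick numeric comparison
# (uses the dictionaries returned by the train() helper defined above).
print('final validation accuracy (sigmoid): {:.2f}%'.format(training_results['validation_accuracy'][-1]))
print('final validation accuracy (tanh):    {:.2f}%'.format(training_results_tanch['validation_accuracy'][-1]))
print('final validation accuracy (relu):    {:.2f}%'.format(training_results_relu['validation_accuracy'][-1]))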
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting the open price
Step2: Plotting the volume traded
Step3: Finding the timestamp of highest traded volume
Step4: Creating 'Total Traded' value
Step5: Plotting 'Total Traded'
Step6: Finding the timestamp of highest total traded value
Step7: Plotting moving average (rolling mean)
Step8: Plotting scatter matrix
Step9: Plotting candlestick
Step10: Daily Percentage Change
Step11: Plotting histograms
Step12: Conclusion
Step13: Conclusion
Step14: Conclusion
Step15: Conclusion
Step16: Plotting CDR
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
%matplotlib inline
tesla = pd.read_csv('Tesla_Stock.csv', parse_dates= True, index_col='Date')
tesla.head()
ford = pd.read_csv('Ford_Stock.csv', parse_dates= True, index_col='Date')
ford.head()
gm = pd.read_csv('GM_Stock.csv', parse_dates= True, index_col='Date')
gm.head()
fig = plt.figure(figsize=(16,8))
tesla['Open'].plot(label = 'Tesla')
gm['Open'].plot(label = 'GM')
ford['Open'].plot(label = 'Ford')
plt.title('Open Price')
plt.legend()
fig = plt.figure(figsize=(16,8))
tesla['Volume'].plot(label = 'Tesla')
gm['Volume'].plot(label = 'gm')
ford['Volume'].plot(label = 'ford')
plt.title('Volume Traded')
plt.legend()
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.name.html
ford.loc[ford['Volume'].idxmax()].name
ford['Volume'].argmax()
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.name.html
ford.loc[ford['Volume'].idxmax()].name
tesla['Total Traded'] = tesla['Open'] * tesla['Volume']
tesla.head()
ford['Total Traded'] = ford['Open'] * ford['Volume']
ford.head()
gm['Total Traded'] = gm['Open'] * gm['Volume']
gm.head()
fig = plt.figure(figsize=(16,8))
tesla['Total Traded'].plot(label = 'Tesla')
gm['Total Traded'].plot(label = 'GM')
ford['Total Traded'].plot(label = 'Ford')
plt.legend()
tesla.loc[tesla['Total Traded'].idxmax()].name
tesla['Total Traded'].argmax()
gm['MA50'] = gm['Open'].rolling(window=50).mean()
gm['MA200'] = gm['Open'].rolling(window=200).mean()
gm[['Open','MA50', 'MA200']].plot(figsize=(16,8))
from pandas.plotting import scatter_matrix
# https://stackoverflow.com/questions/30986989/reindex-a-dataframe-with-duplicate-index-values
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rename.html
# Either use rename or use below
df = pd.concat([tesla['Open'], gm['Open'], ford['Open']], axis = 1)
df.columns = ['Tesla Open', 'GM Open', 'Ford Open']
df = pd.DataFrame(pd.concat([tesla['Open'].rename('Tesla Open'), gm['Open'].rename('GM Open'), ford['Open'].rename('Ford Open')], axis = 1))
df.head()
# https://stackoverflow.com/questions/43801637/pandas-legend-for-scatter-matrix
# hist_kwds = historgram keywords
scatter_matrix(df, alpha=0.2, figsize=(8, 8), diagonal='hist', hist_kwds={'bins':50});
# https://matplotlib.org/examples/pylab_examples/finance_demo.html
from matplotlib.dates import DateFormatter, WeekdayLocator, DayLocator, MONDAY, date2num
# Note: matplotlib.finance was removed from newer matplotlib releases; there the same function
# is available from the separate mpl_finance package (from mpl_finance import candlestick_ohlc).
from matplotlib.finance import candlestick_ohlc
# Creating a ford dataframe suitable as per our needs
ford_reset = ford.loc['2012-01'].reset_index()
ford_reset
ford_reset.info()
ford_reset['date_ax'] = ford_reset['Date'].apply(date2num)
ford_reset
list_of_cols = ['date_ax', 'Open', 'High', 'Low', 'Close']
ford_values = [tuple(vals) for vals in ford_reset[list_of_cols].values]
ford_values
mondays = WeekdayLocator(MONDAY) # major ticks on the mondays
alldays = DayLocator() # minor ticks on the days
weekFormatter = DateFormatter('%b %d') # e.g., Jan 12
dayFormatter = DateFormatter('%d') # e.g., 12
fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)
ax.xaxis.set_major_locator(mondays)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(weekFormatter)
#plot_day_summary(ax, quotes, ticksize=3)
candlestick_ohlc(ax, ford_values, width=0.6, colorup = 'g', colordown='r');
# Using the shift method
tesla['returns'] = (tesla['Close'] / tesla['Close'].shift(1)) - 1
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.pct_change.html
tesla['returns'] = tesla['Close'].pct_change()
tesla.head()
ford['returns'] = ford['Close'].pct_change()
ford.head()
gm['returns'] = gm['Close'].pct_change()
gm.head()
ford['returns'].plot.hist(bins=100, grid=True)
gm['returns'].plot.hist(bins=100, grid=True)
tesla['returns'].plot.hist(bins=100, grid=True)
tesla['returns'].hist(bins=100, label='Tesla', figsize=(10,8), alpha=0.4)
gm['returns'].hist(bins=100, label='GM', figsize=(10,8), alpha=0.4)
ford['returns'].hist(bins=100, label='Ford', figsize=(10,8), alpha=0.4)
plt.legend();
df = pd.concat([tesla['returns'], gm['returns'],ford['returns']], axis = 1)
df.columns = ['Tesla','GM','Ford']
df.plot.kde(figsize=(12,6))
df.plot.box(figsize=(8,12))
scatter_matrix(df, alpha=0.2, figsize=(8, 8), diagonal='hist', hist_kwds={'bins':50});
df.plot(kind='scatter', x='Ford', y='GM', alpha=0.5, figsize=(11,8))
# cumprod - cumulative product
tesla['Cumulative Return'] = (1 + tesla['returns']).cumprod()
tesla.head()
ford['Cumulative Return'] = (1 + ford['returns']).cumprod()
ford.head()
gm['Cumulative Return'] = (1 + gm['returns']).cumprod()
gm.head()
fig = plt.figure(figsize=(16,8))
tesla['Cumulative Return'].plot(label = 'Tesla')
gm['Cumulative Return'].plot(label = 'GM')
ford['Cumulative Return'].plot(label = 'Ford')
plt.title('Cumulative Return')
plt.legend()
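# Sanity check (added): by construction, the last cumulative-return value should equal
# Close[last] / Close[first] for each stock (pandas' cumprod skips the leading NaN return).
for name, frame in [('Tesla', tesla), ('GM', gm), ('Ford', ford)]:
    print(name, frame['Cumulative Return'].iloc[-1], frame['Close'].iloc[-1] / frame['Close'].iloc[0])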
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: lmplot()
Step2: Working with Markers
Step3: Using a Grid
Step4: Aspect and Size
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
%matplotlib inline
tips = sns.load_dataset('tips')
tips.head()
sns.lmplot(x='total_bill',y='tip',data=tips)
sns.lmplot(x='total_bill',y='tip',data=tips,hue='sex')
sns.lmplot(x='total_bill',y='tip',data=tips,hue='sex',palette='coolwarm')
# http://matplotlib.org/api/markers_api.html
sns.lmplot(x='total_bill',y='tip',data=tips,hue='sex',palette='coolwarm',
markers=['o','v'],scatter_kws={'s':100})
sns.lmplot(x='total_bill',y='tip',data=tips,col='sex')
sns.lmplot(x="total_bill", y="tip", row="sex", col="time",data=tips)
sns.lmplot(x='total_bill',y='tip',data=tips,col='day',hue='sex',palette='coolwarm')
sns.lmplot(x='total_bill',y='tip',data=tips,col='day',hue='sex',palette='coolwarm',
aspect=0.6,size=8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to import some libraries
Step2: Open up the input image
Step3: ...and let's take a look at it
Step4: Next we'll dive straight in and Fourier Transform our cat. Note that there are a couple of steps to this, the first is the fft itself and the second is an fft "shift". This is necessary because of the way that the frequency space is ordered in the Fourier Transform.
Step5: Let's see how that looks
Step6: Now we can set things up for filtering our Fourier Cat. We need to know (1) the dimenions of the image and (2) where the centre is.
Step7: To start with, let's make a filter function that separates the inner most 40 x 40 pixels from everything else.
Step8: We can then use this filter to, firstly, mask out the inner most 40 x 40 pixels in Fourier space. This removes our small Fourier frequencies, i.e. the large scale information in our image.
Step9: We can then Fourier transform this back into image space.
Step10: ...and, see how it looks.
Step11: Now let's filter out the large Fourier frequencies
Step12: and Fourier transform that back into image space
Step13: This looks like
Step14: We can also take the Fourier Transform of the filter function to see how the PSF looks
Step15: Let's use that to make the plot from the lecture
Step16: What about... instead of filtering out a contiguous range of Fourier frequencies, we rnadomly selected the Fourier components?
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np # for array manipulation and the fft
import pylab as pl # for plotting
import cv2 # for image file handling
cat = cv2.imread('./FIGURES/cat1.jpg',0)
pl.imshow(cat,cmap='gray')
pl.show()
cat_squiggle = np.fft.fft2(cat)
cat_squiggle_shifted = np.fft.fftshift(cat_squiggle)
cat_spectrum = 20*np.log(np.abs(cat_squiggle_shifted))
pl.subplot(121),pl.imshow(cat, cmap = 'gray')
pl.title('Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(122),pl.imshow(cat_spectrum, cmap = 'gray')
pl.title('Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.show()
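# Added check: undoing the shift and inverting the (unfiltered) transform recovers the
# original cat, up to tiny numerical error.
cat_recovered = np.abs(np.fft.ifft2(np.fft.ifftshift(cat_squiggle_shifted)))
print(np.allclose(cat_recovered, cat))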
rows, cols = cat.shape
crow, ccol = rows // 2, cols // 2  # integer division so these can be used as array indices
filter_fnc = np.zeros(cat_squiggle_shifted.shape)
filter_fnc[crow-20:crow+20, ccol-20:ccol+20] = 1.0
pl.imshow(filter_fnc)
pl.show()
cat_squiggle_hpf = np.copy(cat_squiggle_shifted)
cat_squiggle_hpf[np.where(filter_fnc==1.)] = 0.0+0*1j
cat_filtered = np.fft.ifftshift(cat_squiggle_hpf)
cat_filtered_hpf = np.fft.ifft2(cat_filtered)
cat_filtered_hpf = np.abs(cat_filtered_hpf)
#pl.subplot(121),pl.imshow(cat, cmap = 'gray')
#pl.title('Image'), pl.xticks([]), pl.yticks([])
pl.subplot(121),pl.imshow(20*np.log(np.abs(cat_squiggle_hpf)), cmap = 'gray')
pl.title('Filtered Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(122),pl.imshow(cat_filtered_hpf)
pl.title('HPF Cat'), pl.xticks([]), pl.yticks([])
pl.show()
cat_squiggle_lpf = np.copy(cat_squiggle_shifted)
cat_squiggle_lpf[np.where(filter_fnc==0.)] = 0.+0.*1j
cat_filtered = np.fft.ifftshift(cat_squiggle_lpf)
cat_filtered_lpf = np.fft.ifft2(cat_filtered)
cat_filtered_lpf = np.abs(cat_filtered_lpf)
#pl.subplot(121),pl.imshow(cat, cmap = 'gray')
#pl.title('Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(121),pl.imshow(20*np.log(np.abs(cat_squiggle_lpf)), cmap = 'gray')
pl.title('Filtered Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(122),pl.imshow(cat_filtered_lpf)
pl.title('LPF Cat'), pl.xticks([]), pl.yticks([])
pl.show()
psf = np.fft.ifft2(filter_fnc)
psf = np.fft.ifftshift(psf)
psf = np.abs(psf)
pl.subplot(231),pl.imshow(20*np.log(np.abs(cat_squiggle_lpf)), cmap = 'gray')
pl.title('Filtered Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(232),pl.imshow(20*np.log(np.abs(cat_squiggle_shifted)), cmap = 'gray')
pl.title('Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(233),pl.imshow(filter_fnc, cmap = 'gray')
pl.title('Filter'), pl.xticks([]), pl.yticks([])
pl.subplot(234),pl.imshow(cat_filtered_lpf, cmap = 'gray')
pl.xlabel('LPF Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(235),pl.imshow(cat, cmap = 'gray')
pl.xlabel('Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(236),pl.imshow(psf, cmap = 'gray')
pl.xlabel('PSF'), pl.xticks([]), pl.yticks([])
pl.show()
filter_mask = np.random.randint(2, size=cat_squiggle_shifted.shape)
pl.imshow(filter_mask)
pl.show()
cat_squiggle_msk = np.copy(cat_squiggle_shifted)
cat_squiggle_msk[np.where(filter_mask==0.)] = 0.+0.*1j
cat_filtered = np.fft.ifftshift(cat_squiggle_msk)
cat_filtered_msk = np.fft.ifft2(cat_filtered)
cat_filtered_msk = np.abs(cat_filtered_msk)
pl.subplot(121),pl.imshow(cat, cmap = 'gray')
pl.title('Cat'), pl.xticks([]), pl.yticks([])
#pl.subplot(132),pl.imshow(20*np.log(np.abs(cat_squiggle_msk)), cmap = 'gray')
#pl.title('Filtered Fourier Cat'), pl.xticks([]), pl.yticks([])
pl.subplot(122),pl.imshow(cat_filtered_msk)
pl.title('Masked Cat'), pl.xticks([]), pl.yticks([])
pl.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data is too big to determine exactly what columns are usable for our analysis (137 columns with over 150,000 records). Kaggle did provide a data dictionary for this dataset to allow us to determine what columns may be most useful for this analysis.
Step2: Bayesian Comparison of Different Populations
Step3: Based on the graphs produced above, I decided to use Western Europe as a region to compare countries within. They had an extended period of elevated terror attacks from the early 1970s until the late 1990s, so I decided to see if there were two countries in Western Europe that could be considered statistically different from one another during this time period based on the number of terror attacks within the country each year. I also decided to limit the type of attacks to bombings, since it was the most common type of attack.
Step4: In both cases (the United Kingdom and Spain), for the number of bombings from 1997-2000 with a prior distribution based on bombings in Western Europe from 1975-1996, our posterior distributions are not statistically different. For the difference of the two means and standard deviations of the two posteriors, the value '0' falls in the 95% credible interval. This is the case for the effect size as well, the normalized measure of the differences between the two posteriors.
Step5: The same information printed above is plotted over time below, with marks at both 1992 and 1994 on the line to show about where the estimate needs to look.
Step6: There is a slow upward trend of bombings from the beginning of data collection in 1970 until the early 90s, and then a slow gradual drop through the rest of the 90s. It will be most effective to calculate a rolling mean that captures the peak in bombings in 1992, and then the decrease throughout the 90s. With this rolling mean trend line created, we can calculate the number of bombings in 1993 by taking the mean of the rolling mean values between 1992 and 1993.
Step7: Below, we will plot the rolling mean values that will be used to calculate 1993's number of bombings. We will use a lookback window of 4 years to smooth out any random spikes that occur over time, and get a more general line to capture these trends over the time period of interest.
Step8: Below, we estimate the number of bombings in 1993 by averaging the rolling mean value from 1992 and 1994. This should be around the value where the green vertical line in the graph above intersects the red Rolling Mean trend.
|
<ASSISTANT_TASK:>
Python Code:
# standard libraries
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
# for Bayesian analysis
import pymc3 as pm
# cleaning up the notebook
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('../../../Data/globalterrorismdb_0616dist.csv')
print(df.head())
# The below code will not show the data type for most columns, nor will it show null counts for most columns
print(df.dtypes)
print(df.isnull().sum())
regions = df.region_txt.unique()
#print regions
for region in regions:
df[df.region_txt == region].groupby('iyear').eventid.count().plot(kind = 'line')
plt.title('Terror Attacks in %s by Year' % (region))
plt.xlabel('Year')
plt.ylabel('Total Attacks')
plt.show()
# Attack type is one column that will be necessary
print('Counts of Attack Types:')
print(df.attacktype1.value_counts())
print('Attack Type Count Text:')
print(df.attacktype1_txt.value_counts())
## create dataframe of just attacks that are bombings
df_bomb = df[df.attacktype1 == 3]
## remove most unnecessarsy columns
df_bomb_2 = df_bomb[['eventid','iyear','imonth','iday','country','country_txt','region','region_txt']]
## getting individual bombing events in Western Europe from 1975 - 1996, period of elevated bombings
we_bomb_80 = df_bomb_2[(df_bomb_2.region == 8) & (df_bomb_2.iyear >= 1975) & (df_bomb_2.iyear <= 1996)]
## dataframes for UK and Spain after decrease in bombings in W. Europe
UK_bomb_96 = df_bomb_2[(df_bomb_2.country_txt == 'United Kingdom') & (df_bomb_2.iyear >= 1997) & (df_bomb_2.iyear <= 2000)]
ESP_bomb_96 = df_bomb_2[(df_bomb_2.country_txt == 'Spain') & (df_bomb_2.iyear >= 1997) & (df_bomb_2.iyear <= 2000)]
## bombings by year in each
UK_observed = UK_bomb_96.iyear.value_counts()
ESP_observed = ESP_bomb_96.iyear.value_counts()
## Distribution of attacks in W. European country by year in 1980s
we_bomb_80.groupby(['iyear','country']).eventid.count().plot.hist(bins = 20)
plt.show()
## Mean and Standard Deviation of total counts of bombings by country & year for W. European countries in 1980s
mean_prior_mean = np.mean(we_bomb_80.groupby(['iyear','country']).eventid.count())
mean_prior_std = np.std(we_bomb_80.groupby(['iyear','country']).eventid.count())
## Setting priors for both countries
## Normally distributed since there are enough countries + years to assume normality
with pm.Model() as model:
UK_mean = pm.Normal('UK_mean', mean_prior_mean, sd=mean_prior_std)
ESP_mean = pm.Normal('ESP_mean', mean_prior_mean, sd=mean_prior_std)
## Setting upper and lower prior Standard Devs - observed counts as low as 1 and as high as ~120 in prior dataset
std_prior_lower = 0.01
std_prior_upper = 120.0
with model:
UK_std = pm.Uniform('UK_std', lower=std_prior_lower, upper=std_prior_upper)
ESP_std = pm.Uniform('ESP_std', lower=std_prior_lower, upper=std_prior_upper)
## Setting up posterior distributions of bombings for two countries
with model:
group_UK = pm.Normal('UK_Bomb', mu=UK_mean, sd=UK_std, observed=UK_observed.values)
group_ESP = pm.Normal('ESP_Bomb', mu=ESP_mean, sd=ESP_std, observed=ESP_observed.values)
## Get range of values in late 90s as observed values (bombings from 1997 - 2000)
### Additional deterministic measures to compare two groups
with model:
diff_of_means = pm.Deterministic('difference of means', UK_mean - ESP_mean)
diff_of_stds = pm.Deterministic('difference of stds', UK_std - ESP_std)
effect_size = pm.Deterministic('effect size',
diff_of_means / np.sqrt((UK_std**2 + ESP_std**2) / 2))
## start fitting model w MCMC
with model:
trace = pm.sample(25000, njobs=4)
## Plotting posteriors and result distributions
pm.plot_posterior(trace[3000:],
varnames=['UK_mean', 'ESP_mean', 'UK_std', 'ESP_std'],
color='#87ceeb')
plt.show()
pm.plot_posterior(trace[3000:],
varnames=['difference of means', 'difference of stds', 'effect size'],
ref_val=0,
color='#87ceeb')
plt.show()
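# Added: a direct numeric read-out from the posterior samples (after the same burn-in),
# complementing the plots above.
diff_means = trace[3000:]['difference of means']
print('P(UK mean > ESP mean | data) =', (diff_means > 0).mean())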
df_bomb.iyear.value_counts().sort_index()
plt.plot(df_bomb.iyear.value_counts().sort_index())
plt.scatter([1992, 1994], [1738, 1153], c = 'red')
plt.xlabel('Year')
plt.ylabel('Total Bombings')
plt.title('Bombings by Year, 1970 - 2015')
plt.xlim([1970,2015])
plt.ylim([0,10000])
plt.show()
## create the years to look over
years = np.arange(1977, 1998)
## drop attacks where month is unassigned
bombing = df_bomb_2[(df_bomb_2.iyear.isin(years)) & (df_bomb_2.imonth != 0)].groupby('iyear').eventid.count().sort_index()
print(bombing)
## Visualize the time series of years of interest
plt.plot(bombing)
plt.xlabel('Year')
plt.ylabel('Total Bombings')
plt.title('Bombings by Year, 1977 - 1997')
plt.xlim([1977,1997])
plt.ylim([0,3000])
plt.axvline(x = 1993, c = 'red')
plt.show()
## Import libraries necessary to plot smoothed bombings
## Due to not having any pattern in autocorrelation, ultimately did not calculate any time series to impute 1993 values
%matplotlib inline
import datetime
from dateutil.relativedelta import relativedelta
import statsmodels.api as sm
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import pacf
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
## This would need to be smoother to calculate an ARIMA or anything time series related to get 1993 values.
diff0 = bombing.diff(periods=1)[1:]
diff0.plot(title='Terrorist Bombings Differenced')
## This is a function to plot smoothed rolling mean next to original data with rolling mean window as an argument
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries, window):
#Determing rolling statistics
rolmean = pd.rolling_mean(timeseries, window=window)
rolstd = pd.rolling_std(timeseries, window=window)
#Plot rolling statistics:
fig = plt.figure(figsize=(12, 8))
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean & Standard Deviation')
    plt.xlabel('Year')
    plt.ylabel('Bombings')
plt.axvline(x = 1993, c = 'green')
plt.show()
## Pandas has rolling mean function
pd.rolling_mean(bombing, 4)
## Using the function created above to show this visually
test_stationarity(bombing, 4)
print "The estimated number of bombings in 1993 is {}".format(((pd.rolling_mean(bombing, 4).loc[1992]) + (pd.rolling_mean(bombing, 4).loc[1994])) / 2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load house value vs. crime rate data
Step2: Exploring the data
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Step5: Remove Center City and redo the analysis
Step6: Compare coefficients for full-data fit versus no-Center-City fit
Step7: Above
Step8: Do the coefficients change much?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pa
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
regressionDir = '/home/weenkus/workspace/Machine Learning - University of Washington/Regression'
sales = pa.read_csv(regressionDir + '/datasets/Philadelphia_Crime_Rate_noNA.csv')
sales
# Show plots in jupyter
%matplotlib inline
plt.scatter(sales.CrimeRate, sales.HousePrice, alpha=0.5)
plt.ylabel('House price')
plt.xlabel('Crime rate')
# Check the type and shape
X = sales[['CrimeRate']]
print (type(X))
print (X.shape)
y = sales['HousePrice']
print (type(y))
print (y.shape)
crime_model = linear_model.LinearRegression()
crime_model.fit(X, y)
plt.plot(sales.CrimeRate, sales.HousePrice, '.',
X, crime_model.predict(X), '-',
linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
sales_noCC = sales[sales['MilesPhila'] != 0.0]
plt.scatter(sales_noCC.CrimeRate, sales_noCC.HousePrice, alpha=0.5)
plt.ylabel('House price')
plt.xlabel('Crime rate')
crime_model_noCC = linear_model.LinearRegression()
crime_model_noCC.fit(sales_noCC[['CrimeRate']], sales_noCC['HousePrice'])
plt.plot(sales_noCC.CrimeRate, sales_noCC.HousePrice, '.',
sales_noCC[['CrimeRate']], crime_model_noCC.predict(sales_noCC[['CrimeRate']]), '-',
linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
print ('slope: ', crime_model.coef_)
print ('intercept: ', crime_model.intercept_)
print ('slope: ', crime_model_noCC.coef_)
print ('intercept: ', crime_model_noCC.intercept_)
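# Added: the same comparison expressed as the predicted change in house price for a
# 10-unit increase in the crime rate under each fit.
print('full-data fit:      ', 10 * crime_model.coef_[0])
print('no-Center-City fit: ', 10 * crime_model_noCC.coef_[0])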
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighhend = linear_model.LinearRegression()
crime_model_nohighhend.fit(sales_nohighend[['CrimeRate']], sales_nohighend['HousePrice'])
plt.plot(sales_nohighend.CrimeRate, sales_nohighend.HousePrice, '.',
sales_nohighend[['CrimeRate']], crime_model_nohighhend.predict(sales_nohighend[['CrimeRate']]), '-',
linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
print ('slope: ', crime_model_noCC.coef_)
print ('intercept: ', crime_model_noCC.intercept_)
print ('slope: ', crime_model_nohighhend.coef_)
print ('intercept: ', crime_model_nohighhend.intercept_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace by your GCP project and bucket
Step2: Loading the dataset in GCS
Step3: It has very specialized language such as
Step4: and for gcs-directory-path-for-pipeline-output which we will set to
Step5: Remark
Step6: The projector view will present you with a representation of the word vectors in a 3 dimensional space (the dim is reduced through PCA) that you can interact with. Enter in the search tool a few words like "ilium" and points in the 3D space will light up.
Step 7
Step7: Now we are ready to create a KerasLayer out of our custom text embedding.
Step8: That layer when called with a list of sentences will create a sentence vector for each sentence by averaging the word vectors of the sentence.
|
<ASSISTANT_TASK:>
Python Code:
!pip freeze | grep tensorflow-hub==0.7.0 || pip install tensorflow-hub==0.7.0
import os
import tensorflow as tf
import tensorflow_hub as hub
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "your-gcp-bucket-here" # REPLACE WITH YOUR BUCKET NAME
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
%%bash
URL=https://www.gutenberg.org/cache/epub/24564/pg24564.txt
OUTDIR=gs://$BUCKET/custom_embedding
CORPUS=surgery_manual.txt
curl $URL > $CORPUS
gsutil cp $CORPUS $OUTDIR/$CORPUS
!echo gs://$BUCKET/custom_embedding/surgery_manual.txt
!echo gs://$BUCKET/custom_embedding
!echo tensorboard --port 8080 --logdir gs://$BUCKET/custom_embedding/embeddings
MODULE = "gs://{bucket}/custom_embedding/hub-module".format(bucket=BUCKET)
MODULE
# One possible completion of the TODOs (hub.KerasLayer loads the saved custom embedding module;
# the example sentences below are only illustrative).
med_embed = hub.KerasLayer(MODULE)
outputs = med_embed(tf.constant(['ilium', 'I have a fracture', 'aneurism']))
outputs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Using interact for animation with data
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step5: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step6: Use interact to animate the plot_soliton_data function versus time.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
return 0.5*c*(1/(np.cosh((c**(1/2)/2)*(x-c*t-a))**2))
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
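# One possible solution (added): broadcast x against t so that phi[i, j] = soliton(x[i], t[j], c, a),
# which gives the required shape (xpoints, tpoints).
phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)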
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
def plot_soliton_data(i=0):
    """Plot the soliton wave phi(x, t[i]) (one possible solution)."""
    plt.plot(x, phi[:, i], color='steelblue', lw=2)
    plt.xlabel('$x$')
    plt.ylabel('$\phi(x, t)$')
    plt.title('Soliton wave at $t = {:.2f}$'.format(t[i]))
    plt.ylim(0.0, 0.6)
    plt.grid(True)

plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
# One possible solution (added): animate the plot over the time index with interact.
interact(plot_soliton_data, i=(0, tpoints - 1));
assert True # leave this for grading the interact with plot_soliton_data cell
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Firstly we will calculate the features required to characterise the pointcloud.
Step2: Next we can get training as a numpy array for creating our model
Step3: Next we create a model, this will be a keras-based dense net in this instance but does not have to be.
Step4: Finally we classify the point cloud
|
<ASSISTANT_TASK:>
Python Code:
from geospatial_learn import learning as ln
incloud = "/path/to/Llandinam.ply"
ln.ply_features(incloud)
training = ln.get_training_ply(incloud)
model = 'path/to/model.h5'
ln.create_model(training, model, clf='keras', cv=5)
# classify the point cloud (assuming classify_ply, like the calls above, lives in the learning module imported as ln)
ln.classify_ply(incloud, model, train_field="training", class_field='label',
                rgb=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature
Step2: Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has a zero mean, the variance is equal to the integral of the function squared (i.e., its norm N) divided by the surface area of the sphere (4 pi)
Step3: Alternatively, we could have done the integration with a 'DH' grid instead
Step4: Parseval's theorem
Step5: If the coefficients of all spherical harmonics are independent, the distribution will become Gaussian as predicted by the central limit theorem. If the individual coefficients were Gaussian in the first place, the distribution would naturally be Gaussian as well. We illustrate this below.
Step6: Next, we calculate a histogram of the data using the Gauss-Legendre quadrature points and weights
Step7: Finally, we compute the expected distribution and plot the two
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow
lmax = 100
coeffs = SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[5], ms=[2])
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
N = ((grid.data**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / grid.nlon)
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
from pyshtools.utils import DHaj
grid_dh = coeffs.expand('DH')
weights = DHaj(grid_dh.nlat)
N = ((grid_dh.data**2) * weights[np.newaxis,:].T).sum() * 2. * np.sqrt(2.) * np.pi / grid_dh.nlon
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
power = coeffs.spectrum()
print('Total power is ', power.sum())
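# Parseval check (illustrative sketch): the summed power spectrum should equal the
# variance of the function on the sphere, computed here from the GLQ grid as before
variance_grid = ((grid.data**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / grid.nlon) / (4. * np.pi)
print('Variance from the grid = ', variance_grid)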
lmax = 200
a = 30
ls = np.arange(lmax+1, dtype=float)
power = 1. / (1. + (ls / a) ** 2) ** 1
coeffs = SHCoeffs.from_random(power)
power_random = coeffs.spectrum()
total_power = power_random.sum()
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
weights = (grid.weights[np.newaxis,:].T).repeat(grid.nlon, axis=1) * (2. * np.pi / grid.nlon)
bins = np.linspace(-50, 50, 30)
center = 0.5 * (bins[:-1] + bins[1:])
dbin = center[1] - center[0]
hist, bins = np.histogram(grid.data, bins=bins, weights=weights, density=True)
normal_distribution = np.exp( - center ** 2 / (2 * total_power))
normal_distribution /= dbin * normal_distribution.sum()
fig, ax = plt.subplots(1, 1)
ax.plot(center, hist, '-x', c='blue', label='computed distribution')
ax.plot(center, normal_distribution, c='red', label='predicted distribution')
ax.legend(loc=3);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading a dataset
Step2: Generating layers
Step3: The Deep Residual Network
Step4: The model is a collection of resnet modules between an input conv and output pooling and affine layer.
Step5: Cost function
Step6: Optimizer
Step7: Callbacks
Step8: Training the model
Step9: Congrats! If you made it this far you have trained a convolutional network in neon.
Step10: By increasing the depth of the network and the number of epochs, we can improve the performance to match state of the art.
Step11: Create a dataset with this image for inference
Step12: Get model outputs on the inference data
|
<ASSISTANT_TASK:>
Python Code:
# Start by generating the backend:
from neon.backends import gen_backend
be = gen_backend(backend='gpu',
batch_size=128)
from neon.data.aeon_shim import AeonDataLoader
from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
import numpy as np
# define configuration file for CIFAR-10 dataset
config = {
'manifest_filename': 'data/cifar10/train-index.csv', # CSV manifest of data
'manifest_root': 'data/cifar10', # root data directory
'image': {'height': 32, 'width': 32, # output image size
'scale': [0.8, 0.8], # random scaling of image before cropping
'flip_enable': True}, # randomly flip image
'type': 'image,label', # type of data
'minibatch_size': be.bsz # batch size
}
from neon.data.aeon_shim import AeonDataLoader
# build train_set
train_set = AeonDataLoader(config, be)
train_set = OneHot(train_set, index=1, nclasses=10) # perform onehot on the labels
train_set = TypeCast(train_set, index=0, dtype=np.float32) # cast the image to float32
train_set = BGRMeanSubtract(train_set, index=0) # subtract image color means (based on default values)
# build test set
config['manifest_filename'] = 'data/cifar10/val-index.csv'
test_set = AeonDataLoader(config, be)
test_set = OneHot(test_set, index=1, nclasses=10) # perform onehot on the labels
test_set = TypeCast(test_set, index=0, dtype=np.float32) # cast the image to float32
test_set = BGRMeanSubtract(test_set, index=0) # subtract image color means (based on default values)
from neon.initializers import Uniform
from neon.transforms import Rectlin, Softmax
from neon.layers import Activation, Conv, Pooling, Affine, MergeSum
# This is a simple convnet with a one conv layer,
# max-pooling, and a fully connected layer.
#
# input - Conv - ReLu - Pooling - Affine - ReLu - Affine - Softmax
#
layers = [Conv((5, 5, 16), init=Uniform(-0.1, 0.1), activation=Rectlin()),
Pooling((2, 2)),
Affine(nout=500, init=Uniform(-0.1, 0.1), activation=Rectlin()),
Affine(nout=10, init=Uniform(-0.1, 0.1), activation=Softmax())]
# We can use a MergeSum layer to combine differnt layers in parallel
#
# - Conv3 - ReLu -
# / \
# input - Sum - ReLu - ...
# \ /
# - Conv5 - ReLu -
#
conv3 = Conv((3, 3, 16), init=Uniform(-0.1, 0.1), activation=Rectlin())
conv5 = Conv((5, 5, 16), padding=1, init=Uniform(-0.1, 0.1), activation=Rectlin())
layers = [MergeSum([conv3, conv5]), Activation(Rectlin()),
Pooling((2, 2)),
Affine(nout=500, init=Uniform(-0.1, 0.1), activation=Rectlin()),
Affine(nout=10, init=Uniform(-0.1, 0.1), activation=Softmax())]
from neon.initializers import Kaiming, IdentityInit
from neon.layers import SkipNode
from neon.models import Model
# helper functions simplify init params for conv and identity layers
def conv_params(fsize, nfm, stride=1, relu=True, batch_norm=True):
return dict(fshape=(fsize, fsize, nfm),
strides=stride,
padding=(1 if fsize > 1 else 0),
activation=(Rectlin() if relu else None),
init=Kaiming(local=True),
batch_norm=batch_norm)
def id_params(nfm):
return dict(fshape=(1, 1, nfm),
strides=2,
padding=0,
activation=None,
init=IdentityInit())
# A resnet module
#
# - Conv - Conv -
# / \
# input - Sum - Relu - output
# \ /
# - Identity -
#
def module_factory(nfm, stride=1):
mainpath = [Conv(**conv_params(3, nfm, stride=stride)),
Conv(**conv_params(3, nfm, relu=False))]
sidepath = [SkipNode() if stride == 1 else Conv(**id_params(nfm))]
module = [MergeSum([mainpath, sidepath]),
Activation(Rectlin())]
return module
# Set depth = 3 for quick results
# or depth = 9 to reach 6.7% top1 error in 150 epochs
depth = 3
nfms = [2**(stage + 4) for stage in sorted(range(3) * depth)]
strides = [1] + [1 if cur == prev else 2 for cur, prev in zip(nfms[1:], nfms[:-1])]
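# With depth = 3 this yields nfms = [16, 16, 16, 32, 32, 32, 64, 64, 64] and
# strides = [1, 1, 1, 2, 1, 1, 2, 1, 1], i.e. the number of feature maps doubles
# (and the spatial size halves) at the start of each stage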
layers = [Conv(**conv_params(3, 16))]
for nfm, stride in zip(nfms, strides):
layers.append(module_factory(nfm, stride))
layers.append(Pooling('all', op='avg'))
layers.append(Affine(10, init=Kaiming(local=False),
batch_norm=True, activation=Softmax()))
model = Model(layers=layers)
from neon.transforms import CrossEntropyMulti
from neon.layers import GeneralizedCost
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
from neon.optimizers import GradientDescentMomentum, Schedule
opt = GradientDescentMomentum(0.1, 0.9, wdecay=0.0001,
schedule=Schedule([90, 135], 0.1))
# set up callbacks. By default sets up a progress bar
from neon.transforms import Misclassification
from neon.callbacks.callbacks import Callbacks
valmetric = Misclassification()
callbacks = Callbacks(model, eval_set=test_set, metric=valmetric)
# And run the model
epochs = 10
model.fit(train_set, optimizer=opt, num_epochs=epochs,
cost=cost, callbacks=callbacks)
# Check the performance on the supplied test set
from neon.transforms import Misclassification
error_pct = 100 * model.eval(test_set, metric=Misclassification())
print 'Misclassification error = %.1f%%' % error_pct
%matplotlib inline
import matplotlib.pyplot as plt
import urllib
from PIL import Image
import numpy as np
# download images from the web
imgs = {
'frog': "https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Atelopus_zeteki1.jpg/440px-Atelopus_zeteki1.jpg",
'airplane': "https://img0.etsystatic.com/016/0/5185796/il_570xN.433414910_p5n3.jpg",
'cat': "https://s-media-cache-ak0.pinimg.com/236x/8e/d7/41/8ed7410285f101ba5892ff723c91fa75.jpg",
'car': "http://static01.nyt.com/images/2012/09/09/automobiles/09REFI2/09REFI2-articleLarge.jpg",
}
# empty buffer to use for inference dataset
# dims [minibatch, imgsize]
x_new = np.zeros((128, 32*32*3), dtype=np.float32)
# crop/resize images and assign them to slots in x_new
# also display with true labels
plt.figure(1)
for i, name in enumerate(imgs):
imgs[name] = urllib.urlretrieve(imgs[name], filename="data/{}.jpg".format(name))
plt.subplot(100 + (10 * len(imgs)) + 1 + i)
img = Image.open("data/{}.jpg".format(name))
crop = img.crop((0,0,min(img.size),min(img.size)))
crop.thumbnail((32, 32))
plt.imshow(crop, interpolation="nearest")
plt.title(name)
plt.axis('off')
x_new[i,:] = np.asarray(crop, dtype=np.float32)[:,:,(2,0,1)].transpose(2,0,1).reshape(1,3072) -127
from neon.data import ArrayIterator
# create a minibatch with the new image
inference_set = ArrayIterator(x_new, None, nclass=10,
lshape=(3, 32, 32))
# inference_set = ArrayIterator(x_train, None, nclass=10,
# lshape=(3, 32, 32))
classes =["airplane", "auto", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"]
out = model.get_outputs(inference_set)
plt.figure(2)
for i, name in enumerate(imgs):
plt.subplot(100 + (10 * len(imgs)) + 1 + i)
img = Image.open("data/{}.jpg".format(name))
crop = img.crop((0,0,min(img.size),min(img.size)))
crop.thumbnail((32, 32))
title = "{} ({:.2})".format(classes[out[i].argmax()], out[i].max())
plt.imshow(crop, interpolation="nearest")
plt.title(title)
plt.axis('off')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Similarly, the rate parameters can automatically be given exponential priors
Step3: Decorator
Step4: Note that this is a simple Python function preceded by a Python decorator.
Step5: Notice that the log-probability and random variate functions are
Step6: In the other interfaces, the observed=True argument is added to the
Step7: The Deterministic class
Step8: so rate's value can be computed exactly from the values of its parents
Step9: All the objects thus created have trace=False and plot=False by default.
Step10: Containers
Step11: PyMC automatically wraps array $x$ in an appropriate Container class.
Step12: The function supplied should return the potential's current log-probability.
Step13: Example
Step14: Fitting Models
Step15: This call will cause $M$ to fit the model using Powell's method, which does not require derivatives. The variables in DisasterModel have now been set to their maximum a posteriori values
Step16: We can also calculate model selection statistics, AIC and BIC
Step17: MAP has two useful methods, fit() and revert_to_max().
Step18: The approximate joint posterior mean and covariance of the variables are
Step19: As with MAP, the variables have been set to their maximum a posteriori values.
Step20: In addition to the methods and attributes of MAP, NormApprox provides the approximate posterior mean (mu) and covariance (C), and can draw samples from the normal approximation.
Step21: Step methods
Step method objects handle individual stochastic variables, or sometimes groups of them.
StepMethod implements a method called step(), which is called by MCMC at every iteration.
Step22: Metropolis itself handles float-valued variables, and subclasses such as DiscreteMetropolis and BinaryMetropolis handle integer- and boolean-valued variables.
Step23: AdaptiveMetropolis's init method takes the following arguments
|
<ASSISTANT_TASK:>
Python Code:
import pymc as pm
import numpy as np
from pymc.examples import disaster_model
switchpoint = pm.DiscreteUniform('switchpoint', lower=0, upper=110)
early_mean = pm.Exponential('early_mean', beta=1., value=1)
late_mean = pm.Exponential('late_mean', beta=1., value=1)
@pm.stochastic
def switchpoint(value=1900, t_l=1851, t_h=1962):
The switchpoint for the rate of disaster occurrence.
if value > t_h or value < t_l:
# Invalid values
return -np.inf
else:
# Uniform log-likelihood
return -np.log(t_h - t_l + 1)
def switchpoint_logp(value, t_l, t_h):
if value > t_h or value < t_l:
return -np.inf
else:
return -np.log(t_h - t_l + 1)
def switchpoint_rand(t_l, t_h):
return np.round( (t_l - t_h) * np.random.random() ) + t_l
switchpoint = pm.Stochastic( logp = switchpoint_logp,
doc = 'The switchpoint for the rate of disaster occurrence.',
name = 'switchpoint',
parents = {'t_l': 1851, 't_h': 1962},
random = switchpoint_rand,
trace = True,
value = 1900,
dtype=int,
rseed = 1.,
observed = False,
cache_depth = 2,
plot=True,
verbose = 0)
from scipy.stats.distributions import poisson
@pm.observed
def likelihood(value=[1, 2, 1, 5], parameter=3):
return poisson.logpmf(value, parameter).sum()
disasters = pm.Poisson('disasters', mu=2,
value=disaster_model.disasters_array,
observed=True)
@pm.deterministic
def rate(s=switchpoint, e=early_mean, l=late_mean):
''' Concatenate Poisson means '''
out = np.empty(len(disaster_model.disasters_array))
out[:s] = e
out[s:] = l
return out
x = pm.MvNormal('x', np.ones(3), np.eye(3))
y = pm.MvNormal('y', np.ones(3), np.eye(3))
x+y
print(x[0])
print(x[0]+y[2])
def rate_eval(switchpoint=switchpoint, early_mean=early_mean, late_mean=late_mean):
value = np.zeros(111)
value[:switchpoint] = early_mean
value[switchpoint:] = late_mean
return value
rate = pm.Deterministic(eval = rate_eval,
name = 'rate',
parents = {'switchpoint': switchpoint,
'early_mean': early_mean,
'late_mean': late_mean},
doc = 'The rate of disaster occurrence.',
trace = True,
verbose = 0,
dtype=float,
plot=False,
cache_depth = 2)
N = 10
x_0 = pm.Normal('x_0', mu=0, tau=1)
x = np.empty(N, dtype=object)
x[0] = x_0
for i in range(1, N):
x[i] = pm.Normal('x_%i' % i, mu=x[i-1], tau=1)
@pm.observed
def y(value=1, mu=x, tau=100):
return pm.normal_like(value, (mu**2).sum(), tau)
@pm.potential
def rate_constraint(l1=early_mean, l2=late_mean):
if np.abs(l2 - l1) > 1:
return -np.inf
return 0
def rate_constraint_logp(l1=early_mean, l2=late_mean):
if np.abs(l2 - l1) > 1:
return -np.inf
return 0
rate_constraint = pm.Potential(logp = rate_constraint_logp,
name = 'rate_constraint',
parents = {'l1': early_mean, 'l2': late_mean},
doc = 'Constraint on rate differences',
verbose = 0,
cache_depth = 2)
# Log dose in each group
log_dose = [-.86, -.3, -.05, .73]
# Sample size in each group
n = 5
# Outcomes
deaths = [0, 1, 3, 5]
## Write your answer here
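# One possible sketch of the dose-response model (assumptions: a logistic link with
# vague Normal priors on the intercept and slope, and a Binomial likelihood); this is
# for illustration only, not necessarily the model used in pymc.examples.gelman_bioassay
alpha = pm.Normal('alpha', mu=0.0, tau=0.01)
beta = pm.Normal('beta', mu=0.0, tau=0.01)
@pm.deterministic
def theta(a=alpha, b=beta, d=np.array(log_dose)):
return pm.invlogit(a + b * d)
deaths_obs = pm.Binomial('deaths_obs', n=n, p=theta, value=deaths, observed=True)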
from pymc.examples import gelman_bioassay
M = pm.MAP(gelman_bioassay)
M.fit(method='fmin_powell')
M.alpha.value
M.beta.value
M.AIC
M.BIC
N = pm.NormApprox(gelman_bioassay)
N.fit()
N.mu[N.alpha]
N.C[N.alpha, N.beta]
N.sample(100)
N.trace('alpha')[:10]
M = pm.MCMC(gelman_bioassay, db='sqlite')
M.use_step_method(pm.Metropolis, M.alpha, proposal_sd=1., proposal_distribution='Normal')
from pymc.examples import disaster_model_linear
M = pm.MCMC(disaster_model_linear)
M.use_step_method(pm.AdaptiveMetropolis, M.params_of_mean)
M = pm.MCMC(gelman_bioassay)
M.sample(10000, burn=5000)
%matplotlib inline
pm.Matplot.plot(M.LD50)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'hadgem3-gc31-hm', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
norm = np.array (x / x.max())
return norm
#norm=np.linalg.norm(x)
#if norm==0:
# return x
#return x/norm
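# Note: x.max() is taken over the whole batch (255 for 8-bit image data), so a
# fixed-range alternative would simply be x / 255.0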
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
from sklearn import preprocessing
one_hot_classes = None
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
global one_hot_classes
# TODO: Implement Function
return preprocessing.label_binarize(x,classes=[0,1,2,3,4,5,6,7,8,9])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
stddev=0.05
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
x=tf.placeholder(tf.float32,(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
return x
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, (None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32,name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
import math
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 1))/float((conv_strides[0])))
width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 1))/float((conv_strides[1])))
#height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 2))/float((conv_strides[0] + 1)))
#width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 2))/float((conv_strides[1] + 1)))
weight = tf.Variable(tf.truncated_normal((height, width, x_tensor.shape[3].value, conv_num_outputs),stddev=stddev))
bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1,conv_strides[0],conv_strides[1],1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer,bias)
conv_layer = tf.nn.relu(conv_layer)
maxpool_layer = tf.nn.max_pool(conv_layer, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
return maxpool_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
flattened = x_tensor.shape[1].value * x_tensor.shape[2].value * x_tensor.shape[3].value
return tf.reshape(x_tensor, shape=(-1, flattened))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))
fc1 = tf.add(tf.matmul(x_tensor, weights), bias)
out = tf.nn.relu(fc1)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
#def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
stddev=0.01
conv_strides = (2,2) # Getting out of mem errors with stride=1
pool_strides = (2,2)
pool_ksize = (2,2)
conv_num_outputs1 = 32
conv_ksize1 = (2,2)
conv_num_outputs2 = 128
conv_ksize2 = (4,4)
conv_num_outputs3 = 128
conv_ksize3 = (2,2)
fully_conn_out1 = 1024
fully_conn_out2 = 512
fully_conn_out3 = 128
num_outputs = 10
x = conv2d_maxpool(x, conv_num_outputs1, conv_ksize1, conv_strides, pool_ksize, pool_strides)
#x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(x, conv_num_outputs2, conv_ksize2, conv_strides, pool_ksize, pool_strides)
x = tf.nn.dropout(x, keep_prob)
#x = conv2d_maxpool(x, conv_num_outputs3, conv_ksize3, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
x = fully_conn(x,fully_conn_out1)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x,fully_conn_out2)
#x = tf.nn.dropout(x, keep_prob)
#x = fully_conn(x,fully_conn_out3)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
x = output(x, num_outputs)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.})
valid_acc = sess.run(accuracy, feed_dict={
x: valid_features[:256],
y: valid_labels[:256],
keep_prob: 1.})
train_acc = session.run (accuracy, feed_dict = {
x: feature_batch,
y: label_batch,
keep_prob: 1.})
print('Loss: {:>10.4f} Training: {:.6f} Validation: {:.6f}'.format(
loss,
train_acc,
valid_acc))
pass
# TODO: Tune Parameters
epochs = 100
batch_size = 1024
keep_probability = 0.4
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looks good! Now we import transformers and download the scripts run_benchmark.py, run_benchmark_tf.py, and plot_csv_file.py which can be found under transformers/examples/benchmarking.
Step2: Information about the input arguments to the run_benchmark scripts can be accessed by running !python run_benchmark.py --help for PyTorch and !python run_benchmark_tf.py --help for TensorFlow.
Step3: Great, we are ready to run our first memory benchmark. By default, both the required memory and the time for inference are measured. To disable the time benchmark, we add --no_speed.
Step4: Under plots_pt, two files are now created
Step5: Each row in the csv file lists one data point showing the peak memory usage for a given model, batch_size and sequence_length. As can be seen, some values have a NaN result meaning that an Out-of-Memory Error occurred. To better visualize the results, one can make use of the plot_csv_file.py script.
Step6: We can see all relevant information here
Step7: At this point, it is important to understand how the peak memory is measured. The benchmarking tools measure the peak memory usage the same way the command nvidia-smi does - see here for more information.
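For illustration only (this snippet is an assumption on my part, not part of the benchmark scripts), the same peak-memory counter can be polled through the standard nvidia-smi CLI:
import subprocess

def gpu_memory_used_mib():
    # one value per GPU, reported in MiB, the same counter nvidia-smi prints
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"])
    return [int(line) for line in out.decode().strip().splitlines()]

print(gpu_memory_used_mib())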
Step8: Let's plot the results again, this time changing the x-axis to batch_size however.
Step9: Interesting! aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 clearly scales better for higher batch sizes and does not even run out of memory for 512 tokens.
Step10: Let's see the same plot for TensorFlow.
Step11: The model implemented in TensorFlow requires more memory than the one implemented in PyTorch. Let's say for whatever reason we have decided to use TensorFlow instead of PyTorch.
Step12: Ok, this took some time... time measurements take much longer than memory measurements because the forward pass is called multiple times for stable results. Timing measurements leverage Python's timeit module and run 10 times the value given to the --repeat argument (defaults to 3), so in our case 30 times.
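As a rough sketch of that timing strategy (the toy workload below is an assumption, not the benchmark's actual forward pass), Python's timeit can be used directly:
import timeit

def forward_pass():
    # stand-in workload; replace with a real model call
    sum(i * i for i in range(10000))

# repeat=3 blocks, each timing number=10 calls, i.e. 30 calls in total
runtimes = timeit.repeat(forward_pass, repeat=3, number=10)
print(min(runtimes) / 10, "seconds per call (best of 3 repeats)")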
Step13: First of all, it can be noted that XLA reduces latency time by a factor of ca. 1.3 (which is more than observed for other models by TensorFlow here). A batch size of 64 looks like a good choice. More or less half a second for the forward pass is good enough.
Step14: For the sake of the notebook, we assume that we are looking for a more efficient version of Facebook's bart-large-mnli model.
Step15: Alright! The important configuration parameters are usually the number of layers config.encoder_layers and config.decoder_layers, the model's hidden size config.d_model, the number of attention heads, and the vocabulary size config.vocab_size.
Step16: Cool, now we can benchmark these configs against the baseline config. This time, instead of using the benchmarking script we will directly use the PyTorchBenchmark class. The class expects the argument args which has to be of type PyTorchBenchmarkArguments and optionally a list of configs.
Step17: Nice, let's plot the results again.
Step18: As expected the model of the baseline config requires the most memory.
Step19: The option no_multi_process disabled multi-processing here. This option should in general only be used for testing or debugging. Enabling multi-processing is crucial to ensure accurate memory consumption measurement, but is less important when only measuring speed. The main reason it is disabled here is that google colab sometimes raises "CUDA initialization" due to the notebook's environment.
|
<ASSISTANT_TASK:>
Python Code:
#@title Check available memory of GPU
# Check that we are using 100% of GPU
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip -q install gputil
!pip -q install psutil
!pip -q install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
# If GPU RAM Util > 0% => crash notebook on purpose
# !kill -9 -1
# install transformes
!pip uninstall -y transformers
!pip install -q git+https://github.com/huggingface/transformers.git
# install py3nvml to track GPU memory usage
!pip install -q py3nvml
!rm -f run_benchmark.py
!rm -f run_benchmark_tf.py
!rm -f plot_csv_file.py
!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/benchmarking/run_benchmark.py -qq
!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/benchmarking/run_benchmark_tf.py -qq
!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/benchmarking/plot_csv_file.py -qq
# import pandas to pretty print csv files
import pandas as pd
!python run_benchmark.py --help
# create plots folder in content
!mkdir -p plots_pt
# run benchmark
!python run_benchmark.py --no_speed --save_to_csv \
--models a-ware/roberta-large-squad-classification \
a-ware/xlmroberta-squadv2 \
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 \
deepset/roberta-base-squad2 \
mrm8488/longformer-base-4096-finetuned-squadv2 \
--sequence_lengths 32 128 512 1024 \
--batch_sizes 32 \
--inference_memory_csv_file plots_pt/required_memory.csv \
--env_info_csv_file plots_pt/env.csv >/dev/null 2>&1 # redirect all prints
df = pd.read_csv('plots_pt/required_memory.csv')
df
df = pd.read_csv('plots_pt/env.csv')
df
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_pt/required_memory.csv --figure_png_file=plots_pt/required_memory_plot.png --no_log_scale --short_model_names a-ware-roberta a-aware-xlm aodiniz-bert deepset-roberta mrm8488-long
# show image
from IPython.display import Image
Image('plots_pt/required_memory_plot.png')
!python run_benchmark.py --no_speed --save_to_csv \
--inference_memory_csv_file plots_pt/required_memory_2.csv \
--env_info_csv_file plots_pt/env.csv \
--models aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 \
deepset/roberta-base-squad2 \
--sequence_lengths 512 \
--batch_sizes 64 128 256 512\
--no_env_print
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_pt/required_memory_2.csv \
--figure_png_file=plots_pt/required_memory_plot_2.png \
--no_log_scale \
--short_model_names aodiniz-bert deepset-roberta \
--plot_along_batch
# show image
from IPython.display import Image
Image('plots_pt/required_memory_plot_2.png')
# create plots folder in content
!mkdir -p plots_tf
!TF_CPP_MIN_LOG_LEVEL=3 python run_benchmark_tf.py --no_speed --save_to_csv \
--inference_memory_csv_file plots_tf/required_memory_2.csv \
--env_info_csv_file plots_tf/env.csv \
--models aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 \
deepset/roberta-base-squad2 \
--sequence_lengths 512 \
--batch_sizes 64 128 256 512 \
--no_env_print \
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_tf/required_memory_2.csv --figure_png_file=plots_tf/required_memory_plot_2.png --no_log_scale --short_model_names aodiniz-bert deepset-roberta --plot_along_batch
# show image
from IPython.display import Image
Image('plots_tf/required_memory_plot_2.png')
!TF_CPP_MIN_LOG_LEVEL=3 python run_benchmark_tf.py --no_memory --save_to_csv \
--inference_time_csv_file plots_tf/time_2.csv \
--env_info_csv_file plots_tf/env.csv \
--models aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 \
deepset/roberta-base-squad2 \
--sequence_lengths 8 32 128 512 \
--batch_sizes 256 \
--no_env_print \
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_tf/time_2.csv --figure_png_file=plots_tf/time_plot_2.png --no_log_scale --short_model_names aodiniz-bert deepset-roberta --is_time
# show image
from IPython.display import Image
Image('plots_tf/time_plot_2.png')
!TF_CPP_MIN_LOG_LEVEL=3 python run_benchmark_tf.py --no_memory --save_to_csv \
--inference_time_csv_file plots_tf/time_xla_1.csv \
--env_info_csv_file plots_tf/env.csv \
--models aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2 \
--sequence_lengths 512 \
--batch_sizes 8 64 256 \
--no_env_print \
--use_xla
# Imports
from transformers import BartConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
BartConfig.from_pretrained("facebook/bart-large-mnli").to_diff_dict()
config_baseline = BartConfig.from_pretrained("facebook/bart-large-mnli")
config_768_hidden = BartConfig.from_pretrained("facebook/bart-large-mnli", d_model=768)
config_8_heads = BartConfig.from_pretrained("facebook/bart-large-mnli", decoder_attention_heads=8, encoder_attention_heads=8)
config_10000_vocab = BartConfig.from_pretrained("facebook/bart-large-mnli", vocab_size=10000)
config_8_layers = BartConfig.from_pretrained("facebook/bart-large-mnli", encoder_layers=8, decoder_layers=8)
# define args
args = PyTorchBenchmarkArguments(models=["bart-base", "bart-768-hid", "bart-8-head", "bart-10000-voc", "bart-8-lay"],
no_speed=True,
no_inference=True,
training=True,
train_memory_csv_file="plots_pt/training_mem_fp16.csv",
save_to_csv=True,
env_info_csv_file="plots_pt/env.csv",
sequence_lengths=[64, 128, 256, 512],
batch_sizes=[8],
no_env_print=True,
fp16=True) # let's train on fp16
# create benchmark
benchmark = PyTorchBenchmark(configs=[config_baseline, config_768_hidden, config_8_heads, config_10000_vocab, config_8_layers], args=args)
# run benchmark
result = benchmark.run()
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_pt/training_mem_fp16.csv --figure_png_file=plots_pt/training_mem_fp16.png --no_log_scale
# show image
from IPython.display import Image
Image('plots_pt/training_mem_fp16.png')
# define args
args = PyTorchBenchmarkArguments(models=["bart-8-head", "bart-8-lay"],
no_inference=True,
training=True,
no_memory=True,
train_time_csv_file="plots_pt/training_speed_fp16.csv",
save_to_csv=True,
env_info_csv_file="plots_pt/env.csv",
sequence_lengths=[32, 128, 512],
batch_sizes=[8],
no_env_print=True,
repeat=1, # to make speed measurement faster but less accurate
no_multi_process=True, # google colab has problems with multi processing
fp16=True
)
# create benchmark
benchmark = PyTorchBenchmark(configs=[config_8_heads, config_8_layers], args=args)
# run benchmark
result = benchmark.run()
# plot graph and save as image
!python plot_csv_file.py --csv_file plots_pt/training_speed_fp16.csv --figure_png_file=plots_pt/training_speed_fp16.png --no_log_scale --is_time
# show image
from IPython.display import Image
Image('plots_pt/training_speed_fp16.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Check that we can store objects of different types in a dictionary
Step3: Yay - seems to work just fine!
Step 2
Step4: Step 2.2 Display Feature Importances Graphically (just for interest)
Step5: Step 3
Step6: Get the second Decision tree to use for testing
Step7: Write down an efficient Binary Tree Traversal Function
Step9: Create the single function to output the required values
Step10: Check that the following leaf node depth is correct
Step12: Design the single function to get the key tree information
|
<ASSISTANT_TASK:>
Python Code:
# Setup
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.datasets import load_iris
from sklearn import tree
import numpy as np
# Define a function to draw the decision trees in IPython
# Adapted from: http://scikit-learn.org/stable/modules/tree.html
from IPython.display import display, Image
import pydotplus
# Custom util functions
from utils import utils
RANDOM_STATE_SPLIT = 1001
RANDOM_STATE_CLASSIFIER = 1039
# Load the iris data
iris = load_iris()
# Create the train-test datasets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = RANDOM_STATE_SPLIT)
# Just fit a simple random forest classifier with 2 decision trees
rf = RandomForestClassifier(n_estimators = 2, random_state = RANDOM_STATE_CLASSIFIER)
rf.fit(X = X_train, y = y_train)
# Now plot the trees individually
#for idx, dtree in enumerate(rf.estimators_):
# print(idx)
# utils.draw_tree(inp_tree = dtree)
a = 1
test = {} # create the dictionary to store the objects
test['first'] = a
test['rf_obj'] = rf
print(test['first'])
print(test['rf_obj'].feature_importances_)
importances = rf.feature_importances_
std = np.std([dtree.feature_importances_ for dtree in rf.estimators_]
, axis=0)
indices = np.argsort(importances)[::-1]
# Check that the feature importances are standardized to 1
print(sum(importances))
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
target_vals = list(np.sort(np.unique(y_train)))
target_names = ["y" + str(i) for i in target_vals]
print(feature_names)
print(target_names)
estimator = rf.estimators_[1]
from sklearn.tree import _tree
estimator.tree_.children_left[0]
estimator.tree_.children_right[0]
# Now plot the trees individually
utils.draw_tree(inp_tree = estimator)
# Setup the key variables
threshold = estimator.tree_.threshold
max_node_depth = estimator.tree_.max_depth
max_node_depth
print("Max node depth in tree", max_node_depth, sep = ":\n")
n_nodes = estimator.tree_.node_count
print("number of nodes in tree", n_nodes, sep = ":\n")
# Define the number of features
num_features = X_train.shape[1]
# Get the node features from the decision tree classifier attribute
# It is hard to tell which features this came from i.e. indices are zero,
# positive and negative - we want only non-negative indices for the
# corresponding feature columns
node_features = estimator.tree_.feature
# Get indices for all the features used - 0 indexed and ranging
# to the total number of possible features in the training data
all_features_idx = np.array(range(num_features))
node_features_idx = np.array(range(num_features))[node_features]
# Count the unique number of features used
num_features_used = (np.unique(node_features_idx)).shape[0]
print("number of node features", num_features_used, sep = ":\n")
print("all features indices", all_features_idx, sep = ":\n")
print("node features", node_features, sep = ":\n")
print("node feature indices", node_features_idx, sep = ":\n")
def allTreePaths(dtree, root_node_id = 0):
Get all the individual tree paths from root node
to the leaves
# Use these lists to parse the tree structure
children_left = dtree.tree_.children_left
children_right = dtree.tree_.children_right
if root_node_id is None:
paths = []
if root_node_id == _tree.TREE_LEAF:
raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)
# if left/right is None we'll get empty list anyway
if children_left[root_node_id] != _tree.TREE_LEAF:
paths = [np.append(root_node_id, l)
for l in allTreePaths(dtree, children_left[root_node_id]) +
allTreePaths(dtree, children_right[root_node_id])]
else:
paths = [root_node_id]
return paths
all_leaf_node_paths = allTreePaths(rf.estimators_[1], root_node_id = 0)
all_leaf_node_paths
leaf_nodes = [path[-1] for path in all_leaf_node_paths]
leaf_nodes
features_used = []
leaf_nodes_depths = [np.size(y) - 1 for y in all_leaf_node_paths]
leaf_nodes_depths
n_node_samples = estimator.tree_.n_node_samples
num_samples = [n_node_samples[y].astype(int) for y in leaf_nodes]
print(n_node_samples)
print(len(n_node_samples))
num_samples
print(num_samples)
print(sum(num_samples))
print(sum(n_node_samples))
X_train.shape
value = estimator.tree_.value
values = [value[node_id].astype(int) for node_id in leaf_nodes]
print(values)
# This should match the number of rows in the training feature set
print(sum(values).sum())
values
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
np.asarray(feature_names)
print(type(feature_names))
print(feature_names[0])
print(feature_names[-2])
#feature = estimator.tree_.feature
#z = [feature[y].astype(int) for y in x]
#z
#[feature_names[i] for i in z]
max_dpth = estimator.tree_.max_depth
max_dpth
max_n_class = estimator.tree_.max_n_classes
max_n_class
predict = estimator.tree_.predict
predict
all_leaf_nodes = [path[-1] for path in all_leaf_node_paths]
#[predict(node_id) for node_id in np.asarray(all_leaf_nodes)]
print(all_leaf_nodes)
print(all_leaf_nodes[0])
print(value[all_leaf_nodes[0]])
print(all_features_idx[np.argmax(value[all_leaf_nodes[0]])])
print(node_features_idx)
#predict(class_names[np.argmax(value[all_leaf_nodes[0]])])
#print("nodes", np.asarray(a = nodes, dtype = "int64"), sep = ":\n")
# print("node_depth", node_depth, sep = ":\n")
# print("leaf_node", is_leaves, sep = ":\n")
# print("feature_names", used_feature_names, sep = ":\n")
# print("feature", feature, sep = ":\n")
def getTreeData(dtree, root_node_id = 0):
This returns all of the required summary results from an
individual decision tree
max_node_depth = dtree.tree_.max_depth
n_nodes = dtree.tree_.node_count
value = dtree.tree_.value
predict = dtree.tree_.predict
# Get the total number of features in the training data
tot_num_features = X_train.shape[1]
# Get indices for all the features used - 0 indexed and ranging
# to the total number of possible features in the training data
all_features_idx = np.array(range(tot_num_features), dtype = 'int64')
# Get the raw node feature indices from the decision tree classifier attribute
# It is hard to tell which features this came from i.e. indices are zero,
# positive and negative - we want only non-negative indices for the
# corresponding feature columns for consistency in reference
node_features_raw_idx = dtree.tree_.feature
# Get the refined non-negative feature indices for each node
# Start with a range over the total number of features and
# subset the relevant indices from the raw indices array
node_features_idx = np.array(range(tot_num_features))[node_features]
# Count the unique number of features used
num_features_used = (np.unique(node_features_idx)).shape[0]
# Get all of the paths used in the tree
all_leaf_node_paths = allTreePaths(dtree = dtree, root_node_id = root_node_id)
# Get list of leaf nodes
# In all paths it is the final node value
all_leaf_nodes = [path[-1] for path in all_leaf_node_paths]
# Final number of training samples predicted in each class at each leaf node
all_leaf_node_values = [value[node_id].astype(int) for node_id in leaf_nodes]
# Total number of training samples predicted in each class at each leaf node
tot_leaf_node_values = [np.sum(leaf_node_values) for leaf_node_values in all_leaf_node_values]
# All leaf node depths
# The depth is 0 indexed i.e. root node has depth 0
leaf_nodes_depths = [np.size(path) - 1 for path in all_leaf_node_paths]
# Predicted Classes
# Check that we correctly account for ties in determining the class here
all_leaf_node_classes = [all_features_idx[np.argmax(value)] for value in all_leaf_node_values]
# Get all of the features used along the leaf node paths i.e. features used to split a node
# CHECK: Why does the leaf node have a feature associated with it? Investigate further
# Removed the final leaf node value so that this feature does not get included currently
all_leaf_paths_features = [node_features_idx[path[:-1]] for path in all_leaf_node_paths]
# Get the unique list of features along a path
# NOTE: This removes the original ordering of the features along the path
# The original ordering could be preserved using a special function but will increase runtime
all_uniq_leaf_paths_features = [np.unique(feature_path) for feature_path in all_leaf_paths_features]
print("number of node features", num_features_used, sep = ":\n")
print("node feature indices", node_features_idx, sep = ":\n")
print("Max node depth in tree", max_node_depth, sep = ":\n")
print("number of nodes in tree", n_nodes, sep = ":\n")
print("node features", node_features, sep = ":\n")
print("all leaf node paths", all_leaf_node_paths, sep = ":\n")
print("all leaf node indices", all_leaf_nodes, sep = ":\n")
print("all leaf node depths", leaf_nodes_depths, sep = ":\n")
print("all leaf node predicted values", all_leaf_node_values, sep = ":\n")
print("total leaf node predicted values", tot_leaf_node_values, sep = ":\n")
print("all leaf node predicted classes", all_leaf_node_classes, sep = ":\n")
print("all features in leaf node paths", all_leaf_paths_features, sep = ":\n")
print("all unique features in leaf node paths", all_uniq_leaf_paths_features, sep = ":\n")
tree_data = {"num_features_used" : num_features_used,
"node_features_idx" : node_features_idx,
"max_node_depth" : max_node_depth,
"n_nodes" : n_nodes,
"all_leaf_node_paths" : all_leaf_node_paths,
"all_leaf_nodes" : all_leaf_nodes,
"leaf_nodes_depths" : leaf_nodes_depths,
"all_leaf_node_values" : all_leaf_node_values,
"tot_leaf_node_values" : tot_leaf_node_values,
"all_leaf_node_classes" : all_leaf_node_classes,
"all_leaf_paths_features" : all_leaf_paths_features,
"all_uniq_leaf_paths_features" : all_uniq_leaf_paths_features}
return tree_data
tree_dat1 = getTreeData(dtree = estimator, root_node_id = 0)
tree_dat1
print(sum(tree_dat1['tot_leaf_node_values']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we can display the final result
Step2: Sawtooth Signal
Step3: Then, to display
Step4: Interactive mode
|
<ASSISTANT_TASK:>
Python Code:
import pedsp.oscillator as oscillator
import pedsp.algorithm as algorithm
import matplotlib.pyplot as plt
import numpy as np
amplitude = 1.;
sample_rate = 8000;
frequency = 5;
duration_secs = 2;
samples = int(duration_secs * sample_rate);
duty = 0.5;
square = oscillator.Square(amp=amplitude, sr=sample_rate, f=frequency, duty=duty)
data = square.generate(N=samples)
t = algorithm.linspace(0, duration_secs, samples)
plt.plot(t, data)
plt.show()
width = 0.7
sawtooth = oscillator.Sawtooth(amp=amplitude, sr=sample_rate, f=frequency, width=width)
data = sawtooth.generate(N=samples)
plt.plot(t, data)
plt.show()
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
@interact(dtype=widgets.Dropdown(
options=['square', 'sinusoidal', 'sawtooth'],
value='square',
description='Type:',
disabled=False),
frequency=widgets.IntSlider(min=1,max=20,step=1,value=10),
duration=widgets.IntSlider(min=1,max=5,step=1,value=1),
alpha=widgets.FloatSlider(min=0.0,max=1.0, value=0.3))
def display_oscillator(dtype, frequency, duration, alpha):
sr = 42000
g = None
if dtype == "square":
g = oscillator.Square(amp=1, sr=sr, f=frequency, duty=alpha)
elif dtype == "sinusoidal":
g = oscillator.Sinusoidal(amp=1, sr=sr, f=frequency, p=0)
else:
g = oscillator.Sawtooth(amp=1, sr=sr, f=frequency, width=alpha)
samples = int(duration * sr)
data = g.generate(N=samples)
t = algorithm.linspace(0, duration, samples)
plt.plot(t, data)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's open a file that contains a mover and lots of changes
Step2: A Simulator creates simulation steps. So we load a single step.
Step3: Each step possesses a movechange, which we will get
Step4: And the mover that was used to generate this change
Step5: Let's first check if the pmc was really created by the pathmover
Step6: This should be obvious, but under the hood there is some more fancy stuff happening. The in keyword with pathmovers and changes actually works on trees of movers. These trees are used to represent a specific order of movers being called, and these trees are unique for changes. This means there is a way to label each possible pmc that can be generated by a mover. "Possible" here refers only to the choices made among actual movers.
Step7: The property enum will evaluate the (potentially VERY large) number of all possible changes
Step8: which is two in our case. Another example
Step9: Now we have 4 choices as expected: 2 for the RandomChoice * 2 for the OneWayShooter, although effectively there are only 2 distinct ones. Why is that? The problem in ow_mover_2 is that we defined two separate instances of the OneWayShootingMover, and in general different instances are considered different. In this case it might even be possible to check for equality, but we decided to leave this to the user. If you create two instances we assume there is a reason, maybe just to name them differently (although it makes no sense for the monte carlo scheme).
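A minimal plain-Python sketch of that instance-versus-equality distinction (not OPS-specific):
a = [1, 2, 3]
b = [1, 2, 3]
print(a == b)  # True: equal content
print(a is b)  # False: two separate instances, so they are counted twice
c = a
print(a is c)  # True: reusing the same instance counts only once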
Step10: And we get only two distinct movers as we wanted to.
Step11: This one has 31. Let's see if we can reconstruct that number. The pathmover first chooses between 4 different move types.
Step12: What happened here? Why are only some changes in the storage made by our mover? Shouldn't all of them (except the 2 in the beginning) be the result of our pathmover? Yes and no. They are, but we are checking whether the total change was made by a mover, and we are also loading all subchanges of subchanges.
Step13: We will now cache all changes to see if the next steps are really fast or not.
Step14: Now
Step15: While this works it would be better to rely on the steps in the simulation and not on the changes itself. We might store some other changes for whatever reason, but we only want the ones that are associated with a MC step. The steps in the storage do exactly that and point to the changes that are used to generate the next sampleset that we wanted.
Step16: We exclude the first step since it is the initialization which is not generated by our pathmover.
Step17: We see that especially minus moves are underrepresented. This is due to the standard weighting of the minus move, which runs minus moves much less frequently than other moves.
Step18: We realize that a specific shooter is less likely to be called than other movers
Step19: Get the (unique) location of the randomchoicemover. You can search for Mover classes, Mover instances or by the .name property of a mover which is a string.
Step20: the location object is effectively a tree of mover instances described by nested tuples. For convenience it is wrapped to make searching easier and to format the output.
Step21: In most cases you can use python tuples instead of TupleTree. The structure of a tree of tuples looks as follows
Step22: Instead of checking for a pmc directly we can also check for the tuple tree representation
Step23: Now get the mover at the loc_rc location (which is of course a RandomChoiceMover).
Step24: These are the weights by mover
Step25: So a shooting move is about 30x more likely than a minus move.
Step26: Note that certain trees of movers can be arranged differently and still result in the same possible steps, like Sequential([Forward, Sequential([Forward, Forward])]) and Sequential([Forward, Forward, Forward]). We can regard this as being associative (placing arbitrary brackets), but we will not check for these types of equality. We assume that you arrange steps in a certain way for a reason and that your grouping of sequences reflects a certain logical idea. On the other hand, your locators / keys do depend on that choice!
Step27: The expression locator in pmc checks for the appearance of a specific mover at that specific location, while mover_instance in pmc checks for the appearance of that mover instance independently of the location. If a particular mover_instance appears only once, then both methods are equivalent.
Step28: In this case the MinusExtensionDirectionChooser was not called in this particular pmc.
Step29: will be interpreted as
Step30: Some speed tests
|
<ASSISTANT_TASK:>
Python Code:
import openpathsampling as p
st = p.storage.Storage('_toy_retis.nc', mode='r')
mc = st.steps[3]
print mc
pmc = mc.change
print pmc
pm = pmc.mover
print pm.treeprint()
pmc in pm
ow_mover = p.OneWayShootingMover([], []) # we use dummy arguments since are not going to use it
list(ow_mover.enum)
ow_mover_2 = p.RandomChoiceMover([
p.OneWayShootingMover([], []),
p.OneWayShootingMover([], [])
])
list(ow_mover_2.enum)
ow_mover_3 = p.RandomChoiceMover([
ow_mover,
ow_mover
])
list(ow_mover_3.enum)
all_changes = list(pm.enum)
print len(all_changes)
print [pc in pm for pc in st.movechanges[0:20]]
print st.movechanges[2]
print st.movechanges[2] in pm
print
print st.movechanges[5]
print st.movechanges[5] in pm
_ = list(st.movechanges)
_ = list(st.steps)
real_changes = filter(lambda x : x in pm, st.movechanges)
print len(real_changes), 'of', len(st.movechanges)
step_changes = [step.change for step in st.steps[1:]]
print len(step_changes)
import collections
counter = collections.defaultdict(lambda: 0)
for ch in step_changes:
counter[ch.unique] += 1
s = '%d of %d different changes run' % (len(counter), len(list(pm.enum)))
print s, '\n', '-' * len(s), '\n'
for y in sorted(counter.items(), key=lambda x : -x[1]):
print
print y[1], 'x'
print y[0].treeprint()
pmc_list = [pm.random() for x in xrange(10000)]
counter2 = collections.defaultdict(lambda: 0)
for ch in pmc_list:
counter2[ch] += 1
s = '%d of %d different changes run' % (len(counter2), len(list(pm.enum)))
print s, '\n', '-' * len(s), '\n'
for y in sorted(counter2.items(), key=lambda x : -x[1]):
print (100.0 * y[1]) / len(pmc_list), '%', repr(y[0])
print pm.treeprint()
loc_rc = pm.locate('RootMover')
print loc_rc
print type(loc_rc)
print isinstance(loc_rc, tuple)
print repr(loc_rc)
print str(loc_rc)
print pmc
print repr(pmc.unique)
print pmc
print pmc in pm # check if pmc could have been generated by pm
print pmc.unique in pm # check if the tuple tree representation could have been generated by pm
rc = pm[loc_rc]
dict(zip(rc.movers, rc.weights))
print rc in pmc # check if the RandomChoiceMover was called in pmc
print pc in pm # check if the pathmover has a RandomChoiceMover at that position in the tree
loc_medc = pm.locate('MinusExtensionDirectionChooser')
medc = pm[loc_medc]
print medc
%%time
for pc in step_changes:
if medc in pc:
print repr(pc)
print pc.unique
print
print pmc
print loc_medc in pm
print medc in pm
print loc_medc in pmc
print medc in pmc
first_minus_change = filter(lambda x : p.MinusMover in x, step_changes)[0]
print first_minus_change.unique
print first_minus_change.unique
pm.map_tree(lambda x : len(x.name))
%%timeit
pm.enum
%%timeit
pmc in pm
%%timeit
[p in pmc for p in pm.enum]
%%timeit
pm in pm
%%timeit
pmc.unique in pm
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. User-Specified Input
Step3: <div class=hw>
Step4: 2.2 Concatenating strings
Step5: 2.3 Multiple Concatenations
Step6: 2.4 Comparing Strings
Step7: <div class=hw>
Step8: 3.2 While and Iteration
Step9: <div class=hw>
Step10: 4. Writing and Reading Files with Magic Commands
Step11: <div class=hw>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import astropy.units as u
faren = input("enter a temperature (in Fahrenheit): ")
print(faren)
print("green eggs and\n spam")
# Triple quotes are another way to specify multi-line strings
y = """For score and seven minutes ago,
you folks all learned some basic mathy stuff with Python
and boy were you blown away!"""
print(y)
# note the ; allows us to do two calculations on the same line
s = "spam" ; e = "eggs"
print(s + e)
print(s + " and " + e)
# this one won't work
print('I want' + 3 + ' eggs and no ' + s)
# but this will
print('I want ' + str(3) + ' eggs and no ' + s)
print(s*3 + e)
print("*" * 50)
print("spam" == "good"); print("spam" == "spam")
"spam" < "zoo"
"s" < "spam"
x = 1
if x > 0:
print("yo")
else:
print("dude")
# the one line version
"yo" if x > 0 else "dude"
# conditionals can lie within function calls
print("yo" if x > 0 else "dude")
z = "no"
np.sin(np.pi if z=="yes" else 0)
x = 1
y = 0
while y < 10:
print("yo" if x > 0 else "dude")
x *= -1
y += 1
#can also do this with a break statement
while True:
print("yo" if x > 0 else "dude")
x *= -1
y += 1
if y >= 10:
break
%%file number_game.py
# The above "magic" command, denoted with the double %% saves the contents
# of the current cell to file. We'll see more of these later
x = 0
max_tries = 10
count = 0
while True:
x_new = int(input("Enter a new number: "))
if x_new > x:
print(" -> it's bigger than the last!")
elif x_new < x:
print(" -> it's smaller than the last!")
else:
print(" -> no change! I'll exit now")
break
x = x_new
count += 1
if count > max_tries:
print("too many tries...")
break
%run number_game.py
# this magic command runs the given file. It's like typing python number_game.py in the command line
4 % 2
4 % 3
6.28 % 3.14
6.28 % 3.1
25 % 5
25 % 7
from IPython.core.display import HTML
def css_styling():
styles = open("../custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Configurations
Step3: Data encoding
Step4: Test
Step5: Load data set
Step6: Define Batch Generator
Step7: Check the generator
Step8: Define model class
Step9: Make an instance of the model and define the rest of the graph
Step10: Training
Step11: Test online inference
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import random
import os
vocab = (" $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"\\^_abcdefghijklmnopqrstuvwxyz{|}\n")
graph_path = r"./graphs"
test_text_path = os.path.normpath(r"../Dataset/arvix_abstracts.txt")
batch_size=50
model_param_path=os.path.normpath(r"./model_checkpoints")
class TextCodec:
def __init__(self, vocab):
self._vocab = vocab
self._dim = len(vocab) + 2
def encode(self, string, sess = None, start=True, stop=True):
Encode string.
Each character is represented as a N-dimension one hot vector.
N = len(self._vocab)+ 2
Note:
The first entry of the vector corresponds to unknown character.
The last entry of the vector corresponds to STOP signal of the sequence.
The entries in the middle corresponds to the index of the character.
The START signal is represented as a zero vector.
tensor = [vocab.find(ch) + 1 for ch in string]
if stop:
tensor.append(len(vocab)+1) # String + STOP
tensor = tf.one_hot(tensor,depth=len(vocab) + 2,on_value=1.0,off_value=0.0,axis=-1, dtype=tf.float32)
if start:
tensor=tf.concat([tf.zeros([1, len(vocab) + 2],dtype=tf.float32),tensor],axis=0) # String + START
if sess is None:
with tf.Session() as sess:
nparray=tensor.eval()
elif type(sess) == tf.Session:
nparray = tensor.eval(session=sess)
else:
raise TypeError('"sess" must be {}, got {}'.format(tf.Session, type(sess)))
return nparray
def decode(self, nparray, default="[UNKNOWN]",start="[START]",stop="[STOP]",strip=False):
text_list = []
indices=np.argmax(nparray, axis=1)
for v, ch_i in zip(nparray,indices):
if np.all(v==0):
text_list.append(start if not strip else "")
elif ch_i==0:
text_list.append(default)
elif ch_i==len(self._vocab)+1:
text_list.append(stop if not strip else "")
else:
text_list.append(vocab[ch_i-1])
return "".join(text_list)
@property
def dim(self):
return self._dim
test_codec=TextCodec(vocab)
test_text_encoded=test_codec.encode("Hello world!")
print("Encoded text looks like:\n{}".format(test_text_encoded))
test_text_decoded=test_codec.decode(nparray=test_text_encoded,strip=False)
print("Decoded text looks like:\n{}".format(test_text_decoded))
with open(test_text_path, "r") as f:
raw_text_list = "".join(f.readlines()).split("\n")
print("Loaded abstract from a total of {} theses.".format(len(raw_text_list)))
# See what we have loaded
sample_text_no = random.randint(0, len(raw_text_list)-1)
sample_text_raw = raw_text_list[sample_text_no]
print("A sample text in the data set:\n{}".format(sample_text_raw))
sample_text_encoded=test_codec.encode(sample_text_raw)
print("Encoded text:\n{}".format(sample_text_encoded))
print("Decoded text:\n{}".format(test_codec.decode(sample_text_encoded)))
encoded_data = test_codec.encode("\n".join(raw_text_list), start=False, stop=False)
def batch_generator(data, codec, batch_size, seq_length, reset_every):
if type(data) == str:
data=codec.encode(data, start=False, stop=False)
head = 0
reset_index = 0
batch = []
seq = []
increment = seq_length * reset_every - 1
extras = codec.encode("", start=True, stop=True)
v_start, v_stop = extras[0: 1, :], extras[1: 2, :]
while head < np.shape(data)[0] or len(batch) == batch_size:
if len(batch) == batch_size:
batch = np.array(batch)
for offset in range(reset_every):
yield (batch[:, offset * seq_length: (offset + 1) * seq_length, :],
batch[:, offset * seq_length + 1: (offset + 1) * seq_length + 1, :])
batch = []
else:
seq = np.concatenate([v_start, data[head: head + increment, :], v_stop], axis=0)
if np.shape(seq)[0] == (increment + 2):
batch.append(seq)
head += increment
seq_length = 100
reset_every = 2
batch_size = 2
batches = batch_generator(data=encoded_data,
codec=test_codec,
batch_size=batch_size,
seq_length=seq_length,
reset_every=reset_every)
for (x, y), i in zip(batches, range(reset_every * 2)):
print("Batch {}".format(i))
if (i % reset_every) == 0:
print("Reset")
for j in range(batch_size):
decoded_x, decoded_y = test_codec.decode(x[j], strip=False), test_codec.decode(y[j], strip=False)
print("Index of sub-sequence:\n{}\nSequence input:\n{}:\nSequence output:\n{}".format(j,
decoded_x,
decoded_y))
del seq_length, reset_every, batch_size, batches
class DRNN(tf.nn.rnn_cell.RNNCell):
def __init__(self, input_dim, hidden_dim, output_dim, num_hidden_layer, dtype=tf.float32):
super(tf.nn.rnn_cell.RNNCell, self).__init__(dtype=dtype)
assert type(input_dim) == int and input_dim > 0, "Invalid input dimension. "
self._input_dim = input_dim
assert type(num_hidden_layer) == int and num_hidden_layer > 0, "Invalid number of hidden layer. "
self._num_hidden_layer = num_hidden_layer
assert type(hidden_dim) == int and hidden_dim > 0, "Invalid dimension of hidden states. "
self._hidden_dim = hidden_dim
assert type(output_dim) == int and output_dim > 0, "Invalid dimension of output dimension. "
self._output_dim = output_dim
self._state_is_tuple = True
with tf.variable_scope("input_layer"):
self._W_xh = tf.get_variable("W_xh", shape=[self._input_dim, self._hidden_dim])
self._b_xh = tf.get_variable("b_xh", shape=[self._hidden_dim])
with tf.variable_scope("rnn_layers"):
self._cells = [tf.nn.rnn_cell.GRUCell(self._hidden_dim) for _ in range(num_hidden_layer)]
with tf.variable_scope("output_layer"):
self._W_ho_list = [tf.get_variable("W_h{}o".format(i), shape=[self._hidden_dim, self._output_dim])
for i in range(num_hidden_layer)]
self._b_ho = tf.get_variable("b_ho", shape=[self._output_dim])
@property
def output_size(self):
return self._output_dim
@property
def state_size(self):
return (self._hidden_dim,) * self._num_hidden_layer
def zero_state(self, batch_size, dtype):
if self._state_is_tuple:
return tuple(cell.zero_state(batch_size, dtype)for cell in self._cells)
else:
raise NotImplementedError("Not implemented yet.")
def __call__(self, _input, state, scope=None):
assert type(state) == tuple and len(state) == self._num_hidden_layer, "state must be a tuple of size {}".format(
self._num_hidden_layer)
hidden_layer_input = tf.matmul(_input, self._W_xh) + self._b_xh
prev_output = hidden_layer_input
final_state = []
output = None
for hidden_layer_index, hidden_cell in enumerate(self._cells):
with tf.variable_scope("cell_{}".format(hidden_layer_index)):
new_output, new_state = hidden_cell(prev_output, state[hidden_layer_index])
prev_output = new_output + hidden_layer_input # Should be included in variable scope of this layer or?
final_state.append(new_state)
_W_ho = self._W_ho_list[hidden_layer_index]
if output is None:
output = tf.matmul(new_output, _W_ho)
else:
output = output + tf.matmul(new_output, _W_ho)
output = tf.tanh(output + self._b_ho)
# output = tf.nn.relu(output)
final_state = tuple(final_state)
return output, final_state
def inspect_weights(self, sess):
val = self._W_xh.eval(sess)
print("W_xh:\n{}\nF-norm:\n{}".format(val, norm(val)))
val = self._b_xh.eval(sess)
print("b_xh:\n{}\nF-norm:\n{}".format(val, norm(val)))
for hidden_layer_index in range(self._num_hidden_layer):
val = self._W_ho_list[hidden_layer_index].eval(sess)
print("W_h{}o:\n{}\nF-norm:\n{}".format(hidden_layer_index, val, norm(val)))
val = self._b_ho.eval(sess)
print("b_ho:\n{}\nF-norm:\n{}".format(val, norm(val)))
tf.reset_default_graph()
input_dim = output_dim = test_codec.dim
hidden_dim = 700
num_hidden_layer = 3
rnn_cell = DRNN(input_dim=input_dim, output_dim=output_dim, num_hidden_layer=num_hidden_layer, hidden_dim=hidden_dim)
batch_size = 50
init_state = tuple(tf.placeholder_with_default(input=tensor,
shape=[None, hidden_dim]) for tensor in rnn_cell.zero_state(
batch_size=batch_size, dtype=tf.float32))
seq_input = tf.placeholder(name="batch_input", shape=[None, None, input_dim], dtype=tf.float32)
target_seq_output = tf.placeholder(name="target_batch_output", shape=[None, None, output_dim], dtype=tf.float32)
seq_output, final_states = tf.nn.dynamic_rnn(cell=rnn_cell,inputs=seq_input,
initial_state=init_state, dtype=tf.float32)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=target_seq_output, logits=seq_output))
summary_op = tf.summary.scalar(tensor=loss, name="loss")
global_step = tf.get_variable(name="global_step", initializer=0, trainable=False)
lr = tf.get_variable(name="learning_rate", initializer=1.0, trainable=False)
n_epoch=50
learning_rate=1e-3
train_op=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss, global_step=global_step)
print_every = 50
save_every = 1000
partition_size = 100
logdir = os.path.normpath("./graphs")
seq_length = 100
reset_every = 100
visualize_every = 100
learning_rate_decay = 0.9
# batch_size has been specified when configuring the the tensors for initial states
keep_checkpoint_every_n_hours = 0.5
model_checkpoint_dir = os.path.normpath("./model_checkpoints")
model_checkpoint_path = os.path.join(model_checkpoint_dir, "DRNN")
saver = tf.train.Saver(keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
batches = list(batch_generator(data=encoded_data,
codec=test_codec,
batch_size=batch_size,
seq_length=seq_length,
reset_every=reset_every))
with tf.Session() as sess, tf.summary.FileWriter(logdir=logdir) as writer:
sess.run(tf.global_variables_initializer())
feed_dict = dict()
states = None
sess.run(tf.assign(lr, learning_rate))
zero_states = sess.run(rnn_cell.zero_state(batch_size=1, dtype=tf.float32))
for epoch in range(n_epoch):
assert lr.eval(sess) > 0, "learning_rate must be positive."
for i, (x, y) in enumerate(batches):
feed_dict = {seq_input: x, target_seq_output: y}
if (i % reset_every) != 0 and states is not None:
for j in range(len(init_state)):
feed_dict[init_state[j]] = states[j]
_, summary, states, step = sess.run(fetches=[train_op, summary_op, final_states, global_step],
feed_dict=feed_dict)
writer.add_summary(summary=summary, global_step=step)
if ((step + 1) % save_every) == 0:
saver.save(sess=sess, save_path=model_checkpoint_path, global_step=step)
if (step % visualize_every) == 0:
feed_dict = {seq_input: x[:1, : , :]}
for key, value in zip(init_state, zero_states):
feed_dict[key] = value
sample_output = sess.run(seq_output, feed_dict=feed_dict)
print(test_codec.decode(sample_output[0], strip=False))
sess.run(tf.assign(lr, lr.eval(sess) * learning_rate_decay))
def online_inference(cell, prime, sess, codec,
input_tensor,
init_state_tensor_tuple,
output_tensor,
final_state_tensor_tuple,
length):
final_output = [prime]
zero_states = sess.run(cell.zero_state(batch_size=1, dtype=tf.float32))
feed_dict = {input_tensor: codec.encode(prime, start=True, stop=False)[np.newaxis, :, :]} # prime
for init_state_tensor, init_state_value in zip(init_state_tensor_tuple,
zero_states):
feed_dict[init_state_tensor] = init_state_value
output, final_states = sess.run([output_tensor, final_state_tensor_tuple], feed_dict=feed_dict)
final_output.append(codec.decode(output[0, -1:, :], strip=False))
for _ in range(length - len(prime)):
feed_dict = {input_tensor: codec.encode(final_output[-1], start=False, stop=False)[np.newaxis, :, :]}
for init_state_tensor, init_state_value in zip(init_state_tensor_tuple, final_states):
feed_dict[init_state_tensor] = init_state_value
output, final_states = sess.run([output_tensor, final_state_tensor_tuple], feed_dict=feed_dict)
final_output.append(codec.decode(output[0], strip=False))
return "".join(final_output)
saver = tf.train.Saver()
with tf.Session() as sess:
ckpt = saver.last_checkpoints
print(ckpt)
print(online_inference(rnn_cell, "We propose",
sess, test_codec, seq_input, init_state, seq_output, final_states, 200))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
series = pd.Series([np.array([1,2,3,4]), np.array([5,6,7,8]), np.array([9,10,11,12])], index=['file1', 'file2', 'file3'])
def g(s):
return pd.DataFrame.from_records(s.values,index=s.index).reset_index().rename(columns={'index': 'name'})
df = g(series.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's the image, admittedly its small
Step2: Here's the new image
Step3: To write a binary file you first need to convert binary data to uint8 datatype and cast to that data type as well.
Step4: Writing direct to zipfiles
|
<ASSISTANT_TASK:>
Python Code:
#For data manipulations
%pylab inline
from IPython.display import set_matplotlib_formats
from io import BytesIO
import numpy as np
#to compare to scipy's builtin conversions
from scipy.misc import imsave, toimage
#import pillow
from PIL import Image
set_cmap('Greys')
checkerboard = np.mod(np.arange(49).reshape(7,7),2).astype('bool')
matshow(checkerboard)
#whats the dtype?
checkerboard.dtype
#create a PIL image with binary mode
cb_img = Image.fromarray(checkerboard,mode='1')
#write the image
cb_img.save('cb_img.bmp')
#read in the image
cb_img_read = Image.open('cb_img.bmp')
#convert to array
checkerboard_read = np.asarray(cb_img_read)
#display
matshow(checkerboard_read)
cb_img2 = Image.fromarray((checkerboard*255).astype('uint8'),mode='L')
#write the image
cb_img2.convert('1').save('cb_img2.bmp')
a = array([True, False])
(a*255).dtype
#read in the image
cb_img2_read = Image.open('cb_img2.bmp')
#convert to array
checkerboard2_read = np.asarray(cb_img2_read)
#display
matshow(checkerboard2_read)
checkerboard2_read.dtype
cb_img2.save()
import zipfile as zp
# this is admittedly not the most transferable
with zp.ZipFile("junk.zip","w") as zf:
# generate an output byte buffer to save the data in, instead of a file
output = BytesIO()
# convert and save the image to the byte buffer with the correct format
cb_img2.convert('1').save(output, "BMP")
    # write the byte buffer to the zipfile directly
zf.writestr("junk.bmp", output.getvalue())
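# A quick round-trip sanity check (assumes the "junk.zip" archive written just above): read the
# BMP back out of the zipfile and confirm the checkerboard survives the trip.
with zp.ZipFile("junk.zip", "r") as zf:
    cb_from_zip = np.asarray(Image.open(BytesIO(zf.read("junk.bmp"))))
matshow(cb_from_zip)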
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Take for example an ideal low-pass filter, which would give a magnitude
Step2: This filter hypothetically achieves zero ripple in the frequency domain,
Step3: This is not so good! Making the filter 10 times longer (1 s) gets us a
Step4: Let's make the stop-band tighter still with a longer filter (10 s),
Step5: Now we have very sharp frequency suppression, but our filter rings for the
Step6: Accepting a shallower roll-off of the filter in the frequency domain makes
Step7: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
Step8: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
Step9: If we want a filter that is only 0.1 seconds long, we should probably use
Step10: So far, we have only discussed non-causal filtering, which means that each
Step11: Applying FIR filters
Step12: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
Step13: Filter it with a different design method fir_design="firwin2", and also
Step14: Let's also filter with the MNE-Python 0.13 default, which is a
Step15: Let's also filter it with the MNE-C default, which is a long-duration
Step16: And now an example of a minimum-phase filter
Step18: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
Step19: IIR filters
Step20: The falloff of this filter is not very steep.
Step21: There are other types of IIR filters that we can use. For a complete list,
Step22: If we can live with even more ripple, we can get it slightly steeper,
Step23: Applying IIR filters
Step24: Some pitfalls of filtering
Step25: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,
Step26: In response, Maess et al. (2016) [11]_ note that these simulations do not
Step27: Both groups seem to acknowledge that the choices of filtering cutoffs, and
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import signal, fftpack
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
n = int(round(0.1 * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
n = int(round(1. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
n = int(round(10. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',
flim=flim, compensate=True)
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',
flim=flim, compensate=True)
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',
flim=flim, compensate=True)
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 50 Hz transition (0.2 s)',
flim=flim, compensate=True)
h_min = mne.fixes.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)
# this is the linear->zero phase, causal-to-non-causal conversion / shift
x_v16 = x_v16[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,
compensate=True)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2', verbose=True)
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,
compensate=True)
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2', verbose=True)
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
# the effective h is one that is applied to the time-reversed version of itself
h_eff = np.convolve(h, h[::-1])
plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,
compensate=True)
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin',
verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
    """Plot a signal."""
t = np.arange(len(x)) / sfreq
axes[0].plot(t, x + offset)
axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
axes[1].set(xlim=flim)
yscale = 30
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
yticks = -np.arange(len(yticklabels)) / yscale
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-len(yticks) / yscale, 1. / yscale],
yticks=yticks, yticklabels=yticklabels)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,
compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,
compensate=True)
x_steep = signal.sosfiltfilt(filt['sos'], x)
iir_params.update(ftype='cheby1',
rp=1., # dB of acceptable pass-band ripple
)
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim,
compensate=True)
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'LP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
def baseline_plot(x):
all_axes = plt.subplots(3, 2)[1]
for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axes):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
data=None, # data is only used for sanity checking, not strictly needed
sfreq=1000., # sfreq of your data in Hz
l_freq=None,
h_freq=40., # assuming a lowpass of 40 Hz
method='fir',
fir_window='hamming',
fir_design='firwin',
verbose=True)
# See the printed log for the transition bandwidth and filter length.
# Alternatively, get the filter length through:
filter_length = fir_coefs.shape[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have provided three images containing stars, taken with 3 different CCDs, in "stars_X.npz" where X = 0, 1, 2.
Step2: Just to make sure that we're all on the same page, here's code to display the image and stars (imshow is a simple utility imported from rhlUtils -- feel free to use plt.imshow if you'd rather)
Step3: Time for you to do some work. Write some code to estimate a PSF model by simply averaging all the objects, giving each equal weight. You should add the option to use only some subset of the stars (e.g. the faintest 25%).
Step4: OK, now use your PSF model to create an image of the residuals created by subtracting the scaled PSF from the stars.
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
from rhlUtils import BBox, CCD, Image, imshow
%matplotlib notebook
%config InlineBackend.figure_format = 'retina'
#%matplotlib qt
#%gui qt
dataDir = # complete
mag0 = 33 # Magnitude of an object with 1 detected photon
data = np.load(os.path.join(dataDir, "stars_0.npz"))
image, calibs = data["image"], data["calibs"]
image0 = image.copy() # Keep a copy
plt.figure(1)
plt.clf()
imshow(image, vmin=0, vmax=1000)
plt.title("Data")
plt.plot(calibs[:, 0], calibs[:, 1], '+') # calibs[:, 2] contains the object's magnitude (not flux)
plt.show()
image = image0.copy()
#...
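# One possible sketch of the equal-weight PSF estimate (assumptions: stars sit away from the
# image edges, a 15x15 stamp is large enough, and the calibs columns are x, y, magnitude):
half = 7                                      # assumed stamp half-width
stamps = []
for x, y, mag in calibs:
    xc, yc = int(round(x)), int(round(y))
    stamp = image[yc - half:yc + half + 1, xc - half:xc + half + 1].astype(float)
    flux = 10**(0.4*(mag0 - mag))             # photons expected from the catalogue magnitude
    stamps.append(stamp / flux)               # normalise each star to unit flux
# To use only a subset (e.g. the faintest 25%), keep stamps whose mag >= np.percentile(calibs[:, 2], 75).
psfIm = np.mean(stamps, axis=0)               # equal-weight average PSF model (unit total flux)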
plt.figure(2)
plt.clf()
imshow(psfIm, vmin=0, vmax=1.1)
plt.title("PSF model")
plt.show();
image = image0.copy()
# ...
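# One possible residual computation, reusing the stamp assumptions and the unit-flux psfIm above:
image = image.astype(float)
half = 7
for x, y, mag in calibs:
    xc, yc = int(round(x)), int(round(y))
    flux = 10**(0.4*(mag0 - mag))
    # subtract the flux-scaled PSF model at each catalogued star position
    image[yc - half:yc + half + 1, xc - half:xc + half + 1] -= flux * psfIm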
plt.figure(3)
plt.clf()
imshow(image, vmin=image.min(), vmax=image.max()) # , vmin=0, vmax=100)
plt.title("Residuals")
plt.show();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the problem and the loop
Step2: Now we define the domain of the function to optimize.
Step3: And prepare the optimization object to run the loop.
Step4: Now, we set the number of iterations to run to 10.
Step5: Running the optimization by setting a context variable
Step6: We can now inspect the collected points.
|
<ASSISTANT_TASK:>
Python Code:
from emukit.test_functions import branin_function
from emukit.core import ParameterSpace, ContinuousParameter, DiscreteParameter
from emukit.core.initial_designs import RandomDesign
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core.loop import FixedIterationsStoppingCondition
f, parameter_space = branin_function()
design = RandomDesign(parameter_space) # Collect random points
X = design.get_samples(10)
Y = f(X)
model_gpy = GPRegression(X,Y) # Train and wrap the model in Emukit
model_emukit = GPyModelWrapper(model_gpy)
expected_improvement = ExpectedImprovement(model = model_emukit)
bayesopt_loop = BayesianOptimizationLoop(model = model_emukit,
space = parameter_space,
acquisition = expected_improvement,
batch_size = 1)
max_iter = 10
bayesopt_loop.run_loop(f, max_iter, context={'x1':0.3}) # we set x1 as the context variable
bayesopt_loop.run_loop(f, max_iter, context={'x2':0.1}) # we set x2 as the context variable
bayesopt_loop.run_loop(f, max_iter) # no context
bayesopt_loop.loop_state.X
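# The collected objective values live in loop_state.Y; a minimal way to pull out the best point
# found so far (assuming Emukit's default minimisation convention):
import numpy as np
best = np.argmin(bayesopt_loop.loop_state.Y)
print(bayesopt_loop.loop_state.X[best], bayesopt_loop.loop_state.Y[best])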
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Probabilistic graphical models -- Bayesian networks
Step2: Q
Step3: Q
Step4: Q
Step5: (a)
Step6: First, we provide the code that does this using the "standard approach"
Step7: Your task is to reimplement the given code using a pipeline. Study the pipeline.Pipeline class.
Step8: (b)
Step9: The need for metrics other than accuracy becomes apparent when using some baseline models. Perhaps the simplest model of this kind is one that assigns every example to the majority class (most frequent class; MFC) or labels the test examples at random. Study the dummy.DummyClassifier class and use it to build these baseline classifiers. You will again need a pipeline to obtain the vector representation of the input examples, even though these baseline classifiers only use the labels when predicting.
Step10: Q
Step11: Q
Step12: Q
Step13: Q
Step14: Use the built-in function scipy.stats.ttest_rel to run a paired t-test and check which of these models is better when 5, 10 and 50 folds are used.
Step15: Q
Step16: Use the dataset Xp given above. Try values of the hyperparameter $K$ from $[0,1,\ldots,15]$. You do not need to touch any model hyperparameters other than $K$. Plot the curve of $J$ as a function of the number of clusters $K$. Determine the value of the hyperparameter $K$ using the elbow method.
Step17: Q
Step18: Q
Step19: Fit a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
Step20: Q
Step21: Again, fit a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
Step22: Q
Step23: Again, fit a k-means model (this time ideally assuming $K=3$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
Step24: Q
Step25: (g)
Step26: Q
|
<ASSISTANT_TASK:>
Python Code:
# Load the basic libraries...
import sklearn
import codecs
import mlutils
import matplotlib.pyplot as plt
import pgmpy as pgm
%pylab inline
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete.CPD import TabularCPD
from pgmpy.inference import VariableElimination
model = BayesianModel([('C', 'S'), ('C', 'R'), ('S', 'W'), ('R', 'W')])
cpd_c = TabularCPD(variable='C', variable_card=2, values=[[0.5, 0.5]])
cpd_s = TabularCPD(variable='S', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.9, 0.5],
[0.1, 0.5]])
cpd_r = TabularCPD(variable='R', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.2, 0.8],
[0.8, 0.2]])
cpd_w = TabularCPD(variable='W', evidence=['S', 'R'], evidence_card=[2,2],
variable_card=2,
values=[[1, 0.1, 0.1, 0.01],
[0, 0.9, 0.9, 0.99]])
model.add_cpds(cpd_c, cpd_r, cpd_s, cpd_w)
model.check_model()
infer = VariableElimination(model)
print(infer.query(['W'])['W'])
print(infer.query(['S'], evidence={'W': 1})['S'])
print(infer.query(['R'], evidence={'W': 1})['R'])
print(infer.query(['C'], evidence={'S': 1, 'R': 1})['C'])
print(infer.query(['C'])['C'])
print(infer.query(['S'], evidence={'W': 1, 'R': 1})['S'])
print(infer.query(['S'], evidence={'W': 1, 'R': 0})['S'])
print(infer.query(['R'], evidence={'W': 1, 'S': 1})['R'])
print(infer.query(['R'], evidence={'W': 1, 'S': 0})['R'])
model.is_active_trail('C','W')
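# A hedged extra check: the same d-separation query with evidence, assuming this pgmpy version's
# is_active_trail accepts an observed= argument (newer releases expose this as is_dconnected).
model.is_active_trail('C', 'W', observed=['S', 'R'])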
from sklearn.model_selection import train_test_split
spam_X, spam_y = mlutils.load_SMS_dataset('./spam.csv')
spam_X_train, spam_X_test, spam_y_train, spam_y_test = \
train_test_split(spam_X, spam_y, train_size=0.7, test_size=0.3, random_state=69)
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# TF-IDF
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
spam_X_feat_train = vectorizer.fit_transform(spam_X_train)
# Dimensionality reduction
reducer = TruncatedSVD(n_components=300, random_state=69)
spam_X_feat_train = reducer.fit_transform(spam_X_feat_train)
# Normalization
normalizer = Normalizer()
spam_X_feat_train = normalizer.fit_transform(spam_X_feat_train)
# Classifier (logistic regression)
clf = LogisticRegression(solver='lbfgs')
clf.fit(spam_X_feat_train, spam_y_train)
# And now repeat all of this for the test data.
spam_X_feat_test = vectorizer.transform(spam_X_test)
spam_X_feat_test = reducer.transform(spam_X_feat_test)
spam_X_feat_test = normalizer.transform(spam_X_feat_test)
print(accuracy_score(spam_y_test, clf.predict(spam_X_feat_test)))
x_test123 = ["You were selected for a green card, apply here for only 50 USD!!!",
"Hey, what are you doing later? Want to grab a cup of coffee?"]
x_test = vectorizer.transform(x_test123)
x_test = reducer.transform(x_test)
x_test = normalizer.transform(x_test)
print(clf.predict(x_test))
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
pipeline.fit(spam_X_train, spam_y_train)
print(accuracy_score(spam_y_test, pipeline.predict(spam_X_test)))
print(pipeline.predict(x_test123))
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(y_pred=pipeline.predict(spam_X_test), y_true=spam_y_test))
from sklearn.dummy import DummyClassifier
rando = DummyClassifier(strategy='uniform')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', rando)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
mfc = DummyClassifier(strategy='most_frequent')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', mfc)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
from sklearn.model_selection import cross_val_score, KFold
kf = KFold(n_splits=5)
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
for train_index, test_index in kf.split(spam_X):
X_train, X_test = spam_X[train_index], spam_X[test_index]
y_train, y_test = spam_y[train_index], spam_y[test_index]
pipeline.fit(X_train, y_train)
print()
print(cross_val_score(estimator=pipeline, X=X_test, y=y_test, cv=5))
print(accuracy_score(spam_y_test, pipeline.predict(spam_X_test)))
from sklearn.model_selection import GridSearchCV
param_grid = {
'vectorizer__max_features': [500, 1000],
'reducer__n_components': [100, 200, 300]
}
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD()
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
f = GridSearchCV(pipeline, param_grid, cv=3).fit(spam_X_train, spam_y_train)
print(f.best_estimator_.fit(spam_X_train, spam_y_train).predict(spam_X_test))
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.metrics import zero_one_loss
def nested_kfold_cv(clf, param_grid, X, y, k1=10, k2=3):
err = []
kfold = KFold(n_splits=k1, shuffle=True, random_state=42)
for index_train, index_test in kfold.split(X):
X_train, y_train, X_test, y_test = X[index_train], y[index_train], X[index_test], y[index_test]
f = GridSearchCV(clf, param_grid, cv=k2).fit(X_train, y_train)
h = f.best_estimator_.fit(X_train, y_train).predict(X_test)
err.append(zero_one_loss(y_test, h))
return err
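# A sketch of how the helper above might be called on the spam pipeline (pipeline, param_grid,
# spam_X and spam_y come from the earlier cells; small k1/k2 just to keep the runtime manageable):
errors = nested_kfold_cv(pipeline, param_grid, spam_X, spam_y, k1=5, k2=3)
print(np.mean(errors))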
np.random.seed(1337)
C1_scores_5folds = np.random.normal(78, 4, 5)
C2_scores_5folds = np.random.normal(81, 2, 5)
C1_scores_10folds = np.random.normal(78, 4, 10)
C2_scores_10folds = np.random.normal(81, 2, 10)
C1_scores_50folds = np.random.normal(78, 4, 50)
C2_scores_50folds = np.random.normal(81, 2, 50)
from scipy.stats import ttest_rel
print(ttest_rel(C1_scores_5folds, C2_scores_5folds).pvalue)
print(ttest_rel(C1_scores_10folds, C2_scores_10folds).pvalue)
print(ttest_rel(C1_scores_50folds, C2_scores_50folds).pvalue)
from sklearn.datasets import make_blobs
Xp, yp = make_blobs(n_samples=300, n_features=2, centers=[[0, 0], [3, 2.5], [0, 4]],
cluster_std=[0.45, 0.3, 0.45], random_state=96)
plt.scatter(Xp[:,0], Xp[:,1], c=yp, cmap=plt.get_cmap("cool"), s=20)
Ks = range(1,16)
from sklearn.cluster import KMeans
Js = []
for K in Ks:
J = KMeans(n_clusters=K).fit(Xp).inertia_
Js.append(J)
plot(Ks, Js)
from mlutils import plot_silhouette
for K in [2,3,5]:
plot_silhouette(K, Xp)
from sklearn.datasets import make_blobs
X1, y1 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0], [1.3, 1.3]], cluster_std=[0.15, 0.5], random_state=96)
plt.scatter(X1[:,0], X1[:,1], c=y1, cmap=plt.get_cmap("cool"), s=20)
k_means = KMeans(n_clusters=2).fit(X1)
plt.scatter(X1[:,0], X1[:,1], c=k_means.predict(X1), cmap=plt.get_cmap("cool"), s=20);
from sklearn.datasets import make_circles
X2, y2 = make_circles(n_samples=1000, noise=0.15, factor=0.05, random_state=96)
plt.scatter(X2[:,0], X2[:,1], c=y2, cmap=plt.get_cmap("cool"), s=20)
k_means = KMeans(n_clusters=2).fit(X2)
plt.scatter(X2[:,0], X2[:,1], c=k_means.predict(X2), cmap=plt.get_cmap("cool"), s=20);
X31, y31 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0]], cluster_std=[0.2], random_state=69)
X32, y32 = make_blobs(n_samples=50, n_features=2, centers=[[0.7, 0.5]], cluster_std=[0.15], random_state=69)
X33, y33 = make_blobs(n_samples=600, n_features=2, centers=[[0.8, -0.4]], cluster_std=[0.2], random_state=69)
plt.scatter(X31[:,0], X31[:,1], c="#00FFFF", s=20)
plt.scatter(X32[:,0], X32[:,1], c="#F400F4", s=20)
plt.scatter(X33[:,0], X33[:,1], c="#8975FF", s=20)
# Just join all the groups in a single X.
X3 = np.vstack([X31, X32, X33])
y3 = np.hstack([y31, y32, y33])
k_means = KMeans(n_clusters=3).fit(X3)
plt.scatter(X3[:,0], X3[:,1], c=k_means.predict(X3), cmap=plt.get_cmap("cool"), s=20);
from sklearn.mixture import GaussianMixture
gauss_X1 = GaussianMixture(n_components = 2).fit(X1)
gauss_X2 = GaussianMixture(n_components = 2).fit(X2)
gauss_X3 = GaussianMixture(n_components = 3).fit(X3)
h_X1 = gauss_X1.predict(X1)
h_X2 = gauss_X2.predict(X2)
h_X3 = gauss_X3.predict(X3)
figure(1)
plt.scatter(X1[:,0], X1[:,1], c=h_X1, cmap=plt.get_cmap("cool"), s=20);
figure(2)
plt.scatter(X2[:,0], X2[:,1], c=h_X2, cmap=plt.get_cmap("cool"), s=20);
figure(3)
plt.scatter(X3[:,0], X3[:,1], c=h_X3, cmap=plt.get_cmap("cool"), s=20);
import itertools as it
from scipy.special import comb
def rand_index_score(y_gold, y_predict):
N = len(y_gold)
grupa1 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 0])
grupa2 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 1])
grupa3 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 2])
n = [[len([y for y in g if y == i])
for i in [0,1,2]]
for g in [grupa1, grupa2, grupa3]]
a = sum([(comb(nnn, 2)) for nn in n for nnn in nn])
b = n[0][0] * (n[1][1] + n[1][2] + n[2][1] + n[2][2]) + \
n[0][1] * (n[1][0] + n[1][2] + n[2][0] + n[2][2]) + \
n[0][2] * (n[1][0] + n[1][1] + n[2][0] + n[2][1]) + \
n[1][0] * (n[2][1] + n[2][2]) + \
n[1][1] * (n[2][0] + n[2][2]) + \
n[1][2] * (n[2][0] + n[2][1])
return (a+b) / comb(N,2)
y_pred = KMeans(n_clusters=3).fit(Xp).predict(Xp)
rand_index_score(yp, y_pred)
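# Sanity check against scikit-learn's unadjusted Rand index (sklearn.metrics.rand_score,
# available in scikit-learn >= 0.24); the two values should match.
from sklearn.metrics import rand_score
print(rand_score(yp, y_pred))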
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us make sure that the artifact store exists
Step2: Creating the KFP CLI builder for Vertex AI
Step3: Build the image and push it to your project's Container Registry.
Step4: Understanding the Cloud Build workflow.
|
<ASSISTANT_TASK:>
Python Code:
PROJECT_ID = !(gcloud config get-value project)
PROJECT_ID = PROJECT_ID[0]
REGION = "us-central1"
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
!cat kfp-cli_vertex/Dockerfile
KFP_CLI_IMAGE_NAME = "kfp-cli-vertex"
KFP_CLI_IMAGE_URI = f"gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest"
KFP_CLI_IMAGE_URI
!gcloud builds submit --timeout 15m --tag {KFP_CLI_IMAGE_URI} kfp-cli_vertex
SUBSTITUTIONS = f"_REGION={REGION},_PIPELINE_FOLDER=./"
SUBSTITUTIONS
!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now I will create an object that knows how to deal with Martian times and illuminations.
Step2: I saved some predefined places and their locations into the code, so that I don't need to remember the coordinates all the time. So let's justify the variable name by actually setting it on top of Inca City
Step3: By default, when I don't provide a time, the time is set to the current time. In the UTC timezone, that is
Step4: To double-check how close we are to spring time in the southern hemisphere on Mars, I need to look at a value called L_s, which is the solar longitude.
Step5: So, we are pretty close to spring then. But do we already have sunlight in Inca? We should remember that we are in polar areas, where we have darkness for half a year, just like on Earth. Let's have a look what is the local time in Inca
Step6: Right, that's still in the night, so that most likely means that the sun is below the horizon, right?
Step7: Solar angles are measured from the local normal direction, with the sun directly overhead being defined as 0, which means the horizon is at 90 degrees. Hence, this value of 96 means the sun is below the horizon. But it is local night, so we would expect that!
Step8: Oh yes! This is just 2 degrees above the horizon, the sun is lurking over it just a tiny bit. But all you humans that work so much in helping us know what this means, right? Where there is sun, there is energy. And this energy can be used to sublime CO2 gas and create the wonderful fans we are studying.
Step9: Now, I will create a loop with 100 elements, and check and write down the time every 10 minutes (= 600 seconds). I save the results in 2 new arrays to make it easier to plot things over time.
Step10: I'm now importing the pandas library, an amazing toolbox to deal with time-series data. Especially, the plots automatically get nicely formatted time-axes.
Step11: I need to switch this notebook to show plots inside this notebook and not outside as an extra window, which is my default
Step12: Here we see how the sun's angle is developing over time. As expected we see a minimum (i.e. highest sun over horizon) right around noon.
|
<ASSISTANT_TASK:>
Python Code:
from pymars import kmaspice
inca = kmaspice.MarsSpicer()
inca.goto('inca')
inca.time.isoformat()
round(inca.l_s, 1)
inca.local_soltime
round(inca.illum_angles.dsolar,1)
inca.advance_time_by(7*3600)
round(inca.illum_angles.dsolar)
inca.advance_time_by(-7*3600)
times = []
angles = []
for i in range(100):
inca.advance_time_by(600)
times.append(inca.local_soltime[3])
angles.append(inca.illum_angles.dsolar)
import pandas as pd
data = pd.Series(angles, index=times)
%pylab inline
data.plot()
times = []
angles = []
for i in range(2000):
inca.advance_time_by(-600)
times.append(inca.time)
angles.append(inca.illum_angles.dsolar)
pd.Series(angles,index=times).plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: 1a. Make an array of Fourier frequencies
Step7: Now the T&K algorithm. I've transcribed the 'recipe' section of the T&K95 paper, which you will convert to lines of code.
Step8: 2. More realistic simulation with T&K
Step10: 2f. Re-do 2b through the plot above but slightly changing the power spectrum shape in each segment.
|
<ASSISTANT_TASK:>
Python Code:
n_bins = 8192 ## number of total frequency bins in a FT segment; same as number of time bins in the light curve
dt = 1./16. # time resolution of the output light curve
df = 1. / dt / n_bins
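# Editor's sketch for step 1a (an illustration, not necessarily the notebook's intended
# solution): the positive Fourier frequencies of a segment with n_bins time bins of
# width dt are the integer multiples of df, up to the Nyquist frequency 1/(2*dt).
import numpy as np  # assumed to already be imported in an earlier (elided) cell
freqs = np.arange(1, n_bins // 2 + 1) * df  # n_bins/2 positive frequencies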
def lorentzian(v, v_0, gamma):
Gives a Lorentzian centered on v_0 with a FWHM of gamma
numerator = gamma / (np.pi * 2.0)
denominator = (v - v_0) ** 2 + (1.0/2.0 * gamma) ** 2
L = numerator / denominator
return L
def powerlaw(v, beta):
Gives a powerlaw of (1/v)^-beta
pl = np.zeros(len(v))
pl[1:] = v[1:] ** (-beta)
pl[0] = np.inf
return pl
def gaussian(v, mean, std_dev):
Gives a Gaussian with a mean of mean and a standard deviation of std_dev
FWHM = 2 * np.sqrt(2 * np.log(2))*std_dev
exp_numerator = -(v - mean)**2
exp_denominator = 2 * std_dev**2
G = np.exp(exp_numerator / exp_denominator)
return G
def powerlaw_expdecay(v, beta, alpha):
Gives a powerlaw of (1/v)^-beta with an exponential decay e^{-alpha*v}
pl_exp = np.where(v != 0, (1.0 / v) ** beta * np.exp(-alpha * v), np.inf)
return pl_exp
def broken_powerlaw(v, v_b, beta_1, beta_2):
Gives two powerlaws, (1/v)^-beta_1 and (1/v)^-beta_2
that cross over at break frequency v_b.
c = v_b ** (-beta_1 + beta_2) ## scale factor so that they're equal at the break frequency
pl_1 = v[np.where(v <= v_b)] ** (-beta_1)
pl_2 = c * v[np.where(v > v_b)] ** (-beta_2)
pl = np.append(pl_1, pl_2)
return pl
fig, ax = plt.subplots(1,1, figsize=(8,5))
ax.plot(rb_freq, rb_pow, linewidth=2.0)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel(r'Frequency (Hz)', fontproperties=font_prop)
ax.tick_params(axis='x', labelsize=16, bottom=True, top=True,
labelbottom=True, labeltop=False)
ax.tick_params(axis='y', labelsize=16, left=True, right=True,
labelleft=True, labelright=False)
plt.show()
def lorentz_q(v, v_peak, q, rms):
Form of the Lorentzian function defined in terms of
peak frequency v_peak and quality factor q
q = v_peak / fwhm
with the integrated rms of the QPO as the normalizing factor.
e.g. see Pottschmidt et al. 2003, A&A, 407, 1039 for more info
f_res = v_peak / np.sqrt(1.0+(1.0/(4.0*q**2)))
r = rms / np.sqrt(0.5-np.arctan(-2.0*q)/np.pi)
lorentz = ((1/np.pi)*2*r**2*q*f_res) / (f_res**2+(4*q**2*(v-f_res)**2))
return lorentz
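# Editor's sketch of the Timmer & Koenig (1995) recipe (an illustration only, with an
# arbitrary example power spectrum; the exercise solution is left to the reader).
# It assumes `freqs` holds the positive Fourier frequencies constructed above.
power_shape = broken_powerlaw(freqs, 0.5, 1.0, 2.0)  # example break frequency and slopes
# Draw two Gaussian random numbers per frequency, scaled by sqrt(S(nu)/2), for the real
# and imaginary parts of the Fourier transform; the Nyquist-frequency component is real.
re_part = np.random.standard_normal(len(freqs)) * np.sqrt(power_shape / 2.0)
im_part = np.random.standard_normal(len(freqs)) * np.sqrt(power_shape / 2.0)
im_part[-1] = 0.0
# Prepend a zero mean (zero-frequency) term and inverse-FFT to get a real light curve.
ft = np.append(0.0, re_part + 1j * im_part)
lightcurve = np.fft.irfft(ft, n=n_bins)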
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Plot chan1 as in resample2.ipynb, and add vancouver and portland points with a line between them
Step2: What is the distance between Vancouver and Portland?
|
<ASSISTANT_TASK:>
Python Code:
import h5py
from a301utils.a301_readfile import download
from mpl_toolkits.basemap import Basemap
from matplotlib import pyplot as plt
import json
import numpy as np
rad_file='MYD021KM.A2016217.1915.006.2016218155919.h5'
geom_file='MYD03.A2016217.1915.006.2016218154759.h5'
download(rad_file)
data_name='MYD021KM.A2016224.2100.006_new.reproject.h5'
download(data_name)
with h5py.File(data_name,'r') as h5_file:
basemap_args=json.loads(h5_file.attrs['basemap_args'])
chan1=h5_file['channels']['1'][...]
print(basemap_args)
%matplotlib inline
from matplotlib import cm
from matplotlib.colors import Normalize
cmap=cm.autumn #see http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
cmap.set_over('w')
cmap.set_under('b',alpha=0.2)
cmap.set_bad('0.75') #75% grey
plt.close('all')
fig,ax = plt.subplots(1,1,figsize=(14,14))
#
# set up the Basemap object
#
basemap_args['ax']=ax
basemap_args['resolution']='c'
bmap = Basemap(**basemap_args)
num_meridians=180
num_parallels = 90
vmin=None; vmax=None
col = bmap.imshow(chan1, origin='upper',cmap=cmap, vmin=0, vmax=0.4)
lon_sep, lat_sep = 5,5
parallels = np.arange(-90, 90, lat_sep)
meridians = np.arange(0, 360, lon_sep)
bmap.drawparallels(parallels, labels=[1, 0, 0, 0],
fontsize=10, latmax=90)
bmap.drawmeridians(meridians, labels=[0, 0, 0, 1],
fontsize=10, latmax=90)
bmap.drawcoastlines()
colorbar=fig.colorbar(col, shrink=0.5, pad=0.05,extend='both')
colorbar.set_label('channel1 reflectivity',rotation=-90,verticalalignment='bottom')
_=ax.set(title='vancouver')
#
# now use the basemap object to project the portland and vancouver
# lon/lat coords into the xy lambert coordinate system
#
# remember what the asterisk * argument expansion does:
# if I have a list A=[a,b] then fun(*A) is the same as fun(a,b)
#
#
vancouver_lon_lat=[-123.1207,49.2827]
portland_lon_lat=[-122.6765,45.5231]
#
# get the xy coords
#
van_xy = bmap(*vancouver_lon_lat)
portland_xy = bmap(*portland_lon_lat)
#
# draw a blue circle for van and
# a green circle for portland
#
bmap.plot(*van_xy,'bo',markersize=15)
bmap.plot(*portland_xy,'go',markersize=15)
#
# connect them with a cyan line
#
xcoords=[van_xy[0],portland_xy[0]]
ycoords=[van_xy[1],portland_xy[1]]
_ = bmap.plot(xcoords,ycoords,'c-',linewidth=5)
import pyproj
great_circle=pyproj.Geod(a=bmap.rmajor,b=bmap.rminor)
azi12,azi21,distance=great_circle.inv(vancouver_lon_lat[0],vancouver_lon_lat[1],
portland_lon_lat[0],portland_lon_lat[1])
print('Vancouver to Portland -- great circle is: {:5.2f} km'.format(distance/1.e3))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sample dataset
Step2: Exercise. What does this code do?
Step3: Sparse matrix storage
Step4: Exercise. What does the following code do?
Step5: Hopefully, you deduced that A_numbered_keys above is constructed in such a way that it will work with the sparse matrix-vector multiply routine we created in Lab 14 (solutions).
Step6: Exercise. Implement a sparse matrix-vector multiply that works when the matrix is A_named_keys. How much faster or slower is it than cse6040.spmv()?
Step7: @TODO
Step9: Exercise. Implement a sparse matrix-vector multiply routine for this COO implementation.
Step10: Exercise. Now create a CSR data structure, again using native Python lists. Name your output CSR lists csr_ptrs, csr_inds, and csr_vals.
Step11: Exercise. Now implement a CSR-based sparse matrix-vector multiply.
Step12: Sparse matrix storage using SciPy (Numpy)
Step13: Per the notes, here is how we can convert our COO representation from before into a SciPy implementation.
Step14: Now measure the time to do a sparse matrix-vector multiply in the COO representation. How does it compare to the nested default dictionary approach?
Step15: Exercise. Repeat the same experiment for SciPy-based CSR.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from IPython.display import display
edges = pd.read_csv ('UserEdges-1M.csv')
display (edges.head ())
V_names = set (edges.Source)
V_names.update (set (edges.Target))
m = len (edges)
n = len (V_names)
print ("==> |V| == %d, |E| == %d" % (n, m))
import cse6040utils as cse6040
id2name = {}
name2id = {}
for (k, v) in enumerate (V_names):
if k <= 5: print ("[%d] %s" % (k, v)) # for debugging
id2name[k] = v
name2id[v] = k
A_named_keys = cse6040.sparse_matrix ()
A_numbered_keys = cse6040.sparse_matrix ()
for (k, row) in edges.iterrows ():
ni = row['Source']
nj = row['Target']
A_named_keys[ni][nj] = 1.
A_named_keys[nj][ni] = 1.
i = name2id[ni]
j = name2id[nj]
A_numbered_keys[i][j] = 1.
A_numbered_keys[j][i] = 1.
# Build a dense vector
x = cse6040.dense_vector (n)
%timeit cse6040.spmv (n, A_numbered_keys, x)
def spmv_named_keys (n, A, x, name2id):
y = cse6040.dense_vector (n)
# @YOUSE: Complete this routine
pass
return y
# Measures the speed of your implementation:
%timeit spmv_named_keys (n, A_named_keys, x, name2id)
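# Editor's reference sketch (one possible solution to the exercise above; it keeps the
# stub untouched and uses a separate name so nothing is overwritten):
def spmv_named_keys_soln (n, A, x, name2id):
    y = cse6040.dense_vector (n)
    for (ni, row_i) in A.items ():            # ni is a vertex *name*
        i = name2id[ni]                       # map the name to a row index
        for (nj, a_ij) in row_i.items ():
            y[i] += a_ij * x[name2id[nj]]
    return y
%timeit spmv_named_keys_soln (n, A_named_keys, x, name2id)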
coo_rows = [name2id[e] for e in edges['Source']]
coo_cols = [name2id[e] for e in edges['Target']]
coo_vals = [1.] * len (coo_rows)
def coo_spmv (n, R, C, V, x):
Returns y = A*x, where A has 'n' rows and is stored in
COO format by the array triples, (R, C, V).
assert n > 0
assert type (x) is list
assert type (R) is list
assert type (C) is list
assert type (V) is list
assert len (R) == len (C) == len (V)
y = cse6040.dense_vector (n)
# @YOUSE: Fill in this implementation
pass
return y
%timeit coo_spmv (n, coo_rows, coo_cols, coo_vals, x)
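# Editor's reference sketch for the COO exercise (one possible solution, under a
# separate name so the stub above is left as-is):
def coo_spmv_soln (n, R, C, V, x):
    y = cse6040.dense_vector (n)
    for (i, j, a_ij) in zip (R, C, V):        # one update per stored nonzero
        y[i] += a_ij * x[j]
    return y
%timeit coo_spmv_soln (n, coo_rows, coo_cols, coo_vals, x)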
# Aside: What does this do? Try running it to see.
z1 = ['q', 'v', 'c']
z2 = [1, 2, 3]
z3 = ['dog', 7, 'man']
print (sorted (zip (z1, z2, z3), key=lambda z: z[0]))
C = sorted (zip (coo_rows, coo_cols, coo_vals),
key=lambda t: t[0])
nnz = len (C)
assert n == (C[-1][0] + 1) # Why?
csr_inds = [j for (i, j, a_ij) in C]
csr_vals = [a_ij for (i, j, a_ij) in C]
# @YOUSE: Construct `csr_ptrs`
pass
# Some checks on your implementation: Test the first 10 entries
assert len (csr_ptrs) == (n+1)
assert all ([a==b for (a, b) in zip (csr_ptrs[0:10], [0, 1, 3, 60, 66, 72, 73, 74, 78, 82])])
print ("==> Passed quick test")
def csr_spmv (n, ptr, ind, val, x):
assert n > 0
assert type (ptr) == list
assert type (ind) == list
assert type (val) == list
assert type (x) == list
assert len (ptr) >= (n+1) # Why?
assert len (ind) >= ptr[n] # Why?
assert len (val) >= ptr[n] # Why?
y = cse6040.dense_vector (n)
# @YOUSE: Insert your implementation here
pass
return y
%timeit csr_spmv (n, csr_ptrs, csr_inds, csr_vals, x)
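# Editor's reference sketch for the CSR matrix-vector multiply (one possible solution,
# under a separate name so the stub above is left as-is):
def csr_spmv_soln (n, ptr, ind, val, x):
    y = cse6040.dense_vector (n)
    for i in range (n):
        for k in range (ptr[i], ptr[i+1]):    # nonzeros of row i
            y[i] += val[k] * x[ind[k]]
    return y
%timeit csr_spmv_soln (n, csr_ptrs, csr_inds, csr_vals, x)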
import scipy.sparse as sp
A_coo = sp.coo_matrix ((coo_vals, (coo_rows, coo_cols)))
x_np = np.array (x)
%timeit A_coo.dot (x_np)
# @YOUSE: Fill in your code here
pass
%timeit A_csr.dot (x_np)
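# Editor's reference sketch for the SciPy CSR exercise (one possible solution; `A_csr`
# would be built in place of the @YOUSE stub above, e.g. by converting the COO matrix):
A_csr = A_coo.tocsr ()
# equivalently, build it directly from the native lists:
# A_csr = sp.csr_matrix ((csr_vals, csr_inds, csr_ptrs), shape=(n, n))
%timeit A_csr.dot (x_np)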
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inside the notebook, the state will automatically display the values of the key variables
Step2: Adding a label to the state for output purposes requires an extra keyword
Step3: Reactive states
Step4: Additional functions
Step5: A state will return the key primitive variables ($\rho, v_x, v_t, \epsilon$)
Step6: A state will return all the variables it computes, which is $\rho, v_x, v_t, \epsilon, p, W, h, c_s$
|
<ASSISTANT_TASK:>
Python Code:
from r3d2 import eos_defns, State
eos = eos_defns.eos_gamma_law(5.0/3.0)
U = State(1.0, 0.1, 0.0, 2.0, eos)
U
U2 = State(10.0, -0.3, 0.1, 5.0, eos, label="L")
U2
q_available = 0.1
t_ignition = 10.0
Cv = 1.0
eos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, q_available, Cv, t_ignition, eos)
U_reactive = State(5.0, 0.1, 0.1, 2.0, eos_reactive, label="Reactive")
U_reactive
print("Left wavespeed of first state is {}".format(U.wavespeed(0)))
print("Middle wavespeed of second state is {}".format(U2.wavespeed(1)))
print("Right wavespeed of reactive state is {}".format(U.wavespeed(2)))
print("Primitive variables of first state are {}".format(U.prim()))
print("All variables of second state are {}".format(U.state()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numerical Features
Step2: The housing price correlates strongly with OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmsAbvGrd, YearBuilt, YearRemodAdd, GarageYrBlt, MasVnrArea and Fireplaces. But some of those features are highly correlated with each other.
Step3: Categorical Features
Step4: Neighborhood
Step5: Neighborhoods with similar housing prices could be grouped into the same bucket for dimension reduction.
Step6: The sale's timing does not seem to hugely affect the house price.
Step7: Housing Condition
Step8: Basement Conditions
Step9: Home Functionality
Step10: FirePlaceQu
Step11: Heating
Step12: Clearly, having AC or not has a big impact on housing price.
Step13: Kitchen Quality
Step14: MSZoning
Step15: Street & Alley Access
|
<ASSISTANT_TASK:>
Python Code:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
# Any results you write to the current directory are saved as output.
df = pd.read_csv('../input/train.csv')
#df.drop('SalePrice', axis = 1, inplace = True)
#test = pd.read_csv('../input/test.csv')
#df = df.append(test, ignore_index = True)
df.head()
df.describe()
df.columns
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
print("Some Statistics of the Housing Price:\n")
print(df['SalePrice'].describe())
print("\nThe median of the Housing Price is: ", df['SalePrice'].median(axis = 0))
sns.distplot(df['SalePrice'], kde = False, color = 'b', hist_kws={'alpha': 0.9})
corr = df.select_dtypes(include = ['float64', 'int64']).iloc[:, 1:].corr()
plt.figure(figsize=(12, 12))
sns.heatmap(corr, vmax=1, square=True)
cor_dict = corr['SalePrice'].to_dict()
del cor_dict['SalePrice']
print("List the numerical features decendingly by their correlation with Sale Price:\n")
for ele in sorted(cor_dict.items(), key = lambda x: -abs(x[1])):
print("{0}: \t{1}".format(*ele))
sns.regplot(x = 'OverallQual', y = 'SalePrice', data = df, color = 'Orange')
plt.figure(1)
f, axarr = plt.subplots(3, 2, figsize=(10, 9))
price = df.SalePrice.values
axarr[0, 0].scatter(df.GrLivArea.values, price)
axarr[0, 0].set_title('GrLiveArea')
axarr[0, 1].scatter(df.GarageArea.values, price)
axarr[0, 1].set_title('GarageArea')
axarr[1, 0].scatter(df.TotalBsmtSF.values, price)
axarr[1, 0].set_title('TotalBsmtSF')
axarr[1, 1].scatter(df['1stFlrSF'].values, price)
axarr[1, 1].set_title('1stFlrSF')
axarr[2, 0].scatter(df.TotRmsAbvGrd.values, price)
axarr[2, 0].set_title('TotRmsAbvGrd')
axarr[2, 1].scatter(df.MasVnrArea.values, price)
axarr[2, 1].set_title('MasVnrArea')
f.text(-0.01, 0.5, 'Sale Price', va='center', rotation='vertical', fontsize = 12)
plt.tight_layout()
plt.show()
fig = plt.figure(2, figsize=(9, 7))
plt.subplot(211)
plt.scatter(df.YearBuilt.values, price)
plt.title('YearBuilt')
plt.subplot(212)
plt.scatter(df.YearRemodAdd.values, price)
plt.title('YearRemodAdd')
fig.text(-0.01, 0.5, 'Sale Price', va = 'center', rotation = 'vertical', fontsize = 12)
plt.tight_layout()
print(df.select_dtypes(include=['object']).columns.values)
plt.figure(figsize = (12, 6))
sns.boxplot(x = 'Neighborhood', y = 'SalePrice', data = df)
xt = plt.xticks(rotation=45)
plt.figure(figsize = (12, 6))
sns.countplot(x = 'Neighborhood', data = df)
xt = plt.xticks(rotation=45)
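# Editor's illustration (not part of the original kernel) of the grouping idea from the
# text: bucket neighborhoods with similar median sale prices, here into four bins.
nbhd_price = df.groupby('Neighborhood')['SalePrice'].median().sort_values()
nbhd_bucket = pd.qcut(nbhd_price, q=4, labels=['low', 'mid_low', 'mid_high', 'high'])
print(nbhd_bucket.head(10))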
fig, ax = plt.subplots(2, 1, figsize = (10, 6))
sns.boxplot(x = 'SaleType', y = 'SalePrice', data = df, ax = ax[0])
sns.boxplot(x = 'SaleCondition', y = 'SalePrice', data = df, ax = ax[1])
plt.tight_layout()
g = sns.FacetGrid(df, col = 'YrSold', col_wrap = 3)
g.map(sns.boxplot, 'MoSold', 'SalePrice', palette='Set2', order = range(1, 13))\
.set(ylim = (0, 500000))
plt.tight_layout()
fig, ax = plt.subplots(2, 1, figsize = (10, 8))
sns.boxplot(x = 'BldgType', y = 'SalePrice', data = df, ax = ax[0])
sns.boxplot(x = 'HouseStyle', y = 'SalePrice', data = df, ax = ax[1])
fig, ax = plt.subplots(2, 1, figsize = (10, 8))
sns.boxplot(x = 'Condition1', y = 'SalePrice', data = df, ax = ax[0])
sns.boxplot(x = 'Exterior1st', y = 'SalePrice', data = df, ax = ax[1])
x = plt.xticks(rotation = 45)
plt.show()
fig, ax = plt.subplots(2, 2, figsize = (10, 8))
sns.boxplot('BsmtCond', 'SalePrice', data = df, ax = ax[0, 0])
sns.boxplot('BsmtQual', 'SalePrice', data = df, ax = ax[0, 1])
sns.boxplot('BsmtExposure', 'SalePrice', data = df, ax = ax[1, 0])
sns.boxplot('BsmtFinType1', 'SalePrice', data = df, ax = ax[1, 1])
sns.violinplot('Functional', 'SalePrice', data = df)
sns.factorplot('FireplaceQu', 'SalePrice', data = df, color = 'm', \
estimator = np.median, order = ['Ex', 'Gd', 'TA', 'Fa', 'Po'], size = 4.5, aspect=1.35)
pd.crosstab(df.Fireplaces, df.FireplaceQu)
g = sns.FacetGrid(df, col = 'FireplaceQu', col_wrap = 3, col_order=['Ex', 'Gd', 'TA', 'Fa', 'Po'])
g.map(sns.boxplot, 'Fireplaces', 'SalePrice', order = [1, 2, 3], palette = 'Set2')
pd.crosstab(df.HeatingQC, df.CentralAir)
pd.crosstab(df.HeatingQC, df.FireplaceQu)
sns.factorplot('HeatingQC', 'SalePrice', hue = 'CentralAir', estimator = np.mean, data = df,
size = 4.5, aspect = 1.4)
fig, ax = plt.subplots(1, 2, figsize = (10, 4))
sns.boxplot('Electrical', 'SalePrice', data = df, ax = ax[0]).set(ylim = (0, 400000))
sns.countplot('Electrical', data = df)
plt.tight_layout()
sns.factorplot('KitchenQual', 'SalePrice', estimator = np.mean,
size = 4.5, aspect = 1.4, data = df, order = ['Ex', 'Gd', 'TA', 'Fa'])
sns.boxplot(x = 'MSZoning', y = 'SalePrice', data = df)
fig, ax = plt.subplots(1, 2, figsize = (10, 4))
sns.boxplot(x = 'Street', y = 'SalePrice', data = df, ax = ax[0])
sns.boxplot(x = 'Alley', y = 'SalePrice', data = df, ax = ax[1])
plt.tight_layout()
print("The NA's in Alley is: ", df['Alley'].isnull().sum())
print("\nThere are so many NA's in Alley. When Alley is NA, Street = ",
df[df.Alley.notnull()].Street.unique())
print("\n", pd.crosstab(df.Street, df.Alley))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading and visualizing the input data
Step2: Plotting the diffraction profile
Step3: The profile shown above is highly smoothed and the hkl peaks are merged.<br>
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from ImageD11.columnfile import columnfile
from ImageD11 import weighted_kde as wkde
%matplotlib inline
plt.rcParams['figure.figsize'] = (6,4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['mathtext.fontset'] = 'cm'
plt.rcParams['font.size'] = 12
# read the peaks
flt = columnfile('sma_261N.flt.new')
# peaks indexed to phase 1
phase1 = flt.copy()
phase1.filter( phase1.labels > -1 )
# unindexed peaks (phase 2 + unindexed phase 1?)
phase2 = flt.copy()
phase2.filter( phase2.labels == -1 )
#plot radial transform for phase 1
plt.plot( phase1.tth_per_grain, phase1.eta_per_grain, 'x')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ \eta \, (\degree) $' )
plt.title( r'$Diffraction \, angles$' )
# Probability density function (pdf) of 2theta
# weighted by the peak intensity and using default 2theta bandwidth
I_phase1 = phase1.sum_intensity * phase1.Lorentz_per_grain
pdf = wkde.gaussian_kde( phase1.tth_per_grain, weights = I_phase1)
# Plotting it over 2theta range
x = np.linspace( min(flt.tth), max(flt.tth), 500 )
y = pdf(x)
plt.plot(x, y)
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( ' With bandwidth = %.3f'%pdf.factor )
pdf_phase1 = wkde.gaussian_kde( phase1.tth, weights = phase1.sum_intensity )
pdf_phase2 = wkde.gaussian_kde( phase2.tth, weights = phase2.sum_intensity )
frac_phase1 = np.sum( phase1.sum_intensity ) / np.sum( flt.sum_intensity )
frac_phase2 = np.sum( phase2.sum_intensity ) / np.sum( flt.sum_intensity )
from ipywidgets import interact
bw_range = ( 0.001, pdf_phase1.factor/3, 0.001)
@interact( bandwidth = bw_range)
def plot_pdf(bandwidth):
pdf_phase1.set_bandwidth(bandwidth)
pdf_phase2.set_bandwidth(bandwidth)
y_phase1 = pdf_phase1(x)
y_phase2 = pdf_phase2(x)
plt.plot( x, frac_phase1 * y_phase1, label = r'$Phase \, 1$' )
plt.plot( x, frac_phase2 * y_phase2, label = r'$Phase \, 2$' )
plt.legend(loc='best')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( r'$ 3DXRD \, diffractogram $' )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hub with Keras
Step2: An ImageNet classifier
Step3: Run it on a single image
Step4: Add a batch dimension, and pass the image to the model.
Step5: The result is a 1001-element vector of logits, rating the probability that the image belongs to each of the classes.
Step6: Decode the predictions
Step7: Simple transfer learning
Step8: The simplest way to load this data into our model is using tf.keras.preprocessing.image.ImageDataGenerator.
Step9: The resulting object is an iterator that returns image_batch and label_batch pairs.
Step10: Run the classifier on a batch of images
Step11: Check how well these predictions line up with the actual images.
Step12: See the LICENSE.txt file for image attributions.
Step13: Create the module, and check the expected image size for the input.
Step14: The feature extractor returns a 1280-element vector for each image.
Step15: Freeze the variables in the feature extractor layer, so that the training only modifies the new classifier layer.
Step16: Attach a classification head
Step17: Train the model
Step18: Now use the .fit method to train the model.
Step19: From this you can see that even after just a few training iterations, the model is already making progress on the task.
Step20: Check the predictions
Step21: Run the image batch through the model and convert the indices to class names.
Step22: Plot the results.
Step23: Export the model
Step24: Now confirm that the model can be reloaded. The reloaded model gives the same results.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow_hub
import matplotlib.pylab as plt
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
classifier_url ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
import numpy as np
import PIL.Image as Image
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
data_root = tf.keras.utils.get_file(
'flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
image_data = image_generator.flow_from_directory(str(data_root), target_size=IMAGE_SHAPE)
for image_batch, label_batch in image_data:
print("Image batch shape: ", image_batch.shape)
print("Label batch shape: ", label_batch.shape)
break
result_batch = classifier.predict(image_batch)
result_batch.shape
predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
predicted_class_names
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2" #@param {type:"string"}
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
input_shape=(224,224,3))
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
feature_extractor_layer.trainable = False
model = tf.keras.Sequential([
feature_extractor_layer,
layers.Dense(image_data.num_classes, activation='softmax')
])
model.summary()
predictions = model(image_batch)
predictions.shape
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['acc'])
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_train_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['acc'])
self.model.reset_metrics()
steps_per_epoch = np.ceil(image_data.samples/image_data.batch_size)
batch_stats_callback = CollectBatchStats()
history = model.fit(image_data, epochs=2,
steps_per_epoch=steps_per_epoch,
callbacks = [batch_stats_callback])
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc)
class_names = sorted(image_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
class_names
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch, axis=-1)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
color = "green" if predicted_id[n] == label_id[n] else "red"
plt.title(predicted_label_batch[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (green: correct, red: incorrect)")
import time
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
tf.keras.experimental.export_saved_model(model, export_path)
export_path
reloaded = tf.keras.experimental.load_from_saved_model(export_path, custom_objects={'KerasLayer':hub.KerasLayer})
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: WARNING
Step6: Outputs
Step8: General markdown
Step10: Headers aren't recognized by (Pandoc on Windows?) if there isn't a blank line above the headers.
Step12: If internal links are defined, these will not work in nbviewer and latex as the local link does not exist.
Step14: Basic Markdown bold and italic works.
Step16: Nested lists work as well
Step18: Unicode support
Step20: Pandoc may produce invalid latex, e.g. \sout is not allowed in headings
Step22: Horizontal lines work just fine
Step24: Extended markdown of pandoc
Step26: No space before the underline behaves inconsistently (Pandoc extension
Step28: Pandoc allows defining TeX macros which are respected for all output formats, but the notebook does not.
Step30: When placing the \newcommand inside a math environment it works within the notebook and nbviewer, but produces invalid latex (the newcommand is only valid in the same math environment).
Step32: HTML or LaTeX injections
Step34: Same for something like center
Step36: Raw $\LaTeX$ gets dropped entirely when converted to HTML. (I don't know why the HTML output is cropped here???)
Step38: A combination of raw $\LaTeX$ and raw HTML
Step40: Tables
Step42: Instead, Pandoc supports simple ascii tables. Unfortunately marked.js doesn't support this, and therefore it is not supported in the notebook.
Step44: An alternative to basic ascii tables is pipe tables. Pipe tables can be recognized by Pandoc and are supported by marked, hence, this is the best way to add tables.
Step46: Pandoc recognizes cell alignment in simple tables. Since marked.js doesn't recognize ascii tables, it can't render this table.
Step48: Images
Step50: HTML Images only work in the notebook.
Step52: Math
Step54: If the first \$ is on a new line, the equation is not captured by md2tex; if both \$s are on a new line, md2html fails (note the raw latex is dropped) but the notebook renders it correctly.
Step56: MathJax permits some $\LaTeX$ math constructs without \$s; of course this raw $\LaTeX$ is stripped when converting to HTML.
Step58: There is another lxml issue, #4283
Step60: Listings, and Code blocks
Step62: Language specific syntax highlighting by Pandoc requires additional dependencies to render correctly.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.nbconvert.utils.pandoc import pandoc
from IPython.display import HTML, Javascript, display
from IPython.nbconvert.filters import citation2latex, strip_files_prefix, \
markdown2html, markdown2latex
def pandoc_render(markdown):
Render Pandoc Markdown->LaTeX content.
## Convert the markdown directly to latex. This is what nbconvert does.
#latex = pandoc(markdown, "markdown", "latex")
#html = pandoc(markdown, "markdown", "html", ["--mathjax"])
# nbconvert template conversions
html = strip_files_prefix(markdown2html(markdown))
latex = markdown2latex(citation2latex(markdown))
display(HTML(data="<div style='display: inline-block; width: 30%; vertical-align: top;'>" \
"<div style='background: #AAFFAA; width: 100%;'>NBConvert Latex Output</div>" \
"<pre class='prettyprint lang-tex' style='background: #EEFFEE; border: 1px solid #DDEEDD;'><xmp>" + latex + "</xmp></pre>"\
"</div>" \
"<div style='display: inline-block; width: 2%;'></div>" \
"<div style='display: inline-block; width: 30%; vertical-align: top;'>" \
"<div style='background: #FFAAAA; width: 100%;'>NBViewer Output</div>" \
"<div style='display: inline-block; width: 100%;'>" + html + "</div>" \
"</div>"))
javascript =
$.getScript("https://google-code-prettify.googlecode.com/svn/loader/run_prettify.js");
display(Javascript(data=javascript))
def notebook_render(markdown):
javascript =
var mdcell = new IPython.MarkdownCell();
mdcell.create_element();
mdcell.set_text(' + markdown.replace("\\", "\\\\").replace("'", "\'").replace("\n", "\\n") + ');
mdcell.render();
$(element).append(mdcell.element)
.removeClass()
.css('left', '66%')
.css('position', 'absolute')
.css('width', '30%')
mdcell.element.prepend(
$('<div />')
.removeClass()
.css('background', '#AAAAFF')
.css('width', '100 %')
.html('Notebook Output')
);
container.show()
display(Javascript(data=javascript))
def pandoc_html_render(markdown):
Render Pandoc Markdown->LaTeX content.
# Convert the markdown directly to latex. This is what nbconvert does.
latex = pandoc(markdown, "markdown", "latex")
# Convert the pandoc generated latex to HTML so it can be rendered in
# the web browser.
html = pandoc(latex, "latex", "html", ["--mathjax"])
display(HTML(data="<div style='background: #AAFFAA; width: 40%;'>HTML Pandoc Output</div>" \
"<div style='display: inline-block; width: 40%;'>" + html + "</div>"))
return html
def compare_render(markdown):
notebook_render(markdown)
pandoc_render(markdown)
try:
import lxml
print 'LXML found!'
except:
print 'Warning! No LXML found - the old citation2latex filter will not work'
compare_render(r
# Heading 1
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6)
compare_render(r
# Heading 1
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6 )
print("\n"*10)
compare_render(r
[Link2Heading](http://127.0.0.1:8888/0a2d8086-ee24-4e5b-a32b-f66b525836cb#General-markdown)
)
compare_render(r
This is Markdown **bold** and *italic* text.
)
compare_render(r
- li 1
- li 2
1. li 3
1. li 4
- li 5
)
compare_render(ur
überschuß +***^°³³ α β θ
)
compare_render(r
# Heading 1 ~~strikeout~~
)
compare_render(r
above
--------
below
)
compare_render(r
This is Markdown ~subscript~ and ^superscript^ text.
)
compare_render(r
This is Markdown not_italic_.
)
compare_render(r
\newcommand{\tuple}[1]{\langle #1 \rangle}
$\tuple{a, b, c}$
)
compare_render(r
$\newcommand{\foo}[1]{...:: #1 ::...}$
$\foo{bar}$
)
compare_render(r
This is HTML <b>bold</b> and <i>italic</i> text.
)
compare_render(r
<center>Center aligned</center>
)
compare_render(r
This is \LaTeX \bf{bold} and \emph{italic} text.
)
compare_render(r
**foo** $\left( \sum_{k=1}^n a_k b_k \right)^2 \leq$ <b>b\$ar</b> $$test$$
\cite{}
)
compare_render(r
<table>
<tr>
<td>a</td>
<td>b</td>
</tr>
<tr>
<td>c</td>
<td>d</td>
</tr>
</table>
)
compare_render(r
+---+---+
| a | b |
+---+---+
| c | d |
+---+---+
)
compare_render(r
|Left |Center |Right|
|:----|:-----:|----:|
|Text1|Text2 |Text3|
)
compare_render(r
Right Aligned Center Aligned Left Aligned
------------- -------------- ------------
Why does this
actually work? Who
knows ...
)
print("\n"*5)
compare_render(r

)
compare_render(r
<img src="http://ipython.org/_static/IPy_header.png">
)
compare_render(r
My equation:
$$ 5/x=2y $$
It is inline $ 5/x=2y $ here.
)
compare_render(r
$5 \cdot x=2$
$
5 \cdot x=2$
$
5 \cdot x=2
$
)
compare_render(r
\begin{align}
a & b\\
d & c
\end{align}
\begin{eqnarray}
a & b \\
c & d
\end{eqnarray}
)
compare_render(r
1<2 is true, but 3>4 is false.
$1<2$ is true, but $3>4$ is false.
1<2 it is even worse if it is alone in a line.
)
compare_render(r
some source code
```
a = "test"
print(a)
```
)
compare_render(r
some source code
```python
a = "test"
print(a)
```
)
<END_TASK>
|