<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<small><i>This notebook was prepared by Marco Guajardo. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
<ASSISTANT_TASK:>
Python Code:
class Node(object):

    def __init__(self, data=None):
        # TODO: implement me
        pass

    def __str__(self):
        # TODO: implement me
        pass


class BinaryTree(object):

    def __init__(self):
        # TODO: implement me
        pass

    def insert(self, newData):
        # TODO: implement me
        pass

    def delete(self, key):
        # TODO: implement me
        pass

    def maxNode(self):
        # TODO: implement me
        pass

    def minNode(self):
        # TODO: implement me
        pass

    def printPostOrder(self):
        # TODO: implement me
        pass

    def printPreOrder(self):
        # TODO: implement me
        pass

    def printInOrder(self):
        # TODO: implement me
        pass

    def treeIsEmpty(self):
        # TODO: implement me
        pass
Explanation: <small><i>This notebook was prepared by Marco Guajardo. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a binary search tree with insert, delete, different traversals & max/min node values
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Is this a binary tree?
Yes
Is the root set to None initially?
Yes
Do we care if the tree is balanced?
No
What do we return for the traversals?
Return a list of the data in the desired order
What type of data can the tree hold?
Assume the tree only takes ints. In a realistic example, we'd use a hash table to convert other types to ints.
Test Cases
Insert
Always start with the root
If value is less than the root, go to the left child
if value is more than the root, go to the right child
Delete
Deleting a node from a binary tree is tricky. Make sure you arrange the tree correctly when deleting a node.
Here are some basic instructions
If the value to delete isn't on the tree return False
Traversals
In order traversal - left, center, right
Pre order traversal - center, left, right
Post order traversal - left, right, center
Return a list for all traversals
Max & Min
Find the max node in the binary search tree
Find the min node in the binary search tree
treeIsEmpty
check if the tree is empty
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
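Before turning to the solution notebook, it can help to see the overall shape of one piece of a solution. Below is a minimal sketch (not the solution notebook's implementation) of just `insert` and `printInOrder`, assuming the `root`/`leftChild`/`rightChild` attribute names implied by the unit tests; the other methods follow the same walk-left-for-smaller, walk-right-for-larger pattern:

```python
class Node(object):
    def __init__(self, data=None):
        self.data = data
        self.leftChild = None
        self.rightChild = None


class BinaryTree(object):
    def __init__(self):
        self.root = None

    def insert(self, newData):
        # Always start with the root; smaller values go left, larger go right
        if self.root is None:
            self.root = Node(newData)
            return
        node = self.root
        while True:
            if newData < node.data:
                if node.leftChild is None:
                    node.leftChild = Node(newData)
                    return
                node = node.leftChild
            else:
                if node.rightChild is None:
                    node.rightChild = Node(newData)
                    return
                node = node.rightChild

    def printInOrder(self):
        # In order traversal (left, center, right) yields sorted data for a BST
        out = []
        def visit(node):
            if node is not None:
                visit(node.leftChild)
                out.append(node.data)
                visit(node.rightChild)
        visit(self.root)
        return out
```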
Code
End of explanation
from nose.tools import assert_equal
class TestBinaryTree(object):

    def test_insert_traversals(self):
        myTree = BinaryTree()
        myTree2 = BinaryTree()
        for num in [50, 30, 70, 10, 40, 60, 80, 7, 25, 38]:
            myTree.insert(num)
        [myTree2.insert(num) for num in range(1, 100, 10)]

        print("Test: insert checking with in order traversal")
        expectVal = [7, 10, 25, 30, 38, 40, 50, 60, 70, 80]
        assert_equal(myTree.printInOrder(), expectVal)
        expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
        assert_equal(myTree2.printInOrder(), expectVal)

        print("Test: insert checking with post order traversal")
        expectVal = [7, 25, 10, 38, 40, 30, 60, 80, 70, 50]
        assert_equal(myTree.printPostOrder(), expectVal)
        expectVal = [91, 81, 71, 61, 51, 41, 31, 21, 11, 1]
        assert_equal(myTree2.printPostOrder(), expectVal)

        print("Test: insert checking with pre order traversal")
        expectVal = [50, 30, 10, 7, 25, 40, 38, 70, 60, 80]
        assert_equal(myTree.printPreOrder(), expectVal)
        expectVal = [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
        assert_equal(myTree2.printPreOrder(), expectVal)

        print("Success: test_insert_traversals")

    def test_max_min_nodes(self):
        myTree = BinaryTree()
        myTree.insert(5)
        myTree.insert(1)
        myTree.insert(21)

        print("Test: max node")
        assert_equal(myTree.maxNode(), 21)
        myTree.insert(32)
        assert_equal(myTree.maxNode(), 32)

        print("Test: min node")
        assert_equal(myTree.minNode(), 1)

        print("Test: min node inserting negative number")
        myTree.insert(-10)
        assert_equal(myTree.minNode(), -10)

        print("Success: test_max_min_nodes")

    def test_delete(self):
        myTree = BinaryTree()
        myTree.insert(5)

        print("Test: delete")
        myTree.delete(5)
        assert_equal(myTree.treeIsEmpty(), True)

        print("Test: more complex deletions")
        [myTree.insert(x) for x in range(1, 5)]
        myTree.delete(2)
        assert_equal(myTree.root.rightChild.data, 3)

        print("Test: delete invalid value")
        assert_equal(myTree.delete(100), False)

        print("Success: test_delete")


def main():
    testing = TestBinaryTree()
    testing.test_insert_traversals()
    testing.test_max_min_nodes()
    testing.test_delete()


if __name__ == '__main__':
    main()
Explanation: Unit Test
End of explanation
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Import and Standardize Data
Step1: WQP
The following html addresses are REST-based queries to download WQP data from the <a href="http
Step2: Read csv data into python.
Step3: Rename columns to match with other data later.
Step4: Define unneeded columns that will be dropped to save memory.
Step5: Define a function to fix funky dates found in the WQP database. This includes impossible dates or dates with too many numbers.
Step6: Drop unwanted fields
Step7: Convert result values and the MDL values to numeric fields from string fields.
Step8: Remove station duplicates by removing the preceding 'WQX' found in the station id field.
Step9: Standardize all ug/l data to mg/l by changing out the unit descriptor and dividing by 1000.
Step10: Normalize common nutrients so that they are all in the same type of units. For example, sometimes nitrate is reported "as nitrogen" and sometimes it is reported "as nitrate". The difference between the two types of reporting is a factor of 4.427!
Step11: WQP Stations
Read in WQP station data.
Step12: Rename and simplify station data column names for later compilation.
Step13: Define the fields to drop to save memory.
Step14: Make station types in the StationType field consistent for easier summary and compilation later on.
Step15: Remove preceding WQX from StationId field to remove duplicate station data created by legacy database.
Step16: SDWIS
SDWIS data were extracted from the Utah SDWIS database into ArcGIS 10.3.2 using the following SQL query. NED 10m elevation and UTM coordinates were appended using ArcGIS.
SQL
SELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS "FED_NM", UTV80.TINWSF.ST_ASGN_IDENT_CD AS "ST_ID", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS "SYS_NM", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS "COUNTY", UTV80.TINWSF.NAME AS "FAC_NM", UTV80.TINWSF.TINWSYS_IS_NUMBER AS "SY_NBR", UTV80.TINLOC.LATITUDE_MEASURE AS "Y", UTV80.TINLOC.LONGITUDE_MEASURE AS "X", UTV80.TINLOC.VERTICAL_MEASURE AS "Z", UTV80.TSASAMPL.COLLLECTION_END_DT AS "DTE", UTV80.TSAANLYT.NAME AS "ANLY_NM", UTV80.TSASAR.CONCENTRATION_MSR AS "CONC_MSR", UTV80.TSASAR.TSASAR_IS_NUMBER AS "ID_NUM", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS "DET_LIM", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS "DET_UOM" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE (UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%') AND (UTV80.TSAANLYT.NAME LIKE '%NITRATE%' OR UTV80.TSAANLYT.NAME LIKE '%NITRITE%' OR UTV80.TSAANLYT.NAME LIKE '%AMMONI%' OR UTV80.TSAANLYT.NAME LIKE '%SULFATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%SODIUM%' OR UTV80.TSAANLYT.NAME LIKE '%FLUORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%MAGNESIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CALCIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CHLORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%POTASSIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SILICA%' OR UTV80.TSAANLYT.NAME LIKE '%IRON %' OR UTV80.TSAANLYT.NAME LIKE '%ALKA %' OR UTV80.TSAANLYT.NAME LIKE '%CONDUCTIVITY%' OR 
UTV80.TSAANLYT.NAME LIKE '%PH %' OR UTV80.TSAANLYT.NAME LIKE '%TEMP%' OR UTV80.TSAANLYT.NAME LIKE '%ARSENIC%' OR UTV80.TSAANLYT.NAME LIKE '%CARBON%' OR UTV80.TSAANLYT.NAME LIKE '%TRITIUM%' OR UTV80.TSAANLYT.NAME LIKE '%COPPER%' OR UTV80.TSAANLYT.NAME LIKE '%LEAD%' OR UTV80.TSAANLYT.NAME LIKE '%NITROGEN%' OR UTV80.TSAANLYT.NAME LIKE '%PHOSPHATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%ZINC%' OR UTV80.TSAANLYT.NAME LIKE '%IRON%' OR UTV80.TSAANLYT.NAME LIKE '%CHROMIUM%' ) ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD
SQL
SELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS "FED_NM", UTV80.TINWSF.ST_ASGN_IDENT_CD AS "ST_ID", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS "SYS_NM", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS "COUNTY", UTV80.TINWSF.NAME AS "FAC_NM", UTV80.TINWSF.TINWSYS_IS_NUMBER AS "SY_NBR", UTV80.TINLOC.LATITUDE_MEASURE AS "Y", UTV80.TINLOC.LONGITUDE_MEASURE AS "X", UTV80.TINLOC.VERTICAL_MEASURE AS "Z", UTV80.TSASAMPL.COLLLECTION_END_DT AS "DTE", UTV80.TSAANLYT.NAME AS "ANLY_NM", UTV80.TSASAR.CONCENTRATION_MSR AS "CONC_MSR", UTV80.TSASAR.TSASAR_IS_NUMBER AS "ID_NUM", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS "DET_LIM", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS "DET_UOM" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%' AND (UTV80.TINWSYS.NAME IN('%PROVID%','%MILL%','%LOG%','%NIB%', ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD
Read in the queried SDWIS data and make a StationId and StationName field. Make field names consistent with those applied to WQP data above so that compilation is easier later.
Step17: Normalize units and nutrient data so that they are consistent with the WQP data. This includes standardizing ug/l to mg/l.
Step18: Drop unneeded SDWIS fields to save memory and reduce confusion.
Step19: Rename chemical parameters in the SDWIS Param field to match those of the WQP data.
Step20: Make station types consistent with the WQP data.
Step21: SDWIS facility type code (FacTypeCode)
Step22: Create SDWIS results file from the SDWIS data. Drop unneeded fields from the results file. These are fields that are in the station field and apply to stations.
Step23: Create a sample media field and populate it with the value Groundwater.
Step24: UDAF
UDAF Stations
Import Utah Department of Agriculture and Food (UDAF) data from the data file. These data were compiled from <a href=http
Step25: Drop unneeded fields to save memory.
Step26: Add UDAF prefix to the station identification field (StationId) to make station ids unique.
Step27: UDAF Results
Import Utah Department of Agriculture and Food data from the data file. These data were compiled from reports available on the UDAF website. Once the data are imported, rename the fields to match the above SDWIS and WQP data.
Step28: Create a detection condition field and populate it based on values in the imported data.
Step29: Fill null result values with zeros when data are reported as below detection limit.
Step30: Make data consistent by cleaning up parameter descriptions.
Step31: STORET Legacy
Legacy EPA data are kept in the <a href=ftp
Step32: Parse choppy text data from the STORET Legacy database.
Step33: UGS Data
Step34: State Lab
These are raw results sent to the UGS as tab-delimited tables from the Utah State Health Laboratory. They make up the bulk of the data collected for this study and are supplemented with field data transcribed to spreadsheets.
Step35: Combine Data
Step36: Clean Up Non Detects
Step37: Pivot Data
Step38: Add GIS Information
Step39: Convert and Balance Samples
Step40: Subset Data
Step41: Summarize & Plot Data
Step42: Export data
<ASSISTANT_TASK:>
Python Code:
#rootname = "/media/p/5F5B-8FCB/PROJECTS/UMAR/Data/chem/" #thumb on ubuntu
rootname = "E:\\PROJECTS\\UMAR\\Data\\chem\\" #thumb on windows
WQPResultsFile = rootname + "result.csv"
WQPStationFile = rootname + "station.csv"
SDWISFile = rootname + "SDWIS_Cache.txt"
AGStationsFile = rootname + "AG_Stations_Cache.csv"
AGResultsFile = rootname + "AG_byparam.csv"
UGSFile = rootname + "UGS_Cache.txt"
STORLegStatFile = rootname + "UT_Cache_sta_001.txt"
STORLegResFile = rootname + "UT_Cache_res_001.txt"
STORParamFile = rootname + "parameter.txt"
fielddata = rootname + "FieldData.xlsx"
statelabresults0 = rootname + "utgs1402.txt"
statelabresults1 = rootname + "utgs1403.txt"
statelabresults2 = rootname + "utgs1501.txt"
statelabstations = rootname + "UtahStateLabStations.xlsx"
df = wc.WQP.WQPimportRes(WQPResultsFile)
df = wc.WQP.WQPmassageResults(df)
df
Explanation: Import and Standardize Data
End of explanation
Rdtypes = {"OrganizationIdentifier":np.str_, "OrganizationFormalName":np.str_, "ActivityIdentifier":np.str_,
"ActivityStartTime/Time":np.str_,
"ActivityTypeCode":np.str_, "ActivityMediaName":np.str_, "ActivityMediaSubdivisionName":np.str_,
"ActivityStartDate":np.str_, "ActivityStartTime/Time":np.str_, "ActivityStartTime/TimeZoneCode":np.str_,
"ActivityEndDate":np.str_, "ActivityEndTime/Time":np.str_, "ActivityEndTime/TimeZoneCode":np.str_,
"ActivityDepthHeightMeasure/MeasureValue":np.float16, "ActivityDepthHeightMeasure/MeasureUnitCode":np.str_,
"ActivityDepthAltitudeReferencePointText":np.str_, "ActivityTopDepthHeightMeasure/MeasureValue":np.float16,
"ActivityTopDepthHeightMeasure/MeasureUnitCode":np.str_,
"ActivityBottomDepthHeightMeasure/MeasureValue":np.float16,
"ActivityBottomDepthHeightMeasure/MeasureUnitCode":np.str_,
"ProjectIdentifier":np.str_, "ActivityConductingOrganizationText":np.str_,
"MonitoringLocationIdentifier":np.str_, "ActivityCommentText":np.str_,
"SampleAquifer":np.str_, "HydrologicCondition":np.str_, "HydrologicEvent":np.str_,
"SampleCollectionMethod/MethodIdentifier":np.str_, "SampleCollectionMethod/MethodIdentifierContext":np.str_,
"SampleCollectionMethod/MethodName":np.str_, "SampleCollectionEquipmentName":np.str_,
"ResultDetectionConditionText":np.str_, "CharacteristicName":np.str_, "ResultSampleFractionText":np.str_,
"ResultMeasureValue":np.str_, "ResultMeasure/MeasureUnitCode":np.str_, "MeasureQualifierCode":np.str_,
"ResultStatusIdentifier":np.str_, "StatisticalBaseCode":np.str_, "ResultValueTypeName":np.str_,
"ResultWeightBasisText":np.str_, "ResultTimeBasisText":np.str_, "ResultTemperatureBasisText":np.str_,
"ResultParticleSizeBasisText":np.str_, "PrecisionValue":np.str_, "ResultCommentText":np.str_,
"USGSPCode":np.str_, "ResultDepthHeightMeasure/MeasureValue":np.float16,
"ResultDepthHeightMeasure/MeasureUnitCode":np.str_, "ResultDepthAltitudeReferencePointText":np.str_,
"SubjectTaxonomicName":np.str_, "SampleTissueAnatomyName":np.str_,
"ResultAnalyticalMethod/MethodIdentifier":np.str_, "ResultAnalyticalMethod/MethodIdentifierContext":np.str_,
"ResultAnalyticalMethod/MethodName":np.str_, "MethodDescriptionText":np.str_, "LaboratoryName":np.str_,
"AnalysisStartDate":np.str_, "ResultLaboratoryCommentText":np.str_,
"DetectionQuantitationLimitTypeName":np.str_, "DetectionQuantitationLimitMeasure/MeasureValue":np.str_,
"DetectionQuantitationLimitMeasure/MeasureUnitCode":np.str_, "PreparationStartDate":np.str_,
"ProviderName":np.str_}
dt = [6,56,61]
Explanation: WQP
The following URLs are REST-based queries that download WQP data from the <a href="http://www.waterqualitydata.us/portal/">WQP portal</a>. If you click on them, they will produce zipped csv files that can be opened and processed with the code below. Originally, the code applied these links directly, but the files are large and take a long time to download.
Station data address:
http://waterqualitydata.us/Station/search?statecode=US%3A49&countycode=US%3A49%3A005&sampleMedia=Water&characteristicType=Information%3BInorganics%2C+Major%2C+Metals%3BInorganics%2C+Major%2C+Non-metals%3BInorganics%2C+Minor%2C+Metals%3BInorganics%2C+Minor%2C+Non-metals%3BNot+Assigned%3BNutrient%3BPhysical%3BStable+Isotopes&mimeType=csv&zip=yes&sorted=no
Result data address:
http://waterqualitydata.us/Result/search?statecode=US%3A49&countycode=US%3A49%3A005&sampleMedia=Water&characteristicType=Information%3BInorganics%2C+Major%2C+Metals%3BInorganics%2C+Major%2C+Non-metals%3BInorganics%2C+Minor%2C+Metals%3BInorganics%2C+Minor%2C+Non-metals%3BNot+Assigned%3BNutrient%3BPhysical%3BStable+Isotopes&mimeType=csv&zip=yes&sorted=no
WQP Results
Define the data type of each field in the WQP database. This allows for easy import of data. Everything under this header can be achieved using wc.WQP.WQPimportRes(WQPResultsFile) then wc.WQP.WQPmassageResults(df)
End of explanation
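To see why the dtype dictionary matters, here is the same pattern on a tiny invented extract (the column names are real WQP fields, the rows are made up): string dtypes keep identifier-like columns from being mangled, and `parse_dates` by column index mirrors the `dt = [6,56,61]` list above.

```python
import io

import numpy as np
import pandas as pd

# Two hypothetical rows with the same idea as the full WQP pull.
csv = io.StringIO(
    "OrganizationIdentifier,ActivityStartDate,ResultMeasureValue\n"
    "UTAHDWQ,2014-05-03,12.5\n"
    "USGS-UT,2014-06-01,0.8\n"
)
demo = pd.read_csv(
    csv,
    dtype={"OrganizationIdentifier": np.str_, "ResultMeasureValue": np.str_},
    parse_dates=[1],  # by column index, matching the dt = [6, 56, 61] pattern
)
print(demo.dtypes)
```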
WQP = pd.read_csv(WQPResultsFile, dtype=Rdtypes, parse_dates=dt)
Explanation: Read csv data into python.
End of explanation
ResFieldDict = {"AnalysisStartDate":"AnalysisDate", "ResultAnalyticalMethod/MethodIdentifier":"AnalytMeth",
"ResultAnalyticalMethod/MethodName":"AnalytMethId", "ResultDetectionConditionText":"DetectCond",
"ResultLaboratoryCommentText":"LabComments", "LaboratoryName":"LabName",
"DetectionQuantitationLimitTypeName":"LimitType", "DetectionQuantitationLimitMeasure/MeasureValue":"MDL",
"DetectionQuantitationLimitMeasure/MeasureUnitCode":"MDLUnit", "MethodDescriptionText":"MethodDescript",
"OrganizationIdentifier":"OrgId", "OrganizationFormalName":"OrgName", "CharacteristicName":"Param",
"ProjectIdentifier":"ProjectId", "MeasureQualifierCode":"QualCode", "ResultCommentText":"ResultComment",
"ResultStatusIdentifier":"ResultStatus", "ResultMeasureValue":"ResultValue",
"ActivityCommentText":"SampComment", "ActivityDepthHeightMeasure/MeasureValue":"SampDepth",
"ActivityDepthAltitudeReferencePointText":"SampDepthRef",
"ActivityDepthHeightMeasure/MeasureUnitCode":"SampDepthU", "SampleCollectionEquipmentName":"SampEquip",
"ResultSampleFractionText":"SampFrac", "ActivityStartDate":"SampleDate", "ActivityIdentifier":"SampleId",
"ActivityStartTime/Time":"SampleTime", "ActivityMediaSubdivisionName":"SampMedia",
"SampleCollectionMethod/MethodIdentifier":"SampMeth", "SampleCollectionMethod/MethodName":"SampMethName",
"ActivityTypeCode":"SampType", "MonitoringLocationIdentifier":"StationId",
"ResultMeasure/MeasureUnitCode":"Unit", "USGSPCode":"USGSPCode",
"ActivityStartDate":"StartDate","ActivityStartTime/Time":"StartTime"}
WQP.rename(columns=ResFieldDict,inplace=True)
Explanation: Rename columns to match with other data later.
End of explanation
resdroplist = ["ActivityBottomDepthHeightMeasure/MeasureUnitCode", "ActivityBottomDepthHeightMeasure/MeasureValue",
"ActivityConductingOrganizationText", "ActivityEndDate", "ActivityEndTime/Time",
"ActivityEndTime/TimeZoneCode", "ActivityMediaName", "ActivityStartTime/TimeZoneCode",
"ActivityTopDepthHeightMeasure/MeasureUnitCode", "ActivityTopDepthHeightMeasure/MeasureValue",
"HydrologicCondition", "HydrologicEvent", "PrecisionValue", "PreparationStartDate", "ProviderName",
"ResultAnalyticalMethod/MethodIdentifierContext", "ResultDepthAltitudeReferencePointText",
"ResultDepthHeightMeasure/MeasureUnitCode", "ResultDepthHeightMeasure/MeasureValue",
"ResultParticleSizeBasisText", "ResultTemperatureBasisText",
"ResultTimeBasisText", "ResultValueTypeName", "ResultWeightBasisText", "SampleAquifer",
"SampleCollectionMethod/MethodIdentifierContext", "SampleTissueAnatomyName", "StatisticalBaseCode",
"SubjectTaxonomicName","StartTime","StartDate","StartTime","StartDate"]
Explanation: Define unneeded columns that will be dropped to save memory.
End of explanation
def datetimefix(x, format):
    '''
    This script cleans date-time errors
    input
        x = (date string, time string) pair
        format = format of the combined date-time string
    output
        formatted datetime type
    '''
    d = str(x[0]).lstrip().rstrip()[0:10]
    t = str(x[1]).lstrip().rstrip()[0:5].zfill(5)
    try:
        int(d[0:2])
    except (ValueError, TypeError, NameError):
        return np.nan
    try:
        int(t[0:2])
        int(t[3:5])
    except (ValueError, TypeError, NameError):
        t = "00:00"
    if int(t[0:2]) > 23:
        t = "00:00"
    elif int(t[3:5]) > 59:
        t = "00:00"
    else:
        t = t[0:2].zfill(2) + ":" + t[3:5]
    return datetime.datetime.strptime(d + " " + t, format)

WQP["SampleDate"] = WQP[["StartDate","StartTime"]].apply(lambda x: datetimefix(x, "%Y-%m-%d %H:%M"), 1)
Explanation: Define a function to fix funky dates found in the WQP database. This includes impossible dates or dates with too many numbers.
End of explanation
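To see what the cleaning actually does, here is the same function applied to a couple of malformed inputs (the function is reproduced from the cell above so this example runs on its own):

```python
import datetime

import numpy as np


def datetimefix(x, format):
    # Reproduced from the cell above so this cell is self-contained.
    d = str(x[0]).lstrip().rstrip()[0:10]
    t = str(x[1]).lstrip().rstrip()[0:5].zfill(5)
    try:
        int(d[0:2])
    except (ValueError, TypeError, NameError):
        return np.nan
    try:
        int(t[0:2])
        int(t[3:5])
    except (ValueError, TypeError, NameError):
        t = "00:00"
    if int(t[0:2]) > 23:
        t = "00:00"
    elif int(t[3:5]) > 59:
        t = "00:00"
    else:
        t = t[0:2].zfill(2) + ":" + t[3:5]
    return datetime.datetime.strptime(d + " " + t, format)


# An impossible time (hour 25) falls back to midnight;
# a sane record parses normally.
print(datetimefix(("2014-05-03", "25:15"), "%Y-%m-%d %H:%M"))  # 2014-05-03 00:00:00
print(datetimefix(("2014-05-03", "14:30"), "%Y-%m-%d %H:%M"))  # 2014-05-03 14:30:00
```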
WQP.drop(resdroplist,inplace=True,axis=1)
Explanation: Drop unwanted fields
End of explanation
# to_numeric coerces non-numeric strings to NaN
WQP['ResultValue'] = pd.to_numeric(WQP['ResultValue'], errors='coerce')
WQP['MDL'] = pd.to_numeric(WQP['MDL'], errors='coerce')
Explanation: Convert result values and the MDL values to numeric fields from string fields.
End of explanation
WQP['StationId'] = WQP['StationId'].str.replace('_WQX-','-')
Explanation: Remove station duplicates by removing the preceding 'WQX' found in the station id field.
End of explanation
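A small illustration of the dedup idea with made-up station ids: the same site reported with and without the legacy `_WQX` infix collapses to a single id after the replacement.

```python
import pandas as pd

# Hypothetical station ids; the first two refer to the same site.
ids = pd.Series(["UTAHDWQ_WQX-4900440", "UTAHDWQ-4900440", "USGS-UT-414512111495601"])
cleaned = ids.str.replace("_WQX-", "-", regex=False)
print(cleaned.nunique())  # 2 distinct sites instead of 3
```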
# standardize all ug/l data to mg/l
def unitfix(x):
    z = str(x).lower()
    if z == "ug/l":
        return "mg/l"
    elif z == "mg/l":
        return "mg/l"
    else:
        return x

WQP.Unit = WQP.Unit.apply(lambda x: str(x).rstrip(), 1)
WQP.ResultValue = WQP[["ResultValue","Unit"]].apply(lambda x: x[0]/1000 if str(x[1]).lower()=="ug/l" else x[0], 1)
WQP.Unit = WQP.Unit.apply(lambda x: unitfix(x), 1)
Explanation: Standardize all ug/l data to mg/l by changing out the unit descriptor and dividing by 1000.
End of explanation
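The two-step pattern above — divide ug/l values by 1000, then relabel the unit — can be seen on a toy frame (values are made up):

```python
import pandas as pd

demo = pd.DataFrame({"ResultValue": [1500.0, 2.5], "Unit": ["ug/l", "mg/l"]})
# Step 1: rescale micrograms-per-liter rows to milligrams-per-liter
demo.ResultValue = demo.apply(
    lambda x: x.ResultValue / 1000 if str(x.Unit).lower() == "ug/l" else x.ResultValue,
    axis=1)
# Step 2: relabel the unit so all rows read mg/l
demo.Unit = demo.Unit.apply(lambda u: "mg/l" if str(u).lower() == "ug/l" else u)
print(demo)
```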
def parnorm(x):
    p = str(x[0]).rstrip().lstrip().lower()
    u = str(x[2]).rstrip().lstrip().lower()
    if p == 'nitrate' and u == 'mg/l as n':
        return 'Nitrate', x[1]*4.427, 'mg/l'
    elif p == 'nitrite' and u == 'mg/l as n':
        return 'Nitrite', x[1]*3.285, 'mg/l'
    elif p == 'ammonia-nitrogen' or p == 'ammonia-nitrogen as n' or p == 'ammonia and ammonium':
        return 'Ammonium', x[1]*1.288, 'mg/l'
    elif p == 'ammonium' and u == 'mg/l as n':
        return 'Ammonium', x[1]*1.288, 'mg/l'
    elif p == 'sulfate as s':
        return 'Sulfate', x[1]*2.996, 'mg/l'
    elif p in ('phosphate-phosphorus', 'phosphate-phosphorus as p', 'orthophosphate as p'):
        return 'Phosphate', x[1]*3.066, 'mg/l'
    elif (p == 'phosphate' or p == 'orthophosphate') and u == 'mg/l as p':
        return 'Phosphate', x[1]*3.066, 'mg/l'
    elif u == 'ug/l':
        return x[0], x[1]/1000, 'mg/l'
    else:
        return x[0], x[1], str(x[2]).rstrip()

WQP['Param'], WQP['ResultValue'], WQP['Unit'] = zip(*WQP[['Param','ResultValue','Unit']].apply(lambda x: parnorm(x), 1))
Explanation: Normalize common nutrients so that they are all in the same type of units. For example, sometimes nitrate is reported "as nitrogen" and sometimes it is reported "as nitrate". The difference between the two types of reporting is a factor of 4.427!
End of explanation
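The conversion factors used in parnorm are molar-mass ratios, so the 4.427 figure can be derived rather than memorized (atomic masses below are rounded):

```python
# Approximate atomic masses (g/mol)
N, H, O, S, P = 14.007, 1.008, 15.999, 32.06, 30.974

nitrate_from_n = (N + 3 * O) / N      # NO3 / N
nitrite_from_n = (N + 2 * O) / N      # NO2 / N
ammonium_from_n = (N + 4 * H) / N     # NH4 / N
sulfate_from_s = (S + 4 * O) / S      # SO4 / S
phosphate_from_p = (P + 4 * O) / P    # PO4 / P

print(round(nitrate_from_n, 3))  # 4.427
```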
WQPStat = pd.read_csv(WQPStationFile)
Explanation: WQP Stations
Read in WQP station data.
End of explanation
StatFieldDict = {"MonitoringLocationIdentifier":"StationId", "AquiferName":"Aquifer", "AquiferTypeName":"AquiferType",
"ConstructionDateText":"ConstDate", "CountyCode":"CountyCode", "WellDepthMeasure/MeasureValue":"Depth",
"WellDepthMeasure/MeasureUnitCode":"DepthUnit", "VerticalMeasure/MeasureValue":"Elev",
"VerticalAccuracyMeasure/MeasureValue":"ElevAcc", "VerticalAccuracyMeasure/MeasureUnitCode":"ElevAccUnit",
"VerticalCollectionMethodName":"ElevMeth", "VerticalCoordinateReferenceSystemDatumName":"ElevRef",
"VerticalMeasure/MeasureUnitCode":"ElevUnit", "FormationTypeText":"FmType",
"WellHoleDepthMeasure/MeasureValue":"HoleDepth", "WellHoleDepthMeasure/MeasureUnitCode":"HoleDUnit",
"HorizontalAccuracyMeasure/MeasureValue":"HorAcc", "HorizontalAccuracyMeasure/MeasureUnitCode":"HorAccUnit",
"HorizontalCollectionMethodName":"HorCollMeth", "HorizontalCoordinateReferenceSystemDatumName":"HorRef",
"HUCEightDigitCode":"HUC8", "LatitudeMeasure":"Lat_Y", "LongitudeMeasure":"Lon_X",
"OrganizationIdentifier":"OrgId", "OrganizationFormalName":"OrgName", "StateCode":"StateCode",
"MonitoringLocationDescriptionText":"StationComment", "MonitoringLocationName":"StationName",
"MonitoringLocationTypeName":"StationType"}
WQPStat.rename(columns=StatFieldDict,inplace=True)
Explanation: Rename and simplify station data column names for later compilation.
End of explanation
statdroplist = ["ContributingDrainageAreaMeasure/MeasureUnitCode", "ContributingDrainageAreaMeasure/MeasureValue",
"DrainageAreaMeasure/MeasureUnitCode", "DrainageAreaMeasure/MeasureValue", "CountryCode", "ProviderName",
"SourceMapScaleNumeric"]
WQPStat.drop(statdroplist,inplace=True,axis=1)
Explanation: Define the fields to drop to save memory.
End of explanation
TypeDict = {"Stream: Canal":"Stream", "River/Stream":"Stream",
            "Well: Test hole not completed as a well":"Well"}
WQPStat.StationType = WQPStat["StationType"].apply(lambda x: TypeDict.get(x,x),1)
WQPStat.Elev = WQPStat.Elev.apply(lambda x: np.nan if x==0.0 else round(x,1), 1)
Explanation: Make station types in the StationType field consistent for easier summary and compilation later on.
End of explanation
WQPStat['StationId'] = WQPStat['StationId'].str.replace('_WQX-','-')
WQPStat.drop_duplicates(subset=['StationId'],inplace=True)
Explanation: Remove preceding WQX from StationId field to remove duplicate station data created by legacy database.
End of explanation
SDWIS = pd.read_csv(SDWISFile)
def sampid(x):
    return "SDWIS" + str(x[0]) + str(x[1]) + str(x[2])[:-7]

def statid(x):
    return "SDWIS" + str(x[0]) + str(x[1])

def statnm(x):
    return str(str(x[0]) + " " + str(x[1])).title()
SDWIS["StationId"] = SDWIS[["FED_NM","ST_ID"]].apply(lambda x: statid(x),1)
SDWIS["StationName"] = SDWIS[["SYS_NM","FAC_NM"]].apply(lambda x: statnm(x),1)
SDWIS["SampleId"] = SDWIS[["FED_NM","ST_ID","DTE"]].apply(lambda x: sampid(x),1)
SDWIS["OrgId"] = "UDDW"
SDWIS["OrgName"] = "Utah Division of Drinking Water"
SDWIS["Elev"] = SDWIS["Z"].apply(lambda x: round(x*3.2808,1),1)
SDWIS["Unit"] = SDWIS["UOM_CODE"].apply(lambda x: str(x).lower(),1)
SDWIS["MDLUnit"] = SDWIS["DET_UOM"].apply(lambda x: str(x).lower(),1)
SDWIS["Param"] = SDWIS["ANLY_NM"].apply(lambda x: str(x).title().rstrip(),1)
SDWISFields ={"DTE":"SampleDate", "TYPE_CODE":"StationType",
"CONC_MSR":"ResultValue", "DET_LIM":"MDL",
"Y":"Lat_Y", "X":"Lon_X"}
SDWIS.rename(columns=SDWISFields,inplace=True)
def datetimefixSDWIS(x, format):
    d = str(x).lstrip().rstrip()
    try:
        return datetime.datetime.strptime(d, "%m/%d/%Y %H:%M:%S")
    except ValueError:
        return datetime.datetime.strptime(d, "%Y-%m-%d %H:%M:%S")
SDWIS["SampleDate"] = SDWIS["SampleDate"].apply(lambda x: datetimefixSDWIS(x,"%m/%d/%Y %H:%M:%S"),1)
Explanation: SDWIS
SDWIS data were extracted from the Utah SDWIS database into ArcGIS 10.3.2 using the following SQL query. NED 10m elevation and UTM coordinates were appended using ArcGIS.
SQL
SELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS "FED_NM", UTV80.TINWSF.ST_ASGN_IDENT_CD AS "ST_ID", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS "SYS_NM", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS "COUNTY", UTV80.TINWSF.NAME AS "FAC_NM", UTV80.TINWSF.TINWSYS_IS_NUMBER AS "SY_NBR", UTV80.TINLOC.LATITUDE_MEASURE AS "Y", UTV80.TINLOC.LONGITUDE_MEASURE AS "X", UTV80.TINLOC.VERTICAL_MEASURE AS "Z", UTV80.TSASAMPL.COLLLECTION_END_DT AS "DTE", UTV80.TSAANLYT.NAME AS "ANLY_NM", UTV80.TSASAR.CONCENTRATION_MSR AS "CONC_MSR", UTV80.TSASAR.TSASAR_IS_NUMBER AS "ID_NUM", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS "DET_LIM", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS "DET_UOM" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE (UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%') AND (UTV80.TSAANLYT.NAME LIKE '%NITRATE%' OR UTV80.TSAANLYT.NAME LIKE '%NITRITE%' OR UTV80.TSAANLYT.NAME LIKE '%AMMONI%' OR UTV80.TSAANLYT.NAME LIKE '%SULFATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%SODIUM%' OR UTV80.TSAANLYT.NAME LIKE '%FLUORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%MAGNESIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CALCIUM%' OR UTV80.TSAANLYT.NAME LIKE '%CHLORIDE%' OR UTV80.TSAANLYT.NAME LIKE '%POTASSIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SELENIUM%' OR UTV80.TSAANLYT.NAME LIKE '%SILICA%' OR UTV80.TSAANLYT.NAME LIKE '%IRON %' OR UTV80.TSAANLYT.NAME LIKE '%ALKA %' OR UTV80.TSAANLYT.NAME LIKE '%CONDUCTIVITY%' OR 
UTV80.TSAANLYT.NAME LIKE '%PH %' OR UTV80.TSAANLYT.NAME LIKE '%TEMP%' OR UTV80.TSAANLYT.NAME LIKE '%ARSENIC%' OR UTV80.TSAANLYT.NAME LIKE '%CARBON%' OR UTV80.TSAANLYT.NAME LIKE '%TRITIUM%' OR UTV80.TSAANLYT.NAME LIKE '%COPPER%' OR UTV80.TSAANLYT.NAME LIKE '%LEAD%' OR UTV80.TSAANLYT.NAME LIKE '%NITROGEN%' OR UTV80.TSAANLYT.NAME LIKE '%PHOSPHATE%' OR UTV80.TSAANLYT.NAME LIKE '%TDS%' OR UTV80.TSAANLYT.NAME LIKE '%ZINC%' OR UTV80.TSAANLYT.NAME LIKE '%IRON%' OR UTV80.TSAANLYT.NAME LIKE '%CHROMIUM%' ) ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD
SQL
SELECT UTV80.TINWSF.EXTERNAL_SYS_NUM AS "FED_NM", UTV80.TINWSF.ST_ASGN_IDENT_CD AS "ST_ID", UTV80.TINWSF.TYPE_CODE, UTV80.TINWSYS.NAME AS "SYS_NM", UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM AS "COUNTY", UTV80.TINWSF.NAME AS "FAC_NM", UTV80.TINWSF.TINWSYS_IS_NUMBER AS "SY_NBR", UTV80.TINLOC.LATITUDE_MEASURE AS "Y", UTV80.TINLOC.LONGITUDE_MEASURE AS "X", UTV80.TINLOC.VERTICAL_MEASURE AS "Z", UTV80.TSASAMPL.COLLLECTION_END_DT AS "DTE", UTV80.TSAANLYT.NAME AS "ANLY_NM", UTV80.TSASAR.CONCENTRATION_MSR AS "CONC_MSR", UTV80.TSASAR.TSASAR_IS_NUMBER AS "ID_NUM", UTV80.TSASAR.UOM_CODE, UTV80.TSASAR.DETECTN_LIMIT_NUM AS "DET_LIM", UTV80.TSASAR.DETECTN_LIM_UOM_CD AS "DET_UOM" FROM UTV80.TINWSF INNER JOIN UTV80.TINWSYS ON UTV80.TINWSF.TINWSYS_IS_NUMBER = UTV80.TINWSYS.TINWSYS_IS_NUMBER INNER JOIN UTV80.TINLOC ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TINLOC.TINWSF_IS_NUMBER INNER JOIN UTV80.TSASMPPT ON UTV80.TINWSF.TINWSF_IS_NUMBER = UTV80.TSASMPPT.TINWSF0IS_NUMBER INNER JOIN UTV80.TSASAMPL ON UTV80.TSASMPPT.TSASMPPT_IS_NUMBER = UTV80.TSASAMPL.TSASMPPT_IS_NUMBER INNER JOIN UTV80.TSASAR ON UTV80.TSASAMPL.TSASAMPL_IS_NUMBER = UTV80.TSASAR.TSASAMPL_IS_NUMBER INNER JOIN UTV80.TSAANLYT ON UTV80.TSASAR.TSAANLYT_IS_NUMBER = UTV80.TSAANLYT.TSAANLYT_IS_NUMBER WHERE UTV80.TINWSYS.D_PRIN_CNTY_SVD_NM LIKE '%CACHE COUNTY%' AND (UTV80.TINWSYS.NAME IN('%PROVID%','%MILL%','%LOG%','%NIB%', ORDER BY UTV80.TINWSF.ST_ASGN_IDENT_CD
Read in the queried SDWIS data and make a StationId and StationName field. Make field names consistent with those applied to WQP data above so that compilation is easier later.
End of explanation
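One detail worth isolating is the date handling: the SDWIS extract mixes two date formats, and the try/except fallback in datetimefixSDWIS above covers both. A self-contained sketch of that idea:

```python
import datetime


def parse_mixed_date(s):
    # Try the US-style format first, then fall back to ISO,
    # mirroring the datetimefixSDWIS fallback above.
    d = str(s).strip()
    try:
        return datetime.datetime.strptime(d, "%m/%d/%Y %H:%M:%S")
    except ValueError:
        return datetime.datetime.strptime(d, "%Y-%m-%d %H:%M:%S")


print(parse_mixed_date("05/03/2014 00:00:00"))  # 2014-05-03 00:00:00
print(parse_mixed_date("2014-05-03 00:00:00"))  # 2014-05-03 00:00:00
```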
print(sorted(SDWIS.Param.unique()))

def parnormSDWIS(x):
    p = str(x[0]).rstrip().lstrip().lower()
    u = str(x[2]).rstrip().lstrip().lower()
    if p == 'nitrate':
        return 'Nitrate', x[1]*4.427, 'mg/l'
    elif p == 'nitrite':
        return 'Nitrite', x[1]*3.285, 'mg/l'
    elif p == 'nitrogen-ammonia as (n)':
        return 'Ammonium', x[1]*1.288, 'mg/l'
    elif u == 'ug/l':
        return x[0], x[1]/1000, 'mg/l'
    else:
        return x[0], x[1], str(x[2]).rstrip()

SDWIS['Param'], SDWIS['ResultValue'], SDWIS['Unit'] = zip(*SDWIS[['Param','ResultValue','Unit']].apply(lambda x: parnormSDWIS(x), 1))
Explanation: Normalize units and nutrient data so that they are consistent with the WQP data. This includes converting nitrogen species reported as N to full-ion concentrations and standardizing ug/l to mg/l.
End of explanation
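The multipliers in `parnormSDWIS` come from molar-mass ratios: a result reported "as N" is multiplied by the ratio of the ion's molar mass to that of nitrogen. A small illustrative sketch (atomic masses are standard values, not taken from the source):

```python
# Illustrative sketch: derive the as-N conversion factors used in parnormSDWIS.
N, O, H = 14.0067, 15.9994, 1.00794  # atomic masses, g/mol

def as_n_factor(o_atoms=0, h_atoms=0):
    # ratio of the ion's molar mass to nitrogen's molar mass
    return (N + o_atoms * O + h_atoms * H) / N

NITRATE_FACTOR = as_n_factor(o_atoms=3)   # NO3-N -> NO3, ~4.427
NITRITE_FACTOR = as_n_factor(o_atoms=2)   # NO2-N -> NO2, ~3.285
AMMONIUM_FACTOR = as_n_factor(h_atoms=4)  # NH4-N -> NH4, ~1.288
```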
SDWIS.drop(["FED_NM", "DET_UOM", "UOM_CODE","ANLY_NM", "FAC_NM", "ST_ID",
"SYS_NM", "COUNTY", "SY_NBR", "Z", "ID_NUM"],inplace=True, axis=1)
Explanation: Drop unneeded SDWIS fields to save memory and reduce confusion.
End of explanation
SDWISPmatch = {"Ph":"pH","Tds":"TDS","Nitrogen-Ammonia As (N)":"Nitrogen-Ammonia as (N)",
"Hydroxide As Calcium Carbonate":"Hydroxide as Calcium Carbonate",
"Bicarbonate As Hco3":"Bicarbonate as HCO3"}
SDWIS["Param"] = SDWIS["Param"].apply(lambda x: SDWISPmatch.get(x,x))
SDWIS["StationName"] = SDWIS["StationName"].apply(lambda x: x.replace("Wtp","WTP"))
SDWIS["ResultValue"] = SDWIS[["ResultValue","Unit"]].apply(lambda x: x[0]/1000 if x[1]=="ug/L" else x[0], 1)
Explanation: Rename chemical parameters in the SDWIS Param field to match those of the WQP data.
End of explanation
SDWISType = {"SP":"Spring","WL":"Well","TP":"Facility Other","IN":"Stream","CC":"Connection","WH":"Well"}
SDWIS.StationType = SDWIS.StationType.apply(lambda x: SDWISType.get(x,x),1)
Explanation: Make station types consistent with the WQP data.
End of explanation
SDWISSta = SDWIS.drop([u'SampleDate', u'ResultValue', u'MDL', u'SampleId', u'Unit', u'MDLUnit', u'Param'], axis=1)
SDWISSta.drop_duplicates(inplace=True)
Explanation: SDWIS facility type code (FacTypeCode): CC, Consecutive_connection;
CH, Common_headers; CS, Cistern; CW, Clear_well;
DS, Distribution_system_zone; IG, Infiltration_gallery; IN,
Intake; NP, Non-piped; OT, Other; PC, Pressure_control;
PF, Pump_facility; RC, Roof_catchment; RS, Reservoir; SI,
Surface_impoundment; SP, Spring; SS, Sampling_station; ST,
Storage; TM, Transmission_main; TP, Treatment_plant; WH,
Well_head; WL, Well.
Create a SDWIS stations file from the SDWIS data. Drop unneeded fields from the station file. Remove duplication stations.
End of explanation
SDWISRes = SDWIS.drop([u'StationType', u'Lat_Y', u'Lon_X', u'StationName', u'Elev'], axis=1)
Explanation: Create a SDWIS results file from the SDWIS data. Drop unneeded fields from the results file; these fields already exist in the station table and describe stations rather than individual results.
End of explanation
SDWISRes["SampMedia"] = "Groundwater"
Explanation: Create a sample media field and populate it with the value Groundwater.
End of explanation
AGStat = pd.read_csv(AGStationsFile)
AGStat["StationType"] = "Well"
AGStatFields = {"SITEID":"StationId","FINISHEDDE":"Depth","POINT_Y":"Lat_Y",
"POINT_X":"Lon_X","ELEV_FT":"Elev","ACCURACY":"HorAcc"}
AGStat.rename(columns=AGStatFields,inplace=True)
Explanation: UDAF
UDAF Stations
Import Utah Department of Agriculture and Food (UDAF) data from the data file. These data were compiled from <a href=http://ag.utah.gov/conservation-environmental/ground-water.html>reports available on the UDAF website</a>. Once the data are imported, rename the fields to match the above SDWIS and WQP data.
End of explanation
AGStat.drop(["OBJECTID_1", "OBJECTID", "PUB_YR", "SAMPLENO", "WLDATE", "WLDEPTH"], inplace=True, axis=1)
Explanation: Drop unneeded fields to save memory.
End of explanation
AGStat.StationId = AGStat.StationId.apply(lambda x: "UDAF-"+str(int(x)).zfill(5),1)
Explanation: Add UDAF prefix to the station identification field (StationId) to make station ids unique.
End of explanation
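The prefix-and-pad step can be sketched as a small stand-alone helper (the name `make_station_id` is illustrative, not from the source):

```python
# Hypothetical helper mirroring the StationId construction above: pad the
# numeric site id to five digits, then prepend the agency prefix.
def make_station_id(site, prefix="UDAF", width=5):
    return "%s-%s" % (prefix, str(int(site)).zfill(width))
```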
names = ["SampleId","ResultValue", "ParAbb", "Unit", "Param", "MDL","BelowLim","TestNo",
"StationId","SampleDate","SampYear"]
AGRes = pd.read_csv(AGResultsFile, names=names, index_col=10)
Explanation: UDAF Results
Import Utah Department of Agriculture and Food data from the data file. These data were compiled from reports available on the UDAF website. Once the data are imported, rename the fields to match the above SDWIS and WQP data.
End of explanation
AGRes["DetectCond"] = AGRes["BelowLim"].apply(lambda x: 'Not Detected' if x=='Y' else np.nan,1)
Explanation: Create a detection condition field and populate it based on values in the imported data.
End of explanation
AGRes.ResultValue = AGRes[["BelowLim","ResultValue"]].apply(lambda x: np.nan if x[0]=="Y" or x[1] == 0.0 else x[1], 1)
Explanation: Set result values to null (NaN) when data are reported as below the detection limit or equal to zero; these censored values are flagged separately in the DetectCond field.
End of explanation
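Stripped of the pandas plumbing, the censoring rule above amounts to this minimal stand-alone sketch:

```python
# Minimal sketch of the censoring step above: below-detection-limit flags
# ('Y') and zero readings both become NaN rather than numeric zeros.
def censor(below_lim_flag, value):
    if below_lim_flag == "Y" or value == 0.0:
        return float("nan")
    return value
```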
def parnormAG(x):
p = str(x[0]).rstrip().lstrip().lower()
u = str(x[2]).rstrip().lstrip().lower()
if p == 'nitrate-n':
return 'Nitrate', x[1]*4.427, 'mg/l'
elif u == 'ug/l':
return x[0], x[1]/1000, 'mg/l'
else:
return x[0], x[1], str(x[2]).rstrip()
AGRes['Param'], AGRes['ResultValue'], AGRes['Unit'] = zip(*AGRes[['Param','ResultValue','Unit']].apply(lambda x: parnormAG(x),1))
AGRes.Unit.unique()
AGRes.dropna(subset=["StationId","ResultValue"], how="any", inplace=True)
AGRes.StationId = AGRes.StationId.apply(lambda x: "UDAF-"+str(int(x)).zfill(5),1)
AGStAv = list(AGStat.StationId.values)
AGRes = AGRes[AGRes.StationId.isin(AGStAv)]
AGRes["SampMedia"] = "Groundwater"
AGStat['OrgId']='UDAF'
Explanation: Make data consistent by cleaning up parameter descriptions.
End of explanation
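The `zip(*df.apply(...))` idiom used above (and throughout this notebook) assigns several derived columns from a single row-wise function: `apply` yields one tuple per row, and `zip(*...)` transposes those tuples into columns. A minimal stand-alone sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical two-row frame illustrating the zip(*apply) idiom used above.
df = pd.DataFrame({'param': ['Nitrate as N', 'Chloride'], 'value': [1.0, 10.0]})

def normalize(row):
    if row['param'] == 'Nitrate as N':
        return 'Nitrate', row['value'] * 4.427  # convert as-N to NO3 mass
    return row['param'], row['value']

# apply() yields one tuple per row; zip(*...) transposes them into columns
df['param'], df['value'] = zip(*df.apply(normalize, axis=1))
```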
STORLegSta = pd.read_table(STORLegStatFile, skiprows=[1])
STORLegRes = pd.read_table(STORLegResFile, skiprows=[1])
STORParam = pd.read_table(STORParamFile)
Explanation: STORET Legacy
Legacy EPA data are kept in the <a href=ftp://ftp.epa.gov/storet/exports/>STORET Legacy Database</a>.
End of explanation
# strip stray tabs and whitespace from the exported column names
rescol = list(STORLegRes.columns)
resdict = {i: i.rstrip("\t").strip().replace(" ", "") for i in rescol}
STORLegRes.rename(columns=resdict, inplace=True)
statcol = list(STORLegSta.columns)
statdict = {i: i.rstrip("\t").strip().replace(" ", "") for i in statcol}
STORLegSta.rename(columns=statdict, inplace=True)
STORLegRes["SampleDate"] = STORLegRes[["StartDate","StartTime"]].apply(lambda x: datetimefix(x,"%Y-%m-%d %H:%M"),1)
STORLegRes = STORLegRes[STORLegRes.SecondaryActivityCategory.isin(['Water',np.nan])]
STORParamDict = dict(zip(STORParam['Parameter No.'].values, STORParam['Full Name'].values))
STORLegRes.Param = STORLegRes.Param.apply(lambda x: STORParamDict.get(x),1)
STORResField = {"Agency":"OrgId","AgencyName":"OrgName","Station":"StationId","SampleDepth":"SampDepth"}
STORLegRes.rename(columns=STORResField,inplace=True)
STORLegRes.drop(["StateName", "CountyName", "HUC", "EndDate", "UMK", "CS", "ReplicateNumber",
"COMPOSITE_GRAB_NUMBER","CM","PrimaryActivityCategory","PrimaryActivityCategory",
"SecondaryActivityCategory",
"EndTime", "StartDate", "StartTime", "Latitude", "Longitude"],inplace=True,axis=1)
STORLegRes["SampleId"] = STORLegRes[["StationId","SampleDate"]].apply(lambda x: str(x[0]) + "-" + str(x[1]),1 )
STORLegRes["StationId"] = STORLegRes["StationId"].apply(lambda x: "EPALeg-" + x, 1)
STORLegRes.Param = STORLegRes.Param.apply(lambda x: str(x).title(),1)
STORLegRes.columns
def parnormSTOR(x):
    p = str(x[0]).rstrip().lstrip().lower()
    # p is lowercased above, so every comparison string must be lowercase too
    if p == 'nitrate nitrogen, total (mg/l as n)' or p == 'nitrate nitrogen, total':
        return 'Nitrate', x[1]*4.427, 'mg/l'
    elif p == 'nitrite nitrogen, total (mg/l as n)':
        return 'Nitrite', x[1]*3.285, 'mg/l'
    elif p == 'nitrogen, ammonia, total (mg/l as n)':
        return 'Ammonium', x[1]*1.288, 'mg/l'
    elif p == 'sulfate (as s) whole water, mg/l':
        return 'Sulfate', x[1]*2.996, 'mg/l'
    elif p == 'phosphorus, dissolved orthophosphate (mg/l as p)':
        return 'Phosphate', x[1]*3.066, 'mg/l'
    else:
        return x[0], x[1], np.nan
STORLegRes['Param'], STORLegRes['ResultValue'], STORLegRes['Unit'] = zip(*STORLegRes[['Param','ResultValue']].apply(lambda x: parnormSTOR(x),1))
STORKeepers = ['Nitrate', 'Nitrite', 'Ammonium', 'Sulfate', 'Phosphate',  # names assigned by parnormSTOR above
               'Temperature, Water (Degrees Centigrade)',
'Temperature, Water (Degrees Fahrenheit)',
'Specific Conductance,Field (Umhos/Cm @ 25C)',
'Specific Conductance (Umhos/Cm @ 25C)',
'Sulfate (As S) Whole Water, Mg/L',
'Oxygen, Dissolved Mg/L',
'Oxygen, Dissolved, Percent Of Saturation %',
'Bod, 5 Day, 20 Deg C Mg/L',
'Ph (Standard Units)', 'Ph, Lab, Standard Units Su',
'Carbon Dioxide (Mg/L As Co2)', 'Alkalinity,Total,Low Level Gran Analysis Ueq/L',
'Alkalinity, Total (Mg/L As Caco3)', 'Bicarbonate Ion (Mg/L As Hco3)', 'Carbonate Ion (Mg/L As Co3)',
'Nitrogen, Ammonia, Total (Mg/L As N)', 'Ammonia, Unionzed (Mg/L As N)',
'Nitrite Nitrogen, Total (Mg/L As N)', 'Ammonia, Unionized (Calc Fr Temp-Ph-Nh4) (Mg/L)',
'Nitrate Nitrogen, Total (Mg/L As N)', 'Nitrogen, Kjeldahl, Total, (Mg/L As N)',
'Nitrite Plus Nitrate, Total 1 Det. (Mg/L As N)', 'Phosphorus (P), Water, Total Recoverable Ug/L',
'Phosphorus, Total (Mg/L As P)', 'Phosphorus, Dissolved Orthophosphate (Mg/L As P)',
'Carbon, Dissolved Organic (Mg/L As C)',
'Carbon, Dissolved Inorganic (Mg/L As C)',
'Hardness, Total (Mg/L As Caco3)', 'Calcium (Mg/L As Caco3)',
'Calcium, Dissolved (Mg/L As Ca)',
'Magnesium, Dissolved (Mg/L As Mg)',
'Sodium, Dissolved (Mg/L As Na)',
'Potassium, Dissolved (Mg/L As K)',
'Chloride, Dissolved In Water Mg/L',
'Sulfate, Dissolved (Mg/L As So4)',
'Fluoride, Dissolved (Mg/L As F)',
'Silica, Dissolved (Mg/L As Si02)',
'Arsenic, Dissolved (Ug/L As As)', 'Arsenic, Total (Ug/L As As)',
'Barium, Dissolved (Ug/L As Ba)', 'Barium, Total (Ug/L As Ba)',
'Beryllium, Total (Ug/L As Be)', 'Boron, Dissolved (Ug/L As B)',
'Boron, Total (Ug/L As B)', 'Cadmium, Dissolved (Ug/L As Cd)',
'Cadmium, Total (Ug/L As Cd)', 'Chromium, Dissolved (Ug/L As Cr)',
'Chromium, Hexavalent (Ug/L As Cr)', 'Chromium, Total (Ug/L As Cr)',
'Copper, Dissolved (Ug/L As Cu)', 'Copper, Total (Ug/L As Cu)',
'Iron, Dissolved (Ug/L As Fe)', 'Lead, Dissolved (Ug/L As Pb)',
'Lead, Total (Ug/L As Pb)', 'Manganese, Total (Ug/L As Mn)',
'Manganese, Dissolved (Ug/L As Mn)', 'Thallium, Total (Ug/L As Tl)',
'Nickel, Dissolved (Ug/L As Ni)', 'Nickel, Total (Ug/L As Ni)',
'Silver, Dissolved (Ug/L As Ag)', 'Silver, Total (Ug/L As Ag)',
'Zinc, Dissolved (Ug/L As Zn)', 'Zinc, Total (Ug/L As Zn)',
'Antimony, Total (Ug/L As Sb)', 'Aluminum, Total (Ug/L As Al)',
'Selenium, Dissolved (Ug/L As Se)', 'Selenium, Total (Ug/L As Se)',
'Tritium (1H3),Total (Picocuries/Liter)',
'Hardness, Ca Mg Calculated (Mg/L As Caco3)',
'Chlorine, Total Residual (Mg/L)',
'Residue,Total Filtrable (Dried At 180C),Mg/L',
'Nitrate Nitrogen, Dissolved (Mg/L As No3)', 'Iron (Ug/L As Fe)',
'Phosphorus, Total, As Po4 - Mg/L', 'Mercury, Total (Ug/L As Hg)']
STORLegRes = STORLegRes[STORLegRes.Param.isin(STORKeepers)]
def parsplit(x, p):
    '''Split a combined STORET parameter label into its name (p="Par") or
    unit (p="Un") part, undoing capitalization damage from .title() above.'''
    x = str(x).rstrip().lstrip()
if p == "Un":
z = -1
x = str(x).replace("Mg/L", "mg/l")
x = str(x).replace("Ug/L", "ug/l")
x = str(x).replace("o", "O")
x = str(x).replace("c", "C")
x = str(x).replace("TOtal ReCOverable","Total Recoverable")
x = str(x).replace("UmhOs", "umhos")
x = str(x).replace("TOtal","Total")
elif p== "Par":
z = 0
x = str(x).replace(", Standard Units","")
x = str(x).replace(", Unionized","")
x = str(x).replace(", Unionzed","")
x = str(x).replace(",Low Level Gran Analysis","")
x = str(x).replace(" Ion","")
x = str(x).replace(",Total",", Total")
if x == "Ph" or x == "Ph, Lab":
x = str(x).replace("Ph","pH")
if "(" in x:
x = str(x).replace(" As ", " as ")
return str(x).split(" (")[z].rstrip(")").rstrip().lstrip()
else:
return str(x).split(" ")[z].rstrip().lstrip()
def splitmore(x):
if "NO3" in x:
return x
elif " as " in x:
return x.split(" as ")[0]
elif x == "As S) WhOle Water, mg/l" or x == "Dried At 180C),mg/l" or x=="PhOsphOrus, Total, As PO4 - mg/l":
return "mg/l"
elif x == "P), Water, Total Recoverable ug/l":
return "ug/l"
else:
return x
def unitconv(x):
if x[1]=="ug/l":
return x[0]/1000
elif x[1]=="Degrees Fahrenheit":
return (float(x[0])-32.0)*(5.0/9.0)
else:
return x[0]
STORLegRes["Unit"] = STORLegRes["Param"].apply(lambda x: parsplit(x,"Un"), 1)
STORLegRes["Param"] = STORLegRes["Param"].apply(lambda x: parsplit(x,"Par"), 1)
STORLegRes["Unit"] = STORLegRes["Unit"].apply(lambda x: splitmore(x), 1)
STORLegRes["ResultValue"] = STORLegRes[["ResultValue","Unit"]].apply(lambda x: unitconv(x), 1)
STORLegRes["Unit"] = STORLegRes["Unit"].apply(lambda x: "mg/l" if x=="ug/l" else x, 1)
STORLegRes["Unit"] = STORLegRes["Unit"].apply(lambda x: "Degrees Centigrade" if x=="Degrees Fahrenheit" else x, 1)
STORStaField = {"Agency":"OrgId","AgencyName":"OrgName","Station":"StationId", "DepthUnits":"DepthUnit",
"Latitude":"Lat_Y", "Longitude":"Lon_X", "HUC":"HUC8", "StationDepth":"Depth"}
STORLegSta.rename(columns=STORStaField,inplace=True)
STORLegSta.columns
STORLegSta.drop(["RchmileSegment", "MilesUpReach", "Rchonoff", "Description", "G", "S", "StationAlias",
"Rchname", "StateName", "CountyName"], inplace=True, axis=1)
STORLegSta.StationType = STORLegSta.StationType.apply(lambda x: str(x).rstrip(" ").strip("/SUPPLY").split("/")[-1].title(),1)
LegTypeDict = {"We":"Well"}
STORLegSta.StationType = STORLegSta.StationType.apply(lambda x: LegTypeDict.get(x,x),1)
STORLegSta.StationId = STORLegSta["StationId"].apply(lambda x: "EPALeg-" + x, 1)
Explanation: Parse choppy text data from the STORET Legacy database.
End of explanation
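The essence of `parsplit` can be sketched with a toy version that only handles the common `'Name (Unit As X)'` pattern (a hedged simplification, not the full cleanup logic above):

```python
# Simplified sketch of the STORET Legacy label parsing: the parameter name and
# its unit are packed into one string, e.g. 'Calcium, Dissolved (Mg/L As Ca)'.
def split_param(label):
    label = label.strip()
    if '(' in label:
        name, rest = label.split(' (', 1)
        unit = rest.rstrip(')').split(' As ')[0]  # drop the 'As X' qualifier
        return name.strip(), unit.strip().lower()
    return label, ''
```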
UGSfield = pd.read_excel(fielddata,"FieldChem") #Field data
UGSNO3 = pd.read_excel(fielddata,"Nitrate") #Nitrate data provided by Millville City
UGS = pd.read_csv(UGSFile, engine="python")
UGS["StationId"] = UGS["SITE"].apply(lambda x:"UGS-"+str(x).zfill(4),1)
UGSSta = UGS.drop([u'OBJECTID_1',u'SITE', u'TDS', u'Temp', u'Cond', u'CO2', u'HCO3',
u'CO3',u'Na', u'pH', u'Ca', u'SO4', u'NO3', u'As_', u'Cl', u'K',
u'Mg', u'Hard', u'NH4'], axis=1)
UGSRe = UGS.drop([u'OBJECTID_1',u'SITE',u'StationType', u'Geology', u'Elev', u'Lat_Y', u'Lon_X', u'StationName',
u'OrgId', u'WRNUM', u'SITE', u'UTM_X', u'UTM_Y', u'Depth_ft'], axis=1)
UGSRe["SampleId"] = UGSRe.index
UGSRe.reset_index(inplace=True)
UGSRe.set_index(["StationId","SampleId"], inplace=True)
UGSRe.drop(UGSRe.columns[0],inplace=True,axis=1)
UGSStack = UGSRe.stack().to_frame()
UGSStack.columns = ["ResultValue"]
UGSStack.reset_index(inplace=True)
UGSStack.columns=["StationId","SampleId","Param","ResultValue"]
def unitcon(x):
if x=="pH":
return ""
elif x=="Temp":
return "C"
elif x=="Cond":
return "uS/cm"
else:
return "mg/l"
UGSStack["Unit"] = UGSStack["Param"].apply(lambda x: unitcon(x),1)
UGSStack["ParAbb"] = UGSStack["Param"]
UGSStack["OrgId"] = "UGS"
UGSStack["OrgName"] = "Utah Geological Survey"
UGSStack["ResultValue"] = UGSStack[['Param','ResultValue']].apply(lambda x: x[1]*1.288 if x[0]=='Ammonia as N' else x[1],1)
UGSStack["Param"] = UGSStack['Param'].apply(lambda x: 'Ammonia' if x=='Ammonia as N' else x, 1)
UGSStack["ResultValue"] = UGSStack[['Param','ResultValue']].apply(lambda x: x[1]*3.066 if x[0]=='Phosphate, Tot. Dig. (as P)' else x[1],1)
UGSStack["Param"] = UGSStack['Param'].apply(lambda x: 'Phosphate' if x=='Phosphate, Tot. Dig. (as P)' else x, 1)
Explanation: UGS Data
End of explanation
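The `stack()` reshape used above turns the UGS table from wide (one column per analyte) to long (one row per station/analyte pair). A minimal sketch with hypothetical data:

```python
import pandas as pd

# Hypothetical one-station frame illustrating the wide-to-long stack() reshape.
wide = pd.DataFrame({'StationId': ['UGS-0001'], 'Ca': [55.0], 'Cl': [12.0]})
wide = wide.set_index('StationId')

long_df = wide.stack().to_frame()          # MultiIndex (StationId, analyte)
long_df.columns = ['ResultValue']
long_df.reset_index(inplace=True)
long_df.columns = ['StationId', 'Param', 'ResultValue']
```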
SLSampMatch = pd.read_excel(fielddata,"StateLabMatch")
SLStat = pd.read_excel(fielddata,"Stations")
#SLStat = pd.merge(SLSampMatch, SLStations, on='StationId', how='outer')
#SLStat.reset_index(inplace=True)
SLStat
# read the three State Lab exports and keep only samples collected by the PI
frames = []
for f in (statelabresults0, statelabresults1, statelabresults2):
    sl = pd.read_table(f, sep="\t", lineterminator="\n", error_bad_lines=False)
    frames.append(sl[sl['Collector'] == 'PI'])
SL = pd.concat(frames)
SL["OrgId"] = "UGS"
SL["OrgName"] = "Utah Geological Survey"
SL['DetectCond'] = SL['Problem#Identifier'].apply(lambda x: 'Not Detected' if str(x).rstrip()=='<' else np.nan,1)
SL['SampleDate'] = SL[['Sample#Date','Sample#Time']].apply(lambda x: datetimefix(x,"%m/%d/%y %H:%M"),1)
SLHead = {'Sample#Number':'SampleId', 'Param#Description':'Param', 'Result#Value':'ResultValue','Units':'Unit',
'Lower#Report#Limit':'MDL','Method#ID':'SampMeth','Analysis#Date':'AnalysisDate'}
SL.rename(columns=SLHead,inplace=True)
SL['Sample#Description'].unique()
SL.drop([u'Lab#Code', u'Station#ID', u'Source#Code', u'Sample#Date',
u'Sample#Time', u'Sample#Type', u'Cost#Code', u'Billing#Code',
u'Agency#Bill#Code', u'Trip#ID', u'Sample#Description', u'Collector',
u'Sample#Recieved#Date', u'Chain#of#Custody#Ind.', u'Replicate#Number',
u'Sample#Comment', u'Method#Number', u'Method#Agency',
u'Method#Description', u'Param#Number', u'CAS#Number',
u'Matrix#Number', u'Matrix#Description', u'Preparation#Date',
u'Problem#Identifier', u'Result#Code',
u'Upper#Quant#Limit', u'Method#Detect#Limit',
u'Confidence#Limit', u'%#Confidence#Limit',u'Dilution#Factor',
u'Batch#Number',u'Comment#Number', u'Comment#Text'], inplace=True, axis=1)
SL.columns
SLRes = pd.merge(SL, SLSampMatch, on='SampleId', how='left')
SLStat.drop_duplicates(inplace=True)
def SLparnorm(x):
p = str(x[0]).rstrip().lstrip().lower()
u = str(x[2]).rstrip().lstrip().lower()
if p == 'nitrate nitrogen, total (mg/l as n)':
return 'Nitrate', x[1]*4.427, 'mg/l'
elif p == 'nitrite nitrogen, total (mg/l as n)':
return 'Nitrite', x[1]*3.285, 'mg/l'
elif p == 'ammonia as n':
return 'Ammonium', x[1]*1.288, 'mg/l'
    elif p == 'sulfate (as s) whole water, mg/l':
        return 'Sulfate', x[1]*2.996, 'mg/l'
elif p in ('phosphate, tot. dig. (as p)', 'phosphate-phosphorus as p','orthophosphate as p'):
return 'Phosphate', x[1]*3.066, 'mg/l'
elif u == 'ug/l':
return x[0], x[1]/1000, 'mg/l'
else:
return x[0], x[1], str(x[2]).rstrip()
def MDLfix(x):
u = str(x[1]).rstrip().lstrip().lower()
if np.isfinite(x[2]):
return x[0]
elif u=='ug/l':
return x[0]/1000
else:
return x[0]
SLRes['MDL'] = SLRes[['MDL','Unit','ResultValue']].apply(lambda x: MDLfix(x),1)
SLRes['Param'], SLRes['ResultValue'], SLRes['Unit'] = zip(*SLRes[['Param','ResultValue','Unit']].apply(lambda x: SLparnorm(x),1))
SLRes.StationId.unique()
Explanation: State Lab
These are raw data results sent to the UGS as tab-delimited tables from the Utah State Health Laboratory. They make up the bulk of the results collected for this study and are supplemented with field data transcribed into spreadsheets.
End of explanation
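The ug/l-to-mg/l harmonization applied to the State Lab results can be sketched as a stand-alone helper (the name `to_mg_per_l` is illustrative, not from the source):

```python
# Sketch of the unit harmonization above: anything reported in ug/l is
# divided by 1000 so every concentration ends up in mg/l.
def to_mg_per_l(value, unit):
    unit = str(unit).strip().lower()
    if unit == 'ug/l':
        return value / 1000.0, 'mg/l'
    return value, unit
```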
Res = pd.concat([STORLegRes,AGRes,SDWISRes,WQP,UGSStack,SLRes,UGSfield,UGSNO3])
Res = Res[~Res["Unit"].isin(['ueq/L','Ueq/L','ueq/l','tons/ac ft','tons/day','meq/L'])]
Res = Res[~Res["Param"].isin(["Heptachlorobiphenyl", "Hydrocarbons", "Hydroxide", "Ionic strength",
"Floating debris, severity", "Carbon Tetrachloride", "Trichlorobiphenyl",
"Vinyl Chloride", "True color", "Color", "Trash, Debris, Floatables",
"Total volatile solids", "Temperature, air", "Residue, Total Filtrable",
"Pentachlorobiphenyl", "Odor threshold number", "Odor, atmospheric",
"Instream features, est. stream width", "Hydroxide",
"Light, transmissivity","Algae, floating mats (severity)"])]
len(Res)
Res[["Param","Unit","USGSPCode"]].drop_duplicates(subset=["Param","Unit"]).sort_values(by=["Param"]).to_clipboard()
Stat = pd.concat([STORLegSta, AGStat, SDWISSta, WQPStat, SLStat, UGSSta])
parmatch = pd.read_excel(rootname + "Aquachem.xlsx")
parmatchdict = dict(zip(parmatch.Param.values, parmatch.ParrAbb.values))
Res["ParAbb"] = Res[["ParAbb","Param"]].apply(lambda x: parmatchdict.get(x[1],x[0]),1)
results = Res.dropna(subset=["StationId","Param","SampleId"], how="any")
Stat.loc[:,"StationName"] = Stat["StationName"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)
Stat.loc[:,"StationId"] = Stat["StationId"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)
Res.loc[:,"StationId"] = Res["StationId"].apply(lambda x: str(x).strip().lstrip().rstrip(),1)
results.loc[:,"Unit"] = results[["ParAbb","Unit"]].apply(lambda x: "C" if x[0]=="Temp" else x[1],1)
results.loc[:,"Unit"] = results[["ParAbb","Unit"]].apply(lambda x: "umhos/cm" if x[0]=="Cond" else x[1],1)
results.loc[:,"Unit"] = results[["ParAbb","Unit"]].apply(lambda x: "" if x[0]=="pH" else x[1],1)
results.drop(["AnalysisDate","AnalytMeth","SampType","AnalytMethId", "BelowLim", "StationName",
              "MethodDescript", "LabComments", "LabName", "LimitType", "ProjectId", "QualCode",
              "OrgName","R", "ResultComment","ResultStatus","SampComment", "SampEquip",
              "SampDepthRef", "SampDepthU","SampDepth", "USGSPCode",
              "SampMeth", "SampMethName","SampYear","TestNo"],inplace=True,axis=1)
Explanation: Combine Data
End of explanation
NDs = {'Not Detected':'<', 'Present Above Quantification Limit':'>', 'ND ':'<', '*Present >QL ':'>',
       'Present Below Quantification Limit':'<', '*Non-detect ':'<', 'Detected Not Quantified':'<',
       'Systematic Contamination':'<'}
results.DetectCond = results.DetectCond.apply(lambda x: NDs.get(x,np.nan),1)
def is_nan(x):
'''
this function identifies nan values
Source: http://stackoverflow.com/questions/944700/how-to-check-for-nan-in-python
'''
try:
return math.isnan(x)
except:
return False
def detected(x):
'''
Finds nondetects and fixes units and values
'''
if x[1]=='<' and np.isfinite(x[0]):
return x[1]+str(x[0])
elif x[1]=='<' and np.isfinite(x[2]):
if str(x[3]).rstrip().lower() == 'ug/l':
return x[1]+str(x[2]/1000)
else:
return x[1]+str(x[2])
else:
return x[0]
results.ResultValue = results[['ResultValue','DetectCond','MDL','MDLUnit']].apply(lambda x: detected(x),1)
def MDLfill(x):
if x[0] <= 0 and x[1]>0:
return 0
elif x[2] == '<':
return 0
elif x[0] < x[1]:
return 0
else:
return 1
results.loc[:,'ResValue'] = pd.to_numeric(results['ResultValue'], errors='coerce')
results.loc[:,'Censored'] = results[['ResValue','MDL','DetectCond']].apply(lambda x: MDLfill(x),1)
matchDict = {'414143111495501':'USGS-414143111495501','414115111490301':'USGS-414115111490301',
'SDWIS3117.0WS004':'USGS-414115111490301',
'EPALeg-0301203':'414029111483501','SDWIS3116.0WS003':'414029111483501',
'EPALeg-0301201':'USGS-414024111481101','SDWIS5435.0WS001':'USGS-414024111481101',
'414024111481101':'USGS-414024111481101','EPALeg-0300101':'SDWIS5411.0WS001',
'EPALeg-0300102':'SDWIS5412.0WS002', 'EPALeg-0300103':'SDWIS5413.0WS003',
'UGS-107.5':'SDWIS3143.0WS001','UDAF-01492':'UGS-0412', 'UDAF-03165':'UGS-106.5',
'SDWIS3126.0WS002':'USGS-414216111485201', 'EPALeg-0301702':'USGS-414216111485201',
'EPALeg-0301901':'USGS-414328111493001', 'SDWIS3131.0WS001':'USGS-414328111493001',
'EPALeg-0301005':'SDWIS3112.0WS005', 'EPALeg-0301002':'USGS-414417111484301',
'SDWIS3109.0WS002':'USGS-414417111484301', 'SDWIS3113.0WS006':'USGS-414459111493601',
'SDWIS3127.0WS003':'414213111493101', 'SDWIS3159.0WS003':'SDWIS3157.0WS001','UDAF-01500':'UGS-63.5',
'SDWIS3111.0WS004':'USGS-414441111490701', 'EPALeg-0301904':'SDWIS3133.0WS004',
'EPALeg-0301004':'USGS-414441111490701','EPALeg-0301502':'SDWIS3118.0WS002',
'UDAF-01589':'UDAF-01568','UDAF-01586':'UDAF-01566','UGS-0050':'UDAF-01566',
'EPALeg-0300104':'SDWIS3088.0WS004', 'UDAF-01585':'UGS-0032', 'UDAF-01565':'UGS-0032',
'EPALeg-0300201':'SDWIS3091.0WS001', 'EPALeg-0300204':'SDWIS3094.0WS004',
'EPALeg-0301803':'SDWIS3129.0WS003','EPALeg-0300405':'SDWIS5418.0WS005',
'EPALeg-0300404':'SDWIS5417.0WS004', 'EPALeg-0300403':'SDWIS5416.0WS003',
'SDWIS5439.0WS003':'SDWIS5416.0WS003', 'SDWIS5460.0WS003':'SDWIS5416.0WS003',
'SDWIS5414.0WS001':'SDWIS5458.0WS001', 'SDWIS5437.0WS001':'SDWIS5458.0WS001',
'EPALeg-0308601':'USGS-415828111460001', 'SDWIS5487.0WS001':'USGS-415828111460001',
'SDWIS5430.0WS002':'USGS-415828111460001', 'SDWIS5423.0WS003':'USGS-415828111460001',
'SDWIS5421.0WS001':'USGS-415836111464701', 'EPALeg-0304901':'SDWIS5479.0WS001',
'EPALeg-0303201':'SDWIS5470.0WS001', 'SDWIS5432.0WS001':'USGS-414535111423001',
'EPALeg-0303001':'SDWIS5469.0WS001','EPALeg-0307701':'SDWIS5485.0WS001',
'EPALeg-0308301':'SDWIS5486.0WS001', 'EPALeg-0301501':'SDWIS5445.0WS001',
'EPALeg-0300701':'USGS-415120111440001', 'SDWIS5424.0WS001':'USGS-415120111440001',
'EPALeg-0302001':'SDWIS5455.0WS001', 'EPALeg-0301101':'SDWIS5433.0WS001',
'EPALeg-0301102':'SDWIS5434.0WS002'}
Stat.loc[:,'StationId'] = Stat['StationId'].apply(lambda x: matchDict.get(x,x),1)
results.loc[:,'StationId'] = results['StationId'].apply(lambda x: matchDict.get(x,x),1)
results.loc[:,'SampleDate'] = pd.to_datetime(results.SampleDate)
def depthFill(x):
if x > 0:
return x
def depthUnitFill(x):
if x > 0:
return 'ft'
Stat.Depth = Stat['Depth_ft'].apply(lambda x: depthFill(x),1)
Stat.DepthUnit = Stat['Depth_ft'].apply(lambda x: depthUnitFill(x),1)
WINdict = {'SDWIS3180.0WS001':435116, 'UGS-47.5':32700, 'UDAF-01566':30211, 'UGS-46.5':12420,
'USGS-414525111503705':427268, 'UDAF-01569':28327, 'SDWIS3112.0WS005':2694,
'UDAF-03162':434818, 'USGS-414328111493001':2823, 'USGS-414332111491001':2836,
'SDWIS3133.0WS004':2848, 'UGS-91.5':28647, 'UGS-95.5':35814, 'SDWIS3128.0WS004':18590,
'UGS-0102':426853, 'USGS-414115111490301':2722, 'UT4140521114843201':32975, '414029111483501':2721,
'SDWIS3088.0WS004':2741,'UGS-63.5':7126, 'UGS-0084':9639, 'USGS-414134111544701':434098, 'UGS-0070':35061,
'UGS-0029':32851,'UGS-0030':26663,'UGS-0034':29110, 'UGS-0055':3728, 'UGS-61.5':9280,
'SDWIS3129.0WS003':2816, 'UGS-0043':29329, 'UGS-0889':24493, 'UGS-44.5':28333}
Stat['WIN'] = Stat['StationId'].apply(lambda x: WINdict.get(x,x),1)
results.SampleId = results.SampleId.apply(lambda x: str(x).replace(' ',''),1)
results.StationId = results.StationId.apply(lambda x: str(x).replace(' ',''),1)
Stat.StationId = Stat.StationId.apply(lambda x: str(x).replace(' ',''),1)
results.drop_duplicates(subset = ['SampleId','ParAbb'],inplace=True)
Stat.drop_duplicates(subset = ['StationId'],inplace=True)
resultsNoND = results[(~results['DetectCond'].isin(['<','>']))]
Explanation: Clean Up Non Detects
End of explanation
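The non-detect convention above stores censored results as a `'<'` string carrying the detection limit; a later numeric parse of that string yields NaN, which the `Censored` flag (0 = censored) then records. A minimal sketch of the encoding (helper names are illustrative):

```python
# Sketch of the censoring convention: '<' plus the detection limit marks a
# non-detect, and a simple string test recovers the censoring status later.
def encode_nondetect(detect_cond, mdl):
    return '<{}'.format(mdl) if detect_cond == '<' else None

def is_censored(value_str):
    return str(value_str).startswith('<')
```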
datap = resultsNoND.pivot(index='SampleId', columns='ParAbb', values='ResValue')
datap.dropna(subset=['SO4','Cond','Temp','TDS','pH_field'],how='all',inplace=True)
datap.drop(datap.columns[[0]], axis=1, inplace=True)
results.columns
resdrop = ['DetectCond', u'Comment#Number.1', u'Comment#Text.1', 'ResultValue', 'ResValue',
           'MDL', 'MDLUnit', 'OrgId', 'Param', 'SampFrac',
           'SampMedia', 'Unit', 'ParAbb']
resPivot = results.drop(resdrop, axis=1)
datapiv = pd.merge(datap, resPivot, left_index=True, right_on='SampleId',how='left')
datapiv.drop_duplicates(subset=['SampleId'],inplace=True)
Explanation: Pivot Data
End of explanation
def projy(x):
inProj = Proj(init='epsg:4326') #WGS84
outProj = Proj(init='epsg:2152') #NAD83(CSRS98) / UTM zone 12N
x2,y2 = transform(inProj,outProj,x[0],x[1])
return y2
def projx(x):
inProj = Proj(init='epsg:4326') #WGS84
outProj = Proj(init='epsg:2152') #NAD83(CSRS98) / UTM zone 12N
x2,y2 = transform(inProj,outProj,x[0],x[1])
return x2
def getelev(x):
elev = "http://ned.usgs.gov/epqs/pqs.php?x="+str(x[0])+"&y="+str(x[1])+"&units=Meters&output=xml"
response = urllib2.urlopen(elev)
html = response.read()
d = xmltodict.parse(html)
return float(d['USGS_Elevation_Point_Query_Service']['Elevation_Query']['Elevation'])
Stat.loc[:,'UTM_X'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: projx(x),1)
Stat.loc[:,'UTM_Y'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: projy(x),1)
Stat.loc[:,'Elev'] = Stat[['Lon_X','Lat_Y']].apply(lambda x: getelev(x),1)
pivStats = Stat.drop(['Aquifer', 'ConstDate', 'Depth', 'DepthUnit','AquiferType', 'HorCollMeth', 'Geology',
'HoleDUnit', 'HoleDepth', 'HUC8', 'HorAccUnit', 'HoleDUnit', 'SCREENDEPT',
'ElevUnit', 'ElevRef', 'ElevAcc', 'ElevMeth','CountyCode', 'ElevAccUnit',
'HorAcc', 'StateCode', 'HorRef',
'OrgId', 'StationComment'], axis=1)
pivStats.reset_index(inplace=True)
pivStats.set_index("StationId",inplace=True)
pivdata = pd.merge(datapiv, pivStats, left_on="StationId", right_index=True, how='left')
pivdata.drop_duplicates(subset=['SampleId'],inplace=True)
Explanation: Add GIS Information
End of explanation
alkmatch = pivdata[(pivdata['Meas_Alk']>0)&(pivdata['HCO3']>0)]
x = [np.float64(i) for i in alkmatch['Meas_Alk'].values]
y = [np.float64(i) for i in alkmatch['HCO3'].values]
X = sm.add_constant(x)
res = sm.RLM(y,X).fit()
b = res.params[0]
m = res.params[1]
print m
print b
plt.figure()
plt.scatter(x,y)
plt.plot(x, res.fittedvalues, color='red')
def HCO3fix(x):
if x[0]>0:
return x[0]
elif x[1]>0:
return x[1]*m+b
else:
pass
pivdata['HCO3'] = pivdata[['HCO3','Meas_Alk']].apply(lambda x: HCO3fix(x),1)
parlist = ['Ca','Mg','Na','K','Cl','HCO3','CO3','SO4','NO3','NO2','CO2','TDS','Si','Zn_tot','As_tot']
def removeInf(x):
if x <= 0:
return np.nan
else:
return np.log(x)
for i in parlist:
if i in pivdata.columns:
pivdata[i+'Ln'] = pivdata[i].apply(lambda x: removeInf(x),1)
d = {'Ca':0.04990269, 'Mg':0.082287595, 'Na':0.043497608, 'K':0.02557656, 'Cl':0.028206596, 'HCO3':0.016388838, 'CO3':0.033328223, 'SO4':0.020833333, 'NO2':0.021736513, 'NO3':0.016129032}
chemlist = ['Ca','Mg','Na','K','Cl','HCO3','CO3','SO4','NO3','NO2']
for i in chemlist:
if i in pivdata.columns:
pivdata[i+'Meq'] = pivdata.loc[:,i] * d[i]
pivdata.drop_duplicates(subset = ['StationId','SampleDate'], inplace=True)
def sumIons(x):
b = 0
for i in x:
if i>0:
b = i + b
else:
b = b
return b
pivdata['Anions'] = pivdata[['ClMeq','HCO3Meq','SO4Meq','CO3Meq']].apply(lambda x: sumIons(x),1)
pivdata['Cations'] = pivdata[['KMeq','MgMeq','NaMeq','CaMeq']].apply(lambda x: sumIons(x),1)
pivdata['EC'] = pivdata['Anions'] - pivdata['Cations']
pivdata['CBE'] = ((pivdata['Cations']-np.abs(pivdata['Anions']))/(pivdata['Cations']+np.abs(pivdata['Anions'])))*100
Explanation: Convert and Balance Samples
End of explanation
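The charge-balance check above can be sketched in isolation: concentrations in mg/l are converted to meq/l with the factors in `d` (ion charge divided by molar mass), and the charge-balance error (CBE) compares total cations against total anions. A reduced version using a subset of the same factors:

```python
# Sketch of the charge-balance error (CBE) computed above, using a subset of
# the meq-per-mg conversion factors from the notebook's dictionary `d`.
MEQ = {'Ca': 0.04990269, 'Mg': 0.082287595, 'Na': 0.043497608,
       'Cl': 0.028206596, 'HCO3': 0.016388838, 'SO4': 0.020833333}

def cbe(sample_mg_l):
    cations = sum(sample_mg_l.get(i, 0.0) * MEQ[i] for i in ('Ca', 'Mg', 'Na'))
    anions = sum(sample_mg_l.get(i, 0.0) * MEQ[i] for i in ('Cl', 'HCO3', 'SO4'))
    # percent imbalance between cation and anion charge
    return (cations - abs(anions)) / (cations + abs(anions)) * 100.0
```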
#piperdata = pivdata.dropna(subset = ['Ca','Na','Cl','Mg','SO4','HCO3'], how='any')
#piperdata.drop_duplicates(subset=['SampleId'], inplace=True)
print(len(pivdata))
pivgrps = pivdata.groupby(['StationId']).median()
pivGoodData = pivdata[abs(pivdata.CBE)<=5]
pipergrps = pivGoodData.groupby(['StationId']).median()
pipergrps['sampCount'] = pivGoodData.groupby(['StationId'])['CBE'].agg(lambda x: np.count_nonzero(~np.isnan(x)))
pivgrp = pd.merge(pivgrps, pivStats, left_index=True, right_index=True, how='left')
pipergrp = pd.merge(pipergrps, pivStats, left_index=True, right_index=True, how='left')
pipergrp.drop_duplicates(inplace=True)
pivgrp = pivgrp.reset_index().drop_duplicates(subset=['StationId']).set_index('StationId')
princpiv = pivGoodData[(pivGoodData.SampleDate < datetime.datetime(2014,3,10))&(pivGoodData.UTM_X < 435000) & (pivGoodData.UTM_X > 422000) \
& (pivGoodData.UTM_Y > 4608000) & (pivGoodData.UTM_Y < 4634000) & (pivGoodData.StationType=='Well')]
princpiv.drop_duplicates(subset = ['SampleId'],inplace=True)
ResOldPrinc = resultsNoND[(resultsNoND.SampleId.isin(princpiv.SampleId))]
GWStat = Stat[Stat.StationType.isin(['Well','Spring'])]
GWRes = results[results.StationId.isin(list(GWStat.StationId))]
Nitrate = GWRes[GWRes['ParAbb'].isin(['N','NO2','NO3','NH4'])]
NitrateStat = GWStat[GWStat.StationId.isin(list(Nitrate.StationId))]
Explanation: Subset Data
End of explanation
ParrAbbSummary = ResOldPrinc.groupby('ParAbb')['ResValue'].agg({'min':np.min, 'mean':np.mean,
'qrt5':(lambda x: np.percentile(x,q=5)),
'qrt95':(lambda x: np.percentile(x,q=95)),
'range':(lambda x: np.max(x)-np.min(x)),
'lqrt':(lambda x: np.percentile(x,q=25)),
'median':np.median,
'uqrt':(lambda x: np.percentile(x,q=75)),
'max':np.max, 'std':np.std,
'cnt':(lambda x: np.count_nonzero(~np.isnan(x)))}).reset_index()
ParrAbbSummary
manyPars = list(ParrAbbSummary[ParrAbbSummary['cnt'] >= 30]['ParAbb'])
ResOldPrinc = ResOldPrinc[ResOldPrinc['ParAbb'].isin(manyPars)]
summaryStats = ParrAbbSummary[ParrAbbSummary['ParAbb'].isin(manyPars)]
summaryStats
from pylab import rcParams
rcParams['figure.figsize'] = 15, 10
parLabCounts = ParrAbbSummary.reset_index()
parLabCounts = parLabCounts.set_index(['ParAbb'])
plt.figure()
boxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(['pH_lab','pH_field'])]
boxres.boxplot(column='ResValue', by='ParAbb',vert=False)
plt.title('Boxplot of Principal Aquifer pH')
plt.yticks([1,2],['Field pH (n = %s)'%(parLabCounts.loc['pH_field','cnt']),'Lab pH (n = %s)'%(parLabCounts.loc['pH_lab','cnt'])])
plt.xlim(6,9)
plt.xticks(np.arange(6,9.25,0.25))
plt.xlabel('pH')
plt.savefig(rootname+"pHBoxplot.svg")
plt.savefig(rootname+"pHBoxplot.pdf")
plt.figure()
boxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(['Temp'])]
boxres.boxplot(column='ResValue', by='ParAbb',vert=False)
plt.title('Boxplot of Principal Aquifer Temperature')
plt.yticks([1],['Temperature (deg. C) (n = %s)'%(parLabCounts.loc['Temp','cnt'])])
plt.xticks(np.arange(5,30,1))
plt.xlabel('Temp. (deg. C)')
plt.savefig(rootname+"TempBoxplot.pdf")
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid")
rcParams['figure.figsize'] = 15, 20
parLabCounts = ParrAbbSummary.reset_index()
parLabCounts = parLabCounts.set_index(['ParAbb'])
parlist = ['Mg','Ca','Na','Cl','SO4','HCO3','Si','K','NO3','TDS','N']
boxres = ResOldPrinc[ResOldPrinc['ParAbb'].isin(parlist)]
plt.figure()
sns.violinplot(x="ResValue", y='ParAbb', data=boxres, palette="Set3", scale='width', cut=0)
plt.xlabel('mg/L')
plt.xlim(0,1200)
plt.ylabel('Chemical Constituent')
plt.savefig(rootname+'violinMajor.pdf')
parLabCounts = ParrAbbSummary.reset_index()
parLabCounts = parLabCounts.set_index(['ParAbb'])
def parboxplot(parlist):
plt.figure()
boxres= ResOldPrinc[ResOldPrinc['ParAbb'].isin(parlist)]
boxres.boxplot(column='ResValue', by='ParAbb',vert=False)
#labs = [str(parlist[i]) + " (n= %s)"%(parLabCounts.loc[parlist[i],'cnt']) for i in range(len(parlist))]
#tickloc = [b+1 for b in range(len(parlist))]
#plt.yticks(tickloc,labs)
parlist = ['pH_lab','pH_field']
parboxplot(parlist)
plt.xlabel('pH')
plt.savefig(rootname+'pHBoxplot')
parlist = ['Mg','Ca','Na','Cl','SO4','HCO3','Si','K','NO3','TDS','N']
parboxplot(parlist)
plt.title('Major Ions')
plt.xlabel('mg/L')
plt.grid(which='both',axis='both')
plt.xscale('log')
plt.xlim(0.1,1000)
plt.savefig(rootname+'MajorIonsBoxplot.pdf')
#plt.xlim(0.00001,1000)
#plt.xscale('log')
Explanation: Summarize & Plot Data
End of explanation
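The dict-of-functions Series aggregation used for the summary table above is rejected (`SpecificationError`) on newer pandas releases; a minimal sketch of the same kind of summary with named aggregation instead — the mini-frame is a hypothetical stand-in for `ResOldPrinc`, and pandas >= 1.0 is assumed:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for ResOldPrinc, with just the two columns used above
res = pd.DataFrame({'ParAbb': ['Ca', 'Ca', 'Ca', 'Mg'],
                    'ResValue': [1.0, 3.0, 5.0, 2.0]})

# Keyword-style named aggregation replaces the deprecated dict-of-functions form
summary = res.groupby('ParAbb')['ResValue'].agg(
    min='min',
    median='median',
    max='max',
    qrt5=lambda x: np.percentile(x, q=5),
    qrt95=lambda x: np.percentile(x, q=95),
    cnt=lambda x: np.count_nonzero(~np.isnan(x)),
).reset_index()
print(summary)
```

The rest of the pipeline (filtering on `cnt`, exporting to CSV/Excel) works unchanged on the resulting frame.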
pipergrps.to_csv(rootname+'avgpiper.csv',index_label='StationId')
pivdata.to_csv(rootname+'pivotdata.csv',index_label='OBJECTID')
princpiv.to_csv(rootname+'PrincAquiferData.csv',index_label='OBJECTID')
pivgrp.to_csv(rootname+'pivgrps.csv',index_label='StationId')
Nitrate.to_csv(rootname+'NitrateResults.csv')
NitrateStat.to_csv(rootname+'NitrateStations.csv')
summaryStats.to_csv(rootname+'PrincAquifStats.csv')
summaryStats.to_clipboard()
GWStat.to_csv(rootname+'GWStations.csv',index_label='ObjectID')
GWRes.to_csv(rootname+'GWResults.csv',index_label='ObjectID')
writer = pd.ExcelWriter(rootname + "combined_out.xlsx", engine="xlsxwriter")
Stat.to_excel(writer, "stations", index=False)
results.to_excel(writer, "results", index=False)
GWStat.to_excel(writer, 'GWStations',index=False)
GWRes.to_excel(writer, 'GWResults',index=False)
pipergrps.to_excel(writer,'avgpiper')
pivdata.to_excel(writer,'pivotdata')
summaryStats.to_excel(writer,'princaquifstats')
writer.save()
Explanation: Export data
End of explanation
<END_TASK> |
15,602 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Pandas Exercises
Time to test your new pandas skills! Use the two csv files in this folder to complete the tasks in bold below!
NOTE
Step1: Show the head of the dataframe
Step2: What are the column names?
Step3: How many States (ST) are represented in this data set?
Step4: Get a list or array of all the states in the data set.
Step5: What are the top 5 states with the most failed banks?
Step6: What are the top 5 acquiring institutions?
Step7: How many banks has the State Bank of Texas acquired? How many of them were actually in Texas?
Step8: What is the most common city in California for a bank to fail in?
Step9: How many failed banks don't have the word "Bank" in their name?
Step10: How many bank names start with the letter 's' ?
Step11: How many CERT values are above 20000 ?
Step12: How many bank names consist of just two words? (e.g. "First Bank" , "Bank Georgia" )
Step13: Bonus | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('./banklist.csv')
Explanation: Pandas Exercises
Time to test your new pandas skills! Use the two csv files in this folder to complete the tasks in bold below!
NOTE: ALL TASKS MUST BE DONE IN ONE LINE OF PANDAS CODE. GOT STUCK? NO PROBLEM! CHECK OUT THE SOLUTIONS LECTURE!
Import pandas and read in the banklist.csv file into a dataframe called banks.
End of explanation
# CODE HERE
df.head()
Explanation: Show the head of the dataframe
End of explanation
# CODE HERE
df.columns
Explanation: What are the column names?
End of explanation
# CODE HERE
df['ST'].unique().shape[0]
# or
# len(df['ST'].unique())
Explanation: How many States (ST) are represented in this data set?
End of explanation
# CODE HERE
df['ST'].unique()
Explanation: Get a list or array of all the states in the data set.
End of explanation
# CODE HERE
df['ST'].value_counts().head()
Explanation: What are the top 5 states with the most failed banks?
End of explanation
# CODE HERE
df['Acquiring Institution'].value_counts().head()
Explanation: What are the top 5 acquiring institutions?
End of explanation
# CODE HERE
df[df['Acquiring Institution'] == 'State Bank of Texas']
len(df[df['Acquiring Institution'] == 'State Bank of Texas'])
len(df[(df['Acquiring Institution'] == 'State Bank of Texas') & (df['ST'] == 'TX')])
Explanation: How many banks has the State Bank of Texas acquired? How many of them were actually in Texas?
End of explanation
# CODE HERE
df[df['ST'] == 'CA']['City'].value_counts().idxmax()
Explanation: What is the most common city in California for a bank to fail in?
End of explanation
# CODE HERE
len(df[~df['Bank Name'].str.contains('Bank')]['Bank Name'].unique())
Explanation: How many failed banks don't have the word "Bank" in their name?
End of explanation
# CODE HERE
df['Bank Name'].apply(lambda name: name[0].lower() == 's').value_counts()
Explanation: How many bank names start with the letter 's' ?
End of explanation
# CODE HERE
sum(df['CERT'] > 20000)
Explanation: How many CERT values are above 20000 ?
End of explanation
# CODE HERE
sum(df['Bank Name'].apply(lambda name: len(name.split()) == 2))
Explanation: How many bank names consist of just two words? (e.g. "First Bank" , "Bank Georgia" )
End of explanation
# CODE HERE
sum(df['Closing Date'].apply(lambda date: date[-2:] == '08'))
Explanation: Bonus: How many banks closed in the year 2008? (this is hard because we technically haven't learned about time series with pandas yet! Feel free to skip this one!
End of explanation
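For the record, the bonus can also be done the time-series way with `pd.to_datetime`; a sketch on a hypothetical mini-frame (the `DD-Mon-YY` string format is an assumption implied by the `date[-2:]` check above):

```python
import pandas as pd

# Hypothetical stand-in for df['Closing Date']
banks = pd.DataFrame({'Closing Date': ['14-Mar-08', '25-Oct-13', '04-Dec-08']})

closing = pd.to_datetime(banks['Closing Date'], format='%d-%b-%y')
closed_2008 = (closing.dt.year == 2008).sum()
print(closed_2008)
```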
<END_TASK> |
15,603 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A Period - Magnitude Relation in Cepheid Stars
Cepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).
A lot of monitoring data - repeated imaging and subsequent "photometry" of the star - can provide a measurement of the absolute magnitude (if we know the distance to its host galaxy) and the period of the oscillation.
Let's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).
Our goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.
Step1: A Look at Each Host Galaxy's Cepheids
Let's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.
Step2: OK, now we are all set up! Let's plot one of the datasets.
Step3: Q
Step4: Q
Step5: Now, let's set up a suitable parameter grid and compute the posterior PDF!
Step6: Now, plot, with confidence contours
Step7: Are these inferred parameters sensible?
Let's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.
Step8: OK, this looks good! Later in the course we will do some more extensive model checking.
Summarizing our Inferences
Let's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and "68% credible interval" (defined as the region of 1D parameter space enclosing 68% of the posterior probability). | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 8.0)
Explanation: A Period - Magnitude Relation in Cepheid Stars
Cepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).
A lot of monitoring data - repeated imaging and subsequent "photometry" of the star - can provide a measurement of the absolute magnitude (if we know the distance to its host galaxy) and the period of the oscillation.
Let's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).
Our goal is to infer the parameters of a simple relationship between Cepheid period and, in the first instance, apparent magnitude.
End of explanation
# First, we need to know what's in the data file.
!head -15 R11ceph.dat
class Cepheids(object):
def __init__(self,filename):
# Read in the data and store it in this master array:
self.data = np.loadtxt(filename)
self.hosts = self.data[:,1].astype('int').astype('str')
# We'll need the plotting setup to be the same each time we make a plot:
colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']
self.colors = dict(zip(self.list_hosts(), colornames))
self.xlimits = np.array([0.3,2.3])
self.ylimits = np.array([30.0,17.0])
return
def list_hosts(self):
# The list of (9) unique galaxy host names:
return np.unique(self.hosts)
def select(self,ID):
# Pull out one galaxy's data from the master array:
index = (self.hosts == str(ID))
self.mobs = self.data[index,2]
self.merr = self.data[index,3]
self.logP = np.log10(self.data[index,4])
return
def plot(self,X):
# Plot all the points in the dataset for host galaxy X.
ID = str(X)
self.select(ID)
plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
plt.errorbar(self.logP, self.mobs, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)
plt.xlabel('$\\log_{10} P / {\\rm days}$',fontsize=20)
plt.ylabel('${\\rm magnitude (AB)}$',fontsize=20)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)
return
def overlay_straight_line_with(self,a=0.0,b=24.0):
# Overlay a straight line with gradient a and intercept b.
x = self.xlimits
y = a*x + b
plt.plot(x, y, 'k-', alpha=0.5, lw=2)
plt.xlim(self.xlimits)
plt.ylim(self.ylimits)
return
def add_legend(self):
plt.legend(loc='upper left')
return
data = Cepheids('R11ceph.dat')
print(data.colors)
Explanation: A Look at Each Host Galaxy's Cepheids
Let's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.
End of explanation
data.plot(4258)
# for ID in data.list_hosts():
# data.plot(ID)
data.overlay_straight_line_with(a=-2.0,b=24.0)
data.add_legend()
Explanation: OK, now we are all set up! Let's plot one of the datasets.
End of explanation
# import cepheids_pgm
# cepheids_pgm.simple()
from IPython.display import Image
Image(filename="cepheids_pgm.png")
Explanation: Q: Is the Cepheid Period-Luminosity relation likely to be well-modeled by a power law?
Is it easy to find straight lines that "fit" all the data from each host? And do we get the same "fit" for each host?
Inferring the Period-Magnitude Relation
Let's try inferring the parameters $a$ and $b$ of the following linear relation:
$m = a\;\log_{10} P + b$
We have data consisting of observed magnitudes with quoted uncertainties, of the form
$m^{\rm obs} = 24.51 \pm 0.31$ at $\log_{10} P = \log_{10} (13.0/{\rm days})$
Let's draw a PGM for this, imagining our way through what we would do to generate a mock dataset like the one we have.
End of explanation
def log_likelihood(logP,mobs,merr,a,b):
return -0.5*np.sum((mobs - a*logP -b)**2/(merr**2))
def log_prior(a,b):
amin,amax = -10.0,10.0
bmin,bmax = 10.0,30.0
if (a > amin)*(a < amax)*(b > bmin)*(b < bmax):
logp = np.log(1.0/(amax-amin)) + np.log(1.0/(bmax-bmin))
else:
logp = -np.inf
return logp
def log_posterior(logP,mobs,merr,a,b):
return log_likelihood(logP,mobs,merr,a,b) + log_prior(a,b)
Explanation: Q: What are reasonable assumptions about the sampling distribution for the $k^{\rm th}$ datapoint, ${\rm Pr}(m^{\rm obs}_k|m_k,H)$?
We were given points ($m^{\rm obs}_k$) with error bars ($\sigma_k$), which suggests a Gaussian sampling distribution (as was suggested in Session 1):
${\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H) = \frac{1}{Z} \exp{-\frac{(m^{\rm obs}_k - m_k)^2}{2\sigma_k^2}}$
Then, we might suppose that the measurements of each Cepheid star are independent of each other, so that we can define predicted and observed data vectors $m$ and $m^{\rm obs}$ (plus a corresponding observational uncertainty vector $\sigma$) via:
${\rm Pr}(m^{\rm obs}|m,\sigma,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H)$
Q: What is the conditional PDF ${\rm Pr}(m_k|a,b,\log{P_k},H)$?
Our relationship between the intrinsic magnitude and the log period is linear and deterministic, indicating the following delta-function PDF:
${\rm Pr}(m_k|a,b,\log{P_k},H) = \delta(m_k - a\log{P_k} - b)$
Q: What is the resulting joint likelihood, ${\rm Pr}(m^{\rm obs}|a,b,H)$?
The factorisation of the joint PDF for everything inside the plate that is illustrated by the PGM is:
${\rm Pr}(m^{\rm obs}|m,\sigma,H)\;{\rm Pr}(m|a,b,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|m_k,\sigma_k,H)\;\delta(m_k - a\log{P_k} - b)$
The intrinsic magnitudes of each Cepheid ($m$) are not interesting, and so we marginalize them out:
${\rm Pr}(m^{\rm obs}|a,b,H) = \int {\rm Pr}(m^{\rm obs}|m,\sigma,H)\;{\rm Pr}(m|a,b,H)\; dm$
so that ${\rm Pr}(m^{\rm obs}|a,b,H) = \prod_k {\rm Pr}(m^{\rm obs}_k|[a\log{P_k} + b],\sigma,H)$
Q: What is the log likelihood?
$\log {\rm Pr}(m^{\rm obs}|a,b,H) = \sum_k \log {\rm Pr}(m^{\rm obs}_k|[a\log{P_k} + b],\sigma,H)$
which, substituting in our Gaussian form, gives us:
$\log {\rm Pr}(m^{\rm obs}|a,b,H) = {\rm constant} - 0.5 \sum_k \frac{(m^{\rm obs}_k - a\log{P_k} - b)^2}{\sigma_k^2}$
This sum is often called $\chi^2$ ("chi-squared"), and you may have seen it before. It's an effective "misfit" statistic, quantifying the difference between observed and predicted data - and under the assumptions outlined here, it's $-2$ times the log likelihood (up to an additive constant).
Q: What could be reasonable assumptions for the prior ${\rm Pr}(a,b|H)$?
For now, we can (continue to) assume a uniform distribution for each of $a$ and $b$ - in the homework, you can investigate some alternatives.
${\rm Pr}(a|H) = \frac{1.0}{a_{\rm max} - a_{\rm min}}\;\;{\rm for}\;\; a_{\rm min} < a < a_{\rm max}$
${\rm Pr}(b|H) = \frac{1.0}{b_{\rm max} - b_{\rm min}}\;\;{\rm for}\;\; b_{\rm min} < b < b_{\rm max}$
We should now be able to code up functions for the log likelihood, log prior and log posterior, such that we can evaluate them on a 2D parameter grid. Let's fill them in:
End of explanation
# Select a Cepheid dataset:
data.select(4258)
# Set up parameter grids:
npix = 100
amin,amax = -4.0,-2.0
bmin,bmax = 25.0,27.0
agrid = np.linspace(amin,amax,npix)
bgrid = np.linspace(bmin,bmax,npix)
logprob = np.zeros([npix,npix])
# Loop over parameters, computing unnormalized log posterior PDF:
for i,a in enumerate(agrid):
for j,b in enumerate(bgrid):
logprob[j,i] = log_posterior(data.logP,data.mobs,data.merr,a,b)
# Normalize and exponentiate to get posterior density:
Z = np.max(logprob)
prob = np.exp(logprob - Z)
norm = np.sum(prob)
prob /= norm
Explanation: Now, let's set up a suitable parameter grid and compute the posterior PDF!
End of explanation
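A sketch of how the double loop over the grid could be replaced by one broadcasting step — toy arrays stand in for `(logP, mobs, merr)`, and the array layout matches `logprob[j, i]` above (rows follow `bgrid`, columns follow `agrid`):

```python
import numpy as np

# Toy stand-ins for the data vectors
logP = np.array([0.8, 1.2, 1.5])
mobs = np.array([23.9, 22.7, 21.8])
merr = np.array([0.3, 0.3, 0.3])

agrid = np.linspace(-4.0, -2.0, 100)
bgrid = np.linspace(25.0, 27.0, 100)
A, B = np.meshgrid(agrid, bgrid)  # both (100, 100)

# Residuals for every grid point and data point at once: shape (100, 100, ndata)
resid = (mobs - A[..., None] * logP - B[..., None]) / merr
loglike = -0.5 * np.sum(resid ** 2, axis=-1)  # same values the loop produces, in one shot
```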
sorted = np.sort(prob.flatten())
C = sorted.cumsum()
# Find the pixel values that lie at the levels that contain
# 68% and 95% of the probability:
lvl68 = np.min(sorted[C > (1.0 - 0.68)])
lvl95 = np.min(sorted[C > (1.0 - 0.95)])
plt.imshow(prob, origin='lower', cmap='Blues', interpolation='none', extent=[amin,amax,bmin,bmax])
plt.contour(prob,[lvl68,lvl95],colors='black',extent=[amin,amax,bmin,bmax])
plt.grid()
plt.xlabel('slope a')
plt.ylabel('intercept b / AB magnitudes')
Explanation: Now, plot, with confidence contours:
End of explanation
data.plot(4258)
data.overlay_straight_line_with(a=-3.0,b=26.3)
data.add_legend()
Explanation: Are these inferred parameters sensible?
Let's read off a plausible (a,b) pair and overlay the model period-magnitude relation on the data.
End of explanation
prob_a_given_data = np.sum(prob,axis=0) # Approximate the integral as a sum
prob_b_given_data = np.sum(prob,axis=1) # Approximate the integral as a sum
print(prob_a_given_data.shape, np.sum(prob_a_given_data))
# Plot 1D distributions:
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].plot(agrid, prob_a_given_data)
ax[0].set_title('${\\rm Pr}(a|d)$')
ax[0].set_xlabel('slope $a$')
ax[0].set_ylabel('Posterior probability density')
right = ax[1].plot(bgrid, prob_b_given_data)
ax[1].set_title('${\\rm Pr}(b|d)$')
ax[1].set_xlabel('intercept $b$ / AB magnitudes')
ax[1].set_ylabel('Posterior probability density')
# Compress each PDF into a median and 68% credible interval, and report:
def compress_1D_pdf(x,pr,ci=68,dp=1):
# Interpret credible interval request:
low = (1.0 - ci/100.0)/2.0 # 0.16 for ci=68
high = 1.0 - low # 0.84 for ci=68
# Find cumulative distribution and compute percentiles:
cumulant = pr.cumsum()
pctlow = x[cumulant>low].min()
median = x[cumulant>0.50].min()
pcthigh = x[cumulant>high].min()
# Convert to error bars, and format a string:
errplus = np.abs(pcthigh - median)
errminus = np.abs(median - pctlow)
report = "$ "+str(round(median,dp))+"^{+"+str(round(errplus,dp))+"}_{-"+str(round(errminus,dp))+"} $"
return report
print("a = ",compress_1D_pdf(agrid,prob_a_given_data,ci=68,dp=2))
print("b = ",compress_1D_pdf(bgrid,prob_b_given_data,ci=68,dp=2))
Explanation: OK, this looks good! Later in the course we will do some more extensive model checking.
Summarizing our Inferences
Let's compute the 1D marginalized posterior PDFs for $a$ and for $b$, and report the median and "68% credible interval" (defined as the region of 1D parameter space enclosing 68% of the posterior probability).
End of explanation
<END_TASK> |
15,604 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Multi-Input Text Explanation
Step1: Load model and tokenizer
Step2: Run shap values
Step3: Explanation Visualization
Step4: Input Partition Tree - Dendrogram
Step5: Benchmarking | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import shap
from datasets import load_dataset
Explanation: Multi-Input Text Explanation: Textual Entailment with Facebook BART
This notebook demonstrates how to get explanations for the output of the Facebook BART model trained on the mnli dataset and used for textual entailment. We use an example from the snli dataset due to mnli not being supported in the required environment for shap.
BART: https://huggingface.co/facebook/bart-large-mnli
End of explanation
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
# load dataset
dataset = load_dataset("snli")
snli_label_map = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
example_ind = 6
premise, hypothesis, label = ( dataset['train']['premise'][example_ind],
dataset['train']['hypothesis'][example_ind],
dataset['train']['label'][example_ind] )
print('Premise: ' + premise)
print('Hypothesis: ' + hypothesis)
true_label = snli_label_map[label]
print('The true label is: {true_label}'.format(true_label=true_label))
# test model
input_ids = tokenizer.encode(premise, hypothesis, return_tensors='pt')
logits = model(input_ids)[0]
probs = logits.softmax(dim=1)
bart_label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'}
for i, lab in bart_label_map.items():
print('{lab} probability: {prob:0.2f}%'.format(lab=lab, prob=probs[0][i] * 100))
Explanation: Load model and tokenizer
End of explanation
import scipy as sp
import torch
# wrapper function for model
# takes in masked string which is in the form: premise <separator token(s)> hypothesis
def f(x):
outputs = []
for _x in x:
encoding = torch.tensor([tokenizer.encode(_x)])
output = model(encoding)[0].detach().cpu().numpy()
outputs.append(output[0])
outputs = np.array(outputs)
scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T
val = sp.special.logit(scores)
return val
# Construct explainer
bart_labels = ['contradiction', 'neutral', 'entailment']
explainer = shap.Explainer(f, tokenizer, output_names=bart_labels)
# encode then decode premise, hypothesis to get concatenated sentences
encoded = tokenizer(premise, hypothesis)['input_ids'][1:-1] # ignore the start and end tokens, since tokenizer will naturally add them
decoded = tokenizer.decode(encoded)
print(decoded)
shap_values = explainer([decoded]) # wrap input in list
print(shap_values)
Explanation: Run shap values
End of explanation
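The wrapper above converts raw logits to per-class log-odds (softmax, then `scipy.special.logit`); a standalone sketch of just that transform on toy numbers, no model required:

```python
import numpy as np
from scipy.special import logit

logits = np.array([[2.0, 0.5, -1.0]])                   # toy model output for one input
scores = (np.exp(logits).T / np.exp(logits).sum(-1)).T  # softmax, exactly as in f(x)
val = logit(scores)                                     # log(p / (1 - p)) for each class
```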
shap.plots.text(shap_values)
Explanation: Explanation Visualization
End of explanation
from scipy.cluster.hierarchy import dendrogram, linkage
from matplotlib import pyplot as plt
Z = shap_values[0].abs.clustering
Z[-1][2] = Z[-2][2] + 10 # last row's distance is extremely large, so make it a more reasonable value
print(Z)
labels_arr = shap_values[0].data
# # clean labels of unusal characters (only for slow tokenizer, if use_fast=False)
# labels_arr = []
# for token in shap_values[0].data:
# if token[0] == 'Ġ':
# labels_arr.append(token[1:])
# else:
# labels_arr.append(token)
print(labels_arr)
fig = plt.figure(figsize=(len(Z) + 20, 15))
dn = dendrogram(Z, labels=labels_arr)
plt.show()
Explanation: Input Partition Tree - Dendrogram
End of explanation
sort_order = 'positive'
perturbation = 'keep'
from shap import benchmark
sper = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation)
xs, ys, auc = sper.model_score(shap_values, [decoded])
sper.plot(xs, ys, auc)
Explanation: Benchmarking
End of explanation
<END_TASK> |
15,605 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Persistent homology
This demo explains how to use Dionysus for persistent homology computation. First necessary imports.
Step1: We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows
Step2: Now the persistent homology is computed.
Step3: Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf. | <ASSISTANT_TASK:>
Python Code:
from dionysus import Simplex, Filtration, StaticPersistence, \
vertex_cmp, data_cmp, data_dim_cmp, \
DynamicPersistenceChains
from math import sqrt
Explanation: Persistent homology
This demo explains how to use Dionysus for persistent homology computation. First necessary imports.
End of explanation
scx = [Simplex((2,), 0), # C
Simplex((0,), 1), # A
Simplex((1,), 1), # B
Simplex((0,1), 2), # AB
Simplex((1,2), 3), # BC
Simplex((0,2), 3), # AC
Simplex((0,1,2), 4), # ABC
]
Explanation: We will compute persistent homology of a 2-simplex (triangle) ABC. The filtration is as follows: first the top vertex (C) of the triangle is added, then the rest of the vertices (A and B), followed by the bottom edge (AB), then the rest of the edges (AC and BC), and finally the triangle is filled in (ABC).
End of explanation
f = Filtration(scx, data_cmp)
p = DynamicPersistenceChains(f)
p.pair_simplices()
smap = p.make_simplex_map(f)
Explanation: Now the persistent homology is computed.
End of explanation
print "{:>10}{:>10}{:>10}{:>10}".format("First", "Second", "Birth", "Death")
for i in (i for i in p if i.sign()):
b = smap[i]
if i.unpaired():
print "{:>10}{:>10}{:>10}{:>10}".format(b, '', b.data, "inf")
else:
d = smap[i.pair()]
print "{:>10}{:>10}{:>10}{:>10}".format(b, d, b.data, d.data)
Explanation: Now output the computed persistence diagram. For each critical cell that appears in the filtration the time of Birth and Death is given as well as the cell that kills it (its pair). The features that persist forever have Death value set to inf.
End of explanation
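As a hand check on the output above, the elder rule applied to this filtration predicts the diagram (a sketch; which of the two vertices born at time 1 gets paired with AB depends on Dionysus' tie-breaking): C's component is born at 0 and never dies (Birth 0, Death inf); AB merges the two components born at time 1, killing one of them (Birth 1, Death 2); BC joins the survivor to C's component (Birth 1, Death 3); and AC closes the loop A-B-C at time 3, which ABC fills at time 4 (Birth 3, Death 4).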
<END_TASK> |
15,606 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1>Tutorial
Step1: We will try summarizing a small toy example; later we will use a larger piece of text. In reality, the text is too small, but it suffices as an illustrative example.
Step2: To summarize this text, we pass the <b>raw string data</b> as input to the function "summarize", and it will return a summary.
Note
Step3: Use the "split" option if you want a list of strings instead of a single string.
Step4: You can adjust how much text the summarizer outputs via the "ratio" parameter or the "word_count" parameter. Using the "ratio" parameter, you specify what fraction of sentences in the original text should be returned as output. Below we specify that we want 50% of the original text (the default is 20%).
Step5: Using the "word_count" parameter, we specify the maximum amount of words we want in the summary. Below we have specified that we want no more than 50 words.
Step6: As mentioned earlier, this module also supports <b>keyword</b> extraction. Keyword extraction works in the same way as summary generation (i.e. sentence extraction), in that the algorithm tries to find words that are important or seem representative of the entire text. They keywords are not always single words; in the case of multi-word keywords, they are typically all nouns.
Step7: <h2>Larger example</h2>
Let us try an example with a larger piece of text. We will be using a synopsis of the movie "The Matrix", which we have taken from this IMDb page.
In the code below, we read the text file directly from a web-page using "requests". Then we produce a summary and some keywords.
Step8: If you know this movie, you see that this summary is actually quite good. We also see that some of the most important characters (Neo, Morpheus, Trinity) were extracted as keywords.
<h2>Another example</h2>
Let's try an example similar to the one above. This time, we will use the IMDb synopsis of "The Big Lebowski".
Again, we download the text and produce a summary and some keywords.
Step9: This time around, the summary is not of high quality, as it does not tell us much about the movie. In a way, this might not be the algorithms fault, rather this text simply doesn't contain one or two sentences that capture the essence of the text as in "The Matrix" synopsis.
The keywords, however, managed to find some of the main characters.
<h2>Performance</h2>
We will test how the speed of the summarizer scales with the size of the dataset. These tests were run on an Intel Core i5 4210U CPU @ 1.70 GHz x 4 processor. Note that the summarizer does <i>not</i> support multithreading (parallel processing).
The tests were run on the book "Honest Abe" by Alonzo Rothschild. Download the book in plain-text <a href="http
Step10: By default, the algorithm weights the entropy by the overall frequency of the word in the document. We can remove this weighting by setting weighted=False
Step11: When this option is used, it is possible to calculate a threshold automatically from the number of blocks | <ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim.summarization import summarize
Explanation: <h1>Tutorial: automatic summarization using Gensim</h1>
This module automatically summarizes the given text, by extracting one or more important sentences from the text. In a similar way, it can also extract keywords. This tutorial will teach you to use this summarization module via some examples. First, we will try a small example, then we will try two larger ones, and then we will review the performance of the summarizer in terms of speed.
This summarizer is based on the "TextRank" algorithm, from an article by Mihalcea et al. This algorithm was later improved upon by Barrios et al. in another article, by introducing something called a "BM25 ranking function".
This tutorial assumes that you are familiar with Python and have installed Gensim.
<b>Note</b>: Gensim's summarization only works for English for now, because the text is pre-processed so that stopwords are removed and the words are stemmed, and these processes are language-dependent.
<h2>Small example</h2>
First of all, we import the function "summarize".
End of explanation
text = "Thomas A. Anderson is a man living two lives. By day he is an " + \
"average computer programmer and by night a hacker known as " + \
"Neo. Neo has always questioned his reality, but the truth is " + \
"far beyond his imagination. Neo finds himself targeted by the " + \
"police when he is contacted by Morpheus, a legendary computer " + \
"hacker branded a terrorist by the government. Morpheus awakens " + \
"Neo to the real world, a ravaged wasteland where most of " + \
"humanity have been captured by a race of machines that live " + \
"off of the humans' body heat and electrochemical energy and " + \
"who imprison their minds within an artificial reality known as " + \
"the Matrix. As a rebel against the machines, Neo must return to " + \
"the Matrix and confront the agents: super-powerful computer " + \
"programs devoted to snuffing out Neo and the entire human " + \
"rebellion. "
print ('Input text:')
print (text)
Explanation: We will try summarizing a small toy example; later we will use a larger piece of text. In reality, the text is too small, but it suffices as an illustrative example.
End of explanation
print ('Summary:')
print (summarize(text))
Explanation: To summarize this text, we pass the <b>raw string data</b> as input to the function "summarize", and it will return a summary.
Note: make sure that the string does not contain any newlines where the line breaks in a sentence. A sentence with a newline in it (i.e. a carriage return, "\n") will be treated as two sentences.
End of explanation
print (summarize(text, split=True))
Explanation: Use the "split" option if you want a list of strings instead of a single string.
End of explanation
print ('Summary:')
print (summarize(text, ratio=0.5))
Explanation: You can adjust how much text the summarizer outputs via the "ratio" parameter or the "word_count" parameter. Using the "ratio" parameter, you specify what fraction of sentences in the original text should be returned as output. Below we specify that we want 50% of the original text (the default is 20%).
End of explanation
print ('Summary:')
print (summarize(text, word_count=50))
Explanation: Using the "word_count" parameter, we specify the maximum amount of words we want in the summary. Below we have specified that we want no more than 50 words.
End of explanation
from gensim.summarization import keywords
print ('Keywords:')
print (keywords(text))
Explanation: As mentioned earlier, this module also supports <b>keyword</b> extraction. Keyword extraction works in the same way as summary generation (i.e. sentence extraction), in that the algorithm tries to find words that are important or seem representative of the entire text. The keywords are not always single words; in the case of multi-word keywords, they are typically all nouns.
End of explanation
import requests
text = requests.get('http://rare-technologies.com/the_matrix_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
Explanation: <h2>Larger example</h2>
Let us try an example with a larger piece of text. We will be using a synopsis of the movie "The Matrix", which we have taken from this IMDb page.
In the code below, we read the text file directly from a web-page using "requests". Then we produce a summary and some keywords.
End of explanation
import requests
text = requests.get('http://rare-technologies.com/the_big_lebowski_synopsis.txt').text
print ('Summary:')
print (summarize(text, ratio=0.01))
print ('\nKeywords:')
print (keywords(text, ratio=0.01))
Explanation: If you know this movie, you see that this summary is actually quite good. We also see that some of the most important characters (Neo, Morpheus, Trinity) were extracted as keywords.
<h2>Another example</h2>
Let's try an example similar to the one above. This time, we will use the IMDb synopsis of "The Big Lebowski".
Again, we download the text and produce a summary and some keywords.
End of explanation
import requests
from gensim.summarization import mz_keywords
text=requests.get("http://www.gutenberg.org/files/49679/49679-0.txt").text
mz_keywords(text,scores=True,threshold=0.001)
Explanation: This time around, the summary is not of high quality, as it does not tell us much about the movie. In a way, this might not be the algorithm's fault; rather, this text simply doesn't contain one or two sentences that capture the essence of the text the way "The Matrix" synopsis does.
The keywords, however, managed to find some of the main characters.
<h2>Performance</h2>
We will test how the speed of the summarizer scales with the size of the dataset. These tests were run on an Intel Core i5 4210U CPU @ 1.70 GHz x 4 processor. Note that the summarizer does <i>not</i> support multithreading (parallel processing).
The tests were run on the book "Honest Abe" by Alonzo Rothschild. Download the book in plain-text <a href="http://www.gutenberg.org/ebooks/49679">here</a>.
In the <b>plot below</b>, we see the running times together with the sizes of the datasets. To create datasets of different sizes, we have simply taken prefixes of text; in other words we take the first <i>n</i> characters of the book. The algorithm seems to be <b>quadratic in time</b>, so one needs to be careful before plugging a large dataset into the summarizer.
<figure>
<img src="http://rare-technologies.com/summarization_tutorial_plot.png">
<figcaption></figcaption>
</figure>
<h3>Text-content dependent running times</h3>
The running time is not only dependent on the size of the dataset. For example, summarizing "The Matrix" synopsis (about 36,000 characters) takes about 3.1 seconds, while summarizing 35,000 characters of this book takes about 8.5 seconds. So the former is <i>more than twice as fast</i>.
One reason for this difference in running times is the data structure that is used. The algorithm represents the data using a graph, where vertices (nodes) are sentences, and then constructs weighted edges between the vertices that represent how the sentences relate to each other. This means that every piece of text will have a different graph, thus making the running times different. The size of this data structure is <i>quadratic in the worst case</i> (the worst case is when each vertex has an edge to every other vertex).
Another possible reason for the difference in running times is that the problems converge at different rates, meaning that the error drops slower for some datasets than for others.
Montemurro and Zanette's entropy based keyword extraction algorithm
This paper describes a technique to identify words that play a significant role in the large-scale structure of a text. These typically correspond to the major themes of the text. The text is divided into blocks of ~1000 words, and the entropy of each word's distribution amongst the blocks is calculated and compared with the expected entropy if the word were distributed randomly.
End of explanation
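The entropy comparison described above is easy to sketch. The snippet below is a minimal, self-contained illustration (not gensim's actual implementation; block handling and the frequency weighting are simplified away): split the text into fixed-size blocks, then measure how evenly a given word is spread across them.

```python
import math
from collections import Counter

def block_entropy(text, word, block_size=1000):
    # Split the text into consecutive blocks of ~block_size tokens.
    tokens = text.lower().split()
    blocks = [tokens[i:i + block_size] for i in range(0, len(tokens), block_size)]
    # Count how often the word occurs in each block.
    counts = [Counter(block)[word] for block in blocks]
    total = sum(counts)
    if total == 0:
        return 0.0
    # Shannon entropy (in bits) of the word's distribution over the blocks.
    probs = [c / total for c in counts if c > 0]
    return sum(-p * math.log2(p) for p in probs)

# A word confined to one block has entropy 0; a word spread evenly over two
# blocks has entropy 1 bit. Thematically important words tend to be "clumpy",
# i.e. their observed entropy is lower than the random expectation.
print(block_entropy("alpha " * 10 + "x " * 1990, "alpha"))  # 0.0
print(block_entropy(("beta " + "x " * 999) * 2, "beta"))    # 1.0
```

Montemurro and Zanette then compare this observed entropy with the expected entropy under a random shuffle of the word's occurrences, and flag words that fall well below it.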
mz_keywords(text,scores=True,weighted=False,threshold=1.0)
Explanation: By default, the algorithm weights the entropy by the overall frequency of the word in the document. We can remove this weighting by setting weighted=False
End of explanation
mz_keywords(text,scores=True,weighted=False,threshold="auto")
Explanation: When this option is used, it is possible to calculate a threshold automatically from the number of blocks
End of explanation
<END_TASK> |
15,607 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PyLadies and local Python User Groups
Last updated
Step1: Part 1
Step2: The Meetup API limits requests, however their documentation isn't exactly helpful. Using their headers, I saw that I was limited to 30 requests per 10 seconds. Therefore, I'll sleep 1 second in between each request to be safe.
Step5: Part 2
Step6: Part 3
Step7: Sanity check (I have a tree command installed via brew install tree)
Step8: Part 4 | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from collections import defaultdict
import json
import os
import time
import requests
Explanation: PyLadies and local Python User Groups
Last updated: August 4, 2015
I am not a statistician by trade; far from it. I did take a few stats & econometrics courses in college, but I won't even consider myself an armchair statistician here.
I am not making any suggestions about causation, just merely exploring what the Meetup API has to offer.
This also isn't how I code in general; but I love ~~IPython~~ Jupyter Notebooks, and I wanted an excuse to use it with Pandas (first time I'm using Pandas too!).
This data was used in my EuroPython 2015 talk, Diversity: We're not done yet. (Slides, video soon)
End of explanation
def save_output(data, output_file):
with open(output_file, "w") as f:
json.dump(data, f)
# Set some global variables
MEETUP_API_KEY = "yeah right"
MEETUP_GROUPS_URL = "https://api.meetup.com/2/groups"
PARAMS = {
"signed": True,
"key": MEETUP_API_KEY,
"topic": "python",
"category_id": 34, # 34 = Tech, there are only ~35 categories
"order": "members",
"page": 200, # max allowed
"omit": "group_photo" # no need for photos in response
}
TOTAL_PAGES = 6 # looked on the API console, 1117 meetup groups as of 7/17, 200 groups per page = 6 pages
Explanation: Part 1: Grabbing all Python-centric meetup groups
NOTE
This repository includes all the data files that I used (latest update: Aug 4, 2015). You may skip this part if you don't want to call the Meetup API to get new/fresh data.
TIP
Take a look at Meetup's API Console; I used it when forming API requests as well as getting an idea of pagination for some requests.
What we're doing
We'll call a few different endpoints from the Meetup API and save the data locally in a json file for us to use later.
To get your own Meetup API key, you'll need a regular Meetup user account. Once you're logged in, you can navigate to the API Key portion of the API docs to reveal your API key.
API Endpoint docs:
Groups
End of explanation
def get_meetup_groups():
meetup_groups = []
for i in xrange(TOTAL_PAGES):
PARAMS["offset"] = i
print("GROUPS: Getting page {0} of {1}".format(i+1, TOTAL_PAGES+1))
response = requests.get(MEETUP_GROUPS_URL, params=PARAMS)
if response.ok:
meetup_groups.extend(response.json().get("results"))
time.sleep(1) # don't bombard the Meetup API
print("GROUPS: Collected {0} Meetup groups".format(len(meetup_groups)))
return meetup_groups
meetup_groups = get_meetup_groups()
# Create a directory to save everything
data_dir = "meetup_data"
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# Save meetup groups data
output = os.path.join(data_dir, "meetup_groups.json")
save_output(meetup_groups, output)
# inspect one for funsies
meetup_groups[0]
Explanation: The Meetup API limits requests; however, their documentation isn't exactly helpful. Using their headers, I saw that I was limited to 30 requests per 10 seconds. Therefore, I'll sleep 1 second in between each request to be safe.
End of explanation
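A flat 1-second sleep works but is wasteful when budget remains in the current window. A slightly smarter alternative is to throttle based on the rate-limit headers returned with each response. This is only a sketch: the header names in the commented usage (`X-RateLimit-Remaining`, `X-RateLimit-Reset`) are an assumption; check which headers the Meetup API actually sends before relying on them.

```python
import time

def throttle_delay(remaining_requests, seconds_until_reset, safety=1.0):
    # Window exhausted: wait until it resets, plus a small safety margin.
    if remaining_requests <= 0:
        return float(seconds_until_reset) + safety
    # Budget left: no need to sleep at all.
    return 0.0

# Hypothetical usage inside the fetch loop (header names are assumed):
# resp = requests.get(MEETUP_GROUPS_URL, params=PARAMS)
# remaining = int(resp.headers.get("X-RateLimit-Remaining", 0))
# reset = int(resp.headers.get("X-RateLimit-Reset", 10))
# time.sleep(throttle_delay(remaining, reset))
print(throttle_delay(25, 7))  # 0.0 (budget left, no wait)
print(throttle_delay(0, 7))   # 8.0 (window exhausted, wait out the reset)
```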
search = ["python", "pydata", "pyramid", "py", "django", "flask", "plone"]
omit = ["happy"] # I realize that a group could be called "happy python user group" or something...
def is_pug(group):
    """Return `True` if in `search` keywords and not in `omit` keywords."""
group_name = group.get("name").lower()
for o in omit:
if o in group_name:
return False
for s in search:
if s in group_name:
return True
return False
def sort_groups(groups):
    """Sort groups by 'pyladies' and 'python user groups'."""
pyladies = []
user_groups = []
for g in groups:
if "pyladies" in g.get("name").lower():
pyladies.append(g)
else:
if is_pug(g):
user_groups.append(g)
return user_groups, pyladies
user_groups, pyladies = sort_groups(meetup_groups)
# Let's spot check the UGs to see if what we're left with makes sense
# Note: I took a peek at a few (not shown here) and for the most part,
# all seems okay
for g in user_groups:
print(g.get("name"))
Explanation: Part 2: Narrow down & sort the meetup groups
We got a lot returned from searching the /groups endpoint with just the "python" topic. So we should narrow it down a bit, as well as sort out PyLadies groups.
My process is to just narrow down by actual name of the group (e.g. python, py, django, etc).
Spot checking the results will definitely be needed, but will come a bit later.
End of explanation
from math import sin, cos, asin, degrees, radians, atan2, sqrt
RADIUS = 3958.75 # Earth's radius in miles
def is_within_50_miles(pyladies_coords, python_coords):
pyladies_lat, pyladies_lon = pyladies_coords[0], pyladies_coords[1]
python_lat, python_lon = python_coords[0], python_coords[1]
d_lat = radians(pyladies_lat - python_lat)
d_lon = radians(pyladies_lon - python_lon)
sin_d_lat = sin(d_lat / 2)
sin_d_lon = sin(d_lon / 2)
    # haversine: a = sin^2(dlat/2) + cos(lat1) * cos(lat2) * sin^2(dlon/2)
    a = sin_d_lat ** 2 + cos(radians(pyladies_lat)) * cos(radians(python_lat)) * sin_d_lon ** 2
c = 2 * atan2(sqrt(a), sqrt(1-a))
dist = RADIUS * c
return dist <= 50
def get_coords(group):
return group.get("lat"), group.get("lon")
def get_nearby_python_groups(pyl, collect):
pyl_coords = get_coords(pyl)
nearby = []
for group in user_groups:
pyt_coords = get_coords(group)
if is_within_50_miles(pyl_coords, pyt_coords):
nearby.append(group)
collect[pyl.get("name")] = nearby
return collect
collect = {}
for pylady in pyladies:
collect = get_nearby_python_groups(pylady, collect)
for item in collect.items():
print(item[0], len(item[1]))
# Save data into pyladies-specific directories
def pylady_dir(pyl):
_dir = pyl.split()
_dir = "".join(_dir)
outdir = os.path.join(data_dir, _dir)
if not os.path.exists(outdir):
os.makedirs(outdir)
return _dir
def save_pyladies():
for pylady in pyladies:
name = pylady.get("name")
subdir = pylady_dir(name)
outputdir = os.path.join(data_dir, subdir)
output = os.path.join(outputdir, subdir + ".json")
save_output(pylady, output)
groups = collect.get(name)
for g in groups:
group_link = g.get("link")
group_name = group_link.split(".com/")[1][:-1]
group_name = "".join(group_name)
outfile = group_name + ".json"
ug_output = os.path.join(outputdir, outfile)
save_output(g, ug_output)
save_pyladies()
Explanation: Part 3: Find all Python meetup groups with a PyLadies within 50 miles
I've adapted this from a Java implementation to find if a point is within a radius of another point. Geo-math is hard.
End of explanation
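Since geo-math is easy to get subtly wrong, here is a quick self-contained sanity check of the distance computation, using the standard haversine formula on two city pairs with roughly known separations (the coordinates below are approximate):

```python
from math import sin, cos, atan2, sqrt, radians

EARTH_RADIUS_MILES = 3958.75

def haversine_miles(lat1, lon1, lat2, lon2):
    d_lat = radians(lat2 - lat1)
    d_lon = radians(lon2 - lon1)
    # a = sin^2(dlat/2) + cos(lat1) * cos(lat2) * sin^2(dlon/2)
    a = sin(d_lat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(d_lon / 2) ** 2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return EARTH_RADIUS_MILES * c

nyc = (40.7128, -74.0060)
newark = (40.7357, -74.1724)        # ~9 miles from NYC -> within 50
philadelphia = (39.9526, -75.1652)  # ~80 miles from NYC -> not within 50

print(haversine_miles(*nyc, *newark) <= 50)        # True
print(haversine_miles(*nyc, *philadelphia) <= 50)  # False
```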
!tree
Explanation: Sanity check (I have a tree command installed via brew install tree):
End of explanation
MEETUP_MEMBER_URL = "https://api.meetup.com/2/members"
PARAMS = {
"signed": True,
"key": MEETUP_API_KEY,
}
def get_members(group):
PARAMS["group_id"] = group.get("id")
members_count = group.get("members")
print(u"MEMBERS: Getting {0} members for group {1}".format(members_count, group.get("name")))
pages = members_count / 200
remainder = members_count % 200
if remainder > 0:
pages += 1
members = []
for i in xrange(pages):
print("MEMBERS: Iteration {0} out of {1}".format(i+1, pages+1))
PARAMS["offset"] = i
resp = requests.get(MEETUP_MEMBER_URL, PARAMS)
if resp.ok:
results = resp.json().get("results")
members.extend(results)
time.sleep(1)
print("MEMBERS: Got {0} members".format(len(members)))
return members
def get_members_collection(pylady, groups):
pylady_members = get_members(pylady)
pug_members = defaultdict(list)
for g in groups:
pg_mbrs = get_members(g)
pug_members[g.get("name")].append(pg_mbrs)
return pylady_members, pug_members
# NOTE: this takes *FOREVER*.
start = time.time()
for i, item in enumerate(collect.items()):
print("COLLECTING: {0} out of {1}".format(i+1, len(collect)+1))
pylady = [p for p in pyladies if p.get("name") == item[0]][0]
pylady_members, pug_members = get_members_collection(pylady, item[1])
print("COLLECTING: Saving all the data!")
pylady_name = pylady.get("name")
outdir = pylady_dir(pylady_name)
outdir = os.path.join(data_dir, outdir)
outfile = os.path.join(outdir, "pyladies_members.json")
save_output(pylady_members, outfile)
outfile = os.path.join(outdir, "pug_members.json")
save_output(pug_members, outfile)
end = time.time()
delta_s = end - start
delta_m = delta_s / 60
print("**DONE**")
print("Completed in {:.0f} minutes".format(delta_m))
Explanation: Part 4: Membership join history
Note
If getting members from an endpoint returns 0, despite the member count in the group data being a positive number, then the group is set to private & accessible only to members (you can join that group to be able to have access that data, but I did not; I already have too much email).
Note
There's a "pseudo" race condition where the group data member # may be one number, but you actually receive a different number (+/- ~3), it's (probably) due to people leaving or joining the group between the group API call and the members API call.
API endpoint docs:
Members
End of explanation
<END_TASK> |
15,608 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Bert Pipeline
Step1: Enter your gateway and the cookie
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
Step2: Set Log bucket and Tensorboard Image
Step4: Define pipeline
Step5: Wait for inference service below to go to READY True state.
Step6: Get Inferenceservice name
Step7: Prediction Request
Step8: Explanation Request
Step9: Visualization of Predictions
Step10: visualization appears as below
Cleanup Script | <ASSISTANT_TASK:>
Python Code:
! pip uninstall -y kfp
! pip install --no-cache-dir kfp torch captum
import kfp
import json
import os
from kfp.onprem import use_k8s_secret
from kfp import components
from kfp.components import load_component_from_file, load_component_from_url, InputPath
from kfp import dsl
from kfp import compiler
kfp.__version__
Explanation: Bert Pipeline : PyTorch BERT News Classfication
This notebook shows PyTorch BERT end-to-end news classification example using Kubeflow Pipelines.
An example notebook that demonstrates how to:
Get different tasks needed for the pipeline
Create a Kubeflow pipeline
Include Pytorch KFP components to preprocess, train, visualize and deploy the model in the pipeline
Submit a job for execution
Query (predict and explain) the final deployed model
Interpretation of the model using the Captum Insights
End of explanation
INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'
AUTH="<enter your token here>"
NAMESPACE="kubeflow-user-example-com"
COOKIE="authservice_session="+AUTH
EXPERIMENT="Default"
dist_volume = 'dist-vol'
volume_mount_path ="/model"
dataset_path = volume_mount_path+"/dataset"
checkpoint_dir = volume_mount_path+"/checkpoint"
tensorboard_root = volume_mount_path+"/tensorboard"
Explanation: Enter your gateway and the cookie
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
End of explanation
MINIO_ENDPOINT="http://minio-service.kubeflow:9000"
LOG_BUCKET="mlpipeline"
TENSORBOARD_IMAGE="public.ecr.aws/pytorch-samples/tboard:latest"
client = kfp.Client(host=INGRESS_GATEWAY+"/pipeline", cookies=COOKIE)
client.create_experiment(EXPERIMENT)
experiments = client.list_experiments(namespace=NAMESPACE)
my_experiment = experiments.experiments[0]
my_experiment
DEPLOY_NAME="bert-dist"
MODEL_NAME="bert"
! python utils/generate_templates.py bert/template_mapping.json
prepare_tensorboard_op = load_component_from_file(
"yaml/tensorboard_component.yaml"
)
prep_op = components.load_component_from_file(
"yaml/preprocess_component.yaml"
)
# Use GPU image in train component
train_op = components.load_component_from_file(
"yaml/train_component.yaml"
)
deploy_op = load_component_from_file(
"yaml/deploy_component.yaml"
)
minio_op = components.load_component_from_file(
"yaml/minio_component.yaml"
)
pytorch_job_op = load_component_from_file("../../../components/kubeflow/pytorch-launcher/component.yaml")
kubernetes_create_pvc_op = load_component_from_file(
"../../../components/kubernetes/Create_PersistentVolumeClaim/component.yaml"
)
cp_op = load_component_from_file(
"yaml/copy_component.yaml"
)
from kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource
def create_dist_pipeline():
kubernetes_create_pvc_op(name=dist_volume, storage_size= "20Gi")
create_volume_run = client.create_run_from_pipeline_func(create_dist_pipeline, arguments={})
create_volume_run.wait_for_run_completion()
Explanation: Set Log bucket and Tensorboard Image
End of explanation
@dsl.pipeline(name="Training pipeline", description="Sample training job test")
def pytorch_bert(
minio_endpoint=MINIO_ENDPOINT,
log_bucket=LOG_BUCKET,
log_dir=f"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}",
confusion_matrix_log_dir=f"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/",
mar_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store/",
config_prop_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/config/",
model_uri=f"pvc://{dist_volume}/mar/{dsl.RUN_ID_PLACEHOLDER}",
tf_image=TENSORBOARD_IMAGE,
deploy=DEPLOY_NAME,
namespace=NAMESPACE,
num_samples=1000,
max_epochs=1,
gpus=2,
num_nodes=2
):
prepare_tb_task = prepare_tensorboard_op(
log_dir_uri=f"s3://{log_bucket}/{log_dir}",
image=tf_image,
pod_template_spec=json.dumps({
"spec": {
"containers": [{
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "accesskey",
}
},
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "secretkey",
}
},
},
{
"name": "AWS_REGION",
"value": "minio"
},
{
"name": "S3_ENDPOINT",
"value": f"{minio_endpoint}",
},
{
"name": "S3_USE_HTTPS",
"value": "0"
},
{
"name": "S3_VERIFY_SSL",
"value": "0"
},
]
}]
}
}),
).set_display_name("Visualization")
prep_task = prep_op().after(prepare_tb_task).set_display_name("Preprocess & Transform")
copy_task = cp_op("true", prep_task.outputs['output_data'], dataset_path,"").add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(prep_task).set_display_name("Copy Dataset")
confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}"
train_task = pytorch_job_op(
name="pytorch-bert",
namespace=namespace,
master_spec=
{
"replicas": 1,
"imagePullPolicy": "Always",
"restartPolicy": "OnFailure",
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "pytorch",
"image": "public.ecr.aws/pytorch-samples/kfp_samples:latest-gpu",
"command": ["python3", "bert/agnews_classification_pytorch.py"],
"args": [
"--dataset_path", dataset_path,
"--checkpoint_dir", checkpoint_dir,
"--script_args", f"model_name=bert.pth,num_samples={num_samples}",
"--tensorboard_root", tensorboard_root,
"--ptl_args", f"max_epochs={max_epochs},profiler=pytorch,gpus={gpus},accelerator=ddp,num_nodes={num_nodes},confusion_matrix_url={confusion_matrix_url}"
],
"ports": [
{
"containerPort": 24456,
"name": "pytorchjob-port"
}
],
"resources": {
"limits": {
"nvidia.com/gpu": 2
}
},
"volumeMounts": [
{
"mountPath": volume_mount_path,
"name": "model-volume"
}
]
}
],
"volumes": [
{
"name": "model-volume",
"persistentVolumeClaim": {
"claimName": dist_volume
}
}
]
}
}
},
worker_spec=
{
"replicas": 1,
"imagePullPolicy": "Always",
"restartPolicy": "OnFailure",
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "pytorch",
"image": "public.ecr.aws/pytorch-samples/kfp_samples:latest-gpu",
"command": ["python3", "bert/agnews_classification_pytorch.py"],
"args": [
"--dataset_path", dataset_path,
"--checkpoint_dir", checkpoint_dir,
"--script_args", f"model_name=bert.pth,num_samples={num_samples}",
"--tensorboard_root", tensorboard_root,
"--ptl_args", f"max_epochs={max_epochs},profiler=pytorch,gpus={gpus},accelerator=ddp,num_nodes={num_nodes},confusion_matrix_url={confusion_matrix_url}"
],
"ports": [
{
"containerPort": 24456,
"name": "pytorchjob-port"
}
],
"resources": {
"limits": {
"nvidia.com/gpu": 2
}
},
"volumeMounts": [
{
"mountPath": volume_mount_path,
"name": "model-volume"
}
]
}
],
"volumes": [
{
"name": "model-volume",
"persistentVolumeClaim": {
"claimName": dist_volume
}
}
]
}
}
},
delete_after_done=False
).after(copy_task)
mar_folder_restructure_task = dsl.ContainerOp(
name='mar restructure',
image='library/bash:4.4.23',
command=['sh', '-c'],
arguments=[f'mkdir -p {volume_mount_path}/{mar_path}; mkdir -p {volume_mount_path}/{config_prop_path}; cp {checkpoint_dir}/*.mar {volume_mount_path}/{mar_path}; cp {checkpoint_dir}/config.properties {volume_mount_path}/{config_prop_path}']).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(train_task).set_display_name("Restructure MAR and config.properties path")
mar_folder_restructure_task.execution_options.caching_strategy.max_cache_staleness = "P0D"
copy_tensorboard = cp_op("false", "", "", tensorboard_root).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(mar_folder_restructure_task).set_display_name("Copy Tensorboard Logs")
copy_tensorboard.execution_options.caching_strategy.max_cache_staleness = "P0D"
minio_tb_upload = (
minio_op(
bucket_name=log_bucket,
folder_name=log_dir,
input_path=copy_tensorboard.outputs["destination_path"],
filename="",
).after(copy_tensorboard)
.set_display_name("Tensorboard Events Pusher")
)
# Deploy inferenceservice in gpu
gpu_count = "1"
    isvc_gpu_yaml = """
    apiVersion: "serving.kubeflow.org/v1beta1"
    kind: "InferenceService"
    metadata:
      name: {}
      namespace: {}
    spec:
      predictor:
        serviceAccountName: sa
        pytorch:
          storageUri: {}
          resources:
            requests:
              cpu: 4
              memory: 8Gi
            limits:
              cpu: 4
              memory: 8Gi
              nvidia.com/gpu: {}
    """.format(
        deploy, namespace, model_uri, gpu_count
    )
deploy_task = (
deploy_op(action="apply", inferenceservice_yaml=isvc_gpu_yaml)
.after(minio_tb_upload)
.set_display_name("Deployer")
)
deploy_task.execution_options.caching_strategy.max_cache_staleness = "P0D"
dsl.get_pipeline_conf().add_op_transformer(
use_k8s_secret(
secret_name="mlpipeline-minio-artifact",
k8s_secret_key_to_env={
"secretkey": "MINIO_SECRET_KEY",
"accesskey": "MINIO_ACCESS_KEY",
},
)
)
# Compile pipeline
compiler.Compiler().compile(pytorch_bert, 'pytorch.tar.gz', type_check=True)
# Execute pipeline
run = client.run_pipeline(my_experiment.id, 'pytorch-bert', 'pytorch.tar.gz')
Explanation: Define pipeline
End of explanation
!kubectl get isvc $DEPLOY
Explanation: Wait for inference service below to go to READY True state.
End of explanation
INFERENCE_SERVICE_LIST = ! kubectl get isvc {DEPLOY_NAME} -n {NAMESPACE} -o json | python3 -c "import sys, json; print(json.load(sys.stdin)['status']['url'])"| tr -d '"' | cut -d "/" -f 3
INFERENCE_SERVICE_NAME = INFERENCE_SERVICE_LIST[0]
INFERENCE_SERVICE_NAME
Explanation: Get Inferenceservice name
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:predict" -d @./bert/sample.txt > bert_prediction_output.json
! cat bert_prediction_output.json
Explanation: Prediction Request
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:explain" -d @./bert/sample.txt > bert_explaination_output.json
! cat bert_explaination_output.json
explanations_json = json.loads(open("./bert_explaination_output.json", "r").read())
explanations_json
prediction_json = json.loads(open("./bert_prediction_output.json", "r").read())
import torch
attributions = explanations_json["explanations"][0]['importances']
tokens = explanations_json["explanations"][0]['words']
delta = explanations_json["explanations"][0]['delta']
attributions = torch.tensor(attributions)
pred_prob = 0.75
pred_class = prediction_json["predictions"][0]
true_class = "Business"
attr_class ="world"
Explanation: Explanation Request
End of explanation
from captum.attr import visualization
vis_data_records =[]
vis_data_records.append(visualization.VisualizationDataRecord(
attributions,
pred_prob,
pred_class,
true_class,
attr_class,
attributions.sum(),
tokens,
delta))
vis = visualization.visualize_text(vis_data_records)
Explanation: Visualization of Predictions
End of explanation
! kubectl delete --all isvc -n $NAMESPACE
! kubectl delete pod --field-selector=status.phase==Succeeded -n $NAMESPACE
Explanation: visualization appears as below
Cleanup Script
End of explanation
<END_TASK> |
15,609 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Pos-Tagging & Feature Extraction
Following normalisation, we can now proceed to the process of pos-tagging and feature extraction. Let's start with pos-tagging.
POS-tagging
Part-of-speech tagging is one of the most important text analysis tasks: it classifies words into their parts of speech and labels them according to a tagset, a collection of tags used for POS tagging. Parts of speech are also known as word classes or lexical categories.
The nltk library provides its own pre-trained POS-tagger. Let's see how it is used.
Step1: <span style="color
Step2: Thankfully, nltk provides documentation for each tag, which can be queried using the tag, e.g., nltk.help.upenn_tagset(‘RB’), or a regular expression. nltk also provides batch pos-tagging method for document pos-tagging
Step3: The list of all possible tags appears below
Step4: Nouns
Nouns generally refer to people, places, things, or concepts, e.g. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df0 = pd.read_csv("../../data/interim/001_normalised_keyed_reviews.csv", sep="\t", low_memory=False)
df0.head()
# For monitoring duration of pandas processes
from tqdm import tqdm, tqdm_pandas
# To avoid RuntimeError: Set changed size during iteration
tqdm.monitor_interval = 0
# Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
# (can use `tqdm_gui`, `tqdm_notebook`, optional kwargs, etc.)
tqdm.pandas(desc="Progress:")
# Now you can use `progress_apply` instead of `apply`
# and `progress_map` instead of `map`
# can also groupby:
# df.groupby(0).progress_apply(lambda x: x**2)
def convert_text_to_list(review):
return review.replace("[","").replace("]","").replace("'","").split(",")
# Convert "reviewText" field to back to list
df0['reviewText'] = df0['reviewText'].astype(str)
df0['reviewText'] = df0['reviewText'].progress_apply(lambda text: convert_text_to_list(text));
df0['reviewText'].head()
df0['reviewText'][12]
import nltk
nltk.__version__
# Split negs
def split_neg(review):
new_review = []
for token in review:
if '_' in token:
split_words = token.split("_")
new_review.append(split_words[0])
new_review.append(split_words[1])
else:
new_review.append(token)
return new_review
df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: split_neg(review))
df0["reviewText"].head()
### Remove Stop Words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
def remove_stopwords(review):
return [token for token in review if not token in stop_words]
df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: remove_stopwords(review))
df0["reviewText"].head()
Explanation: Pos-Tagging & Feature Extraction
Following normalisation, we can now proceed to the process of pos-tagging and feature extraction. Let's start with pos-tagging.
POS-tagging
Part-of-speech tagging is one of the most important text analysis tasks: it classifies words into their parts of speech and labels them according to a tagset, a collection of tags used for POS tagging. Parts of speech are also known as word classes or lexical categories.
The nltk library provides its own pre-trained POS-tagger. Let's see how it is used.
End of explanation
from nltk.tag import StanfordPOSTagger
from nltk import word_tokenize
# import os
# os.getcwd()
# Add the jar and model via their path (instead of setting environment variables):
jar = '../../models/stanford-postagger-full-2017-06-09/stanford-postagger.jar'
model = '../../models/stanford-postagger-full-2017-06-09/models/english-left3words-distsim.tagger'
pos_tagger = StanfordPOSTagger(model, jar, encoding='utf8')
def pos_tag(review):
if(len(review)>0):
return pos_tagger.tag(review)
# Example
text = pos_tagger.tag(word_tokenize("What's the airspeed of an unladen swallow ?"))
print(text)
tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: pos_tag(review)))
tagged_df.head()
# tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: nltk.pos_tag(review)))
# tagged_df.head()
Explanation: <span style="color:red">Unfortunately, this tagger, though much more accurate, is very slow: processing the above data set would take close to 3 days of running.</span>
Follow this link for more info on the tagger: https://nlp.stanford.edu/software/tagger.shtml#History
End of explanation
tagged_df['reviewText'][8]
Explanation: Thankfully, nltk provides documentation for each tag, which can be queried using the tag, e.g., nltk.help.upenn_tagset(‘RB’), or a regular expression. nltk also provides batch pos-tagging method for document pos-tagging:
End of explanation
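The batch method mentioned above is `nltk.pos_tag_sents`, which tags a list of tokenised sentences in one call (it needs the pre-trained model, fetched via `nltk.download('averaged_perceptron_tagger')`). The same batch interface, `tag_sents`, exists on every nltk tagger; the sketch below demonstrates the call shape with a trivial `DefaultTagger`, so it runs without any model download:

```python
from nltk.tag import DefaultTagger

sents = [["The", "cat", "sat"], ["Dogs", "bark"]]

# With the pre-trained tagger (after the one-time model download):
#   import nltk
#   tagged = nltk.pos_tag_sents(sents)

# Same interface on any tagger; DefaultTagger just labels every token "NN":
tagged = DefaultTagger("NN").tag_sents(sents)
print(tagged)
# [[('The', 'NN'), ('cat', 'NN'), ('sat', 'NN')], [('Dogs', 'NN'), ('bark', 'NN')]]
```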
## Join with Original Key and Persist Locally to avoid RE-processing
uniqueKey_series_df = df0[['uniqueKey']]
uniqueKey_series_df.head()
pos_tagged_keyed_reviews = pd.concat([uniqueKey_series_df, tagged_df], axis=1);
pos_tagged_keyed_reviews.head()
pos_tagged_keyed_reviews.to_csv("../../data/interim/002_pos_tagged_keyed_reviews.csv", sep='\t', header=True, index=False);
Explanation: The list of all possible tags appears below:
| Tag | Description |
|------|------------------------------------------|
| CC | Coordinating conjunction |
| CD | Cardinal number |
| DT | Determiner |
| EX | Existential there |
| FW | Foreign word |
| IN | Preposition or subordinating conjunction |
| JJ | Adjective |
| JJR | Adjective, comparative |
| JJS | Adjective, superlative |
| LS | List item marker |
| MD | Modal |
| NN | Noun, singular or mass |
| NNS | Noun, plural |
| NNP | Proper noun, singular |
| NNPS | Proper noun, plural |
| PDT | Predeterminer |
| POS | Possessive ending |
| PRP | Personal pronoun |
| PRP$ | Possessive pronoun |
| RB | Adverb |
| RBR | Adverb, comparative |
| RBS | Adverb, superlative |
| RP | Particle |
| SYM | Symbol |
| TO | to |
| UH | Interjection |
| VB | Verb, base form |
| VBD | Verb, past tense |
| VBG | Verb, gerund or present participle |
| VBN | Verb, past participle |
| VBP | Verb, non-3rd person singular present |
| VBZ | Verb, 3rd person singular present |
| WDT | Wh-determiner |
| WP | Wh-pronoun |
| WP$ | Possessive wh-pronoun |
| WRB | Wh-adverb |
Notice: the possessive tags are written PRP$ and WP$ (with a dollar sign) in the Penn Treebank tagset.
End of explanation
def noun_collector(word_tag_list):
if(len(word_tag_list)>0):
return [word for (word, tag) in word_tag_list if tag in {'NN', 'NNS', 'NNP', 'NNPS'}]
nouns_df = pd.DataFrame(tagged_df['reviewText'].progress_apply(lambda review: noun_collector(review)))
nouns_df.head()
keyed_nouns_df = pd.concat([uniqueKey_series_df, nouns_df], axis=1);
keyed_nouns_df.head()
keyed_nouns_df.to_csv("../../data/interim/002_keyed_nouns_stanford.csv", sep='\t', header=True, index=False);
## END_OF_FILE
Explanation: Nouns
Nouns generally refer to people, places, things, or concepts, e.g.: woman, Scotland, book, intelligence. Nouns can appear after determiners and adjectives, and can be the subject or object of the verb.
The simplified noun tags are N for common nouns like book, and NP for proper nouns like Scotland.
End of explanation
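The noun-filtering step above can be sketched without pandas or the Stanford tagger; this minimal pure-Python example (the tagged sentence is invented for illustration) keeps only tokens whose tag is one of the four noun tags:

```python
NOUN_TAGS = {'NN', 'NNS', 'NNP', 'NNPS'}

def collect_nouns(tagged_tokens):
    # tagged_tokens is a list of (word, tag) pairs, as a POS tagger would emit
    return [word for word, tag in tagged_tokens if tag in NOUN_TAGS]

# Hypothetical tagger output, not real Stanford-tagger output
tagged = [('Scotland', 'NNP'), ('prints', 'VBZ'), ('good', 'JJ'),
          ('books', 'NNS'), ('quickly', 'RB')]
print(collect_nouns(tagged))  # ['Scotland', 'books']
```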
<END_TASK> |
15,610 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Install and import biothings_explorer
Step1: Construct Meta-KG from SmartAPI
Step2: Filter
Filter for Meta-KG operations with Gene as Input and ChemicalSubstance as output
Step3: Find Meta-KG operations that convey Gene->Metabolize->ChemicalSubstance
Step4: Filter for Knowledge Graph Operations supported by MyChem.info as API source
Step5: Filter for API operations with drugbank as data source | <ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/biothings/biothings_explorer.git
from biothings_explorer.smartapi_kg import MetaKG
Explanation: Install and import biothings_explorer
End of explanation
kg = MetaKG()
kg.constructMetaKG(source="remote")
Explanation: Construct Meta-KG from SmartAPI
End of explanation
kg.filter({"input_type": "Gene", "output_type": "ChemicalSubstance"})
Explanation: Filter
Filter for Meta-KG operations with Gene as Input and ChemicalSubstance as output
End of explanation
kg.filter({"input_type": "Gene", "output_type": "ChemicalSubstance", "predicate": "metabolize"})
Explanation: Find Meta-KG operations that convey Gene->Metabolize->ChemicalSubstance
End of explanation
kg.filter({"api_name": "MyChem.info API"})
Explanation: Filter for Knowledge Graph Operations supported by MyChem.info as API source
End of explanation
kg.filter({"source": "drugbank"})
Explanation: Filter for API operations with drugbank as data source
End of explanation
<END_TASK> |
15,611 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions
Step1: 2 - Overview of the Problem set
Problem Statement
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
What you need to remember
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
# Example of a picture
index = 23
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:,index]) + ", it's a '" + classes[np.squeeze(train_set_y[:,index])].decode("utf-8") + "' picture.")
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig[0].shape[0]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise:
Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px * num_px * 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b*c*d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
End of explanation
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
x -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
s = 1.0 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(9.2) = " + str(sigmoid(9.2)))
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b)$ to make predictions.
End of explanation
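A dependency-free sanity check of the sigmoid formula (the graded function above uses NumPy so it also works element-wise on arrays; this scalar sketch only illustrates the math):

```python
import math

def sigmoid_scalar(z):
    # sigmoid(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid_scalar(0))    # 0.5
print(sigmoid_scalar(9.2))  # ~0.999898970806, matching the expected output above
```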
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
w, b = np.zeros((dim, 1)), 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**sigmoid(0)**</td>
<td> 0.5</td>
</tr>
<tr>
<td>**sigmoid(9.2)**</td>
<td> 0.999898970806 </td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b)
cost = -1.0 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1.0 / m * np.dot(X, (A - Y).T)
db = 1.0 / m * np.sum(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
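Formulas (7) and (8) can be checked by hand on the same tiny example used above (w = [1, 2], b = 2, two training columns); this plain-Python sketch reproduces the expected dw and db without NumPy:

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = [1.0, 2.0], 2.0
X = [[1.0, 2.0],   # feature 0 for the two examples
     [3.0, 4.0]]   # feature 1 for the two examples
Y = [1.0, 0.0]
m = len(Y)

# Forward pass, then the gradients from formulas (7) and (8)
A = [sig(w[0] * X[0][i] + w[1] * X[1][i] + b) for i in range(m)]
dz = [A[i] - Y[i] for i in range(m)]
dw = [sum(X[f][i] * dz[i] for i in range(m)) / m for f in range(2)]
db = sum(dz) / m
print(dw, db)  # ~[0.99993216, 1.99980262], ~0.499935230625
```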
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
        # Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" % (i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
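Before looking at the full optimizer, the update rule θ = θ − α·dθ can be watched in isolation on a toy one-parameter problem (this sketch is illustrative only, not part of the assignment): minimizing J(θ) = (θ − 3)², whose gradient is 2(θ − 3):

```python
theta = 0.0          # initial parameter
learning_rate = 0.1  # alpha

for _ in range(100):
    d_theta = 2 * (theta - 3)                # gradient of (theta - 3)^2
    theta = theta - learning_rate * d_theta  # the update rule

print(theta)  # approaches the minimum at theta = 3
```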
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1, m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
# Y_prediction[A >= 0.5] = int(1)
# Y_prediction[A < 0.5] = int(0)
for i in range(A.shape[1]):
# Convert probabilities a[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0][i] > 0.5:
Y_prediction[0][i] = 1
else:
Y_prediction[0][i] = 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction.astype(int)
print("predictions = " + str(predict(w, b, X)))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.430462071679 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
dim = X_train.shape[0]
w, b = initialize_with_zeros(dim)
params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations = num_iterations, learning_rate = learning_rate, print_cost = print_cost)
w = params["w"]
b = params["b"]
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Explanation: Run the following cell to train your model.
End of explanation
# Example of a picture that was wrongly classified.
index = 5
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print(d["Y_prediction_test"][0, index])
print ("y = " + str(test_set_y[0, index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0, index]].decode("utf-8") + "\" picture.")
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Explanation: Let's also plot the cost function and the gradients.
End of explanation
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens.
End of explanation
## START CODE HERE ## (PUT YOUR IMAGE NAME)
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: Interpretation:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation
<END_TASK> |
15,612 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Spatial Weights
Spatial weights are mathematical structures used to represent spatial relationships. They characterize the relationship of each observation to every other observation using some concept of proximity or closeness that depends on the weight type.
They can be built in PySAL from shapefiles, as well as from some other file types.
Step1: There are functions to construct weights directly from a file path.
Step2: Weight Types
Contiguity
Step3: All weights objects have a few traits that you can use to work with the weights object, as well as to get information about the weights object.
To get the neighbors & weights around an observation, use the observation's index on the weights object, like a dictionary
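A weights object behaves like a dictionary mapping each observation id to its neighbors and their weights. PySAL isn't required to see that structure; this hand-built sketch uses a hypothetical 2×2 grid of cells (ids 0–3, rook contiguity), not real shapefile data:

```python
# Rook contiguity on a 2x2 grid laid out as:
#   0 1
#   2 3
w = {
    0: {1: 1.0, 2: 1.0},
    1: {0: 1.0, 3: 1.0},
    2: {0: 1.0, 3: 1.0},
    3: {1: 1.0, 2: 1.0},
}
# Indexing by an observation id returns its neighbors and weights,
# just like w[index] on a PySAL weights object
print(w[0])  # {1: 1.0, 2: 1.0}
```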
Step4: By default, the weights and the pandas dataframe will use the same index. So, we can view the observation and its neighbors in the dataframe by putting the observation's index and its neighbors' indexes together in one list
Step5: and grabbing those elements from the dataframe
Step6: A full, dense matrix describing all of the pairwise relationships is constructed using the .full method, or when pysal.full is called on a weights object
Step7: Note that this matrix is binary, in that its elements are either zero or one, since an observation is either a neighbor or it is not a neighbor.
However, many common use cases of spatial weights require that the matrix is row-standardized. This is done simply in PySAL using the .transform attribute
Step8: Now, if we build a new full matrix, its rows should sum to one
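Row-standardization itself is easy to sketch without PySAL: divide each weight by its row sum so every row sums to one. The toy neighbor dict here is invented; on a real weights object this is what setting .transform = 'r' accomplishes:

```python
binary_w = {
    0: {1: 1.0, 2: 1.0},
    1: {0: 1.0, 2: 1.0, 3: 1.0},
}

def row_standardize(weights):
    # Divide each neighbor weight by the sum of its row
    out = {}
    for i, nbrs in weights.items():
        total = sum(nbrs.values())
        out[i] = {j: v / total for j, v in nbrs.items()}
    return out

w_r = row_standardize(binary_w)
print(w_r[1])  # three neighbors, each weighted 1/3; the row sums to 1
```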
Step9: Since weight matrices are typically very sparse, there is also a sparse weights matrix constructor
Step10: By default, PySAL assigns each observation an index according to the order in which the observation was read in. This means that, by default, all of the observations in the weights object are indexed by table order. If you have an alternative ID variable, you can pass that into the weights constructor.
For example, the NAT.shp dataset has a possible alternative ID Variable, a FIPS code.
Step11: The observation we were discussing above is in the fifth row
Step12: Now, Pend Oreille county has a different index
Step13: Note that a KeyError in Python usually means that some index, here 4, was not found in the collection being searched, the IDs in the queen weights object. This makes sense, since we explicitly passed an idVariable argument, and nothing has a FIPS code of 4.
Instead, if we use the observation's FIPS code
Step14: We get what we need.
In addition, we have to now query the dataframe using the FIPS code to find our neighbors. But, this is relatively easy to do, since pandas will parse the query by looking into python objects, if told to.
First, let us store the neighbors of our target county
Step15: Then, we can use this list in .query
Step16: Note that we have to use @ before the name in order to show that we're referring to a python object and not a column in the dataframe.
Step17: Of course, we could also reindex the dataframe to use the same index as our weights
Step18: Now that both are using the same weights, we can use the .loc indexer again
Step19: Rook Weights
Rook weights are another type of contiguity weight, but consider observations as neighboring only when they share an edge. The rook neighbors of an observation may be different than its queen neighbors, depending on how the observation and its nearby polygons are configured.
We can construct this in the same way as the queen weights, using the special rook_from_shapefile function
Step20: These weights function exactly like the Queen weights, and are only distinguished by what they consider "neighbors."
Bishop Weights
In theory, a "Bishop" weighting scheme is one that arises when only polygons that share vertices are considered to be neighboring. But, since Queen contiguity requires either an edge or a vertex and Rook contiguity requires only shared edges, the following relationship is true
Step21: Thus, the vast majority of counties have no bishop neighbors. But, a few do. A simple way to see these observations in the dataframe is to find all elements of the dataframe that are not "islands," the term for an observation with no neighbors
Step22: Distance
There are many other kinds of weighting functions in PySAL. Another separate type uses a continuous measure of distance to define neighborhoods. To use these measures, we first must extract the polygons' centroids.
For each polygon poly in dataframe.geometry, we want poly.centroid. So, one way to do this is to make a list of all of the centroids
Step23: If we were working with point data, this step would be unncessary.
KnnW
If we wanted to consider only the k-nearest neighbors to an observation's centroid, we could use the knnW function in PySAL.
This specific type of distance weights requires that we first build a KDTree, a special representation for spatial point data. Fortunately, this is built in to PySAL
Step24: Then, we can use this to build a spatial weights object where only the closest k observations are considered "neighbors." In this example, let's do the closest 5
Step25: So, all observations have exactly 5 neighbors. Sometimes, these neighbors are actually different observations than the ones identified as contiguity neighbors.
For example, Pend Oreille gets a new neighbor, Kootenai county
Step26: Kernel W
Kernel Weights are continuous distance-based weights that use kernel densities to provide an indication of neighborliness.
Typically, they estimate a bandwidth, which is a parameter governing how far out observations should be considered neighboring. Then, using this bandwidth, they evaluate a continuous kernel function to provide a weight between 0 and 1.
Many different choices of kernel functions are supported, and bandwidth can be estimated at each point or over the entire map.
For example, if we wanted to use a single estimated bandwidth for the entire map and weight according to a gaussian kernel | <ASSISTANT_TASK:>
Python Code:
import pysal as ps
import numpy as np
Explanation: Spatial Weights
Spatial weights are mathematical structures used to represent spatial relationships. They characterize the relationship of each observation to every other observation using some concept of proximity or closeness that depends on the weight type.
They can be build in PySAL from shapefiles, as well as some types of files.
End of explanation
shp_path = ps.examples.get_path('NAT.shp')
Explanation: There are functions to construct weights directly from a file path.
End of explanation
qW = ps.queen_from_shapefile(shp_path)
dataframe = ps.pdio.read_files(shp_path)
qW
Explanation: Weight Types
Contiguity:
Queen Weights
A commonly-used type of weight is a queen contiguity weight, which reflects adjacency relationships as a binary indicator variable denoting whether or not a polygon shares an edge or a vertex with another polygon. These weights are symmetric, in that when polygon $A$ neighbors polygon $B$, both $w_{AB} = 1$ and $w_{BA} = 1$.
To construct queen weights from a shapefile, use the queen_from_shapefile function:
End of explanation
qW[4] #neighbors & weights of the 5th observation
Explanation: All weights objects have a few traits that you can use to work with the weights object, as well as to get information about the weights object.
To get the neighbors & weights around an observation, use the observation's index on the weights object, like a dictionary:
End of explanation
self_and_neighbors = [4]
self_and_neighbors.extend(qW.neighbors[4])
print(self_and_neighbors)
Explanation: By default, the weights and the pandas dataframe will use the same index. So, we can view the observation and its neighbors in the dataframe by putting the observation's index and its neighbors' indexes together in one list:
End of explanation
dataframe.loc[self_and_neighbors]
Explanation: and grabbing those elements from the dataframe:
End of explanation
Wmatrix, ids = qW.full()
#Wmatrix, ids = ps.full(qW)
Wmatrix
Explanation: A full, dense matrix describing all of the pairwise relationships is constructed using the .full method, or when pysal.full is called on a weights object:
End of explanation
qW.transform = 'r'
Explanation: Note that this matrix is binary, in that its elements are either zero or one, since an observation is either a neighbor or it is not a neighbor.
However, many common use cases of spatial weights require that the matrix is row-standardized. This is done simply in PySAL using the .transform attribute
End of explanation
Wmatrix, ids = qW.full()
Wmatrix.sum(axis=1) # sum along axis=1 (across columns), giving one total per row
Explanation: Now, if we build a new full matrix, its rows should sum to one:
End of explanation
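To make the effect of row standardization concrete, here is a small self-contained numpy sketch (independent of PySAL and the NAT data, using a made-up 3-observation contiguity matrix): dividing each binary row by its row sum turns the spatial lag `W @ y` into an average over each observation's neighbors.

```python
import numpy as np

# Toy binary contiguity for 3 observations: 0-1 and 1-2 are neighbors.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# Row-standardize: divide each row by its row sum.
W_row = W / W.sum(axis=1, keepdims=True)
print(W_row.sum(axis=1))  # [1. 1. 1.]

# With row-standardized weights, the spatial lag of a variable y
# is the mean of each observation's neighbors' values.
y = np.array([10., 20., 30.])
lag = W_row @ y
print(lag)  # [20. 20. 20.] -- e.g. obs 1's lag is mean(10, 30)
```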
qW.sparse
Explanation: Since weight matrices are typically very sparse, there is also a sparse weights matrix constructor:
End of explanation
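The payoff of the sparse representation is easy to quantify. This is a hedged illustration with scipy (not PySAL's internals), using a synthetic matrix where each of 500 observations gets about 6 random neighbors:

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 500
# Random binary "contiguity": each entry is a neighbor link with
# probability 6/n, then symmetrized and the diagonal zeroed.
A = (rng.random((n, n)) < 6 / n).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)

W_sparse = sparse.csr_matrix(A)
density = W_sparse.nnz / (n * n)
print(f"{W_sparse.nnz} nonzeros out of {n * n} entries "
      f"(density {density:.2%})")
```

Only a few percent of the entries are nonzero, which is exactly the situation the sparse constructor is designed for.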
dataframe.head()
Explanation: By default, PySAL assigns each observation an index according to the order in which the observation was read in. This means that, by default, all of the observations in the weights object are indexed by table order. If you have an alternative ID variable, you can pass that into the weights constructor.
For example, the NAT.shp dataset has a possible alternative ID Variable, a FIPS code.
End of explanation
qW = ps.queen_from_shapefile(shp_path, idVariable='FIPS')
Explanation: The observation we were discussing above is in the fifth row: Pend Oreille county, Washington. Note that its FIPS code is 53051.
Then, instead of indexing the weights and the dataframe just based on read-order, use the FIPS code as an index:
End of explanation
qW[4] #fails, since no FIPS is 4.
Explanation: Now, Pend Oreille county has a different index:
End of explanation
qW['53051']
Explanation: Note that a KeyError in Python usually means that some index, here 4, was not found in the collection being searched, the IDs in the queen weights object. This makes sense, since we explicitly passed an idVariable argument, and nothing has a FIPS code of 4.
Instead, if we use the observation's FIPS code:
End of explanation
self_and_neighbors = ['53051']
self_and_neighbors.extend(qW.neighbors['53051'])
Explanation: We get what we need.
In addition, we have to now query the dataframe using the FIPS code to find our neighbors. But, this is relatively easy to do, since pandas will parse the query by looking into python objects, if told to.
First, let us store the neighbors of our target county:
End of explanation
dataframe.query('FIPS in @self_and_neighbors')
Explanation: Then, we can use this list in .query:
End of explanation
#dataframe.query('FIPS in neighs') will fail because there is no column called 'neighs'
Explanation: Note that we have to use @ before the name in order to show that we're referring to a python object and not a column in the dataframe.
End of explanation
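The @ prefix is standard pandas .query() behavior rather than anything PySAL-specific. A minimal standalone illustration (the codes and names below are only for illustration):

```python
import pandas as pd

df = pd.DataFrame({"FIPS": ["53051", "53063", "16017"],
                   "NAME": ["Pend Oreille", "Spokane", "Bonner"]})

wanted = ["53051", "16017"]  # a plain Python list, not a column

# '@wanted' makes .query() look up the Python variable 'wanted'
# instead of searching for a column with that name.
subset = df.query("FIPS in @wanted")
print(subset.NAME.tolist())  # ['Pend Oreille', 'Bonner']
```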
fips_frame = dataframe.set_index(dataframe.FIPS)
fips_frame.head()
Explanation: Of course, we could also reindex the dataframe to use the same index as our weights:
End of explanation
fips_frame.loc[self_and_neighbors]
Explanation: Now that both are using the same weights, we can use the .loc indexer again:
End of explanation
rW = ps.rook_from_shapefile(shp_path, idVariable='FIPS')
rW['53051']
Explanation: Rook Weights
Rook weights are another type of contiguity weight, but consider observations as neighboring only when they share an edge. The rook neighbors of an observation may be different than its queen neighbors, depending on how the observation and its nearby polygons are configured.
We can construct this in the same way as the queen weights, using the special rook_from_shapefile function
End of explanation
bW = ps.w_difference(qW, rW, constrained=False, silent_island_warning=True) #silence because there will be a lot of warnings
bW.histogram
Explanation: These weights function exactly like the Queen weights, and are only distinguished by what they consider "neighbors."
Bishop Weights
In theory, a "Bishop" weighting scheme is one that arises when only polygons that share vertices are considered to be neighboring. But, since Queen contiguity requires either an edge or a vertex and Rook contiguity requires only shared edges, the following relationship is true:
$$ \mathcal{Q} = \mathcal{R} \cup \mathcal{B} $$
where $\mathcal{Q}$ is the set of neighbor pairs via queen contiguity, $\mathcal{R}$ is the set of neighbor pairs via Rook contiguity, and $\mathcal{B}$ via Bishop contiguity. Thus:
$$ \mathcal{Q} \setminus \mathcal{R} = \mathcal{B}$$
Bishop weights entail all Queen neighbor pairs that are not also Rook neighbors.
PySAL does not have a dedicated bishop weights constructor, but you can construct very easily using the w_difference function. This function is one of a family of tools to work with weights, all defined in ps.weights, that conduct these types of set operations between weight objects.
End of explanation
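The set algebra behind w_difference can be sketched with plain Python sets of neighbor pairs — a toy 2x2 grid of square cells, not PySAL's actual implementation:

```python
# Cells laid out as:   0 1
#                      2 3
# Rook pairs share an edge; Queen pairs share an edge or a vertex.
rook = {(0, 1), (0, 2), (1, 3), (2, 3)}
queen = rook | {(0, 3), (1, 2)}  # the diagonals touch at the center

# Bishop contiguity = Queen pairs that are not Rook pairs.
bishop = queen - rook
print(bishop)  # {(0, 3), (1, 2)}
```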
islands = bW.islands
dataframe.query('FIPS not in @islands')
Explanation: Thus, the vast majority of counties have no bishop neighbors. But, a few do. A simple way to see these observations in the dataframe is to find all elements of the dataframe that are not "islands," the term for an observation with no neighbors:
End of explanation
centroids = [list(poly.centroid) for poly in dataframe.geometry]
centroids[0:5] #let's look at the first five
Explanation: Distance
There are many other kinds of weighting functions in PySAL. Another separate type use a continuous measure of distance to define neighborhoods. To use these measures, we first must extract the polygons' centroids.
For each polygon poly in dataframe.geometry, we want poly.centroid. So, one way to do this is to make a list of all of the centroids:
End of explanation
kdtree = ps.cg.KDTree(centroids)
Explanation: If we were working with point data, this step would be unnecessary.
KnnW
If we wanted to consider only the k-nearest neighbors to an observation's centroid, we could use the knnW function in PySAL.
This specific type of distance weights requires that we first build a KDTree, a special representation for spatial point data. Fortunately, this is built in to PySAL:
End of explanation
nn5 = ps.knnW(kdtree, k=5)
nn5.histogram
Explanation: Then, we can use this to build a spatial weights object where only the closest k observations are considered "neighbors." In this example, let's do the closest 5:
End of explanation
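Unlike contiguity weights, k-nearest-neighbor relations need not be symmetric: A can be among B's nearest neighbors without B being among A's. A self-contained numpy sketch of the idea with k=1 and three made-up points (PySAL's KDTree-based search is just a faster way to do this):

```python
import numpy as np

# One isolated point and a close pair, laid out on a line.
pts = np.array([[0.0], [10.0], [11.0]])

# Pairwise distances, with self-distances pushed to infinity.
d = np.abs(pts - pts.T)
np.fill_diagonal(d, np.inf)

nearest = d.argmin(axis=1)
print(nearest)  # [1 2 1]: 0's nearest is 1, but 1's nearest is 2
```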
nn5[4]
dataframe.loc[nn5.neighbors[4] + [4]]
fips_frame.loc[qW.neighbors['53051'] + ['53051']]
Explanation: So, all observations have exactly 5 neighbors. Sometimes, these neighbors are actually different observations than the ones identified as contiguity neighbors.
For example, Pend Oreille gets a new neighbor, Kootenai county:
End of explanation
kernelW = ps.Kernel(centroids, fixed=True, function='gaussian')
#ps.Kernel(centroids, fixed=False, function='gaussian') #same kernel, but bandwidth changes at each observation
dataframe.loc[kernelW.neighbors[4] + [4]]
Explanation: Kernel W
Kernel Weights are continuous distance-based weights that use kernel densities to provide an indication of neighborliness.
Typically, they estimate a bandwidth, which is a parameter governing how far out observations should be considered neighboring. Then, using this bandwidth, they evaluate a continuous kernel function to provide a weight between 0 and 1.
Many different choices of kernel functions are supported, and bandwidth can be estimated at each point or over the entire map.
For example, if we wanted to use a single estimated bandwidth for the entire map and weight according to a gaussian kernel:
End of explanation
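The shape of such a kernel is easy to inspect numerically. A hedged sketch of a Gaussian kernel using the common exp(-z^2/2) form with z = distance/bandwidth (PySAL's exact normalization constant may differ):

```python
import numpy as np

def gaussian_kernel(dist, bandwidth):
    # Weight decays smoothly with distance and is ~0 well past
    # the bandwidth.
    z = dist / bandwidth
    return np.exp(-0.5 * z ** 2)

dists = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
w = gaussian_kernel(dists, bandwidth=1.0)
print(np.round(w, 3))  # [1.    0.882 0.607 0.135 0.011]
```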
<END_TASK> |
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Getting started with KerasTuner
Authors
Step1: Introduction
KerasTuner is a general-purpose hyperparameter tuning library. It has strong
integration with Keras workflows, but it isn't limited to them
Step2: You can quickly test if the model builds successfully.
Step3: There are many other types of hyperparameters as well. We can define multiple
hyperparameters in the function. In the following code, we tune whether to
use a Dropout layer with hp.Boolean(), tune which activation function to
use with hp.Choice(), tune the learning rate of the optimizer with
hp.Float().
Step4: As shown below, the hyperparameters are actual values. In fact, they are just
functions returning actual values. For example, hp.Int() returns an int
value. Therefore, you can put them into variables, for loops, or if
conditions.
Step5: You can also define the hyperparameters in advance and keep your Keras code in
a separate function.
Step6: Each of the hyperparameters is uniquely identified by its name (the first
argument). To tune the number of units in different Dense layers separately
as different hyperparameters, we give them different names as f"units_{i}".
Notably, this is also an example of creating conditional hyperparameters.
There are many hyperparameters specifying the number of units in the Dense
layers. The number of such hyperparameters is decided by the number of layers,
which is also a hyperparameter. Therefore, the total number of hyperparameters
used may be different from trial to trial. Some hyperparameters are only used
when a certain condition is satisfied. For example, units_2 is only used
when num_layers is larger than 2. With KerasTuner, you can easily define
such hyperparameters dynamically while creating the model.
Step7: Start the search
After defining the search space, we need to select a tuner class to run the
search. You may choose from RandomSearch, BayesianOptimization and
Hyperband, which correspond to different tuning algorithms. Here we use
RandomSearch as an example.
To initialize the tuner, we need to specify several arguments in the initializer.
hypermodel. The model-building function, which is build_model in our case.
objective. The name of the objective to optimize (whether to minimize or
maximize is automatically inferred for built-in metrics). We will introduce how
to use custom metrics later in this tutorial.
max_trials. The total number of trials to run during the search.
executions_per_trial. The number of models that should be built and fit for
each trial. Different trials have different hyperparameter values. The
executions within the same trial have the same hyperparameter values. The
purpose of having multiple executions per trial is to reduce results variance
and therefore be able to more accurately assess the performance of a model. If
you want to get results faster, you could set executions_per_trial=1 (single
round of training for each model configuration).
overwrite. Control whether to overwrite the previous results in the same
directory or resume the previous search instead. Here we set overwrite=True
to start a new search and ignore any previous results.
directory. A path to a directory for storing the search results.
project_name. The name of the sub-directory in the directory.
Step8: You can print a summary of the search space
Step9: Before starting the search, let's prepare the MNIST dataset.
Step10: Then, start the search for the best hyperparameter configuration.
All the arguments passed to search are passed to model.fit() in each
execution. Remember to pass validation_data to evaluate the model.
Step11: During the search, the model-building function is called with different
hyperparameter values in different trials. In each trial, the tuner would
generate a new set of hyperparameter values to build the model. The model is
then fit and evaluated. The metrics are recorded. The tuner progressively
explores the space and finally finds a good set of hyperparameter values.
Query the results
When search is over, you can retrieve the best model(s). The model is saved at
its best performing epoch evaluated on the validation_data.
Step12: You can also print a summary of the search results.
Step13: You will find detailed logs, checkpoints, etc, in the folder
my_dir/helloworld, i.e. directory/project_name.
You can also visualize the tuning results using TensorBoard and HParams plugin.
For more information, please following
this link.
Retrain the model
If you want to train the model with the entire dataset, you may retrieve the
best hyperparameters and retrain the model by yourself.
Step14: Tune model training
To tune the model building process, we need to subclass the HyperModel class,
which also makes it easy to share and reuse hypermodels.
We need to override HyperModel.build() and HyperModel.fit() to tune the
model building and training process respectively. A HyperModel.build()
method is the same as the model-building function, which creates a Keras model
using the hyperparameters and returns it.
In HyperModel.fit(), you can access the model returned by
HyperModel.build(),hp and all the arguments passed to search(). You need
to train the model and return the training history.
In the following code, we will tune the shuffle argument in model.fit().
It is generally not needed to tune the number of epochs because a built-in
callback is passed to model.fit() to save the model at its best epoch
evaluated by the validation_data.
Note
Step15: Again, we can do a quick check to see if the code works correctly.
Step16: Tune data preprocessing
To tune data preprocessing, we just add an additional step in
HyperModel.fit(), where we can access the dataset from the arguments. In the
following code, we tune whether to normalize the data before training the
model. This time we explicitly put x and y in the function signature
because we need to use them.
Step17: If a hyperparameter is used both in build() and fit(), you can define it in
build() and use hp.get(hp_name) to retrieve it in fit(). We use the
image size as an example. It is both used as the input shape in build(), and
used by the data preprocessing step to crop the images in fit().
Step18: Retrain the model
Using HyperModel also allows you to retrain the best model by yourself.
Step19: Specify the tuning objective
In all previous examples, we all just used validation accuracy
("val_accuracy") as the tuning objective to select the best model. Actually,
you can use any metric as the objective. The most commonly used metric is
"val_loss", which is the validation loss.
Built-in metric as the objective
There are many other built-in metrics in Keras you can use as the objective.
Here is a list of the built-in metrics.
To use a built-in metric as the objective, you need to follow these steps
Step20: Custom metric as the objective
You may implement your own metric and use it as the hyperparameter search
objective. Here, we use mean squared error (MSE) as an example. First, we
implement the MSE metric by subclassing keras.metrics.Metric. Remember to
give a name to your metric using the name argument of super().__init__(),
which will be used later. Note
Step21: Run the search with the custom objective.
Step22: If your custom objective is hard to put into a custom metric, you can also
evaluate the model by yourself in HyperModel.fit() and return the objective
value. The objective value would be minimized by default. In this case, you
don't need to specify the objective when initializing the tuner. However, in
this case, the metric value will not be tracked in the Keras logs by only
KerasTuner logs. Therefore, these values would not be displayed by any
TensorBoard view using the Keras metrics.
Step23: If you have multiple metrics to track in KerasTuner, but only use one of them
as the objective, you can return a dictionary, whose keys are the metric names
and the values are the metrics values, for example, return {"metric_a"
Step24: Tune end-to-end workflows
In some cases, it is hard to align your code into build and fit functions. You
can also keep your end-to-end workflow in one place by overriding
Tuner.run_trial(), which gives you full control of a trial. You can see it
as a black-box optimizer for anything.
Tune any function
For example, you can find a value of x, which minimizes f(x)=x*x+1. In the
following code, we just define x as a hyperparameter, and return f(x) as
the objective value. The hypermodel and objective argument for initializing
the tuner can be omitted.
Step25: Keep Keras code separate
You can keep all your Keras code unchanged and use KerasTuner to tune it. It
is useful if you cannot modify the Keras code for some reason.
It also gives you more flexibility. You don't have to separate the model
building and training code apart. However, this workflow would not help you
save the model or connect with the TensorBoard plugins.
To save the model, you can use trial.trial_id, which is a string to uniquely
identify a trial, to construct different paths to save the models from
different trials.
Step26: KerasTuner includes pre-made tunable applications | <ASSISTANT_TASK:>
Python Code:
!pip install keras-tuner -q
Explanation: Getting started with KerasTuner
Authors: Luca Invernizzi, James Long, Francois Chollet, Tom O'Malley, Haifeng Jin<br>
Date created: 2019/05/31<br>
Last modified: 2021/10/27<br>
Description: The basics of using KerasTuner to tune model hyperparameters.
End of explanation
from tensorflow import keras
from tensorflow.keras import layers
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Define the hyperparameter.
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"],
)
return model
Explanation: Introduction
KerasTuner is a general-purpose hyperparameter tuning library. It has strong
integration with Keras workflows, but it isn't limited to them: you could use
it to tune scikit-learn models, or anything else. In this tutorial, you will
see how to tune model architecture, training process, and data preprocessing
steps with KerasTuner. Let's start from a simple example.
Tune the model architecture
The first thing we need to do is writing a function, which returns a compiled
Keras model. It takes an argument hp for defining the hyperparameters while
building the model.
Define the search space
In the following code example, we define a Keras model with two Dense layers.
We want to tune the number of units in the first Dense layer. We just define
an integer hyperparameter with hp.Int('units', min_value=32, max_value=512, step=32),
whose range is from 32 to 512 inclusive. When sampling from it, the minimum
step for walking through the interval is 32.
End of explanation
import keras_tuner
build_model(keras_tuner.HyperParameters())
Explanation: You can quickly test if the model builds successfully.
End of explanation
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Tune number of units.
units=hp.Int("units", min_value=32, max_value=512, step=32),
# Tune the activation function to use.
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
# Tune whether to use dropout.
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
# Define the optimizer learning rate as a hyperparameter.
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())
Explanation: There are many other types of hyperparameters as well. We can define multiple
hyperparameters in the function. In the following code, we tune whether to
use a Dropout layer with hp.Boolean(), tune which activation function to
use with hp.Choice(), tune the learning rate of the optimizer with
hp.Float().
End of explanation
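Here, sampling="log" means the learning rate is drawn uniformly in log space rather than linearly, so the decade 1e-4 to 1e-3 is sampled as often as 1e-3 to 1e-2. A numpy sketch of the idea (an illustration, not KerasTuner's internal sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(low, high, size):
    # Uniform in log10 space, then exponentiate back.
    return 10.0 ** rng.uniform(np.log10(low), np.log10(high), size)

lrs = sample_log_uniform(1e-4, 1e-2, size=5)
print(lrs)  # every draw lies in [1e-4, 1e-2]
```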
hp = keras_tuner.HyperParameters()
print(hp.Int("units", min_value=32, max_value=512, step=32))
Explanation: As shown below, the hyperparameters are actual values. In fact, they are just
functions returning actual values. For example, hp.Int() returns an int
value. Therefore, you can put them into variables, for loops, or if
conditions.
End of explanation
def call_existing_code(units, activation, dropout, lr):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(layers.Dense(units=units, activation=activation))
if dropout:
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=lr),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def build_model(hp):
units = hp.Int("units", min_value=32, max_value=512, step=32)
activation = hp.Choice("activation", ["relu", "tanh"])
dropout = hp.Boolean("dropout")
lr = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
# call existing model-building code with the hyperparameter values.
model = call_existing_code(
units=units, activation=activation, dropout=dropout, lr=lr
)
return model
build_model(keras_tuner.HyperParameters())
Explanation: You can also define the hyperparameters in advance and keep your Keras code in
a separate function.
End of explanation
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
# Tune the number of layers.
for i in range(hp.Int("num_layers", 1, 3)):
model.add(
layers.Dense(
# Tune number of units separately.
units=hp.Int(f"units_{i}", min_value=32, max_value=512, step=32),
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())
Explanation: Each of the hyperparameters is uniquely identified by its name (the first
argument). To tune the number of units in different Dense layers separately
as different hyperparameters, we give them different names as f"units_{i}".
Notably, this is also an example of creating conditional hyperparameters.
There are many hyperparameters specifying the number of units in the Dense
layers. The number of such hyperparameters is decided by the number of layers,
which is also a hyperparameter. Therefore, the total number of hyperparameters
used may be different from trial to trial. Some hyperparameter is only used
when a certain condition is satisfied. For example, units_3 is only used
when num_layers is larger than 3. With KerasTuner, you can easily define
such hyperparameters dynamically while creating the model.
End of explanation
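To see the conditional-hyperparameter mechanics without running a search, here is a minimal dict-backed stand-in for hp (a toy, not the real keras_tuner.HyperParameters API): with num_layers fixed at 2, the units_2 hyperparameter is simply never requested.

```python
class FakeHP:
    # Toy hp object that replays fixed values and records requests.
    def __init__(self, values):
        self.values = values
        self.used = []

    def Int(self, name, *args, **kwargs):
        self.used.append(name)
        return self.values[name]

hp = FakeHP({"num_layers": 2, "units_0": 64, "units_1": 128,
             "units_2": 256})
for i in range(hp.Int("num_layers", 1, 3)):
    hp.Int(f"units_{i}", min_value=32, max_value=512, step=32)

print(hp.used)  # ['num_layers', 'units_0', 'units_1']
```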
tuner = keras_tuner.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=3,
executions_per_trial=2,
overwrite=True,
directory="my_dir",
project_name="helloworld",
)
Explanation: Start the search
After defining the search space, we need to select a tuner class to run the
search. You may choose from RandomSearch, BayesianOptimization and
Hyperband, which correspond to different tuning algorithms. Here we use
RandomSearch as an example.
To initialize the tuner, we need to specify several arguments in the initializer.
hypermodel. The model-building function, which is build_model in our case.
objective. The name of the objective to optimize (whether to minimize or
maximize is automatically inferred for built-in metrics). We will introduce how
to use custom metrics later in this tutorial.
max_trials. The total number of trials to run during the search.
executions_per_trial. The number of models that should be built and fit for
each trial. Different trials have different hyperparameter values. The
executions within the same trial have the same hyperparameter values. The
purpose of having multiple executions per trial is to reduce results variance
and therefore be able to more accurately assess the performance of a model. If
you want to get results faster, you could set executions_per_trial=1 (single
round of training for each model configuration).
overwrite. Control whether to overwrite the previous results in the same
directory or resume the previous search instead. Here we set overwrite=True
to start a new search and ignore any previous results.
directory. A path to a directory for storing the search results.
project_name. The name of the sub-directory in the directory.
End of explanation
tuner.search_space_summary()
Explanation: You can print a summary of the search space:
End of explanation
from tensorflow import keras
import numpy as np
(x, y), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x[:-10000]
x_val = x[-10000:]
y_train = y[:-10000]
y_val = y[-10000:]
x_train = np.expand_dims(x_train, -1).astype("float32") / 255.0
x_val = np.expand_dims(x_val, -1).astype("float32") / 255.0
x_test = np.expand_dims(x_test, -1).astype("float32") / 255.0
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
Explanation: Before starting the search, let's prepare the MNIST dataset.
End of explanation
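keras.utils.to_categorical one-hot encodes the integer labels; the same transform written in plain numpy makes it easy to see what the labels become:

```python
import numpy as np

labels = np.array([0, 2, 1])
num_classes = 3

# Row i of the identity matrix is the one-hot vector for class i.
one_hot = np.eye(num_classes)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```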
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
Explanation: Then, start the search for the best hyperparameter configuration.
All the arguments passed to search are passed to model.fit() in each
execution. Remember to pass validation_data to evaluate the model.
End of explanation
# Get the top 2 models.
models = tuner.get_best_models(num_models=2)
best_model = models[0]
# Build the model.
# Needed for `Sequential` without specified `input_shape`.
best_model.build(input_shape=(None, 28, 28))
best_model.summary()
Explanation: During the search, the model-building function is called with different
hyperparameter values in different trials. In each trial, the tuner would
generate a new set of hyperparameter values to build the model. The model is
then fit and evaluated. The metrics are recorded. The tuner progressively
explores the space and finally finds a good set of hyperparameter values.
Query the results
When search is over, you can retrieve the best model(s). The model is saved at
its best performing epoch evaluated on the validation_data.
End of explanation
tuner.results_summary()
Explanation: You can also print a summary of the search results.
End of explanation
# Get the top 2 hyperparameters.
best_hps = tuner.get_best_hyperparameters(5)
# Build the model with the best hp.
model = build_model(best_hps[0])
# Fit with the entire dataset.
x_all = np.concatenate((x_train, x_val))
y_all = np.concatenate((y_train, y_val))
model.fit(x=x_all, y=y_all, epochs=1)
Explanation: You will find detailed logs, checkpoints, etc., in the folder
my_dir/helloworld, i.e. directory/project_name.
You can also visualize the tuning results using TensorBoard and the HParams plugin.
For more information, please follow
this link.
Retrain the model
If you want to train the model with the entire dataset, you may retrieve the
best hyperparameters and retrain the model by yourself.
End of explanation
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"],
)
return model
def fit(self, hp, model, *args, **kwargs):
return model.fit(
*args,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)
Explanation: Tune model training
To tune the model building process, we need to subclass the HyperModel class,
which also makes it easy to share and reuse hypermodels.
We need to override HyperModel.build() and HyperModel.fit() to tune the
model building and training process respectively. A HyperModel.build()
method is the same as the model-building function, which creates a Keras model
using the hyperparameters and returns it.
In HyperModel.fit(), you can access the model returned by
HyperModel.build(), hp, and all the arguments passed to search(). You need
to train the model and return the training history.
In the following code, we will tune the shuffle argument in model.fit().
It is generally not needed to tune the number of epochs because a built-in
callback is passed to model.fit() to save the model at its best epoch
evaluated by the validation_data.
Note: The **kwargs should always be passed to model.fit() because it
contains the callbacks for model saving and the TensorBoard plugins.
End of explanation
hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))
Explanation: Again, we can do a quick check to see if the code works correctly.
End of explanation
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
return model.fit(
x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)
hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))
Explanation: Tune data preprocessing
To tune data preprocessing, we just add an additional step in
HyperModel.fit(), where we can access the dataset from the arguments. In the
following code, we tune whether to normalize the data before training the
model. This time we explicitly put x and y in the function signature
because we need to use them.
End of explanation
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
image_size = hp.Int("image_size", 10, 28)
inputs = keras.Input(shape=(image_size, image_size))
outputs = layers.Flatten()(inputs)
outputs = layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)(outputs)
outputs = layers.Dense(10, activation="softmax")(outputs)
model = keras.Model(inputs, outputs)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, validation_data=None, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
image_size = hp.get("image_size")
cropped_x = x[:, :image_size, :image_size, :]
if validation_data:
x_val, y_val = validation_data
cropped_x_val = x_val[:, :image_size, :image_size, :]
validation_data = (cropped_x_val, y_val)
return model.fit(
cropped_x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
validation_data=validation_data,
**kwargs,
)
tuner = keras_tuner.RandomSearch(
MyHyperModel(),
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="tune_hypermodel",
)
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
Explanation: If a hyperparameter is used both in build() and fit(), you can define it in
build() and use hp.get(hp_name) to retrieve it in fit(). We use the
image size as an example. It is both used as the input shape in build(), and
used by the data preprocessing step to crop the images in fit().
End of explanation
hypermodel = MyHyperModel()
best_hp = tuner.get_best_hyperparameters()[0]
model = hypermodel.build(best_hp)
hypermodel.fit(best_hp, model, x_all, y_all, epochs=1)
Explanation: Retrain the model
Using HyperModel also allows you to retrain the best model by yourself.
End of explanation
def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Objective is one of the metrics.
metrics=[keras.metrics.MeanAbsoluteError()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# The objective name and direction.
# Name is the f"val_{snake_case_metric_class_name}".
objective=keras_tuner.Objective("val_mean_absolute_error", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="built_in_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
Explanation: Specify the tuning objective
In all previous examples, we just used validation accuracy
("val_accuracy") as the tuning objective to select the best model. Actually,
you can use any metric as the objective. The most commonly used metric is
"val_loss", which is the validation loss.
Built-in metric as the objective
There are many other built-in metrics in Keras you can use as the objective.
Here is a list of the built-in metrics.
To use a built-in metric as the objective, you need to follow these steps:
* Compile the model with the built-in metric. For example, if you want to use
MeanAbsoluteError(). You need to compile the model with
metrics=[MeanAbsoluteError()]. You may also use its name string instead:
metrics=["mean_absolute_error"]. The name string of the metric is always
the snake case of the class name.
Identify the objective name string. The name string of the objective is
always in the format of f"val_{metric_name_string}". For example, the
objective name string of mean absolute error evaluated on the validation data
should be "val_mean_absolute_error".
Wrap it into keras_tuner.Objective. We usually need to wrap the objective
into a keras_tuner.Objective object to specify the direction to optimize the
objective. For example, to minimize the mean absolute error, we can use
keras_tuner.Objective("val_mean_absolute_error", "min"). The direction should
be either "min" or "max".
Pass the wrapped objective to the tuner.
You can see the following barebone code example.
End of explanation
import tensorflow as tf
class CustomMetric(keras.metrics.Metric):
def __init__(self, **kwargs):
# Specify the name of the metric as "custom_metric".
super().__init__(name="custom_metric", **kwargs)
self.sum = self.add_weight(name="sum", initializer="zeros")
self.count = self.add_weight(name="count", dtype=tf.int32, initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
values = tf.math.squared_difference(y_pred, y_true)
count = tf.shape(y_true)[0]
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, self.dtype)
values *= sample_weight
count *= sample_weight
self.sum.assign_add(tf.reduce_sum(values))
self.count.assign_add(count)
def result(self):
return self.sum / tf.cast(self.count, tf.float32)
def reset_states(self):
self.sum.assign(0)
self.count.assign(0)
Explanation: Custom metric as the objective
You may implement your own metric and use it as the hyperparameter search
objective. Here, we use mean squared error (MSE) as an example. First, we
implement the MSE metric by subclassing keras.metrics.Metric. Remember to
give a name to your metric using the name argument of super().__init__(),
which will be used later. Note: MSE is actully a build-in metric, which can be
imported with keras.metrics.MeanSquaredError. This is just an example to show
how to use a custom metric as the hyperparameter search objective.
For more information about implementing custom metrics, please see this
tutorial. If you would
like a metric with a different function signature than update_state(y_true,
y_pred, sample_weight), you can override the train_step() method of your
model following this
tutorial.
End of explanation
def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Put custom metric into the metrics.
metrics=[CustomMetric()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# Specify the name and direction of the objective.
objective=keras_tuner.Objective("val_custom_metric", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
Explanation: Run the search with the custom objective.
End of explanation
class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam", loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a single float to minimize.
return np.mean(np.abs(y_pred - y_val))
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# No objective to specify.
# Objective is the return value of `HyperModel.fit()`.
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
Explanation: If your custom objective is hard to put into a custom metric, you can also
evaluate the model by yourself in HyperModel.fit() and return the objective
value. The objective value would be minimized by default. In this case, you
don't need to specify the objective when initializing the tuner. However, in
this case, the metric value will not be tracked in the Keras logs by only
KerasTuner logs. Therefore, these values would not be displayed by any
TensorBoard view using the Keras metrics.
End of explanation
class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam", loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a dictionary of metrics for KerasTuner to track.
return {
"metric_a": -np.mean(np.abs(y_pred - y_val)),
"metric_b": np.mean(np.square(y_pred - y_val)),
}
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# Objective is one of the keys.
# Maximize the negative MAE, equivalent to minimize MAE.
objective=keras_tuner.Objective("metric_a", "max"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval_dict",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
Explanation: If you have multiple metrics to track in KerasTuner, but only use one of them
as the objective, you can return a dictionary, whose keys are the metric names
and the values are the metric values, for example, return {"metric_a": 1.0,
"metric_b": 2.0}. Use one of the keys as the objective name, for example,
keras_tuner.Objective("metric_a", "min").
End of explanation
class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, *args, **kwargs):
# Get the hp from trial.
hp = trial.hyperparameters
# Define "x" as a hyperparameter.
x = hp.Float("x", min_value=-1.0, max_value=1.0)
# Return the objective value to minimize.
return x * x + 1
tuner = MyTuner(
# No hypermodel or objective specified.
max_trials=20,
overwrite=True,
directory="my_dir",
project_name="tune_anything",
)
# No need to pass anything to search()
# unless you use them in run_trial().
tuner.search()
print(tuner.get_best_hyperparameters()[0].get("x"))
Explanation: Tune end-to-end workflows
In some cases, it is hard to align your code into build and fit functions. You
can also keep your end-to-end workflow in one place by overriding
Tuner.run_trial(), which gives you full control of a trial. You can see it
as a black-box optimizer for anything.
Tune any function
For example, you can find a value of x, which minimizes f(x)=x*x+1. In the
following code, we just define x as a hyperparameter, and return f(x) as
the objective value. The hypermodel and objective argument for initializing
the tuner can be omitted.
End of explanation
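To make the black-box idea concrete, the same f(x) = x*x + 1 search can be written as a plain-Python random search. This is a minimal illustrative sketch only — it is not KerasTuner's actual sampling logic:

```python
import random

def f(x):
    # the objective we want to minimize
    return x * x + 1

random.seed(0)  # reproducibility of this sketch
best_x, best_val = None, float("inf")
for _ in range(20):                # 20 "trials", like max_trials=20 above
    x = random.uniform(-1.0, 1.0)  # sample the hyperparameter "x"
    val = f(x)                     # objective value for this trial
    if val < best_val:
        best_x, best_val = x, val
print(best_x, best_val)
```

KerasTuner adds trial bookkeeping, logging, resuming, and smarter samplers on top of this basic loop.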
import os
def keras_code(units, optimizer, saving_path):
# Build model
model = keras.Sequential(
[layers.Dense(units=units, activation="relu"), layers.Dense(units=1),]
)
model.compile(
optimizer=optimizer, loss="mean_squared_error",
)
# Prepare data
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)
x_val = np.random.rand(20, 10)
y_val = np.random.rand(20, 1)
# Train & eval model
model.fit(x_train, y_train)
# Save model
model.save(saving_path)
# Return a single float as the objective value.
# You may also return a dictionary
# of {metric_name: metric_value}.
y_pred = model.predict(x_val)
return np.mean(np.abs(y_pred - y_val))
class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, **kwargs):
hp = trial.hyperparameters
return keras_code(
units=hp.Int("units", 32, 128, 32),
optimizer=hp.Choice("optimizer", ["adam", "adadelta"]),
saving_path=os.path.join("/tmp", trial.trial_id),
)
tuner = MyTuner(
max_trials=3, overwrite=True, directory="my_dir", project_name="keep_code_separate",
)
tuner.search()
# Retraining the model
best_hp = tuner.get_best_hyperparameters()[0]
keras_code(**best_hp.values, saving_path="/tmp/best_model")
Explanation: Keep Keras code separate
You can keep all your Keras code unchanged and use KerasTuner to tune it. It
is useful if you cannot modify the Keras code for some reason.
It also gives you more flexibility. You don't have to separate the model
building and training code apart. However, this workflow would not help you
save the model or connect with the TensorBoard plugins.
To save the model, you can use trial.trial_id, which is a string to uniquely
identify a trial, to construct different paths to save the models from
different trials.
End of explanation
from keras_tuner.applications import HyperResNet
hypermodel = HyperResNet(input_shape=(28, 28, 1), classes=10)
tuner = keras_tuner.RandomSearch(
hypermodel,
objective="val_accuracy",
max_trials=2,
overwrite=True,
directory="my_dir",
project_name="built_in_hypermodel",
)
tuner.search(
x_train[:100], y_train[:100], epochs=1, validation_data=(x_val[:100], y_val[:100])
)
Explanation: KerasTuner includes pre-made tunable applications: HyperResNet and HyperXception
These are ready-to-use hypermodels for computer vision.
They come pre-compiled with loss="categorical_crossentropy" and
metrics=["accuracy"].
End of explanation
<END_TASK> |
15,614 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
2.2 Matplotlib - Wykresy kolumnowe, punktowe, histogramy
Wykresy kolumnowe
Step1: Wykresy punktowe
Step2: Histogramy | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111)
## the data
N = 5
menMeans = [18, 35, 30, 35, 27]
menStd = [2, 3, 4, 1, 2]
womenMeans = [25, 32, 34, 20, 25]
womenStd = [3, 5, 2, 3, 3]
## necessary variables
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
## the bars
rects1 = ax.bar(ind, menMeans, width,
color='black',
yerr=menStd,
error_kw=dict(elinewidth=2,ecolor='red'))
rects2 = ax.bar(ind+width, womenMeans, width,
color='red',
yerr=womenStd,
error_kw=dict(elinewidth=2,ecolor='black'))
# axes and labels
ax.set_xlim(-width,len(ind)+width)
ax.set_ylim(0,45)
ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
xTickMarks = ['Group'+str(i) for i in range(1,6)]
ax.set_xticks(ind+width)
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
## add a legend
ax.legend( (rects1[0], rects2[0]), ('Men', 'Women') )
plt.show()
Explanation: 2.2 Matplotlib - Bar charts, scatter plots, histograms
Bar charts
End of explanation
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(15,7))
ax1 = fig.add_subplot(121)
## the data
N=1000
x = np.random.randn(N)
y = np.random.randn(N)
## left panel
ax1.scatter(x,y,color='blue',s=5,edgecolor='none')
ax1.set_aspect(1./ax1.get_data_ratio()) # make axes square
## right panel
ax2 = fig.add_subplot(122)
props = dict(alpha=0.5, edgecolors='none' )
handles = []
colors = ['blue', 'green', 'magenta', 'cyan']
for color in colors:
x = np.random.randn(N)
y = np.random.randn(N)
s = np.random.randint(50,200)
handles.append(ax2.scatter(x, y, c=color, s=s, **props))
ax2.set_ylim([-5,11])
ax2.set_xlim([-5,11])
ax2.legend(handles, colors)
ax2.grid(True)
ax2.set_aspect(1./ax2.get_data_ratio())
plt.show()
Explanation: Scatter plots
End of explanation
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(111)
x = np.random.normal(0,1,1000)
numBins = 50
ax.hist(x,numBins,color='green',alpha=0.8)
plt.show()
Explanation: Histograms
End of explanation
<END_TASK> |
15,615 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
NLTK Regular Expressions
Author
Step1: Let's see which free resources are readily available. And then let's have a closer look at Shakespeare's Hamlet (to pretend we are literature freaks).
Step2: Regular Expressions
So Shakespeare used 37360 words to write Hamlet. Let's investigate which patterns we find there.
In which word does the character sequence "wre" occur?
Step3: And which of them actually start with "wre"?
Step4: Find all words that start with "T" or "t", end with "r" and have exactly 5 other characters in the middle. To implement the "T" or "t" we use a character class specified by the brackets []. [Tt] matches either "T" or "t".
For matching any word character we could use the character class [a-zA-Z0-9_], but using the abbreviation \w is much more convenient. Further predefined character classes are
Step5: Did Shakespeare use any numbers (written as digits)? For matching all the digits, we could similarly use [0123456789] or [0-9], but using the abbreviation \d is much more convenient.
Step6: And is there something that starts with z and ends with g?
Step7: In the last example we can not be sure whether there is definitely nothing or whether we got the regular expression wrong. To find out which one is the case, create a string you know should match and test your expression there. | <ASSISTANT_TASK:>
Python Code:
import nltk, re
from nltk import word_tokenize
# NOTE if the data (corpora, example files) is not yet downloaded, this needs to be done first
# nltk.download()
Explanation: NLTK Regular Expressions
Author: Christin Seifert, licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
This is a tutorial for simple text processing with python using the nltk library. For further reading I recommend the extensive online book of nltk available here.
In this notebook we will
* load text files from disk
* find word patterns with regular expressions (and see where they fail)
It is assumed that you have some general knowledge on
* basic python
Setup
If you have never used nltk before you need to download the example corpora. Uncomment nltk.download() to do so. We also import the nltk library and the re library for regular expressions.
End of explanation
print(nltk.corpus.gutenberg.fileids())
hamlet = nltk.corpus.gutenberg.words('shakespeare-hamlet.txt')
print(len(hamlet))
Explanation: Let's see which free resources are readily available. And then let's have a closer look at Shakespeare's Hamlet (to pretend we are literature freaks).
End of explanation
[w for w in hamlet if re.search('wre', w)]
Explanation: Regular Expressions
So Shakespeare used 37360 words to write Hamlet. Let's investigate which patterns we find there.
In which word does the character sequence "wre" occur?
End of explanation
[w for w in hamlet if re.search('^wre', w)]
Explanation: And which of them actually start with "wre"?
End of explanation
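The effect of the ^ anchor can be checked on plain strings first — a tiny made-up word list, independent of the Hamlet corpus:

```python
import re

words = ["wrestler", "unwrenched", "wreck", "sword"]
# re.search matches the pattern anywhere in the string ...
print([w for w in words if re.search('wre', w)])   # ['wrestler', 'unwrenched', 'wreck']
# ... while ^ anchors the match to the start of the string
print([w for w in words if re.search('^wre', w)])  # ['wrestler', 'wreck']
```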
[w for w in hamlet if re.search('^[Tt]\w{5,5}r$', w)]
Explanation: Find all words that start with "T" or "t", end with "r" and have exactly 5 other characters in the middle. To implement the "T" or "t" we use a character class specified by the brackets []. [Tt] matches either "T" or "t".
For matching any word character we could use the character class [a-zA-Z0-9_], but using the abbreviation \w is much more convenient. Further predefined character classes are:
\d Matches any decimal digit
\D Matches any non-digit character
\w Matches any alphanumeric character or the underscore
\s Matches any whitespace character (this could be line endings, blanks or tabs). This is tricky, because some of them are not visible if you look at the text with a text editor.
End of explanation
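These predefined classes can be tried directly on a small sample string (pure re, no corpus needed):

```python
import re

sample = "Act 2, Scene 1"
print(re.findall(r'\d', sample))   # ['2', '1'] -- decimal digits
print(re.findall(r'\w+', sample))  # ['Act', '2', 'Scene', '1'] -- runs of word characters
print(re.findall(r'\s', sample))   # [' ', ' ', ' '] -- the three blanks
```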
[w for w in hamlet if re.search('\d', w)]
Explanation: Did Shakespeare use any numbers (written as digits)? For matching all the digits, we could similarly use [0123456789] or [0-9], but using the abbreviation \d is much more convenient.
End of explanation
[w for w in hamlet if re.search('^z.*g$', w)]
Explanation: And is there something that starts with z and ends with g?
End of explanation
[w for w in ["zarhhg","zhang","zg","42"] if re.search('^z.*g$', w)]
Explanation: In the last example we can not be sure whether there is definitely nothing or whether we got the regular expression wrong. To find out which one is the case, create a string you know should match and test your expression there.
End of explanation
<END_TASK> |
15,616 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have a set of objects and their positions over time. I would like to get the distance between each car and its farmost neighbour, and calculate an average of this for each time point. An example dataframe is as follows: | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
time = [0, 0, 0, 1, 1, 2, 2]
x = [216, 218, 217, 280, 290, 130, 132]
y = [13, 12, 12, 110, 109, 3, 56]
car = [1, 2, 3, 1, 3, 4, 5]
df = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
import numpy as np
def g(df):
time = df.time.tolist()
car = df.car.tolist()
farmost_neighbour = []
euclidean_distance = []
for i in range(len(df)):
n = 0
d = 0
for j in range(len(df)):
if df.loc[i, 'time'] == df.loc[j, 'time'] and df.loc[i, 'car'] != df.loc[j, 'car']:
t = np.sqrt(((df.loc[i, 'x'] - df.loc[j, 'x'])**2) + ((df.loc[i, 'y'] - df.loc[j, 'y'])**2))
if t >= d:
d = t
n = df.loc[j, 'car']
farmost_neighbour.append(n)
euclidean_distance.append(d)
return pd.DataFrame({'time': time, 'car': car, 'farmost_neighbour': farmost_neighbour, 'euclidean_distance': euclidean_distance})
df = g(df.copy())
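The problem statement also asks for the average of these distances per time point, which the solution above does not compute. A self-contained sketch follows; it re-derives the per-car distances with a vectorized NumPy computation (a hypothetical alternative to the double loop above, assuming every time point has at least two cars, as in the example) and then averages them with one groupby:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'time': [0, 0, 0, 1, 1, 2, 2],
    'x': [216, 218, 217, 280, 290, 130, 132],
    'y': [13, 12, 12, 110, 109, 3, 56],
    'car': [1, 2, 3, 1, 3, 4, 5],
})

dist = pd.Series(index=df.index, dtype=float)
for _, group in df.groupby('time'):
    # pairwise Euclidean distances between all cars present at this time point
    pts = group[['x', 'y']].to_numpy(dtype=float)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, -np.inf)  # a car is not its own neighbour
    dist[group.index] = d.max(axis=1)

df['euclidean_distance'] = dist
avg_per_time = df.groupby('time')['euclidean_distance'].mean()
print(avg_per_time)
```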
<END_TASK> |
15,617 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Conv2D
[convolutional.Conv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=True
Step1: [convolutional.Conv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=False
Step2: [convolutional.Conv2D.2] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step3: [convolutional.Conv2D.3] 5 4x4 filters on 7x7x3 input, strides=(2,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step4: [convolutional.Conv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step5: [convolutional.Conv2D.5] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step6: [convolutional.Conv2D.6] 4 3x3 filters on 6x3x1 input, strides=(3,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
Step7: [convolutional.Conv2D.7] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=True
Step8: [convolutional.Conv2D.8] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=False
Step9: [convolutional.Conv2D.9] 4 3x3 filters on 7x7x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(3,3), activation='relu', use_bias=True
Step10: [convolutional.Conv2D.10] 3 4x4 filters on 4x8x3 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(2,2), activation='relu', use_bias=True
Step11: [convolutional.Conv2D.11] 4 3x3 filters on 8x8x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(4,4), activation='relu', use_bias=True
Step12: export for Keras.js tests | <ASSISTANT_TASK:>
Python Code:
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(100)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: Conv2D
[convolutional.Conv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=True
End of explanation
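The expected spatial output sizes in the cases below follow the usual convolution arithmetic. A small helper (an illustrative sketch, not part of the original test generator) makes the valid/same/dilation rules explicit:

```python
import math

def conv_output_length(n, kernel, stride=1, padding='valid', dilation=1):
    # dilation spreads the kernel taps out to an effective size
    k_eff = dilation * (kernel - 1) + 1
    if padding == 'valid':
        return (n - k_eff) // stride + 1
    if padding == 'same':
        return math.ceil(n / stride)
    raise ValueError(padding)

print(conv_output_length(5, 3))                            # Conv2D.0: valid -> 3
print(conv_output_length(5, 3, stride=2))                  # Conv2D.2: stride 2 -> 2
print(conv_output_length(4, 3, stride=2, padding='same'))  # Conv2D.5: same -> 2
print(conv_output_length(5, 3, dilation=2))                # Conv2D.7: dilated -> 1
```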
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(101)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='linear', use_bias=False
End of explanation
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(2,2), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(102)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.2] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (7, 7, 3)
conv = Conv2D(5, (4,4), strides=(2,1), padding='valid',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(103)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.3] 5 4x4 filters on 7x7x3 input, strides=(2,1), padding='valid', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
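The `W shape` printed by these cells follows the channels_last kernel layout (kh, kw, in_channels, filters), so the parameter count is easy to check by hand; a quick sketch (the helper name is mine):

```python
def conv2d_param_count(kh, kw, in_ch, filters, use_bias=True):
    # One (kh, kw, in_ch) kernel per filter, plus one bias per filter.
    return kh * kw * in_ch * filters + (filters if use_bias else 0)

# Conv2D.3 above: 5 filters of 4x4 over 3 input channels -> W shape (4, 4, 3, 5).
print(conv2d_param_count(4, 4, 3, 5))  # 245
```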
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(104)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (4, 4, 2)
conv = Conv2D(4, (3,3), strides=(2,2), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(105)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.5] 4 3x3 filters on 4x4x2 input, strides=(2,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
data_in_shape = (6, 3, 1)
conv = Conv2D(4, (3,3), strides=(3,2), padding='same',
data_format='channels_last', dilation_rate=(1,1),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(106)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.6] 4 3x3 filters on 6x3x1 input, strides=(3,2), padding='same', data_format='channels_last', dilation_rate=(1,1), activation='relu', use_bias=True
End of explanation
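For 'same' padding the spatial output size depends only on the stride, out = ceil(in / stride), regardless of kernel size; the two strided cases above both land on 2x2. A sketch (the helper name is mine):

```python
import math

def same_out(in_size, stride):
    # 'same' padding pads just enough that out = ceil(in / stride).
    return math.ceil(in_size / stride)

# Conv2D.5: 4x4 input, stride 2 -> 2x2.  Conv2D.6: 6x3 input, strides (3, 2) -> 2x2.
print(same_out(4, 2), same_out(6, 3), same_out(3, 2))  # 2 2 2
```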
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(2,2),
activation='linear', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(100)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.7] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=True
End of explanation
data_in_shape = (5, 5, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(2,2),
activation='linear', use_bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(101)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.8] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(2,2), activation='linear', use_bias=False
End of explanation
data_in_shape = (7, 7, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='valid',
data_format='channels_last', dilation_rate=(3,3),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(102)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.9] 4 3x3 filters on 7x7x2 input, strides=(1,1), padding='valid', data_format='channels_last', dilation_rate=(3,3), activation='relu', use_bias=True
End of explanation
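Dilation stretches the kernel's receptive field to d*(k-1)+1 input positions, which is why Conv2D.9's 3x3 kernel at dilation 3 consumes the whole 7x7 input. A sketch of that arithmetic (the helper name is mine):

```python
def dilated_extent(k, d):
    # Number of input positions a dilated kernel spans.
    return d * (k - 1) + 1

# Conv2D.9: 3x3 kernel, dilation 3 -> effective 7x7, so 'valid' on a 7x7
# input leaves a single 1x1 spatial output (likewise 3x3/d=2 on 5x5 above).
k_eff = dilated_extent(3, 3)
print(k_eff, (7 - k_eff) // 1 + 1)  # 7 1
```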
data_in_shape = (4, 8, 3)
conv = Conv2D(3, (4,4), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(2,2),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(103)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.10] 3 4x4 filters on 4x8x3 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(2,2), activation='relu', use_bias=True
End of explanation
data_in_shape = (8, 8, 2)
conv = Conv2D(4, (3,3), strides=(1,1), padding='same',
data_format='channels_last', dilation_rate=(4,4),
activation='relu', use_bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(104)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.Conv2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [convolutional.Conv2D.11] 4 3x3 filters on 8x8x2 input, strides=(1,1), padding='same', data_format='channels_last', dilation_rate=(4,4), activation='relu', use_bias=True
End of explanation
import os
filename = '../../../test/data/layers/convolutional/Conv2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation
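A consumer of the exported JSON would rebuild tensors from the flat float lists plus the stored shapes; a hedged sketch on an in-memory stand-in entry (the values below are made up, not taken from the dump):

```python
import numpy as np

# Stand-in for one DATA entry: flat float list + shape, as written above.
entry = {
    'input': {'data': [0.1, 0.2, 0.3, 0.4], 'shape': (2, 2, 1)},
    'expected': {'data': [0.5], 'shape': (1, 1, 1)},
}
x = np.array(entry['input']['data']).reshape(entry['input']['shape'])
y = np.array(entry['expected']['data']).reshape(entry['expected']['shape'])
print(x.shape, y.shape)  # (2, 2, 1) (1, 1, 1)
```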
<END_TASK> |
15,618 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: TEST-INSTITUTE-1
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means etc.) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently, or at the component coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently, or at the component coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario simulations (aka Table 12.1, IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
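Every cell in this notebook follows the same fill-in pattern: point DOC at a property with set_id, then record one or more values with set_value. As a hedged sketch of that pattern (MockDoc is invented here purely for illustration; in the real notebook DOC is created by the pyesdoc setup cells, and the value "C" is just an example choice):

```python
# Hypothetical mock of the pyesdoc DOC interface, for illustration only
class MockDoc:
    def __init__(self):
        self.values = {}
        self._id = None
    def set_id(self, doc_id):
        # select which property subsequent set_value calls fill in
        self._id = doc_id
    def set_value(self, value):
        # cardinality 1.N properties may receive several values
        self.values.setdefault(self._id, []).append(value)

DOC = MockDoc()
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
DOC.set_value("C")  # example: CO2 provided via concentrations
print(DOC.values)
```

The real DOC object validates ids and choices against the CMIP6 specialization; this mock only shows the calling convention.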
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol-cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
<END_TASK> |
15,619 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Let's look at how long it takes to tokenize a sample of 1000 IMDB reviews.
Step1: We'll start with the simplest approach
Step2: ...and a general way to tokenize a bunch of strings
Step3: Let's time it
Step4: ...and the same thing with 2 workers
Step5: How about if we put half the work in each worker?
Step6: So there's a lot of overhead in using parallel processing in Python.
Step7: That's quite fast! (Although still slower than single process.)
What if we don't return much data?
Step8: That's a bit faster still.
What if we don't actually return the lists of tokens, but create them still?
Step9: So creating the tokens isn't taking the time, but returning them over the process boundary is.
Step10: Is numpy any faster?
Step11: Spacy
Step12: Let's see how long it takes to create a tokenizer in Spacy
Step13: Time tokenize in Spacy using a loop
Step14: ...and the same thing in parallel
Step15: ...and with more workers
Step16: ...and with creating the tokenizer in the child process
Step17: Let's try pipe | <ASSISTANT_TASK:>
Python Code:
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
df.head(2)
ss = L(list(df.text))
ss[0]
Explanation: Let's look at how long it takes to tokenize a sample of 1000 IMDB reviews.
End of explanation
def delim_tok(s, delim=' '): return L(s.split(delim))
s = ss[0]
delim_tok(s)
Explanation: We'll start with the simplest approach:
End of explanation
def apply(func, items): return list(map(func, items))
Explanation: ...and a general way to tokenize a bunch of strings:
End of explanation
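As a quick sanity check of the two helpers, here they are with plain lists standing in for fastcore's `L` (a simplified stand-in, not the notebook's exact types):

```python
# List-based stand-ins for the delim_tok/apply helpers defined above.
def delim_tok(s, delim=' '):
    return s.split(delim)

def apply(func, items):
    return list(map(func, items))

print(apply(delim_tok, ["a b", "c d e"]))  # [['a', 'b'], ['c', 'd', 'e']]
```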
%%timeit -n 2 -r 3
global t
t = apply(delim_tok, ss)
Explanation: Let's time it:
End of explanation
%%timeit -n 2 -r 3
parallel(delim_tok, ss, n_workers=2, progress=False)
Explanation: ...and the same thing with 2 workers:
End of explanation
batches32 = [L(list(o)).map(str) for o in np.array_split(ss, 32)]
batches8 = [L(list(o)).map(str) for o in np.array_split(ss, 8 )]
batches = [L(list(o)).map(str) for o in np.array_split(ss, 2 )]
%%timeit -n 2 -r 3
parallel(partial(apply, delim_tok), batches, progress=False, n_workers=2)
Explanation: How about if we put half the work in each worker?
End of explanation
%%timeit -n 2 -r 3
global t
t = parallel(noop, batches, progress=False, n_workers=2)
Explanation: So there's a lot of overhead in using parallel processing in Python. :(
Let's see why. What if we do nothing interesting in our function?
End of explanation
def f(x): return 1
%%timeit -n 2 -r 3
global t
t = parallel(f, batches, progress=False, n_workers=2)
Explanation: That's quite fast! (Although still slower than single process.)
What if we don't return much data?
End of explanation
def f(items):
o = [s.split(' ') for s in items]
return [s for s in items]
Explanation: That's a bit faster still.
What if we don't actually return the lists of tokens, but create them still?
End of explanation
%%timeit -n 2 -r 3
global t
t = parallel(f, batches, progress=False, n_workers=2)
Explanation: So creating the tokens isn't taking the time, but returning them over the process boundary is.
End of explanation
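The cost sits in moving results across the process boundary: each worker's return value is pickled in the child and unpickled in the parent. A minimal sketch of that mechanism (illustrative only, not a benchmark):

```python
import pickle

tokens = [["this", "movie", "was", "great"]] * 1000  # stand-in for worker output
blob = pickle.dumps(tokens)      # serialized form that crosses the pipe
restored = pickle.loads(blob)    # deserialized again on the parent side
print(restored == tokens)        # True
```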
sarr = np.array(ss)
%%timeit -n 2 -r 3
global t
t = np.char.split(sarr)
Explanation: Is numpy any faster?
End of explanation
from spacy.lang.en import English
def conv_sp(doc): return L(doc).map(str)
class SpTok:
def __init__(self):
nlp = English()
self.tok = nlp.Defaults.create_tokenizer(nlp)
def __call__(self, x): return L(self.tok(str(x))).map(conv_sp)
Explanation: Spacy
End of explanation
%%timeit -n 2 -r 3
SpTok()
nlp = English()
sp_tokenizer = nlp.Defaults.create_tokenizer(nlp)
def spacy_tok(s): return L(sp_tokenizer(str(s))).map(str)
Explanation: Let's see how long it takes to create a tokenizer in Spacy:
End of explanation
%%timeit -r 3
global t
t = apply(spacy_tok, ss)
Explanation: Time tokenize in Spacy using a loop:
End of explanation
%%timeit -r 3
global t
t = parallel(partial(apply, spacy_tok), batches, progress=False, n_workers=2)
Explanation: ...and the same thing in parallel:
End of explanation
%%timeit -r 3
global t
t = parallel(partial(apply, spacy_tok), batches8, progress=False, n_workers=8)
Explanation: ...and with more workers:
End of explanation
def f(its):
tok = SpTok()
return [[str(o) for o in tok(p)] for p in its]
%%timeit -r 3
global t
t = parallel(f, batches8, progress=False, n_workers=8)
Explanation: ...and with creating the tokenizer in the child process:
End of explanation
%%timeit -r 3
global t
t = L(nlp.tokenizer.pipe(ss)).map(conv_sp)
def f(its): return L(nlp.tokenizer.pipe(its)).map(conv_sp)
%%timeit -r 3
global t
t = parallel(f, batches8, progress=False, n_workers=8)
test_eq(chunked(range(12),n_chunks=4), [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]])
test_eq(chunked(range(11),n_chunks=4), [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]])
test_eq(chunked(range(10),n_chunks=4), [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]])
test_eq(chunked(range( 9),n_chunks=3), [[0, 1, 2], [3, 4, 5], [6, 7, 8]])
%%timeit -r 3
global t
t = parallel_chunks(f, ss, n_workers=8, progress=False)
def array_split(arr, n): return chunked(arr, math.floor(len(arr)/n))
Explanation: Let's try pipe
End of explanation
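For readers without fastcore installed, the `chunked(..., n_chunks=...)` behavior exercised by the `test_eq` calls above can be approximated with the standard library (a sketch assuming nearly-equal contiguous chunks; not fastcore's implementation):

```python
import math

def chunked_sketch(items, n_chunks):
    # Split items into contiguous chunks of size ceil(len/n_chunks);
    # the last chunk absorbs the remainder and may be shorter.
    items = list(items)
    sz = math.ceil(len(items) / n_chunks)
    return [items[i:i + sz] for i in range(0, len(items), sz)]

print(chunked_sketch(range(12), 4))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]
print(chunked_sketch(range(10), 4))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```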
<END_TASK> |
15,620 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Pivot Table on Pandas
In this notebook, we focus on the pivot_table feature of pandas.
Import Modules
Step1: Import data
Step2: Pivot Table
In order to build up a pivot table, we must specify an index.
Step3: Note that the default aggregation function is np.mean. We can specify the aggregation function in the aggfunc parameter, as shown below.
Step4: For simplicity, we will stick with the default aggregation function.
We also want to see value, but we need to change it into floats first
Step5: We could also choose more than one column as index
Step6: columns provide an additional way to segment the data
Step7: Note that NaN implies that there is no data here
The default aggfunc is avg but we could use other functions such as np.sum
Step8: Use margins=True to show the total numbers
Step9: We should use avg for value but sum for sold, and we do not want to see distance_to_CBD for now
Step10: Advanced Filtering over Pivot Table
We firstly build a pivot table
Step11: We can just look at data from one city
Step12: We can also specify multiple values
Step13: Note | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
Explanation: Pivot Table on Pandas
In this notebook, we focus on the pivot_table feature of pandas.
Import Modules
End of explanation
df = pd.read_csv('./asset/sydney_housing_market.txt', sep='\t')
df.head()
Explanation: Import data
End of explanation
pd.pivot_table(df, index=['type'])
Explanation: Pivot Table
In order to build up a pivot table, we must specify an index.
End of explanation
pd.pivot_table(df, index=['type'], aggfunc={'distance_to_CBD':np.mean, 'sold':np.sum})
Explanation: Note that the default aggregation function is np.mean. We can specify the aggregation function in the aggfunc parameter, as shown below.
End of explanation
df['value'] = df['value'].replace(r'[\$,]', '', regex=True).astype(float)
pd.pivot_table(df, index=['type'])
Explanation: For simplicity, we will stick with the default aggregation function.
We also want to see value, but we need to change it into floats first
End of explanation
pd.pivot_table(df, index=['type','council'])
Explanation: We could also choose more than one column as index
End of explanation
pd.pivot_table(df, index=['council'], columns=['type'])
Explanation: columns provide an additional way to segment the data
End of explanation
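A minimal, self-contained illustration of how `columns` segments the aggregation (synthetic data, not the housing file):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({
    'council': ['A', 'A', 'B', 'B'],
    'type':    ['house', 'unit', 'house', 'unit'],
    'sold':    [10, 20, 30, 40],
})
# Each (council, type) pair becomes one cell of the pivot table.
pt = pd.pivot_table(toy, index='council', columns='type', values='sold', aggfunc=np.sum)
print(pt.loc['A', 'house'], pt.loc['B', 'unit'])  # 10 40
```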
pd.pivot_table(df, index=['type'],aggfunc=np.sum)
Explanation: Note that NaN implies that there is no data here
The default aggfunc is avg but we could use other functions such as np.sum
End of explanation
pd.pivot_table(df, index=['type','council'], aggfunc=np.sum, margins=True)
Explanation: Use margins=True to show the total numbers
End of explanation
pd.pivot_table(df, index=['council','suburb'],
columns=['type'],
values=['sold', 'value'],
aggfunc={'sold':np.sum, 'value':np.mean},
margins=True)
Explanation: We should use avg for value but sum for sold, and we do not want to see distance_to_CBD for now
End of explanation
table = pd.pivot_table(df, index=['council'], columns=['type'], values=['sold'], aggfunc=np.sum, margins=True)
table
Explanation: Advanced Filtering over Pivot Table
We firstly build a pivot table
End of explanation
table.query('council==["Randwick"]')
Explanation: We can just look at data from one city
End of explanation
table.query('council==["Rockdale","Lane Cove"]')
Explanation: We can also specify multiple values
End of explanation
plot_table = table[:-1] # get rid of ALL
plot_table.sold.house
table.sold.house
plot_table.index
trace1 = go.Bar(
x=plot_table.index,
y=plot_table.sold.house,
name='House'
)
trace2 = go.Bar(
x=plot_table.index,
y=plot_table.sold.unit,
name='Unit'
)
data = [trace1, trace2]
layout = go.Layout(
barmode='group'
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='pandas-notebook-plot2')
Explanation: Note: we cannot query a pivot table's column labels with DataFrame.query, so put an item in the index if you want to query on it.
Plot the data using plot.ly
Before you can execute the code below, you need to install plotly, register a free account with them, and create a profile that contains your own API key. See https://plot.ly/python/getting-started/
End of explanation
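The earlier note said `query` only addresses the index; selecting from the column level is still possible with `.xs`. A small synthetic sketch (toy data, not the housing file):

```python
import pandas as pd

toy = pd.DataFrame({'c': ['A', 'A', 'B'], 't': ['h', 'u', 'h'], 's': [1, 2, 3]})
pt = pd.pivot_table(toy, index='c', columns='t', values='s', aggfunc='sum')
# Cross-section on the column level 't' (query cannot do this):
print(pt.xs('h', axis=1).tolist())  # [1.0, 3.0]
```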
<END_TASK> |
15,621 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
intereference and compensate for subject head movements.
See maxwell for more details.
Step1: Set parameters
Step2: Preprocess with Maxwell filtering
Step3: Select events to extract epochs from, pick M/EEG channels, and plot evoked | <ASSISTANT_TASK:>
Python Code:
import mne
from mne.preprocessing import maxwell_filter
data_path = mne.datasets.sample.data_path()
Explanation: Artifact correction with Maxwell filter
This tutorial shows how to clean MEG data with Maxwell filtering.
Maxwell filtering in MNE can be used to suppress sources of external
interference and compensate for subject head movements.
See maxwell for more details.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
Explanation: Set parameters
End of explanation
raw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
Explanation: Preprocess with Maxwell filtering
End of explanation
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6),
preload=False)
evoked = epochs.average()
evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250),
mag=(-600, 700)))
Explanation: Select events to extract epochs from, pick M/EEG channels, and plot evoked
End of explanation
<END_TASK> |
15,622 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
<END_TASK> |
15,623 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
How to create a Deployment
In this notebook, we show you how to create a Deployment that runs 3 replica Pods. The Pods belong to a ReplicaSet that is owned by the Deployment and managed by the Deployment controller. We will also learn how to carry out a RollingUpdate and how to roll back to older versions of the deployment.
Step1: Load config from default location
Step2: Create Deployment object
Step3: Fill required Deployment fields (apiVersion, kind, and metadata)
Step4: A Deployment also needs a .spec section
Step5: Add Pod template in .spec.template section
Step6: Pod template container description
Step7: Create Deployment
Step8: Update container image
Step9: Apply update (RollingUpdate)
Step10: Delete Deployment | <ASSISTANT_TASK:>
Python Code:
from kubernetes import client, config
Explanation: How to create a Deployment
In this notebook, we show you how to create a Deployment that runs 3 replica Pods. The Pods belong to a ReplicaSet that is owned by the Deployment and managed by the Deployment controller. We will also learn how to carry out a RollingUpdate and how to roll back to older versions of the deployment.
End of explanation
config.load_kube_config()
apps_api = client.AppsV1Api()
Explanation: Load config from default location
End of explanation
deployment = client.V1Deployment()
Explanation: Create Deployment object
End of explanation
deployment.api_version = "apps/v1"
deployment.kind = "Deployment"
deployment.metadata = client.V1ObjectMeta(name="nginx-deployment")
Explanation: Fill required Deployment fields (apiVersion, kind, and metadata)
End of explanation
spec = client.V1DeploymentSpec()
spec.replicas = 3
Explanation: A Deployment also needs a .spec section
End of explanation
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(labels={"app": "nginx"})
spec.template.spec = client.V1PodSpec()
# apps/v1 Deployments require a selector that matches the Pod template labels
spec.selector = client.V1LabelSelector(match_labels={"app": "nginx"})
Explanation: Add Pod template in .spec.template section
End of explanation
container = client.V1Container()
container.name = "nginx"
container.image = "nginx:1.7.9"
container.ports = [client.V1ContainerPort(container_port=80)]
spec.template.spec.containers = [container]
deployment.spec = spec
Explanation: Pod template container description
End of explanation
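For orientation, here is the same Deployment written as the raw manifest dictionary that the client objects serialize to (an illustrative sketch following the Kubernetes apps/v1 field conventions; note that apps/v1 expects a `selector` matching the template labels):

```python
# Plain-dict equivalent of the V1Deployment built above (illustrative sketch).
manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "nginx-deployment"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "nginx"}},
        "template": {
            "metadata": {"labels": {"app": "nginx"}},
            "spec": {
                "containers": [
                    {"name": "nginx", "image": "nginx:1.7.9",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}
print(manifest["spec"]["template"]["spec"]["containers"][0]["image"])  # nginx:1.7.9
```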
apps_api.create_namespaced_deployment(namespace="default", body=deployment)
Explanation: Create Deployment
End of explanation
deployment.spec.template.spec.containers[0].image = "nginx:1.9.1"
Explanation: Update container image
End of explanation
apps_api.replace_namespaced_deployment(name="nginx-deployment", namespace="default", body=deployment)
Explanation: Apply update (RollingUpdate)
End of explanation
apps_api.delete_namespaced_deployment(name="nginx-deployment", namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
Explanation: Delete Deployment
End of explanation
<END_TASK> |
15,624 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load and process review dataset
Step1: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note
Step2: Convert Frame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step3: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as
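The formula itself did not survive the notebook conversion; for reference, the standard sigmoid link function used in this module is:

```latex
P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w}) \;=\; \frac{1}{1 + \exp\!\big(-\mathbf{w}^\top h(\mathbf{x}_i)\big)}
```

where $h(\mathbf{x}_i)$ is the feature vector for review $i$ (here, the important-word counts plus an intercept term).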
Step4: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is
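The expression was stripped in conversion; the standard per-coefficient derivative, and the change introduced by an L2 penalty $\lambda \lVert \mathbf{w} \rVert_2^2$, are:

```latex
\frac{\partial \ell}{\partial w_j}
  = \sum_{i=1}^{N} h_j(\mathbf{x}_i)\,\Big(\mathbf{1}[y_i = +1] - P(y_i = +1 \mid \mathbf{x}_i, \mathbf{w})\Big),
\qquad
\frac{\partial \ell^{\text{L2}}}{\partial w_j}
  = \frac{\partial \ell}{\partial w_j} - 2\lambda w_j
```

where the $-2\lambda w_j$ term conventionally excludes the intercept coefficient from the penalty.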
Step5: Quiz Question
Step6: Quiz Question
Step7: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
Step8: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
Step9: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
Step10: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
Step11: Quiz Question
Step12: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models. | <ASSISTANT_TASK:>
Python Code:
products = pd.read_csv('../../data/amazon_baby_subset.csv')
products['sentiment']
products['sentiment'].size
products.head(10).name
print ('# of positive reviews =', len(products[products['sentiment']==1]))
print ('# of negative reviews =', len(products[products['sentiment']==-1]))
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('../../data/important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
translator = str.maketrans('', '', string.punctuation)
return str(text).translate(translator)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Load and process review dataset
End of explanation
with open('../../data/module-4-assignment-train-idx.json', 'r') as f:
train_idx = json.load(f)
train_data = products.iloc[train_idx]
with open ('../../data/module-4-assignment-validation-idx.json', 'r') as f:
v_idx = json.load(f)
validation_data = products.iloc[v_idx]
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
import numpy as np
def get_numpy_data(data_frame, features, label):
data_frame['intercept'] = 1
features = ['intercept'] + features
features_frame = data_frame[features]
    feature_matrix = features_frame.values
label_array = data_frame[label]
return(feature_matrix, label_array)
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
Explanation: Convert DataFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from a DataFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
def prediction(score):
return (1 / (1 + np.exp(-score)))
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = np.apply_along_axis(prediction, 0, scores)
# return predictions
return predictions
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
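The link function can be sanity-checked at a few reference points with a minimal, self-contained sketch (separate from the assignment code):

```python
import numpy as np

def sigmoid(score):
    # P(y = +1 | x, w) for a given score w^T h(x)
    return 1. / (1. + np.exp(-score))

# a score of 0 maps to probability 0.5, the function is symmetric
# around 0, and outputs always stay strictly inside (0, 1)
assert sigmoid(0.0) == 0.5
assert abs(sigmoid(2.0) + sigmoid(-2.0) - 1.0) < 1e-12
assert 0.0 < sigmoid(-10.0) < sigmoid(10.0) < 1.0
```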
End of explanation
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = np.dot(feature, errors)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative = derivative - 2 * l2_penalty * coefficient
return derivative
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
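A standard way to verify such a derivative is a central finite-difference check. The sketch below is self-contained (tiny random data, its own copies of the log likelihood and derivative following the formulas above) and is not part of the assignment code:

```python
import numpy as np

def log_likelihood(X, y, w, l2):
    scores = X.dot(w)
    ind = (y == +1)
    return np.sum((ind - 1) * scores - np.log(1. + np.exp(-scores))) - l2 * np.sum(w[1:] ** 2)

def derivative_j(X, y, w, j, l2):
    errors = (y == +1) - 1. / (1. + np.exp(-X.dot(w)))
    d = X[:, j].dot(errors)
    if j != 0:                      # the intercept is not penalized
        d -= 2. * l2 * w[j]
    return d

rng = np.random.RandomState(0)
X = np.hstack([np.ones((20, 1)), rng.randn(20, 3)])   # column 0 = intercept
y = np.where(rng.rand(20) > 0.5, 1, -1)
w = 0.1 * rng.randn(4)
eps, l2 = 1e-6, 3.0
for j in range(4):
    w_hi, w_lo = w.copy(), w.copy()
    w_hi[j] += eps
    w_lo[j] -= eps
    numeric = (log_likelihood(X, y, w_hi, l2) - log_likelihood(X, y, w_lo, l2)) / (2 * eps)
    # analytic derivative must agree with the finite-difference estimate
    assert abs(numeric - derivative_j(X, y, w, j, l2)) < 1e-4
```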
End of explanation
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
Explanation: Quiz Question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
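One side note on the numerical stability mentioned above (an aside, not required by the assignment): the term `np.log(1. + np.exp(-scores))` overflows for very negative scores, while `np.logaddexp(0., -scores)` computes the same quantity stably:

```python
import numpy as np

scores = np.array([-1000., -10., 0., 10., 1000.])

with np.errstate(over='ignore'):
    naive = np.log(1. + np.exp(-scores))    # exp(1000) overflows -> inf
stable = np.logaddexp(0., -scores)           # log(exp(0) + exp(-s)), stable

assert np.isinf(naive[0])
assert np.isfinite(stable).all()
assert np.allclose(naive[1:], stable[1:])    # agree wherever naive is finite
```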
End of explanation
from math import sqrt
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in range(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in range(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative_with_L2(errors, feature_matrix[:, j], coefficients[j], l2_penalty, j == 0)
# add the step size times the derivative to the current coefficient
coefficients[j] += (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print ('iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp))
return coefficients
Explanation: Quiz Question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
important_words.insert(0, 'intercept')
data = np.array(important_words)
table = pd.DataFrame(columns = ['words'], data = data)
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
important_words.remove('intercept')
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
def make_tuple(column_name):
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip( table['words'], table[column_name])]
return word_coefficient_tuples
positive_words = list(map(lambda x: x[0], sorted(make_tuple('coefficients [L2=0]'), key=lambda x:x[1], reverse=True)[:5]))
negative_words = list(map(lambda x: x[0], sorted(make_tuple('coefficients [L2=0]'), key=lambda x:x[1], reverse=False)[:5]))
positive_words
negative_words
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table[table['words'].isin(positive_words)]
table_negative_words = table[table['words'].isin(negative_words)]
del table_positive_words['words']
del table_negative_words['words']
for i in range(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
        plt.plot(xx, table_positive_words[i:i+1].values.flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in range(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
        plt.plot(xx, table_negative_words[i:i+1].values.flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
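The shrinkage visible in the coefficient path has a simple closed-form analogue in one dimension (a toy illustration, unrelated to the sentiment data): minimizing $(w - a)^2 + \lambda w^2$ gives $w = a/(1+\lambda)$, which shrinks monotonically toward 0 as $\lambda$ grows:

```python
import numpy as np

a = 1.5                                      # unregularized optimum
lams = np.array([0., 4., 10., 1e2, 1e3, 1e5])
w = a / (1. + lams)                          # minimizer of (w - a)^2 + lam * w^2

assert w[0] == a                             # no penalty: optimum unchanged
assert np.all(np.diff(w) < 0)                # strictly shrinking as lam grows
assert w[-1] < 1e-4                          # huge penalty drives w toward 0
```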
End of explanation
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}
$$
Recall from lecture that that the class prediction is calculated using
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
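The prediction rule amounts to thresholding the scores at zero; a minimal standalone version with made-up numbers:

```python
import numpy as np

scores = np.array([0.7, -0.2, 0.0, 3.1, -1.5])   # h(x_i)^T w for five points
sentiment = np.array([+1, -1, -1, -1, -1])        # ground-truth labels

predictions = np.where(scores > 0, 1., -1.)       # y_hat = +1 iff score > 0
accuracy = np.mean(predictions == sentiment)

assert list(predictions) == [1., -1., -1., 1., -1.]  # a score of exactly 0 -> -1
assert accuracy == 0.8                               # 4 of 5 classified correctly
```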
End of explanation
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print("L2 penalty = %g" % key)
print("train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key]))
print("--------------------------------------------------------------------------------")
# Optional. Plot accuracy on training and validation sets over choice of L2 penalty.
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
sorted_list = sorted(train_accuracy.items(), key=lambda x:x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list], 'bo-', linewidth=4, label='Training accuracy')
sorted_list = sorted(validation_accuracy.items(), key=lambda x:x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list], 'ro-', linewidth=4, label='Validation accuracy')
plt.xscale('symlog')
plt.axis([0, 1e3, 0.78, 0.786])
plt.legend(loc='lower left')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation
<END_TASK> |
15,625 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial
Step1: Loading a dataset
We use the aeon dataloader to present the data to the model.
Note
Step2: Generating layers
The core of the model is the layers. This can be as simple as a list, but merging and branching makes it easy to specify complex topologies.
Step3: The Deep Residual Network
A resnet module is a MergeSum layer containing a main path with conv layers, and a side path with a SkipNode() configured as the identity function. This allows earlier layer activations to bypass a series of layers.
We use some helper functions to succinctly define the deep network
Step4: The model is a collection of resnet modules between an input conv and output pooling and affine layer.
Step5: Cost function
The cost function compares network outputs with ground truth labels, and produces an error that we can backpropagate through the layers of the network.
For our multiclass classification task, we use a cross entropy cost function.
Step6: Optimizer
We now have a cost function to minimize by gradient descent. We do this
iteratively over small batches of the data set, making it stochastic gradient
descent (SGD). There are other optimizers such as RMSProp and AdaDelta that are supported in neon, but often simple gradient descent works well.
Step7: Callbacks
To provide feedback while the model is training, neon lets the user specify a set of callbacks that get evaluated at the end of every iteration (minibatch) or pass through the dataset (epoch). Callbacks include evaluating the model on a validation set or computing misclassification percentage. There are also callbacks for saving to disk and for generating visualizations. Here we will set up a progress bar to monitor training.
Step8: Training the model
Now all the pieces are in place to run the network. We use the fit function and pass it a dataset, cost, optimizer, and the callbacks we set up.
Step9: Congrats! If you made it this far you have trained a convolutional network in neon.
Evaluating the model
We can now compute the misclassification on the test set to see how well we did.
Step10: By increasing the depth of the network and the number of epochs, we can improve the performance to match state of the art.
This was quite a lot of code! Generally, to set up a new model from scratch it is best to follow one of the examples from the neon/examples directory. It's easy to mix and match parts!
Inference
Now we want to grab a few new images from the internet and classify them through our network.
Step11: Create a dataset with this image for inference
Step12: Get model outputs on the inference data | <ASSISTANT_TASK:>
Python Code:
# Start by generating the backend:
from neon.backends import gen_backend
be = gen_backend(backend='gpu',
batch_size=128)
Explanation: Tutorial: Classifying tiny images with a Convolutional Neural Network
Outline
This interactive notebook shows how to do image classification with a Convnet. You can edit code in the code cells, and run it with Shift+Return. The notebook is read-only, so feel free to hack the code, and reload the page if something breaks. The tutorial covers how to:
* Build a small convNet in neon.
* Train it on the Cifar10 dataset.
* Upload a new image, and classify it into one of the 10 categories.
<img src="https://kaggle2.blob.core.windows.net/competitions/kaggle/3649/media/cifar-10.png">
Setting up a model
The pieces we need to set up a model are described in the neon user guide:
* The CIFAR10 dataset.
* layer configuration and a model.
* a compute backend.
* an optimizer to train the model.
* callbacks to keep us updated about the progress of training.
End of explanation
from neon.data.aeon_shim import AeonDataLoader
from neon.data.dataloader_transformers import OneHot, TypeCast, BGRMeanSubtract
import numpy as np
# define configuration file for CIFAR-10 dataset
config = {
'manifest_filename': 'data/cifar10/train-index.csv', # CSV manifest of data
'manifest_root': 'data/cifar10', # root data directory
'image': {'height': 32, 'width': 32, # output image size
'scale': [0.8, 0.8], # random scaling of image before cropping
'flip_enable': True}, # randomly flip image
'type': 'image,label', # type of data
'minibatch_size': be.bsz # batch size
}
from neon.data.aeon_shim import AeonDataLoader
# build train_set
train_set = AeonDataLoader(config, be)
train_set = OneHot(train_set, index=1, nclasses=10) # perform onehot on the labels
train_set = TypeCast(train_set, index=0, dtype=np.float32) # cast the image to float32
train_set = BGRMeanSubtract(train_set, index=0) # subtract image color means (based on default values)
# build test set
config['manifest_filename'] = 'data/cifar10/val-index.csv'
test_set = AeonDataLoader(config, be)
test_set = OneHot(test_set, index=1, nclasses=10) # perform onehot on the labels
test_set = TypeCast(test_set, index=0, dtype=np.float32) # cast the image to float32
test_set = BGRMeanSubtract(test_set, index=0) # subtract image color means (based on default values)
Explanation: Loading a dataset
We use the aeon dataloader to present the data to the model.
Note: This assumes the data has already been downloaded and ingested. If that is not the case, follow the instructions in the 02 VGG Fine-tuning notebook to process the CIFAR-10 dataset
End of explanation
from neon.initializers import Uniform
from neon.transforms import Rectlin, Softmax
from neon.layers import Activation, Conv, Pooling, Affine, MergeSum
# This is a simple convnet with a one conv layer,
# max-pooling, and a fully connected layer.
#
# input - Conv - ReLu - Pooling - Affine - ReLu - Affine - Softmax
#
layers = [Conv((5, 5, 16), init=Uniform(-0.1, 0.1), activation=Rectlin()),
Pooling((2, 2)),
Affine(nout=500, init=Uniform(-0.1, 0.1), activation=Rectlin()),
Affine(nout=10, init=Uniform(-0.1, 0.1), activation=Softmax())]
# We can use a MergeSum layer to combine different layers in parallel
#
# - Conv3 - ReLu -
# / \
# input - Sum - ReLu - ...
# \ /
# - Conv5 - ReLu -
#
conv3 = Conv((3, 3, 16), init=Uniform(-0.1, 0.1), activation=Rectlin())
conv5 = Conv((5, 5, 16), padding=1, init=Uniform(-0.1, 0.1), activation=Rectlin())
layers = [MergeSum([conv3, conv5]), Activation(Rectlin()),
Pooling((2, 2)),
Affine(nout=500, init=Uniform(-0.1, 0.1), activation=Rectlin()),
Affine(nout=10, init=Uniform(-0.1, 0.1), activation=Softmax())]
Explanation: Generating layers
The core of the model is the layers. This can be as simple as a list, but merging and branching makes it easy to specify complex topologies.
End of explanation
from neon.initializers import Kaiming, IdentityInit
from neon.layers import SkipNode
from neon.models import Model
# helper functions simplify init params for conv and identity layers
def conv_params(fsize, nfm, stride=1, relu=True, batch_norm=True):
return dict(fshape=(fsize, fsize, nfm),
strides=stride,
padding=(1 if fsize > 1 else 0),
activation=(Rectlin() if relu else None),
init=Kaiming(local=True),
batch_norm=batch_norm)
def id_params(nfm):
return dict(fshape=(1, 1, nfm),
strides=2,
padding=0,
activation=None,
init=IdentityInit())
# A resnet module
#
# - Conv - Conv -
# / \
# input - Sum - Relu - output
# \ /
# - Identity -
#
def module_factory(nfm, stride=1):
mainpath = [Conv(**conv_params(3, nfm, stride=stride)),
Conv(**conv_params(3, nfm, relu=False))]
sidepath = [SkipNode() if stride == 1 else Conv(**id_params(nfm))]
module = [MergeSum([mainpath, sidepath]),
Activation(Rectlin())]
return module
Explanation: The Deep Residual Network
A resnet module is a MergeSum layer containing a main path with conv layers, and a side path with a SkipNode() configured as the identity function. This allows earlier layer activations to bypass a series of layers.
We use some helper functions to succinctly define the deep network:
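A quick note on the shapes involved (standard convolution arithmetic, not neon-specific): a spatial dimension $W$ maps to $(W - F + 2P)/S + 1$ for filter size $F$, padding $P$ and stride $S$. That is why the 3x3/pad-1 convs preserve the 32x32 CIFAR-10 maps, the stride-2 modules halve them, and the 1x1 stride-2 identity conv keeps the side path shape-compatible with the main path:

```python
def conv_out(w, f, p, s):
    # standard convolution output-size formula: (W - F + 2P) // S + 1
    return (w - f + 2 * p) // s + 1

assert conv_out(32, f=3, p=1, s=1) == 32    # 3x3, pad 1, stride 1: size preserved
assert conv_out(32, f=3, p=1, s=2) == 16    # stride-2 module halves the map
assert conv_out(16, f=3, p=1, s=2) == 8     # ...and again at the next stage
# main path (3x3, pad 1) and identity side path (1x1, pad 0) agree at stride 2
assert conv_out(16, f=3, p=1, s=2) == conv_out(16, f=1, p=0, s=2) == 8
```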
End of explanation
# Set depth = 3 for quick results
# or depth = 9 to reach 6.7% top1 error in 150 epochs
depth = 3
nfms = [2**(stage + 4) for stage in sorted(range(3) * depth)]
strides = [1] + [1 if cur == prev else 2 for cur, prev in zip(nfms[1:], nfms[:-1])]
layers = [Conv(**conv_params(3, 16))]
for nfm, stride in zip(nfms, strides):
layers.append(module_factory(nfm, stride))
layers.append(Pooling('all', op='avg'))
layers.append(Affine(10, init=Kaiming(local=False),
batch_norm=True, activation=Softmax()))
model = Model(layers=layers)
Explanation: The model is a collection of resnet modules between an input conv and output pooling and affine layer.
End of explanation
from neon.transforms import CrossEntropyMulti
from neon.layers import GeneralizedCost
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
Explanation: Cost function
The cost function compares network outputs with ground truth labels, and produces an error that we can backpropagate through the layers of the network.
For our multiclass classification task, we use a cross entropy cost function.
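What this cost computes can be sketched in a few lines of numpy (illustrative only — neon's `CrossEntropyMulti` handles this, and its numerical stability, internally):

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # shift logits for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label])    # -log p(true class)

logits = np.array([2.0, 0.5, -1.0])
p = softmax(logits)

assert abs(p.sum() - 1.0) < 1e-12                 # valid probability distribution
assert p.argmax() == 0                            # largest logit -> largest prob
assert cross_entropy(p, 0) < cross_entropy(p, 1)  # confident & correct costs less
```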
End of explanation
from neon.optimizers import GradientDescentMomentum, Schedule
opt = GradientDescentMomentum(0.1, 0.9, wdecay=0.0001,
schedule=Schedule([90, 135], 0.1))
Explanation: Optimizer
We now have a cost function to minimize by gradient descent. We do this
iteratively over small batches of the data set, making it stochastic gradient
descent (SGD). There are other optimizers such as RMSProp and AdaDelta that are supported in neon, but often simple gradient descent works well.
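The momentum update itself can be sketched in a few lines (a schematic version, assuming the common velocity form $v \leftarrow m v - \eta \nabla f$, $w \leftarrow w + v$; neon's `GradientDescentMomentum` additionally supports weight decay and learning-rate schedules):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    # velocity accumulates a decaying history of past gradients
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# minimize f(w) = w^2 (gradient 2w) starting from w = 5.0
w, v = 5.0, 0.0
for _ in range(300):
    w, v = sgd_momentum_step(w, grad=2 * w, velocity=v)
assert abs(w) < 1e-3               # converged near the minimum at 0
```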
End of explanation
# set up callbacks. By default sets up a progress bar
from neon.transforms import Misclassification
from neon.callbacks.callbacks import Callbacks
valmetric = Misclassification()
callbacks = Callbacks(model, eval_set=test_set, metric=valmetric)
Explanation: Callbacks
To provide feedback while the model is training, neon lets the user specify a set of callbacks that get evaluated at the end of every iteration (minibatch) or pass through the dataset (epoch). Callbacks include evaluating the model on a validation set or computing misclassification percentage. There are also callbacks for saving to disk and for generating visualizations. Here we will set up a progress bar to monitor training.
End of explanation
# And run the model
epochs = 10
model.fit(train_set, optimizer=opt, num_epochs=epochs,
cost=cost, callbacks=callbacks)
Explanation: Training the model
Now all the pieces are in place to run the network. We use the fit function and pass it a dataset, cost, optimizer, and the callbacks we set up.
End of explanation
# Check the performance on the supplied test set
from neon.transforms import Misclassification
error_pct = 100 * model.eval(test_set, metric=Misclassification())
print 'Misclassification error = %.1f%%' % error_pct
Explanation: Congrats! If you made it this far you have trained a convolutional network in neon.
Evaluating the model
We can now compute the misclassification on the test set to see how well we did.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import urllib
from PIL import Image
import numpy as np
# download images from the web
imgs = {
'frog': "https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Atelopus_zeteki1.jpg/440px-Atelopus_zeteki1.jpg",
'airplane': "https://img0.etsystatic.com/016/0/5185796/il_570xN.433414910_p5n3.jpg",
'cat': "https://s-media-cache-ak0.pinimg.com/236x/8e/d7/41/8ed7410285f101ba5892ff723c91fa75.jpg",
'car': "http://static01.nyt.com/images/2012/09/09/automobiles/09REFI2/09REFI2-articleLarge.jpg",
}
# empty buffer to use for inference dataset
# dims [minibatch, imgsize]
x_new = np.zeros((128, 32*32*3), dtype=np.float32)
# crop/resize images and assign them to slots in x_new
# also display with true labels
plt.figure(1)
for i, name in enumerate(imgs):
imgs[name] = urllib.urlretrieve(imgs[name], filename="data/{}.jpg".format(name))
plt.subplot(100 + (10 * len(imgs)) + 1 + i)
img = Image.open("data/{}.jpg".format(name))
crop = img.crop((0,0,min(img.size),min(img.size)))
crop.thumbnail((32, 32))
plt.imshow(crop, interpolation="nearest")
plt.title(name)
plt.axis('off')
x_new[i,:] = np.asarray(crop, dtype=np.float32)[:,:,(2,0,1)].transpose(2,0,1).reshape(1,3072) -127
Explanation: By increasing the depth of the network and the number of epochs, we can improve the performance to match state of the art.
This was quite a lot of code! Generally, to set up a new model from scratch it is best to follow one of the examples from the neon/examples directory. It's easy to mix and match parts!
Inference
Now we want to grab a few new images from the internet and classify them through our network.
End of explanation
from neon.data import ArrayIterator
# create a minibatch with the new image
inference_set = ArrayIterator(x_new, None, nclass=10,
lshape=(3, 32, 32))
# inference_set = ArrayIterator(x_train, None, nclass=10,
# lshape=(3, 32, 32))
classes =["airplane", "auto", "bird", "cat", "deer",
"dog", "frog", "horse", "ship", "truck"]
out = model.get_outputs(inference_set)
Explanation: Create a dataset with this image for inference
End of explanation
plt.figure(2)
for i, name in enumerate(imgs):
plt.subplot(100 + (10 * len(imgs)) + 1 + i)
img = Image.open("data/{}.jpg".format(name))
crop = img.crop((0,0,min(img.size),min(img.size)))
crop.thumbnail((32, 32))
title = "{} ({:.2})".format(classes[out[i].argmax()], out[i].max())
plt.imshow(crop, interpolation="nearest")
plt.title(title)
plt.axis('off')
Explanation: Get model outputs on the inference data
End of explanation
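The title string built in the plotting loop above pairs each image with its highest-probability class via `argmax` and the `"{} ({:.2})"` format. The same step can be sketched without neon; the probability vector below is a made-up example, not actual model output:

```python
classes = ["airplane", "auto", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def top_class(probs, classes):
    """Return 'label (p)' for the class with the highest probability."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return "{} ({:.2})".format(classes[best], probs[best])

probs = [0.01, 0.02, 0.05, 0.6, 0.02, 0.1, 0.05, 0.05, 0.05, 0.05]
label = top_class(probs, classes)
print(label)  # cat (0.6)
```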
<END_TASK> |
15,626 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example Notebook to demonstrate the simplicity of saving data in QKIT
First import the QKIT store module
Step1: ... some imports to create some random data ...
Step2: Create a data file object
The file name will be prepended by a unique 6 character identifier (UUID)
and the file saved in a standard path below
qkit.cfg['datadir']
Step3: h5d is now an empty hdf5 file, holding only some qkit structure
How to get the file path
Step4: Now use qviewkit to open and view the file
Hint
Step5: Now we get to the important stuff
Step6: A value_vector is a 1-dim vector of measured values
Step7: A value_matrix is a 2-dim tuple with two coordinates and measurement values
Step8: A value_box is a 3-dim tuple with three coordinates and measurement values
Step9: For free text, like settings or comments there is a special dataset which can be appended to add data
Step10: If one wants to plot an arbitrary dataset vs another arbitrary dataset, use a view
Step11: Finally add some data to the file
There are two functions to do so
Step12: Close the file at the end | <ASSISTANT_TASK:>
Python Code:
import qkit
from qkit.storage import store
Explanation: Example Notebook to demonstrate the simplicity of saving data in QKIT
First import the QKIT store module
End of explanation
## for random data
from numpy.random import rand
from numpy import linspace,arange
import time
## number of points
nop = 101
Explanation: ... some imports to create some random data ...
End of explanation
h5d = store.Data(name='NewFancyData',mode = "a")
Explanation: Create a data file object
The file name will be prepended by a unique 6 character identifier (UUID)
and the file saved in a standard path below
qkit.cfg['datadir']
End of explanation
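As an illustration of how a compact 6-character identifier can be derived from the current time (a sketch only — not necessarily qkit's exact scheme), base-36-encoding a Unix timestamp yields exactly six characters for any date in the current era:

```python
import time

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def base36(n):
    """Encode a non-negative integer in base 36."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

# Timestamps between 36**5 and 36**6 - 1 encode to exactly 6 characters,
# i.e. roughly the years 1971-2038.
uuid6 = base36(int(time.time()))
print(uuid6, len(uuid6))
```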
print(h5d.get_filepath())
Explanation: h5d is now an empty hdf5 file, holding only some qkit structure
How to get the file path:
End of explanation
h5d.add_comment("New data has been created ....")
Explanation: Now use qviewkit to open and view the file
Hint: in a fully started qkit environment (qkit.start()), this can be done with qkit.fid.view("UUID")
Comment added to the basic hdf folder
options
* comment (mandatory)
* folder='data' | 'analysis' (optional, default is "data")
End of explanation
# add_coordinate() <- for measurement boundaries/steps
# options: name (mandatory)
# : unit = "" (optional, default is "a.u.")
# : comment = "" (optional, default is "")
# : folder='data' | 'analysis' (optional, default is "data")
f_co = h5d.add_coordinate('frequency', unit = "Hz", comment = "VNA frequency scan")
I_co = h5d.add_coordinate('current', unit = "A", comment = "magnetic field current")
P_co = h5d.add_coordinate('power', unit = "dBm", comment = "microwave power")
Explanation: Now we get to the important stuff:
first we have to create the objects which are later used to store the data
then we can add data
A coordinate is a 1-dim vector with bias values, e.g. set current, frequencies, etc.
End of explanation
# add_value_vector() <- for measurement data
# options: name (mandatory)
# : x = X (optional) coordinate vector in x direction, default: None
# : unit = "" (optional, default is "a.u.")
# : comment = "" (optional, default is "")
# : folder='data' | 'analysis' (optional, default is "data")
T_vec = h5d.add_value_vector('temperature', x = None,
unit = "K", comment = "save temperature values")
Tc_vec = h5d.add_value_vector('critical_temperature', x = I_co,
unit = "K", folder='analysis' ,comment = "save temperature values")
Explanation: A value_vector is a 1-dim vector of measured values
End of explanation
# add_value_matrix() <- for measurement data
# convention: the last coordinate should be the one with the fastest changes:
# e.g. for a VNA scan x= magnetic field y= transmission frequency
#
# options: name (mandatory)
# : x = X (optional) coordinate vector in x direction, default: None
# : y = Y (mandatory) coordinate vector in y direction / fastest changes
# : unit = "" (optional, default is "a.u.")
# : comment = "" (optional, default is "")
# : folder='data' | 'analysis' (optional, default is "data")
amp_mx = h5d.add_value_matrix('amplitude', x = I_co , y = f_co,
unit = "V", comment = "magic data")
pha_mx = h5d.add_value_matrix('phase', x = I_co , y = f_co,
unit = "rad", comment = "more magic data!")
Explanation: A value_matrix is a 2-dim tuple with two coordinates and measurement values
End of explanation
# add_value_box() <- for measurement data
# options: name (mandatory)
# : x = X (optional) coordinate vector in x direction, default: None
# : y = Y (optional) coordinate vector in y direction
# : z = Z (mandatory) coordinate vector in z direction / fastest changes
# : unit = "" (optional, default is "a.u.")
# : comment = "" (optional, default is "")
# : folder='data' | 'analysis' (optional, default is "data")
amp_bx = h5d.add_value_box('amplitude', x = I_co , y = f_co, z= P_co,
unit = "V", comment = "magic data")
pha_bx = h5d.add_value_box('phase', x = I_co , y = f_co, z= P_co,
unit = "rad", comment = "more magic data!")
Explanation: A value_box is a 3-dim tuple with three coordinates and measurement values
End of explanation
#string array
#add_textlist()
#options: name (mandatory)
# : comment = "" (optional)
# : folder="data" (optional)
# use the append method to add the text
settings = h5d.add_textlist("settings",comment = "my settings")
settings.append(u"vnapower = 10dBm")
settings.append(u"fridge attenuation=50db\n data jumps like Van Halen.")
Explanation: For free text, like settings or comments there is a special dataset which can be appended to add data
End of explanation
### Add a view on Data:
TvsTc_view = h5d.add_view("f_vs_I", x = f_co, y = I_co)
TvsTc_view.add(x=T_vec,y=Tc_vec)
Laspect_view = h5d.add_view("locked_aspect", x = f_co, y = f_co,
view_params={'aspect':1.0, 'bgcolor':(100,200,100)})
Explanation: If one wants to plot an arbitrary dataset vs another arbitrary dataset, use a view
End of explanation
# now we add the coordinate data to the file
fs = linspace(1e9,5e9,nop)
Is = linspace(0e-3,10e-3,nop)
f_co.add(fs)
I_co.add(Is)
for i in arange(nop):
#time.sleep(10)
amp = rand(nop)
pha = rand(nop)
amp_mx.append(amp)
pha_mx.append(pha)
T_vec.append(float(rand(1)))
Tc_vec.append(float(rand(1)))
Explanation: Finally add some data to the file
There are two functions to do so:
* append(data) <- this is the most frequently used method, it does what it says.
* add(data) <- this is used to set a dataset of length N to the data with length N
lets make an example:
End of explanation
h5d.close_file()
Explanation: Close the file at the end
End of explanation
<END_TASK> |
15,627 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this notebook, we will work through a Bayes Net analysis using the GES algorithm with the TETRAD software (http://www.phil.cmu.edu/tetrad/).
Step1: Load the data generated using the DCM forward model. In this model, there is a significant static connectivity from 1->2 and 1->3 (A matrix), and a PPI for 0->2 and 0->4 (B matrix) and a significant input to ROI 0 (C matrix).
Step2: Generate a set of synthetic datasets, referring to individual subjects
Step3: Run iMAGES (using a shell script)
Step4: Show the graph estimated by iMAGES
Step5: Show the true graph from the DCM forward model | <ASSISTANT_TASK:>
Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../')
from utils.mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from utils.make_data import make_continuous_data
from utils.graph_utils import show_graph_from_adjmtx,show_graph_from_pattern
from statsmodels.tsa.arima_process import arma_generate_sample
import scipy.stats
from dcm_sim import sim_dcm_dataset
results_dir = os.path.abspath("../results")
if not os.path.exists(results_dir):
os.mkdir(results_dir)
Explanation: In this notebook, we will work through a Bayes Net analysis using the GES algorithm with the TETRAD software (http://www.phil.cmu.edu/tetrad/). We will use the same dataset used for the PPI and DCM examples.
End of explanation
_,data_conv,params=sim_dcm_dataset(verbose=True)
A_mtx=params['A']
B_mtx=params['B']
u=params['u']
# downsample design to 1 second TR
u=numpy.convolve(params['u'],spm_hrf(params['stepsize'],oversampling=1))
u=u[range(0,data_conv.shape[0],int(1./params['stepsize']))]
ntp=u.shape[0]
Explanation: Load the data generated using the DCM forward model. In this model, there is a significant static connectivity from 1->2 and 1->3 (A matrix), and a PPI for 0->2 and 0->4 (B matrix) and a significant input to ROI 0 (C matrix).
End of explanation
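The downsampling above picks every Nth sample of the convolved design to move from the simulation's step size to a 1-second TR. The same stride-based decimation can be sketched with plain Python; the step of 10 below is an arbitrary example, not the simulation's actual step size:

```python
def downsample(signal, step):
    """Keep every `step`-th sample, starting from the first."""
    return signal[::step]

sim = list(range(100))      # e.g. 100 samples at 0.1 s resolution
tr1s = downsample(sim, 10)  # -> one sample per second
print(len(tr1s), tr1s[:3])  # 10 [0, 10, 20]
```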
tetrad_dir='/home/vagrant/data/tetrad_files'
if not os.path.exists(tetrad_dir):
os.mkdir(tetrad_dir)
nfiles=10
for i in range(nfiles):
_,data_conv,params=sim_dcm_dataset()
# downsample to 1 second TR
data=data_conv[range(0,data_conv.shape[0],int(1./params['stepsize']))]
ntp=data.shape[0]
imagesdata=numpy.hstack((numpy.array(u)[:,numpy.newaxis],data))
numpy.savetxt(os.path.join(tetrad_dir,"data%03d.txt"%i),
imagesdata,delimiter='\t',
header='u\t0\t1\t2\t3\t4',comments='')
Explanation: Generate a set of synthetic datasets, referring to individual subjects
End of explanation
!bash run_images.sh
Explanation: Run iMAGES (using a shell script)
End of explanation
g=show_graph_from_pattern('images_test/test.pattern.dot')
Explanation: Show the graph estimated by iMAGES
End of explanation
show_graph_from_adjmtx(A_mtx,B_mtx,params['C'])
Explanation: Show the true graph from the DCM forward model
End of explanation
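The true-graph plot above is driven by the DCM A/B/C matrices. Extracting the directed edges encoded in such an adjacency matrix can be sketched as follows; the 3-node matrix below is illustrative, not the simulation's actual A matrix, and it reads `adj[target][source]` as a "source -> target" influence (the usual DCM convention):

```python
def edges_from_adjacency(adj):
    """List (source, target) pairs for nonzero off-diagonal entries,
    reading adj[target][source] as a 'source -> target' influence."""
    n = len(adj)
    return [(src, dst)
            for dst in range(n)
            for src in range(n)
            if src != dst and adj[dst][src] != 0]

# Illustrative 3-node matrix: node 0 drives nodes 1 and 2.
A = [[0, 0, 0],
     [1, 0, 0],
     [1, 0, 0]]
print(edges_from_adjacency(A))  # [(0, 1), (0, 2)]
```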
<END_TASK> |
15,628 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
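For multi-valued (cardinality 0.N) properties like this one, each selected choice is recorded through a separate `DOC.set_value` call after the `DOC.set_id` call. The sketch below illustrates that call pattern only; it uses a minimal stand-in for the pyesdoc `DOC` object (which the real notebook creates in its initialisation cells), and the choices passed in are purely illustrative, not recommended answers:

```python
# Minimal stand-in for the pyesdoc output document object, used here only to
# illustrate the set_id / set_value call pattern; in the real notebook DOC is
# created by the initialisation cells and this stub is not needed.
class DocStub(object):
    def __init__(self):
        self._current_id = None
        self.values = {}

    def set_id(self, property_id):
        # Select the property that subsequent set_value calls refer to.
        self._current_id = property_id

    def set_value(self, value):
        # Cardinality 0.N: repeated calls record additional values.
        self.values.setdefault(self._current_id, []).append(value)

DOC = DocStub()
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
DOC.set_value("Vegetation")       # illustrative choices only
DOC.set_value("Anthropogenic")
```

The point is simply that an ENUM with cardinality 0.N accepts several `set_value` calls, one per selected choice, all attached to the most recently set property id.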
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
<END_TASK> |
15,629 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data preprocessing
Here we will download and subset NCEP reanalysis data, and read in files created from the DesInventar database. Then we create a map showing the regions where disaster records are available, and how this relates to the resolution of the meteorological data.
Set up
Import needed packages
Step1: Specify region
For this exercise, using examples from India.
Step2: Set data
Disaster records
A spreadsheet of available data was obtained from the DesInventar website, and then exported to .csv format. Both versions are available in the data repository. When pulling data from the website, sometimes there can be small formatting issues, which we repair here. We also want to learn what span of years is covered by the database for our example country (India), so that we can save disk space by paring down the reanalysis data to the smallest possible file.
Step3: Reanalysis
We need to pull the reanalysis data from NCEP's online database. We are going to pull the full global files at first, so that we have the data available if we want to look at other regions of the world. This requires a lot of download time and storage space; the resulting minimally sized files are stored in the repository (others are deleted or moved to save disk space), so don't run these code blocks unless you need to change something about how the data is being acquired or its final form (which means, yeah, you'll probably end up having to run the script).
Step4: Once we have the full data set we can then subdivide it to create individual files for different regions, to reduce the run time when reading in data for individual regions.
Step5: Region masks
The way we arranged the analysis (which as you can see is a bit of an ad hoc, duct tape style procedure) requires masking out the individual districts, or rather the closest approximation of them possible using the low resolution, gridded reanalysis data.
The first step is creating a 'blanked' file of the region, where all the values are set to unity.
Step6: The actual mask files are made with a different script, written in NCL. The code here modifies the generic script based on what region we're interested in at the moment.
For some countries, e.g., Chile, the region labels in the shapefiles and the region labels in the heatwave database are not rendered the same (typically this has to do with how accented letters are notated), so some tweaking has to be done.
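As a rough illustration of that kind of tweaking, the sketch below renders a generic NCL script from Python by substituting placeholder tokens and normalising accented region labels so they match the heatwave database. The template string and the `@COUNTRY@`/`@REGION@` tokens are assumptions made for this example, not the actual script used in the analysis:

```python
# -*- coding: utf-8 -*-
def render_ncl_script(template_text, country, region):
    # Crude accent repair so shapefile labels match the heatwave database
    # (hypothetical mapping; extend as needed for other countries).
    accent_fixes = {u'\xe1': 'a', u'\xe9': 'e', u'\xed': 'i',
                    u'\xf3': 'o', u'\xfa': 'u', u'\xf1': 'n'}
    for accented, plain in accent_fixes.items():
        region = region.replace(accented, plain)
    # Fill in the placeholder tokens of the generic mask script
    return (template_text
            .replace('@COUNTRY@', country)
            .replace('@REGION@', region))

template = 'shp = addfile("../data/@COUNTRY@_adm1.shp", "r")  ; region: @REGION@'
print(render_ncl_script(template, 'Chile', u'Valpara\xedso'))
```

The rendered text would then be written out and passed to NCL; only the substitution idea is shown here.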
Step7: Drawing a map
We want to create a graphic to show that reports only exist for certain regions, and how the grid spacing of the meteorological fields imperfectly matches the actual region boundaries. We have currently set things so that a grid cell is considered informative about a political region as long as some part of the region boundary is within 50 km of the grid point (cell center). We played around with a few things before settling on this. The distance is pretty conservative, in that it tends towards considering information from outside the region rather than excluding information from within, but it still keeps a more "fair" evaluation by not evaluating against grid cells which contain only a minimal amount of the geographical region. Considering that most political boundaries are linked to geographical features/divides, if only a small fraction of a region extends into another grid cell, we would expect its weather to be more correlated with that shown by cells over the rest of the region than with that of this other area. An example of this can be seen for Uttar Pradesh (India), where a sliver of the region overlaps with a grid cell that is mostly representative of the Himalayas, so it is not considered when calculating the warm spell durations.
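The criterion described above boils down to a nearest-distance test between each grid-cell center and the region boundary. Below is a self-contained sketch of that test using great-circle (haversine) distance to the boundary vertices; this is an approximation for illustration only, and the actual masking in the analysis is done by the NCL script mentioned earlier:

```python
import numpy as np

def cells_informative(grid_lats, grid_lons, boundary_pts, max_km=50.0):
    # True for grid points (cell centers) lying within max_km of any
    # vertex of the region boundary, by haversine distance.
    R = 6371.0  # mean Earth radius in km
    blat = np.radians([p[0] for p in boundary_pts])
    blon = np.radians([p[1] for p in boundary_pts])
    keep = []
    for lat, lon in zip(grid_lats, grid_lons):
        la, lo = np.radians(lat), np.radians(lon)
        a = (np.sin((blat - la) / 2.0) ** 2
             + np.cos(la) * np.cos(blat) * np.sin((blon - lo) / 2.0) ** 2)
        dist = 2.0 * R * np.arcsin(np.sqrt(a))
        keep.append(bool(dist.min() <= max_km))
    return keep

# Toy check: first point sits on the boundary, second is a few hundred km away
boundary = [(26.8, 80.9)]  # a single (lat, lon) vertex
print(cells_informative([26.8, 28.6], [80.9, 77.2], boundary))  # -> [True, False]
```

A real boundary has many vertices (from the shapefile), so `boundary_pts` would come from the region's shape records rather than being typed in by hand.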
Looking at the individual administrative regions requires working with shape files. These are obtained from the Database of Global Administrative Areas. | <ASSISTANT_TASK:>
Python Code:
#--- Libraries
import pandas as pd # statistics packages
import numpy as np # linear algebra packages
import matplotlib.pyplot as plt # plotting routines
import seaborn as sns # more plotting routines
import shapefile # routines for using 'shapefiles'
import urllib # routines for interacting with websites
import subprocess # routines for calling external OS commands
from mpl_toolkits.basemap import Basemap # plotting routines for map making
from matplotlib import gridspec # plotting routines for multiple plots
from netCDF4 import Dataset # routines for interacting with NetCDF files
from matplotlib import cm # more plotting routines
from matplotlib.collections import LineCollection # more plotting routines
from cdo import * # routines for interacting with NetCDF files
cdo = Cdo() # via an external program
# place graphics in the notebook document
%matplotlib inline
Explanation: Data preprocessing
Here we will download and subset NCEP reanalysis data, and read in files created from the DesInventar database. Then we create a map showing the regions where disaster records are available, and how this relates to the resolution of the meteorological data.
Set up
Import needed packages
End of explanation
#--- Identify country for example
# label country
country = 'India'
# define bounding box for region
mlat = '0' ; Mlat = '40' ; mlon = '65' ; Mlon = '105'
Explanation: Specify region
For this exercise, using examples from India.
End of explanation
#--- Pull in data from DesInventar records
# Read file of reported heatwaves (original spreadsheet)
heatwave_data = pd.read_csv('../data/Heatwaves_database.csv')
# repair region name with space before name
heatwave_data.loc[(heatwave_data.Region==' Tamil Nadu'),'Region'] = 'Tamil Nadu'
# list out the dates for example country (India)
india_dates = heatwave_data['Date (YMD)'][heatwave_data['Country'].isin(['India'])]
# find year of earliest entry
min_year = np.min([int(x.split('/')[0]) for x in india_dates])
# find year of latest entry
max_year = np.max([int(x.split('/')[0]) for x in india_dates])
Explanation: Set data
Disaster records
A spreadsheet of available data was obtained from the DesInventar website, and then exported to .csv format. Both versions are available in the data repository. When pulling data from the website, sometimes there can be small formatting issues, which we repair here. We also want to learn what span of years is covered by the database for our example country (India), so that we can save disk space by paring down the reanalysis data to the smallest possible file.
End of explanation
#---Download NetCDF files
# path to data directory for max/min daily temperatures
path_maxmin = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface_gauss'
# path to data directory for 6hr temperature records
path_hourly = 'ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/surface_gauss'
# loop through years
for yr in range(1948,2016) :
# write max 2meter temperature to new file
path = path_maxmin+'/tmax.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.max.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write min 2meter temperature to new file
path = path_maxmin+'/tmin.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.min.daily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# write 2meter temperature at 6-hour intervals to new file
path = path_hourly+'/air.2m.gauss.'+str(yr)+'.nc'
ofile = open('../data/t2m.subdaily.'+str(yr)+'.nc','w')
ofile.write(urllib.urlopen(path).read())
ofile.close()
# set data as single multiyear files
_ = cdo.mergetime(input='../data/t2m.max.daily.*.nc',output='../data/t2m.max.daily.nc')
_ = cdo.mergetime(input='../data/t2m.min.daily.*.nc',output='../data/t2m.min.daily.nc')
_ = cdo.mergetime(input='../data/t2m.subdaily.*.nc',output='../data/t2m.subdaily.nc')
Explanation: Reanalysis
We need to pull the reanalysis data from NCEP's online database. We are going to pull the full global files at first, so that we have the data available if we want to look at other regions of the world. This requires a lot of download time and storage space; the resulting minimally sized files are stored in the repository (others are deleted or moved to save disk space), so don't run these code blocks unless you need to change something about how the data is being acquired or its final form (which means, yeah, you'll probably end up having to run the script).
End of explanation
#--- Create data files of region
# select region from min-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.nc')
# select region from max-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.nc')
# select region from hourly-temperature data
_ = cdo.sellonlatbox(','.join([mlon,Mlon,mlat,Mlat]),
input='../data/t2m.subdaily.nc',
output='../data/'+country+'.t2m.subdaily.nc')
# create a daily mean temperature file
_ = cdo.daymean(input='../data/'+country+'.t2m.subdaily.nc',
output='../data/'+country+'.t2m.daily.nc')
#--- Trim time range of file to match disaster records
# list years in time range
years_in_record = ','.join([ str(x) for x in range(min_year,max_year+1) ])
# subset regional data
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.min.daily.nc',
output='../data/'+country+'.t2m.min.daily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'.t2m.max.daily.subset.nc')
# _ = cdo.selyear(years_in_record,
# input='../data/'+country+'.t2m.subdaily.nc',
# output='../data/'+country+'.t2m.subdaily.subset.nc')
_ = cdo.selyear(years_in_record,
input='../data/'+country+'.t2m.daily.nc',
output='../data/'+country+'.t2m.daily.subset.nc')
# retain base period file (needed for one of the heat wave definitions)
years = ','.join([ str(x) for x in range(1960,1991)])
_ = cdo.selyear(years,
input='../data/'+country+'.t2m.max.daily.nc',
output='../data/'+country+'basefile.nc')
Explanation: Once we have the full data set, we can subdivide it into individual files for different regions, which reduces the run time when reading in data for a single region.
End of explanation
#--- Create blank file for region
# write grid information to file
ofile = open('../data/ncep_grid.asc','w')
ofile.write('\n'.join(cdo.griddes(input='../data/'+country+'.t2m.daily.nc')))
ofile.close()
# create data file with all values set to 1
_ = cdo.const('1','../data/ncep_grid.asc',
output='../data/'+country+'.blank.ncepgrid.nc',
options='-f nc')
Explanation: Region masks
The way we arranged the analysis (which as you can see is a bit of an ad hoc, duct tape style procedure) requires masking out the individual districts, or rather the closest approximation of them possible using the low resolution, gridded reanalysis data.
The first step is creating a 'blanked' file of the region, where all the values are set to unity.
End of explanation
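Downstream, a 0/1 region mask is applied to a gridded field by simple element-wise arithmetic. A minimal NumPy sketch of the idea (illustrative only; in the notebook the masks are NetCDF files handled through CDO and NCL rather than in-memory arrays):

```python
import numpy as np

# Illustrative sketch: applying a 0/1 region mask to a gridded field.
field = np.array([[300.0, 301.0],
                  [302.0, 303.0]])      # e.g. 2 m temperature on a 2x2 grid
mask = np.array([[1.0, 0.0],
                 [1.0, 1.0]])           # 1 = inside region, 0 = outside

masked = np.where(mask == 1, field, np.nan)   # keep only in-region cells
regional_mean = np.nanmean(masked)            # area-unweighted regional mean
print(regional_mean)
```

The same multiply-and-reduce logic is what the `mask.nc` files produced below enable at the NetCDF level.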
#--- Identify regions of interest
# make list of unique region names for country
regions = list( set(heatwave_data.Region.where(heatwave_data.Country==country)) )
# remove nans (from regions that arent in the selected country)
regions = [x for x in regions if str(x) != 'nan']
regions = [x.title() for x in regions]
if ( country == 'Chile') :
regions_shapefile = [u'Antofagasta',u'Araucan\xeda',
u'Ais\xe9n del General Carlos Ib\xe1\xf1ez del Campo',
u'Regi\xf3n Metropolitana de Santiago',
u'Magallanes y Ant\xe1rtica Chilena',
u"Libertador General Bernardo O'Higgins"]
else :
regions_shapefile = regions
#--- Create masks
# loop through regions
for i in range(len(regions)) :
# find the name of the region
reg = regions[i].title()
# find the name of the region as defined by the shapefile
reg_shapefile = regions_shapefile[i] #reg_shapefile = regions_shapefile[i].decode('utf-8')
# remove spaces
reg = reg.strip()
# report what's happening
print("Creating masking script for "+reg+", aka "+reg_shapefile)
# create NCL script from defualt file with name of region
with open('maskregions_'+"".join(country.split(" "))+'.ncl', 'r') as input_file, open('crMaskFile.ncl', 'w') as output_file:
# check lines for dummy line
for line in input_file :
if line.strip() == 'region = "STATE/PROVINCE"' :
# overwrite with region name
output_file.write(' region = "'+reg_shapefile.encode('utf-8')+'"\n')
else :
output_file.write(line)
# run NCL routine
print("Running masking script")
# subprocess.call(['/bin/bash','-i','-c','ncl crMaskFile.ncl'])
subprocess.call(['/bin/bash','-c','ncl crMaskFile.ncl'])
# create a file that masks the region
print("Renaming mask and copying to data folder.")
subprocess.call(['cp','mask.nc',"../data/"+"_".join(reg.split())+'.mask.nc'])
#--- Create single mask file showing all considered regions
# combine all the individual mask files
_ = cdo.add(input='../data/Orissa.mask.nc ../data/Uttar_Pradesh.mask.nc',
output='../data/tmp.nc')
_ = cdo.add(input='../data/tmp.nc ../data/Tamil_Nadu.mask.nc',
output='../data/India.masks.nc')
Explanation: The actual mask files are made with a different script, written in NCL. The code here modifies the generic script based on which region we're interested in at the moment.
For some countries, e.g., Chile, the region labels in the shapefiles and the region labels in the heatwave database are not rendered the same (typically this has to do with how accented letters are notated), so some tweaking has to be done.
End of explanation
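When the shapefile labels and database labels disagree only in how accented characters are encoded, Unicode normalization can catch many of the mismatches automatically. A hedged sketch of that idea (not part of the original workflow, which instead uses the hand-maintained `regions_shapefile` list above):

```python
import unicodedata

def strip_accents(name):
    """Fold accented characters to their ASCII base, e.g. 'Araucanía' -> 'Araucania'."""
    decomposed = unicodedata.normalize('NFKD', name)
    return ''.join(c for c in decomposed if not unicodedata.combining(c))

# Labels that differ only in accent encoding now compare equal:
print(strip_accents(u'Araucan\xeda'))                                  # Araucania
print(strip_accents(u'Araucania') == strip_accents(u'Araucan\xeda'))   # True
```

This would not resolve labels that differ in wording (e.g. the long official name of the Aisén region), which still need an explicit mapping.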
#--- Map regions of India used in this example
# read which regions are included in disaster database
regions = list(set(heatwave_data.loc[(heatwave_data.Country=='India'),'Region']))
# Create a map object
chart = Basemap(projection='lcc',resolution='c',
lat_0=20,lon_0=85,
llcrnrlat=5,urcrnrlat=35,
llcrnrlon=70,urcrnrlon=100)
# add geographic features
chart.shadedrelief()
# draw parallels and meridians.
chart.drawparallels(np.arange(-90.,91.,10.),labels=[False,True,True,False])
chart.drawmeridians(np.arange(-180.,181.,10.),labels=[True,False,False,True])
# add country outline
chart.readshapefile('../data/IND_adm0', 'IND0',drawbounds=True) ;
# add region outlines, for regions in data set
chart.readshapefile('../data/IND_adm1', 'IND1',drawbounds=False) ;
for info, shape in zip(chart.IND1_info, chart.IND1):
if info['NAME_1'] in regions :
x, y = zip(*shape)
chart.plot(x, y, marker=None,color=sns.xkcd_rgb['dusty orange'])
# load file of combined regional masks
ncfile = Dataset('../data/India.masks.nc')
# read mask data
rmask = ncfile.variables['region_mask'][:]
# get coordinates of data
lons = ncfile.variables['lon'][:]
lats = ncfile.variables['lat'][:]
# shift so that lines show grid box boundaries,
# rather than grid point locations
lons = lons - (1.875/2)
lats = lats + (1.9047/2)
# if in western hemisphere, need to label as
# "all the way round", rather than +/-
# lons = lons - 360
# set coordinates list as grid of locations
lons, lats = np.meshgrid(lons,lats)
# overlay region masks
chart.pcolormesh(lons,lats,rmask,shading='flat',latlon=True, alpha=0.2) ;
# save image
plt.savefig('../figures/india.png')
Explanation: Drawing a map
We want to create a graphic showing that reports only exist for certain regions, and how the grid spacing of the meteorological fields imperfectly matches the actual region boundaries. We have currently set things up so that a grid cell is considered informative about a political region as long as some part of the region boundary is within 50 km of the grid point (cell center). We played around with a few things before settling on this. The distance is pretty conservative, in that it tends towards considering information from outside the region rather than excluding information from within, but it still keeps the evaluation "fair" by not evaluating against grid cells which contain only a minimal amount of the geographical region. Considering that most political boundaries are linked to geographical features/divides, if only a small fraction of the region extends into another grid cell, we would expect its weather to be more correlated with that of cells over the rest of the region than with that of this other area. An example of this can be seen for Uttar Pradesh (India), where a sliver of the region overlaps with a grid cell that is mostly representative of the Himalayas, so it is not considered when calculating the warm spell durations.
Looking at the individual administrative regions requires working
with shape files. These are obtained from the Database of Global
Administrative Areas.
End of explanation
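The 50 km inclusion rule described above amounts to a great-circle distance test between grid-cell centers and boundary points. A minimal sketch of that test (illustrative; the actual selection in the notebook happens inside the NCL masking script):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cell_informs_region(cell_lat, cell_lon, boundary_pts, max_km=50.0):
    """True if any boundary point lies within max_km of the cell center."""
    return any(haversine_km(cell_lat, cell_lon, la, lo) <= max_km
               for la, lo in boundary_pts)

# A grid point ~0.25 deg (~28 km) from a boundary point passes the test:
print(cell_informs_region(20.0, 85.0, [(20.25, 85.0)]))  # True
```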
<END_TASK> |
15,630 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Simple Scatter Plots
Another commonly used plot type is the simple scatter plot, a close cousin of the line plot. Instead of points being joined by line segments, here the points are represented individually with a dot, circle, or other shape. First, set up the plotting environment in the IPython notebook:
Step1: Scatter plots with plt.plot
In the previous section we looked at plt.plot/ax.plot to produce line plots. It turns out that this same function can produce scatter plots as well:
Step2: The third argument in the function call is a character that represents the type of symbol used for the plotting. Just as you can specify options such as '-' or '--' to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be found in the plt.plot documentation or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive, and we'll show some of the more common ones here:
Step3: For even more possibilities, these character codes can be used together with line and color codes to plot points along with a line connecting them:
Step4: Additional keyword arguments to plt.plot specify a wide range of properties of the lines and markers:
Step5: This flexibility in the plt.plot function allows for a wide variety of possible visualization options. For a full description of the options available, refer to the plt.plot documentation.
Scatter plots with plt.scatter
A second, more powerful method of creating scatter plots is the plt.scatter function, which can be used very similarly to plt.plot:
Step6: The primary difference of plt.scatter from plt.plot is that it can be used to create scatter plots where the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data.
Let's show this by creating a random scatter plot with points of many colors and sizes. In order to better see the overlapping results, we'll also use the alpha keyword to adjust the transparency level:
Step7: Notice that the color argument is automatically mapped to a color scale (shown here by the colorbar() command), and that the size argument is given in pixels. In this way, the color and size of points can be used to convey information in the visualization, in order to visualize multidimensional data.
For example, we might use the Iris data provided by Scikit-Learn, where each sample is one of three types of flowers that has had the size of its petals and sepals carefully measured:
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# use the seaborn-whitegrid style
plt.style.use('seaborn-whitegrid')
import numpy as np
Explanation: Simple Scatter Plots
Another commonly used plot type is the simple scatter plot, a close cousin of the line plot. Instead of points being joined by line segments, here the points are represented individually with a dot, circle, or other shape. First, set up the plotting environment in the IPython notebook:
End of explanation
x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, 'o', color='black');
Explanation: Scatter plots with plt.plot
In the previous section we looked at plt.plot/ax.plot to produce line plots. It turns out that this same function can produce scatter plots as well:
End of explanation
rng=np.random.RandomState(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
plt.plot(rng.rand(5),rng.rand(5),marker,label="marker for '{0}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
Explanation: The third argument in the function call is a character that represents the type of symbol used for the plotting. Just as you can specify options such as '-' or '--' to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be found in the plt.plot documentation or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive, and we'll show some of the more common ones here:
End of explanation
plt.plot(x, y, '-ok');
Explanation: For even more possibilities, these character codes can be used together with line and color codes to plot points along with a line connecting them:
End of explanation
plt.plot(x, y, '-p', color='gray',
markersize=15, linewidth=4,
markerfacecolor='white',
markeredgecolor='gray',
markeredgewidth=2)
plt.ylim(-1.2, 1.2);
Explanation: Additional keyword arguments to plt.plot specify a wide range of properties of the lines and markers:
End of explanation
plt.scatter(x, y, marker='o');
Explanation: This flexibility in the plt.plot function allows for a wide variety of possible visualization options. For a full description of the options available, refer to the plt.plot documentation.
Scatter plots with plt.scatter
A second, more powerful method of creating scatter plots is the plt.scatter function, which can be used very similarly to plt.plot:
End of explanation
rng = np.random.RandomState(0)
x = rng.randn(100)
y = rng.randn(100)
# random colors
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
# set the transparency
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar(); # show color scale
Explanation: The primary difference of plt.scatter from plt.plot is that it can be used to create scatter plots where the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data.
Let's show this by creating a random scatter plot with points of many colors and sizes. In order to better see the overlapping results, we'll also use the alpha keyword to adjust the transparency level:
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
Explanation: 请注意,color参数会自动映射到色标(此处由colorbar()命令显示),并且size参数以像素为单位。这样,点的颜色和大小可用于在可视化中传达信息,以可视化多维数据。
例如,我们可以使用Scikit-Learn提供的Iris数据,其中的每个样本都是经过仔细测量其花瓣和萼片大小的三种类型的花朵之一:
End of explanation
<END_TASK> |
15,631 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
AutoML SDK
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
Step11: AutoML constants
Set up the following constants for AutoML
Step12: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Request
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
Step25: Response
Step26: Example output
[
{
"name"
Step27: Response
Step28: Example output
Step29: Example output
Step30: Make the batch input file
Step31: Example output
Step32: Example output
Step33: Response
Step34: Example output
Step35: Example output
Step36: Response
Step37: Example output
Step38: Example output
Step39: Response
Step40: Example output
Step41: Call
Step42: Response
Step43: Example output
Step44: Example output
Step45: Response
Step46: Example output
Step47: projects.locations.models.export
Step48: Example output
Step49: Response
Step50: Example output
Step51: Example output | <ASSISTANT_TASK:>
Python Code:
! pip3 install -U google-cloud-automl --user
Explanation: AutoML SDK: AutoML image classification model
Installation
Install the latest (preview) version of AutoML SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = 'us-central1' #@param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AutoML, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import os
import sys
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.json_format import ParseDict
from googleapiclient.discovery import build
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: AutoML constants
Set up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/img/flower_photos/train_set.csv"
#%%capture
! gsutil cp -r gs://cloud-ml-data/img/flower_photos/ gs://$BUCKET_NAME
import tensorflow as tf
all_files_csv = ! gsutil cat $IMPORT_FILE
all_files_csv = [ l.replace("cloud-ml-data/img", BUCKET_NAME) for l in all_files_csv ]
IMPORT_FILE = "gs://" + BUCKET_NAME + "/flower_photos/train_set.csv"
with tf.io.gfile.GFile(IMPORT_FILE, 'w') as f:
for l in all_files_csv:
f.write(l + "\n")
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
End of explanation
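Calls to a remote service like this can fail transiently; the notebook itself simply calls request.result(), but wrapping client calls in a retry helper with exponential backoff is a common pattern. A minimal, library-agnostic sketch (hypothetical; not part of the original notebook, and real code would catch the specific google.api_core exception types rather than bare Exception):

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Example with a flaky function that fails twice before succeeding:
state = {'calls': 0}
def flaky():
    state['calls'] += 1
    if state['calls'] < 3:
        raise RuntimeError('transient error')
    return 'ok'

print(call_with_backoff(flaky, sleep=lambda s: None))  # ok
```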
dataset = {
"display_name": "flowers_" + TIMESTAMP,
"image_classification_dataset_metadata": {
"classification_type": "MULTICLASS",
},
}
print(MessageToJson(
automl.CreateDatasetRequest(
parent=PARENT,
dataset=dataset,
).__dict__["_pb"])
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/754296579_30a9ae018c_n.jpg,daisy
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/18089878729_907ed2c7cd_m.jpg,dandelion
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/284497199_93a01f48f6.jpg,dandelion
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/3554992110_81d8c9b0bd_m.jpg,dandelion
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/4065883015_4bb6010cb7_n.jpg,daisy
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/roses/7420699022_60fa574524_m.jpg,roses
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/4558536575_d43a611bd4_n.jpg,dandelion
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/7568630428_8cf0fc16ff_n.jpg,daisy
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/tulips/7064813645_f7f48fb527.jpg,tulips
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/sunflowers/4933229095_f7e4218b28.jpg,sunflowers
Create a dataset
projects.locations.datasets.create
Request
End of explanation
request = clients["automl"].create_dataset(
parent=PARENT,
dataset=dataset,
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "flowers_20210226015151",
"imageClassificationDatasetMetadata": {
"classificationType": "MULTICLASS"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/ICN2833688305139187712"
}
End of explanation
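The resource name returned above follows the pattern projects/{project}/locations/{location}/datasets/{dataset}, and the notebook extracts the short ID with split('/')[-1]. A small hedged helper making that parsing explicit (illustrative only; this is not part of the AutoML client API):

```python
def parse_resource_name(name):
    """Split a resource name like projects/{p}/locations/{l}/datasets/{d}
    into a dict mapping each collection to its id."""
    parts = name.split('/')
    # segments alternate collection/id: projects, <p>, locations, <l>, ...
    return dict(zip(parts[::2], parts[1::2]))

name = "projects/116273516712/locations/us-central1/datasets/ICN2833688305139187712"
print(parse_resource_name(name)['datasets'])  # ICN2833688305139187712
```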
input_config = {
"gcs_source": {
"input_uris": [IMPORT_FILE],
},
}
print(MessageToJson(
automl.ImportDataRequest(
name=dataset_short_id,
input_config=input_config
).__dict__["_pb"])
)
Explanation: projects.locations.datasets.importData
Request
End of explanation
request = clients["automl"].import_data(
name=dataset_id,
input_config=input_config
)
Explanation: Example output:
{
"name": "ICN2833688305139187712",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210226015151/flower_photos/train_set.csv"
]
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model = {
"display_name": "flowers_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"image_classification_model_metadata": {
"train_budget_milli_node_hours": 8000,
},
}
print(MessageToJson(
automl.CreateModelRequest(
parent=PARENT,
model=model,
).__dict__["_pb"])
)
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
request = clients["automl"].create_model(
parent=PARENT,
model=model,
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "flowers_20210226015151",
"datasetId": "ICN2833688305139187712",
"imageClassificationModelMetadata": {
"trainBudgetMilliNodeHours": "8000"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split('/')[-1]
print(model_short_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168"
}
End of explanation
request = clients["automl"].list_model_evaluations(
parent=model_id,
)
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
import json
model_evaluations = [
json.loads(MessageToJson(me.__dict__["_pb"])) for me in request
]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
print(json.dumps(model_evaluations, indent=2))
Explanation: Response
End of explanation
request = clients["automl"].get_model_evaluation(
name=evaluation_slice,
)
Explanation: Example output
[
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/1701367336556072668",
"createTime": "2021-02-26T03:00:19.383521Z",
"evaluatedExampleCount": 329,
"classificationEvaluationMetrics": {
"auPrc": 0.99747145,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.2
},
{
"confidenceThreshold": 0.05,
"recall": 0.99088144,
"precision": 0.92877495
},
{
"confidenceThreshold": 0.1,
"recall": 0.98784196,
"precision": 0.9447674
},
{
"confidenceThreshold": 0.15,
"recall": 0.9848024,
"precision": 0.9501466
},
{
"confidenceThreshold": 0.2,
"recall": 0.9848024,
"precision": 0.96142435
},
{
"confidenceThreshold": 0.25,
"recall": 0.98176295,
"precision": 0.9641791
},
{
"confidenceThreshold": 0.3,
"recall": 0.98176295,
"precision": 0.9670659
},
{
"confidenceThreshold": 0.35,
"recall": 0.9787234,
"precision": 0.966967
},
{
"confidenceThreshold": 0.4,
"recall": 0.97568387,
"precision": 0.96686745
},
{
"confidenceThreshold": 0.45,
"recall": 0.97568387,
"precision": 0.9727273
},
{
"confidenceThreshold": 0.5,
"recall": 0.9726444,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.55,
"recall": 0.9726444,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.6,
"recall": 0.9665654,
"precision": 0.9754601
},
{
"confidenceThreshold": 0.65,
"recall": 0.9665654,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.7,
"recall": 0.9665654,
"precision": 0.98452014
},
{
"confidenceThreshold": 0.75,
"recall": 0.9665654,
"precision": 0.98452014
},
{
"confidenceThreshold": 0.8,
"recall": 0.9604863,
"precision": 0.9875
},
{
"confidenceThreshold": 0.85,
"recall": 0.9452888,
"precision": 0.99044585
},
{
"confidenceThreshold": 0.875,
"recall": 0.94224924,
"precision": 0.99041533
},
{
"confidenceThreshold": 0.9,
"recall": 0.9392097,
"precision": 0.99038464
},
{
"confidenceThreshold": 0.91,
"recall": 0.9392097,
"precision": 0.99038464
},
{
"confidenceThreshold": 0.92,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.93,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.94,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.95,
"recall": 0.9331307,
"precision": 0.99352753
},
{
"confidenceThreshold": 0.96,
"recall": 0.9300912,
"precision": 0.99674267
},
{
"confidenceThreshold": 0.97,
"recall": 0.92705166,
"precision": 0.996732
},
{
"confidenceThreshold": 0.98,
"recall": 0.9148936,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.89361703,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.88145894,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.87234044,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.8693009,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.8449848,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.81155014,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.24012157,
"precision": 1.0
}
],
"confusionMatrix": {
"annotationSpecId": [
"548545251585818624",
"4295540141558071296",
"5160231270013206528",
"6601383150771765248",
"8907226159985459200"
],
"row": [
{
"exampleCount": [
55,
0,
1,
2,
0
]
},
{
"exampleCount": [
0,
59,
1,
0,
1
]
},
{
"exampleCount": [
0,
0,
81,
0,
0
]
},
{
"exampleCount": [
0,
0,
0,
73,
0
]
},
{
"exampleCount": [
0,
1,
2,
0,
53
]
}
],
"displayName": [
"roses",
"sunflowers",
"dandelion",
"tulips",
"daisy"
]
},
"logLoss": 0.02853713
}
},
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/4464795143994212237",
"annotationSpecId": "6601383150771765248",
"createTime": "2021-02-26T03:00:19.383521Z",
"classificationEvaluationMetrics": {
"auPrc": 0.9990742,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.2218845
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.8795181
},
{
"confidenceThreshold": 0.1,
"recall": 1.0,
"precision": 0.9125
},
{
"confidenceThreshold": 0.15,
"recall": 1.0,
"precision": 0.9240506
},
{
"confidenceThreshold": 0.2,
"recall": 1.0,
"precision": 0.9605263
},
{
"confidenceThreshold": 0.25,
"recall": 1.0,
"precision": 0.9605263
},
{
"confidenceThreshold": 0.3,
"recall": 1.0,
"precision": 0.97333336
},
{
"confidenceThreshold": 0.35,
"recall": 1.0,
"precision": 0.97333336
},
{
"confidenceThreshold": 0.4,
"recall": 1.0,
"precision": 0.97333336
},
{
"confidenceThreshold": 0.45,
"recall": 1.0,
"precision": 0.97333336
},
{
"confidenceThreshold": 0.5,
"recall": 0.98630136,
"precision": 0.972973
},
{
"confidenceThreshold": 0.55,
"recall": 0.98630136,
"precision": 0.972973
},
{
"confidenceThreshold": 0.6,
"recall": 0.9726027,
"precision": 0.9726027
},
{
"confidenceThreshold": 0.65,
"recall": 0.9726027,
"precision": 0.9726027
},
{
"confidenceThreshold": 0.7,
"recall": 0.9726027,
"precision": 0.9726027
},
{
"confidenceThreshold": 0.75,
"recall": 0.9726027,
"precision": 0.9726027
},
{
"confidenceThreshold": 0.8,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.85,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.875,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.9,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.91,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.92,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.93,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.94,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.95,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.96,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.97,
"recall": 0.9726027,
"precision": 0.9861111
},
{
"confidenceThreshold": 0.98,
"recall": 0.9589041,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.9315069,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.91780823,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.91780823,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.91780823,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.9041096,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.8356164,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.12328767,
"precision": 1.0
}
],
"auRoc": 0.99973243,
"logLoss": 0.024023052
},
"displayName": "tulips"
},
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/6132683338167493052",
"annotationSpecId": "8907226159985459200",
"createTime": "2021-02-26T03:00:19.383521Z",
"classificationEvaluationMetrics": {
"auPrc": 0.99841,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.17021276
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.9655172
},
{
"confidenceThreshold": 0.1,
"recall": 0.98214287,
"precision": 0.9649123
},
{
"confidenceThreshold": 0.15,
"recall": 0.98214287,
"precision": 0.98214287
},
{
"confidenceThreshold": 0.2,
"recall": 0.98214287,
"precision": 0.98214287
},
{
"confidenceThreshold": 0.25,
"recall": 0.98214287,
"precision": 0.98214287
},
{
"confidenceThreshold": 0.3,
"recall": 0.98214287,
"precision": 0.98214287
},
{
"confidenceThreshold": 0.35,
"recall": 0.96428573,
"precision": 0.9818182
},
{
"confidenceThreshold": 0.4,
"recall": 0.9464286,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.45,
"recall": 0.9464286,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.5,
"recall": 0.9464286,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.55,
"recall": 0.9464286,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.6,
"recall": 0.9285714,
"precision": 0.9811321
},
{
"confidenceThreshold": 0.65,
"recall": 0.9285714,
"precision": 0.9811321
},
{
"confidenceThreshold": 0.7,
"recall": 0.9285714,
"precision": 0.9811321
},
{
"confidenceThreshold": 0.75,
"recall": 0.9285714,
"precision": 0.9811321
},
{
"confidenceThreshold": 0.8,
"recall": 0.9285714,
"precision": 0.9811321
},
{
"confidenceThreshold": 0.85,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.875,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.9,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.91,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.92,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.93,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.94,
"recall": 0.9285714,
"precision": 1.0
},
{
"confidenceThreshold": 0.95,
"recall": 0.91071427,
"precision": 1.0
},
{
"confidenceThreshold": 0.96,
"recall": 0.91071427,
"precision": 1.0
},
{
"confidenceThreshold": 0.97,
"recall": 0.91071427,
"precision": 1.0
},
{
"confidenceThreshold": 0.98,
"recall": 0.91071427,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.875,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.83928573,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.8214286,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.8035714,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.78571427,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.76785713,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.30357143,
"precision": 1.0
}
],
"auRoc": 0.99967295,
"logLoss": 0.022124559
},
"displayName": "daisy"
},
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/7147485663377408481",
"annotationSpecId": "548545251585818624",
"createTime": "2021-02-26T03:00:19.383521Z",
"classificationEvaluationMetrics": {
"auPrc": 0.9971625,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.1762918
},
{
"confidenceThreshold": 0.05,
"recall": 0.9655172,
"precision": 0.93333334
},
{
"confidenceThreshold": 0.1,
"recall": 0.9655172,
"precision": 0.9655172
},
{
"confidenceThreshold": 0.15,
"recall": 0.9655172,
"precision": 0.9655172
},
{
"confidenceThreshold": 0.2,
"recall": 0.9655172,
"precision": 0.9655172
},
{
"confidenceThreshold": 0.25,
"recall": 0.94827586,
"precision": 0.9649123
},
{
"confidenceThreshold": 0.3,
"recall": 0.94827586,
"precision": 0.9649123
},
{
"confidenceThreshold": 0.35,
"recall": 0.94827586,
"precision": 0.9649123
},
{
"confidenceThreshold": 0.4,
"recall": 0.94827586,
"precision": 0.9649123
},
{
"confidenceThreshold": 0.45,
"recall": 0.94827586,
"precision": 0.98214287
},
{
"confidenceThreshold": 0.5,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.55,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.6,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.65,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.7,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.75,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.8,
"recall": 0.94827586,
"precision": 1.0
},
{
"confidenceThreshold": 0.85,
"recall": 0.87931037,
"precision": 1.0
},
{
"confidenceThreshold": 0.875,
"recall": 0.87931037,
"precision": 1.0
},
{
"confidenceThreshold": 0.9,
"recall": 0.87931037,
"precision": 1.0
},
{
"confidenceThreshold": 0.91,
"recall": 0.87931037,
"precision": 1.0
},
{
"confidenceThreshold": 0.92,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.93,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.94,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.95,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.96,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.97,
"recall": 0.86206895,
"precision": 1.0
},
{
"confidenceThreshold": 0.98,
"recall": 0.8448276,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.79310346,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.79310346,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.7758621,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.7758621,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.70689654,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.6896552,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.03448276,
"precision": 1.0
}
],
"auRoc": 0.9993638,
"logLoss": 0.034111425
},
"displayName": "roses"
},
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/8076647367053688867",
"annotationSpecId": "5160231270013206528",
"createTime": "2021-02-26T03:00:19.383521Z",
"classificationEvaluationMetrics": {
"auPrc": 0.9989403,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.2462006
},
{
"confidenceThreshold": 0.05,
"recall": 1.0,
"precision": 0.9101124
},
{
"confidenceThreshold": 0.1,
"recall": 1.0,
"precision": 0.92045456
},
{
"confidenceThreshold": 0.15,
"recall": 1.0,
"precision": 0.92045456
},
{
"confidenceThreshold": 0.2,
"recall": 1.0,
"precision": 0.9310345
},
{
"confidenceThreshold": 0.25,
"recall": 1.0,
"precision": 0.94186044
},
{
"confidenceThreshold": 0.3,
"recall": 1.0,
"precision": 0.94186044
},
{
"confidenceThreshold": 0.35,
"recall": 1.0,
"precision": 0.94186044
},
{
"confidenceThreshold": 0.4,
"recall": 1.0,
"precision": 0.94186044
},
{
"confidenceThreshold": 0.45,
"recall": 1.0,
"precision": 0.9529412
},
{
"confidenceThreshold": 0.5,
"recall": 1.0,
"precision": 0.9529412
},
{
"confidenceThreshold": 0.55,
"recall": 1.0,
"precision": 0.9529412
},
{
"confidenceThreshold": 0.6,
"recall": 1.0,
"precision": 0.9529412
},
{
"confidenceThreshold": 0.65,
"recall": 1.0,
"precision": 0.96428573
},
{
"confidenceThreshold": 0.7,
"recall": 1.0,
"precision": 0.97590363
},
{
"confidenceThreshold": 0.75,
"recall": 1.0,
"precision": 0.97590363
},
{
"confidenceThreshold": 0.8,
"recall": 0.9876543,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.85,
"recall": 0.9876543,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.875,
"recall": 0.97530866,
"precision": 0.97530866
},
{
"confidenceThreshold": 0.9,
"recall": 0.962963,
"precision": 0.975
},
{
"confidenceThreshold": 0.91,
"recall": 0.962963,
"precision": 0.975
},
{
"confidenceThreshold": 0.92,
"recall": 0.962963,
"precision": 0.98734176
},
{
"confidenceThreshold": 0.93,
"recall": 0.962963,
"precision": 0.98734176
},
{
"confidenceThreshold": 0.94,
"recall": 0.962963,
"precision": 0.98734176
},
{
"confidenceThreshold": 0.95,
"recall": 0.962963,
"precision": 0.98734176
},
{
"confidenceThreshold": 0.96,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.97,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.98,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.9506173,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.9382716,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.9259259,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.5555556,
"precision": 1.0
}
],
"auRoc": 0.99965155,
"logLoss": 0.029262401
},
"displayName": "dandelion"
},
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/8816571236383372686",
"annotationSpecId": "4295540141558071296",
"createTime": "2021-02-26T03:00:19.383521Z",
"classificationEvaluationMetrics": {
"auPrc": 0.99703646,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.18541034
},
{
"confidenceThreshold": 0.05,
"recall": 0.9836066,
"precision": 0.9836066
},
{
"confidenceThreshold": 0.1,
"recall": 0.9836066,
"precision": 0.9836066
},
{
"confidenceThreshold": 0.15,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.2,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.25,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.3,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.35,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.4,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.45,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.5,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.55,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.6,
"recall": 0.9672131,
"precision": 0.98333335
},
{
"confidenceThreshold": 0.65,
"recall": 0.9672131,
"precision": 1.0
},
{
"confidenceThreshold": 0.7,
"recall": 0.9672131,
"precision": 1.0
},
{
"confidenceThreshold": 0.75,
"recall": 0.9672131,
"precision": 1.0
},
{
"confidenceThreshold": 0.8,
"recall": 0.9508197,
"precision": 1.0
},
{
"confidenceThreshold": 0.85,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.875,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.9,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.91,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.92,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.93,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.94,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.95,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.96,
"recall": 0.93442625,
"precision": 1.0
},
{
"confidenceThreshold": 0.97,
"recall": 0.91803277,
"precision": 1.0
},
{
"confidenceThreshold": 0.98,
"recall": 0.8852459,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.8852459,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.86885244,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.852459,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.852459,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.8360656,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.78688526,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.09836066,
"precision": 1.0
}
],
"auRoc": 0.9992048,
"logLoss": 0.03316421
},
"displayName": "sunflowers"
}
]
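Each per-label evaluation above carries a `confidenceMetricsEntry` list of precision/recall pairs at increasing confidence thresholds. A minimal sketch (function name is illustrative, operating on the parsed JSON) for picking the threshold that maximizes F1 from such a list:

```python
def best_f1_threshold(confidence_metrics):
    """Return (threshold, f1) maximizing F1 over a confidenceMetricsEntry list.

    Entries without an explicit confidenceThreshold default to 0.0,
    matching the JSON shown above.
    """
    best = (0.0, 0.0)
    for entry in confidence_metrics:
        p = entry.get("precision", 0.0)
        r = entry.get("recall", 0.0)
        if p + r == 0:
            continue
        f1 = 2 * p * r / (p + r)
        if f1 > best[1]:
            best = (entry.get("confidenceThreshold", 0.0), f1)
    return best
```

Applied to the "tulips" entries above, for example, thresholds around 0.8 trade a little recall for higher precision.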
projects.locations.models.modelEvaluations.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
test_items = !gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(',')) == 3:
_, test_item_1, test_label_1 = str(test_items[0]).split(',')
_, test_item_2, test_label_2 = str(test_items[1]).split(',')
else:
test_item_1, test_label_1 = str(test_items[0]).split(',')
test_item_2, test_label_2 = str(test_items[1]).split(',')
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/1701367336556072668",
"createTime": "2021-02-26T03:00:19.383521Z",
"evaluatedExampleCount": 329,
"classificationEvaluationMetrics": {
"auPrc": 0.99747145,
"confidenceMetricsEntry": [
{
"recall": 1.0,
"precision": 0.2
},
{
"confidenceThreshold": 0.05,
"recall": 0.99088144,
"precision": 0.92877495
},
{
"confidenceThreshold": 0.1,
"recall": 0.98784196,
"precision": 0.9447674
},
{
"confidenceThreshold": 0.15,
"recall": 0.9848024,
"precision": 0.9501466
},
{
"confidenceThreshold": 0.2,
"recall": 0.9848024,
"precision": 0.96142435
},
{
"confidenceThreshold": 0.25,
"recall": 0.98176295,
"precision": 0.9641791
},
{
"confidenceThreshold": 0.3,
"recall": 0.98176295,
"precision": 0.9670659
},
{
"confidenceThreshold": 0.35,
"recall": 0.9787234,
"precision": 0.966967
},
{
"confidenceThreshold": 0.4,
"recall": 0.97568387,
"precision": 0.96686745
},
{
"confidenceThreshold": 0.45,
"recall": 0.97568387,
"precision": 0.9727273
},
{
"confidenceThreshold": 0.5,
"recall": 0.9726444,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.55,
"recall": 0.9726444,
"precision": 0.9756098
},
{
"confidenceThreshold": 0.6,
"recall": 0.9665654,
"precision": 0.9754601
},
{
"confidenceThreshold": 0.65,
"recall": 0.9665654,
"precision": 0.9814815
},
{
"confidenceThreshold": 0.7,
"recall": 0.9665654,
"precision": 0.98452014
},
{
"confidenceThreshold": 0.75,
"recall": 0.9665654,
"precision": 0.98452014
},
{
"confidenceThreshold": 0.8,
"recall": 0.9604863,
"precision": 0.9875
},
{
"confidenceThreshold": 0.85,
"recall": 0.9452888,
"precision": 0.99044585
},
{
"confidenceThreshold": 0.875,
"recall": 0.94224924,
"precision": 0.99041533
},
{
"confidenceThreshold": 0.9,
"recall": 0.9392097,
"precision": 0.99038464
},
{
"confidenceThreshold": 0.91,
"recall": 0.9392097,
"precision": 0.99038464
},
{
"confidenceThreshold": 0.92,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.93,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.94,
"recall": 0.9361702,
"precision": 0.9935484
},
{
"confidenceThreshold": 0.95,
"recall": 0.9331307,
"precision": 0.99352753
},
{
"confidenceThreshold": 0.96,
"recall": 0.9300912,
"precision": 0.99674267
},
{
"confidenceThreshold": 0.97,
"recall": 0.92705166,
"precision": 0.996732
},
{
"confidenceThreshold": 0.98,
"recall": 0.9148936,
"precision": 1.0
},
{
"confidenceThreshold": 0.99,
"recall": 0.89361703,
"precision": 1.0
},
{
"confidenceThreshold": 0.995,
"recall": 0.88145894,
"precision": 1.0
},
{
"confidenceThreshold": 0.996,
"recall": 0.87234044,
"precision": 1.0
},
{
"confidenceThreshold": 0.997,
"recall": 0.8693009,
"precision": 1.0
},
{
"confidenceThreshold": 0.998,
"recall": 0.8449848,
"precision": 1.0
},
{
"confidenceThreshold": 0.999,
"recall": 0.81155014,
"precision": 1.0
},
{
"confidenceThreshold": 1.0,
"recall": 0.24012157,
"precision": 1.0
}
],
"confusionMatrix": {
"annotationSpecId": [
"548545251585818624",
"4295540141558071296",
"5160231270013206528",
"6601383150771765248",
"8907226159985459200"
],
"row": [
{
"exampleCount": [
55,
0,
1,
2,
0
]
},
{
"exampleCount": [
0,
59,
1,
0,
1
]
},
{
"exampleCount": [
0,
0,
81,
0,
0
]
},
{
"exampleCount": [
0,
0,
0,
73,
0
]
},
{
"exampleCount": [
0,
1,
2,
0,
53
]
}
],
"displayName": [
"roses",
"sunflowers",
"dandelion",
"tulips",
"daisy"
]
},
"logLoss": 0.02853713
}
}
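The `confusionMatrix` block above pairs the `displayName` order with rows of example counts, where row i gives the counts for true class i and column j the predictions of class j. A hedged sketch computing per-class precision and recall from that structure:

```python
def per_class_metrics(display_names, rows):
    """Compute (precision, recall) per class from an AutoML-style
    confusion matrix: rows[i][j] = count of true class i predicted as j."""
    n = len(display_names)
    # Column sums are the total predictions made for each class.
    col_sums = [sum(rows[i][j] for i in range(n)) for j in range(n)]
    metrics = {}
    for i, name in enumerate(display_names):
        tp = rows[i][i]
        row_sum = sum(rows[i])
        precision = tp / col_sums[i] if col_sums[i] else 0.0
        recall = tp / row_sum if row_sum else 0.0
        metrics[name] = (precision, recall)
    return metrics

# The matrix from the example output above:
names = ["roses", "sunflowers", "dandelion", "tulips", "daisy"]
rows = [
    [55, 0, 1, 2, 0],
    [0, 59, 1, 0, 1],
    [0, 0, 81, 0, 0],
    [0, 0, 0, 73, 0],
    [0, 1, 2, 0, 53],
]
```

Here, for instance, dandelion has perfect recall (81/81) but slightly lower precision (81/85), because a few roses, sunflowers, and daisies were predicted as dandelions.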
Make batch predictions
Make a batch prediction file
End of explanation
file_1 = test_item_1.split('/')[-1]
file_2 = test_item_2.split('/')[-1]
! gsutil cp $test_item_1 gs://$BUCKET_NAME/$file_1
! gsutil cp $test_item_2 gs://$BUCKET_NAME/$file_2
test_item_1 = "gs://" + BUCKET_NAME + "/" + file_1
test_item_2 = "gs://" + BUCKET_NAME + "/" + file_2
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/754296579_30a9ae018c_n.jpg daisy
gs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/18089878729_907ed2c7cd_m.jpg dandelion
End of explanation
import tensorflow as tf
import json
gcs_input_uri = "gs://" + BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
f.write(test_item_1 + '\n')
f.write(test_item_2 + '\n')
!gsutil cat $gcs_input_uri
Explanation: Make the batch input file
End of explanation
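The batch input file format is simply one `gs://` image URI per line. For larger batches the same pattern generalizes to any list of URIs; a small illustrative helper that builds the newline-delimited body:

```python
def batch_input_lines(gcs_uris):
    """Return the newline-delimited body for an AutoML batch input file:
    one gs:// image URI per line."""
    return "".join(uri + "\n" for uri in gcs_uris)

# Hypothetical paths, for illustration only:
body = batch_input_lines([
    "gs://my-bucket/one.jpg",
    "gs://my-bucket/two.jpg",
])
```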
input_config = {
"gcs_source": {
"input_uris": [gcs_input_uri]
},
}
output_config = {
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
}
}
batch_prediction = automl.BatchPredictRequest(
name=model_id,
input_config=input_config,
output_config=output_config
)
print(MessageToJson(
batch_prediction.__dict__["_pb"])
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015151/754296579_30a9ae018c_n.jpg
gs://migration-ucaip-trainingaip-20210226015151/18089878729_907ed2c7cd_m.jpg
projects.locations.models.batchPredict
Request
End of explanation
request = clients["prediction"].batch_predict(
request=batch_prediction
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210226015151/test.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015151/batch_output/"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
destination_uri = batch_prediction.output_config.gcs_destination.output_uri_prefix[:-1]
! gsutil ls $destination_uri/*
! gsutil cat $destination_uri/prediction*/*.jsonl
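Each output shard is JSON Lines: one object per image with an `ID` and an `annotations` list, as shown in the example output below. A hedged sketch that reduces shard text (already read, e.g. with `tf.io.gfile`) to `(image_uri, top label, score)` tuples:

```python
import json

def parse_batch_output(jsonl_text):
    """Parse AutoML image-classification batch output (JSON Lines) into
    (image_uri, display_name, score) tuples, keeping the top annotation."""
    results = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Keep the highest-scoring annotation for each image.
        top = max(
            record.get("annotations", []),
            key=lambda a: a["classification"]["score"],
            default=None,
        )
        if top is not None:
            results.append(
                (record["ID"], top["display_name"], top["classification"]["score"])
            )
    return results
```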
Explanation: Example output:
{}
End of explanation
request = clients["automl"].deploy_model(
name=model_id
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210226015151/batch_output/prediction-flowers_20210226015151-2021-02-26T03:00:47.533913Z/image_classification_0.jsonl
gs://migration-ucaip-trainingaip-20210226015151/batch_output/prediction-flowers_20210226015151-2021-02-26T03:00:47.533913Z/image_classification_1.jsonl
{"ID":"gs://migration-ucaip-trainingaip-20210226015151/18089878729_907ed2c7cd_m.jpg","annotations":[{"annotation_spec_id":"5160231270013206528","classification":{"score":0.9993481},"display_name":"dandelion"}]}
{"ID":"gs://migration-ucaip-trainingaip-20210226015151/754296579_30a9ae018c_n.jpg","annotations":[{"annotation_spec_id":"8907226159985459200","classification":{"score":1},"display_name":"daisy"}]}
Make online predictions
Prepare file for online prediction
projects.locations.models.deploy
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
test_item = !gsutil cat $IMPORT_FILE | head -n1
test_item = test_item[0].split(",")[0]
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
payload = [{
"image": {
"image_bytes": content
}
}]
params = {"score_threshold": "0.8"}
prediction_r = automl.PredictRequest(
name=model_id,
payload=payload,
params=params
)
print(MessageToJson(prediction_r.__dict__["_pb"]))
Explanation: Example output:
{}
projects.locations.models.predict
Request
End of explanation
request = clients["prediction"].predict(
name=model_id,
payload=payload,
params=params
)
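The predict call returns a `PredictResponse` whose `payload` mirrors the batch annotations. A minimal sketch for extracting `(label, score)` pairs, assuming the response has been rendered to a plain dict (e.g. via `MessageToJson` followed by `json.loads`, which produces the camelCase field names seen in the example outputs):

```python
def extract_predictions(response_dict):
    """Pull (display_name, score) pairs from a PredictResponse rendered
    as a dict, sorted by descending score."""
    pairs = [
        (p.get("displayName", ""), p.get("classification", {}).get("score", 0.0))
        for p in response_dict.get("payload", [])
    ]
    return sorted(pairs, key=lambda x: -x[1])
```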
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN3600040762873479168",
"payload": {
"image": {
"imageBytes": "/9j/4AAQSkZJRgABAQAAAQABAAD/4gRISUNDX1BST0ZJTEUAAQEAAAQ4YXBwbAIgAABtbnRyUkdCIFhZWiAH0AAIAA0AEAAGAAdhY3NwQVBQTAAAAABhcHBsAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLWFwcGwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAxjcHJ0AAACBAAAAEhkZXNjAAABFAAAADF3dHB0AAABSAAAABRyVFJDAAABXAAAAA5nVFJDAAABXAAAAA5iVFJDAAABXAAAAA5yWFlaAAABbAAAABRnWFlaAAABgAAAABRiWFlaAAABlAAAABR2Y2d0AAABqAAAADBjaGFkAAAB2AAAACxkc2NtAAACTAAAAepkZXNjAAAAAAAAAA1zUkdCIFByb2ZpbGUAAAAAAAAAAAAAAA1zUkdCIFByb2ZpbGUAAAAAWFlaIAAAAAAAAPNRAAEAAAABFsxjdXJ2AAAAAAAAAAECMwAAWFlaIAAAAAAAAG+iAAA49QAAA5BYWVogAAAAAAAAYpkAALeFAAAY2lhZWiAAAAAAAAAkoAAAD4QAALbPdmNndAAAAAAAAAABAADhSAAAAAAAAQAAAADhSAAAAAAAAQAAAADhSAAAAAAAAQAAc2YzMgAAAAAAAQxCAAAF3v//8yYAAAeTAAD9kP//+6L///2jAAAD3AAAwG50ZXh0AAAAAENvcHlyaWdodCAxOTk4IC0gMjAwMyBBcHBsZSBDb21wdXRlciBJbmMuLCBhbGwgcmlnaHRzIHJlc2VydmVkLgBtbHVjAAAAAAAAAA8AAAAMZW5VUwAAABgAAAHSZXNFUwAAABYAAAEyZGFESwAAACAAAAFwZGVERQAAABYAAAFIZmlGSQAAABoAAADEZnJGVQAAABYAAAD0aXRJVAAAABgAAAG6bmxOTAAAABgAAAGQbm9OTwAAABYAAADecHRCUgAAABYAAAEyc3ZTRQAAABYAAADeamFKUAAAABYAAAEKa29LUgAAABIAAAGoemhUVwAAABIAAAEgemhDTgAAABIAAAFeAHMAUgBHAEIALQBwAHIAbwBmAGkAaQBsAGkAcwBSAEcAQgAtAHAAcgBvAGYAaQBsAFAAcgBvAGYAaQBsACAAcwBSAFYAQgBzAFIARwBCACAw1zDtMNUwoTCkMOsAcwBSAEcAQgAggnJfaWPPj/AAUABlAHIAZgBpAGwAIABzAFIARwBCAHMAUgBHAEIALQBQAHIAbwBmAGkAbABzAFIARwBCACBjz4/wZYdO9gBzAFIARwBCAC0AYgBlAHMAawByAGkAdgBlAGwAcwBlAHMAUgBHAEIALQBwAHIAbwBmAGkAZQBsAHMAUgBHAEIAINUEuFzTDMd8AFAAcgBvAGYAaQBsAG8AIABzAFIARwBCAHMAUgBHAEIAIABQAHIAbwBmAGkAbABlAAD/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCADVAUADAREAAhEBAxEB/8QAHQAAAQQDAQEAAAAAAAAAAAAABgQFBwgAAgMBCf/EAEkQAAEDAwMCBAMFBgMFBwIHAAECAwQABREGEiEHMRNBUWEIInEUFYGRoQkjMkJSsXLB8BYkM2LRFyVTgpLh8UNzNTZEVLKzwv/EABwBAAEFAQEBAAAAAAAAAAAAAAMAAQIEBQYHCP/EADsRAAIBAwMDAgMGBgEDBAMAAAABAgMRIQQSMQVBURNhInGBBjKRobHwFCNCwdHh8QckUhUWNHJigrL/2gAMAwEAAhEDEQA/APlli
hATDk+VOI8pDmd+KQj3HekI8/vSEe4/GmGubbcUhrmelIR755phjCBTjnm304pCMxxSFcwJNIVzzaTjPNIVzNvNIe57twKQ1zMUhG2KQxg4phGZpxHeK0XFihTdkQlnA/RoqkpBxzWfOaB2O7m5KaErDu4zz3skjkVepxsMssalEk+1W0HWDynJDxp5rdLRVLUu0GKGZEsWI+CgKHHvXHV228Fkf3b4puOoE+VTpampFgZJApdroXvPnyFXXKVV3kSSSBaavavjNXaauiMhXZGy4srPPpQa7srE4oONJ2/7VdWm8ZB71UoPdUSGkWf6f6KBZaUEAnArraCVlYZImG1aVUy0AEeVXyVjW5aaCk4LYzj0pm7D2vgjHV3T1lalOJaTn1xzWLrKrUW0y5Qpq5G8zR7LcpeGxkcdq8h1+uk68l4Or09CO1Dlpi3Jt09IKcJKh5VUpa90KsavbuEr6ZSjgoP2r3o87MPekOjzjOaQ5hGaQjMd6Qj0DikMKYFvkXOShiM2p51XAAqUYuTsiMpKKySBP6Dalg6fN08IOAI3loJOce3rRnQklcipNq9ievgz6J6P6mwVuXVlmVLCyhaXRkoP0o1JR23IqO9tMsZ1A/Z3aRv1vUu3RzAkAZDsX5T+XancYSCenb7rI2Y+ACy2KA4mYXZL4RlLriv4vwqSpQSIbW+WVE66dJ1dKtVfZWipcCQCppSvIjumqdWG14JJ9mRsOKCSPfOkI8wDTiMxzSEZ+NIRnFIRs2hbziW20la1EJCQOST2pC45LadBfggf1za27tqR11hl1O5uK0dv5mrkKKWZEEpTzwg66nfATp/T+nJE63yZLD7aMj95uTn6GnlThYlsaV7lM4loMWU60ohRaWpBI7EgkVgV6m1tA/ce24hSkcD61mOdxxLNZKArt2o1OVyL8AtPV86u34Vr01gUMiI81YDGBNMIINMt5lJNZ2qfwjw5JStZT4QrkKt7hza6KCY6iCBSoq8iLBSU7+8BrXgsEW7DdJHiO5x3qzB7URbuPVjjkNJ4qlqJZCxeCU+m1s8afuwMjGKBpHeqxNls+mjC20tpWjA4xXYUOEIm+FDSlhJKc8elXiQhukZtQPAHsarzYWOQEvzLZbWCAawNbO0WaVFZInuUNBnukAYJrwrXVL6mo15Ov00fgQ2uRA08Fj1qqptqxclHB87DX06eSGYpCPMUhHoGRxSEeYpCNgnOAO54pC4LZfDf0pgP2wz5DaXHsg/MOa1YQVOKaK0fjbuWntemIt7srtuKEEAEIHp7VCUi1COLFR5rtz+FjrnGurQcbsE57bIQBhOCeT9R3/Oqqltd+zByTg7o+rPSnVsTW+lYs2M8h9K2woKSc7gRmivBZi9yuJNfWpEqO4lI2qGSMeVPGVhpRuUk+JjoJJ6lMx2oqg1JadC0Obc+RBH61Ga3qwGUXhorxdPgY17BbU614D7YG4HaoZFC9B+SHx+AZb+EXqQ4lZ+5wAnkEr/i+nFR9GSEnJ5sDV96Aa+08oiTpqYsf1MJ8TP5c0zozQt/lAsvROoGt++yz0bP4t0dQx+lR2T8D74hP0g6K37rBqIW22sqaZbVh+QtBw37fX2qUKbnzwPuu7R5LK3z9m/dItrS9Cu7pkhOVB1sFJP4Ub0Y9mPtmu4PdKfg11FaOpEQ31tl+BGWF7m88nPGQaeNPY7tkbSk7NF7JNza0vBj26LhBSkDCaLuyHeFZEJ/EBrO/OaakQrNEkz3th+WOkqO40OrJ7XtBzdsFCY0F6HJcalNLakIUQ4h1JSpKvPINcnXbvkGhxAG0DAxVElYaLosIQau0VdkHgD5qtzqq24KyHjyJjRQh7SEEelk/vkmsrV8EqZI1uUdvfiuYqoOlc4XmRsaPOKJQjdkWrAlIlYWea2YwwCk7GId8Yik47SCuFdkw00kEd6yqzvcsR4Jn6PxhKnnAwnI5pdPjebF3LlaFtDKYzRIGcV2dJWSHJGYbSG9oPFWWIQXO
BvbJz5VXmg0WRrqmEpvfgnNYGtpuUWkaFGdnYhy9yJEWYoFJKc5rxHVaaVKvNVOWzrtLVi4o4CSZCM4qhsUWXp1E1g+dWK+nTyU9NMIw0hGUhHmeaQj0EpIPpzSuJ5Vi5fw7aoH3Gjac5SMjNbUGpwsU4va7k82TUy7dcUvjPhqIzQJRLUZWYn+JLphG6s9OpD0VCftiEeI0r0WORVCo7Fhx3rALfs4OtU233KboO8LUJEBf7gLPOzJBT/5SPyIosJbogYPbLaX26jNGNDTKb/hxk49DToO8EKayfXDZhzWkhQ8QJUCPI098kXxdEoMyYrmm4ZWwglYAPHJoq5yM3gJYWl4P2RpRjIyoelM3klbAK6105ZY7rYfjIHfHAosXdA3yQNrqwaeSX0JYRvV2wkc5qbWCGExJ0dsVh6ZszJbbTaC4suZAAJUfOoW7IUEo5Ce69dHJDqmIzZcHYDyNS2pEt9wm0vJK7K5c5raG3nAV8elBm12CRvywMt0R7VmpzgnYokD2FDuLknvTHS+zx4KUPx0LJHcjPNPfBKyKs/Fr8JMC6Jcvdl8KFcRyVAYSsehx/es/VaVVldcgZQtlEQ6B+GKz+G0u8L+0OnGQ4ogflVOHTl/U7klHuxn+KD4f9JaK0Q5dbS+3GmtpCglCuF+oNWnpIQjddiFSO3KKSOncc4qSwQRpmpEzBzj60hBZpVs8msfWMJTD6Cgoa9a5yo7stRQ0age2o/WrumjdgpcgVKlHxeTW9CGCtIVwHd2BQKkSUUGsIfwnsKwagZcE69HCGm0LHJJq1oFa79xi2eirsfBQj2FdbSwOSJBm7gM1YEbTZaSgnP60GROLAbUCkvrUkDNUasL4LEZWI+vVhEkqG0FRrm9b0unqYtSVy/T1DhwMzOjlt85IFcjL7NRvmT/AH9DQ/jW0fMUV64zjT2mEZTiMpCPKQjKQzJq+HbVv3fdzAdXhJIKcnyq/pqlsFeas7lu4KgtsYOQRkVaku48fAf6Du4Wly1yVZbdG1OfI1TqQuW4Stgr3qrSMjot8TumtUW9Batt3lCNJ29krV5/jx+VAjiRCotkk0fTK+KF30Ew8fmJZ7/hRHgsNXRESreL3aGmSMlDqf70K+UPbDJBVahHtEBs90rAxVm5BhuuQ3DiNKWQEgdzSHIU6vauZdloZZcClJyTg0aKsgM3myIDv81ydKJ3HNTvYEk2xplPvvpDO87B5ZoW8JtvgddLWoOT0JWOM80OVS4SMbEl3y5rdhNwI5KUkAHb5ChOVifOAr6X6d8F7xyOfXFRjlknglefemrJEU64sJIHGaMMV96idQJuqZaozZ/3cHAAHJpt2bIiyvPWvqgnptZipt4fbF5CEg8k1GdRU1dkJO2Cod3veuOs9wKN8qaxu+VtJIZT/wBarpzq8lfLYb6T+Ea53kJ+87q3CUrnw20gkfnRVTS5CKMiP+sXR9/pRc2mTNRPju5AWBtWk+hFDlFLgVnF2kR2kfMKh2E+A20o3hGTWDrHkPCIaIdCGsZrBcbstdgS1FL3rIBGBW1pYWWStMDn3Ct4DNbkVZFQfrO0V7c+VZ1d2DxWA6iIAaTnvXPTeQrJQ6XX1MN0NqVg5q7oZJScSD8lntI6pjpZRlQ7DzrrqfFyVyQbfqhlSAA7+Zow4pfvyHEnC8mojoYn5+9ZJI59aE43JJnNKUPc+dBlBMIpM6mIkIIP6UF0UyamfIBDanFpQgFSlHAA8zVoy7lkekXwc3TXcBqfcn1RWHBlLbffHuasqku7ElKWVwShO/Z8Q1xz9luUlt7HG4g81L0o9iWx+SFeonwea20TveiR/vaMnybG1ePp2NDdJrgi7rkh1WlLw3JWw5bZLLyDhSHGykj86jGlOXCIOpFHVWjLylO4wHcewqToTXYiqsWe2Zc/S95jTFx3WfDVzlJHHnUYqUJXaFJqSwXj6aaja1JpyNIbWFqCRmtS+5EIhzEeMd9DyOFJPlQZIsJhXrbTUXqPp
BpxSQqTGWh0KxylaSCFfpVKSs7h2t8Wi0+hQq69NY7bg/eIZCVZ+lSZOLuiOdPtvW6+PR3GFlpK+4Scd8ig2ZJEl3iVG+zRlbgEggmietCK+KSFsk+EIOocK73GzBFoa8R3bwN2M1aja+QUr2wVpuOnL/FuihcYElDiyfmUgkH6EcUWckkAjF3ybO9Pru54a/sK0B5OW1OcBX0oDbfAdIHlaJv8OWov2aY0kZOS0SPzFV3v8ElYKNK6dlqK3vsrpA8w2eKFdk7JBRCtQDm9aTvJ7EdqV7jpEmacU1Z7YXV4ScedHpqyuRYA6x1DIv0pTLSj4WcYH81DlUzZD7e7Be82OXabI/KjxC/IKCRngDj1okU+Qbdsla7F8M2petGunbjqVOIqF4bjgnw20Z8z60JUpTlumBs5ZZbPSnQXRfTqPGtyYbU24rSDtV8qGk+alegq2kuwZRUcB3bOm2jtXW5SI+n4UiKlW4SHG8F5Q/p/5aewzUX2PmB8bvRXU3TjXki4SQ7J03MeUqMoDKYx/wDDPt6H8KBVg1lcFZu0rP6FZWxlYqq+CTDSxLDLIHFYeoW6RZhwPzkrLZxms5QyGbBi6NrdKiAT9BWxQ8IqVHYa49ndcdBKSR51qOL23K6d2FFlgbSBt5+lc/qZu9mW4hhHifIMDPFYcp5CMerBaZCZHioJSPSrekvOakgfGA3j3y4W5IKFq+WuzpXtgErj7aeqM1BCFKOR6mi38k1II2+rT8dG5Su3vUrsfcYnrSl5xKFK+dRwMU913G3EsaQvRnxkLKSdwBGaawRMI3pYbyTwBUbImmfKjQOnvv6+xwuSmK02sKKyMkn0FKCTZRld4R9Guirkdq0RmGb6h0ISBtLiT/arLaDwvbksNBhIkQgUupcXjhQOaQSw5Ri4IIYnQWH0njkZyKnuuQUbEea16H6c1Upb33c008exSkAip06rgQnRjMhnUvQk2RS9sZDjQ7EpxV1VYzKToyjwEejfhk0Zd2m2tTxHG5soBUdLo2x157DcDzVOb3dizGnFcsJZfQbS/TopisadkWcKOEPw5CnGl/Tdx+BqCk1wE9KCzYZNQaFk2WIZ8Vf2+25+ZxCcLa9lp8vr2qbkmR2uJmi7z9jkmOsgsujaQfSqla0Y3Yane9ixWiZUiZbVQGJJSjwxw3wQfL61W0ur0+qco05JuPK8B6lKdJJyVkx1mwJMm1l+GrwbjHJ4xwojuk/WrsuMA0Ct0va79ZS42fAuLGfEYzjeB3/GvM/tp0fVa3R/xWgqOM6eWk2ty/yuUdD0jVUqdVUq6un38EldPLwjUul4knjxmv3bgPkocfrXRfZjqr6v0qjqJ/fS2y/+yw39cP6lLqel/hNVOmuOV8mOdwgsOS3IchtJakIJSrHY+f8AeurMsjDWxNs1FZ7NkltpOdx8wexpuBEix2/GssCS4hKiD4auPwp7iGRx2NarrJjLbQnjxEHbxim5FwJrjcrHJebakW9JewTlHFVZTXFgqjdXBTV8CXHtKJSEeHAcXtTzyKjNtRuhksjHpTTy7jJDhQVAdgBTU1ceTDTUNimNwW2vsiEoXgEqPYeuPOrqBM4THoHTfSrstaEpecH7pOMb1kcfl3NJjpWIYZu8vUTs1+RJUy2+rMl/OCof0j2p1zYi+5ITHUV3TllSLXCMotoAbaQMDHkBRJSigebYI/6yMXTrX04uMC5acSl91tSUtuHntwc44OaruqngeUHKOUfL29dDtc6bvDVvmaYuJfed8FkMsl0OqPYApzVWUJWwivd8Msr0q/Z5dSdXxEyLxJt2jm1JCkNXAqeeVn1Q3/D+Jz7VW/gnUzN2LKvwkJOsXwU9QejkFdxdbj6lsiElTtxtIUoMgd/EQRuSPfkVUq9PnB7ou6FKdvvEJRtPqfWn5d2far1CklhFaV2OatHqQ1vDZB+laFSK2g0jjGthYcwpPP0riuoSaqZLtO1glscIPyUpNc5Xk0sBSQIdlSy0MDiuk
0UFTSQF3ZpItpW8Ep8+O1dZTlFRIbWNs6wOsulSDgZ8hRN0bi2sRyrc84NqCSTT3T7icWuRTY9HykSUOryskg9u1Sa8EUix+ifEjw20lOCAO4p7B0FEyQsMnKB+IpNDnzh6L9CtW9Ybh4FhbVHi52uTF5CB9Md6HClKeexTbu7LkuLoT9mpqK1Mty29cT7fLGFYhgJTn3BJzR/SS7k1CXJPWjeivUjQKUNP38X1hHAU60ErI98HBpWCpTj3uSnbUXBLYRNilCuxwOKZoJcRjUVufuDsOMsLcaVtcUoEN58wlZG0kdu9JJyE3YeRpZm6spakKS0p3OxDqQM/Q5wfwNESS7kb3Mj6dFoty7PdYJm2nuh1rKlM/wCYAphGsiAhy3/cVykGfapIxDn7sqz5JUryWPI+dPcS8ANY1fZbnKstyKWpjeWw6R8ryD23DsQai1bIk74ALqNoB7TUldxtzKkxQQp1lP8A9IH+ZPqn+1cL9r56uPTm9Ne1/ityl5/ybXSoUpahKp9B46V64XbbhGnEYbbUEvI5OUdjXgvROqS+z3VYaht7JYn7p9/muUdvrdHHV6Z00srj5lkrkhqK7GnsFJizAEqUk5GTylX+VfWMZRklKLunlPs0eZ2adnyQvrplq1a2eZZUUIkIS+jacYJyCB+Ioc8O4y8BX0buCrdf5NtWshiY2XEJ8gsd8f68q8u6HD/0P7Q6jpSxSrL1ILt7pfLK+SR1Otl/G9Pp6r+qHwv5dv37kl6yyxbo81IIUw4nJH9J4NernKsA+qcdMpFnuqAC6gBKiPP/AFihzxkdBxp8pnaUdSk5Lawrv64NSTuhAv1BYw/AloGCWlJWfXFJ8jAiwhdwuMFbR3rdSEbQexziqlSN5qxYg/hySDr2NaImjRBnyPBcwPCCTyVD2o8opraBTtkFtFdQNN6cbaZfadUoD53ijjPtRFDZEbcmx8nXP7zC7nOUI8YDxcKOPDbHYfU1JYQzyQH1I1g/r2/tsRgr7I0rw2Ghzuye/wBTQXO5KwTK0ZCtVviQXwC4ynxJBz3cPOPwHFM5OOB9qYpiLbbAaiNcA4B7mhbmySSQ4tSn40xlgsOOLc5Kh2SPU1JRkxtyN7u9CjTQ4oMB5pGVPhIw3nuc+tWY4VgbzkS2nVKpboeW8puKFbE5Pzvq9AP8vzoijbDI3HHXvxKaG6WQVwNR3Jpy4ONf/l+CkPyFpP8A4o7IB/5iB9aHOpGm8vIrp8lC9ataZ1Vru53bS9m+4rNKWHGoBIw2oj5iAOEgnJ2jgeVQTTd0is42wIbjZ2m4+AkYxSqPA6iAV6hIY3KSBnNcb1JotRRtYiGnkOehzXKVm1wSSuyVrItm4MDCgCB2rRoa+DxezLcKFxebY0h7cojPoKPX63T06tJl+Gj39hYixNy8ZTnNYlX7VxWEmXY9NFtt0O0XQC1z6kUDTfan1qqg3a4WXTklewZ23RDDaR+7Ga9E0eu9WKdzFraZR4DG02ERxgJFdFCe5Ga47XY1v8RTMZWB5VNkGiOvhe1HadDaLtjDQQ2pTQJIxkkirkWtqsVKa2os/aetdubZGVlXsBmhStzcspseo3WuE+vb9mdPuUYoTkiWThf9dMakbhR2JarU0h0uSFhkqU8APlQCFDCckknOTgD1qKmrjvg6aV6gumUqDPhGGrdtbdbkeIw6PLaogAE/0kg0VPcrshww8BiTg4w2lDq/54j6QhzOPQ/35qNx7CePvjqUGFLfQ2MKjucPtf4fX6f3pciEtxtTFwjubEtpLh5Vt+R057LT5K+nPpSYsEX9R9LSXmG7lHQtM6IBuAVuK0fXzx6+Yqad8Mg01lGaM1WxqOF923DCZCU4StYyCP8ApQZwTTjJXTCRlezQF6y6cytKyHpVqYU5AWSVMoOS0c54Hmk/pXhH2l+x1WnUep6fHdB8xXMf8r5ZO76b1iEoqnqHaXnsySPh+1unX2lblpmaSidCypjfwopB/wAjXW/YzqEp6
d9Lrv46Sur/APj4/wD1bt8mjK6zp4xqLU0/uyw7ef8AYG9XH3WdWRHSna42wEqT6KCjn869Jkt0Dmb2Y7afuhhzrLdmsFLMhtTn/wBtR2q/LNcL16hGlW0nVu9Cav8A/STUZfhf9Te6fPdCrpXxOLt81lFi7/CTcLBNa5+ZkkHHGRyK7z2MPsRVqZf2zSdvd7pQoA+tRlmLGCfpvKL0F2MT/wAVnIz6pqMHgmwX6h3xoOQIrbmXklXiAdgDxipT4IpXG/p2pmK/cLlJH7iCguFXvzgUFLN/BPhWQM6n1A/qScX3CrKzhKM52j0qUJXdyLVsHeFotxFwZTMGzYgPKbB59gaJu3Ma1hB1X1oZCBaoq8MtgB0pPC1+n0FDqT24CRi2xn6TafXLusi6lkvIt7ZcQjH8bhHyj+5oMHd38EpKyMni/wB+lu/7u4zlRKgoYOali+QWXwbxVTLLLixX3ymQ8va20hJJUe/f6f2o0ZR7EWn3DO5XZVtg7XXEqlLQNyz2QnFTuIjG6XWRqC5IixWluo3YS0kcuH1P+uKknbLGecD9ffh26raujqj2bU9n0Y2Wtipu1yTMGe6EAAIaHqrKlH/loVScpLbB2G2Mp71r+FPXXQOX94XdSNQWh9W9d7h71jeTz4wV8yVH+okg+tZf8POk9zdyDduQX07eW1oSnxElXsRV2LsiI/3Gf+6754odSeAiQAX6SHFbc8k1xuunuqbQq4uLdPQzIKf6a5jUz2hqSyS3pOxNeEk4wT51Ro0t/wAcuexs08cBL9w4eGSTXKa+pKjWlGTudBp0mh+tlqDOARXPVa242YU00EEeKEgHOMedUPVlGW6Lyh5001wP1pIUlOBkivXfs7rp16MbvKwznNZSUWwgZcKEY2jNeuaSo7ZOWrwzgaL6svNKTjHGK03K5RsVE6fXrTugLNDZu1z8R9CB8q1gAcVJVElZFaMVFZZLGm+ueiJag2i6sIUPLeBQm7h1KL7km2zX9gkxC8zcmVoSM8LFQvkIrNBNbNUWm/QFrtJW++lOdxPGasSjCCywabk8IZW+qtwsTDltkRErUV5fQoZCx5CixqQ2og1K4a6G6kXCZHSi7MtyIKl5aS0f38Uf8qifmA/pP4YoEqkXLARJ2uyU0T1vIbdRKTJA4blJ5P8AhV5/gampXyK3YdY90S6kiSgEq4UodlD0Pt+oqdxfMSXuEYWHCftEJzso8lOfX3/uM01xES6s0UuA8u6WknYn94tpPdA/qHqn19Kle+CFrcBDo7UTWpoH2STtEpCcEH+b/wBqFJWdiSyIo9mb0vq2DeIyCzKjuZCkceIg8KQr14/yrJqaGi9VDWJWqR7runhp+Vn6MtRrT9N0m/hfb+536+6cS9Lh36KN0aW0Fcdgr/4rYvm3kqvyBujJXjRXoSzzg7fYEY/vWVqqMNRSnp6ivGSafyZapTlCSqR5Rae2T2Faeguy32mQ6wkFTqwkKO3nk1bnWp6WnF6magvMmo3/ABsQUXUk/TV/lkh68X23N2SXb0SA4pbhLKkp+QgHIwo8cjkVw2r+3nQdJJ0/Wc2nZ7U37c4uvka1Lo2sqq+23zYm09rliwtoWN5dQlzyGBnj+9Yb/wCpPSYThGMJtPl2StjxfN/y89i9/wC39U07tewOTpCbrJ8RTwS6lW5ZXyOee4qpV/6n9OjKMYUJuL5eE18l3/FBI/ZzUpNuSv8AUdp1xYgaCdhx3d0yVLKpKAOUoA+X8/8ArXUdL+2fR+qpU6VXZOTdozw8W78ZvjPkztR0rVad3lG8V3WRs0C0zL1VHVLThhkKdwocLKRwPz/tXT6TqOi1ctmmrRm1yoyTf4J3M+pp61LNSDS90OWrNWCIZLyHAqQ+ooGOceZP9h+FaSmllAHG5DdwkKmzSCSrnnP61RlUvItRhZFgentgcs2h2nW8JccV474H8WMfL+lHyoYBcyuxfIvELUTHg25sm5oOCUpwk/4jQ1UVZWjyO47HfsDlw
dZhOOTJCW3zGBQ24gYCnCMHB9B2qxTi4ZllgpS3YRGN+u8i6yfBaKnFOq7DkqVRk7gmSz0+6eRNMw2pdwwqcf3rgJACcc4J9BTSld2HirZHS9dbLZbXnI0TEtfZTgVhsH6+f4VNU2+ROXgHVdSLrrpblmi29V28ZO12G0wFI2H+vPAH+I0dbYoHl8FXepHWq2QNcy9JaD6TaZ19JgENS5cC3KmR2Xhnc1lhCQVJxgnfgfhQpSd7RiCbV7JDN1d6X3XUujNO6k0j0v1BYL7IcfavVhiMreZjhKUFDqEElQCiVYx3AORkZNKvTqShdRyEwrW4KySy+3NdYlMux5LLnhusPIKHG1DulSSAUn2IBrh68ZxqP1FZhL3V0GOmZCWkoGfrWG4bqyuFg7ImXSGHI6XCfl7VsQ0alG5oUqlg5jxG38KSTu964vq/R5VZOpT+8b2n1CjyOC4yoyNxTx6iuBr9P1VDM4Y9jdp6mL7njdwQgYzWa6UpPAd100LYd5TCSNxGSc16z9mtHUoUvjVr5Oc1tVSeB0Z1O2cZUMV6rppdjm6r7nGXc25CspUCD6Vq3KLPkjPuMm6P+LJdU6s+aqSVij8xMR7Uhwm0dbdYahmiJpaHfLrLPAj2hl59Z9trYJ/SpqMnwhrMsXofp98Vek4rblu6cawVGVghuRZ3Ek/UHB/SpenJhVvjwyTomseulh2u6q6G6ukJx88mJZn3kgDzO1KsVF0pIIpyXKuJHvi8sFtlpbmW+fYZaThbMuKtkg+4IFBlCS4J+ou6Jm6YfFJYb8pH2a6Mrc4yhSsFQ9CPOoKpKHJNbZ8MszpbU8HVEESoLgdSOHWc5Ug/9D5H/OrsJqaugbVuQlgSQx/u7o8WI6MAH+1TGENysioDwcjklhXzNLT3SfMf67inyIBNQ6PdRIN1safAuDJLjsRvgLHcqb9vak8objgXWe9R9YW8KX+7mIwHm+x+o9xQW74ZP3CSIiLetHXTT14fZivREl6O/IUEICfI7jwBk/kaHKrCnG9WSil3bsvzJKEpO0VcgiNPi2O8KZeUqO4lS2VyFjKG1pO0oIHOcg4Hn5V8/wD2j+1/V56irp9F/KVNuLtZzdm1e7WMeOH3O50HSNOqcalX420n7ceDxrVUuUww+ZqnW3krKPGdUBE2jBSokfINx7nAzkehPl+q9bUztqZSk42WXuum74u3f5L5nTQpwgmoRSQhXqQOyGA1OEfxY6XVIcWFBKwobdqsnnYMk8Y96GtLti90L2dseLd8LvxyWkvYWRtjakMPyAtbym0trbWoFtKmyRwTnnBTgnjI9KBO7vOKwr397Pyvxv3LVNRZtdJkiDbFG3xn3XvlUClslSAOCoeqcEDPkaajTjUqfzZJL5/l8/7B1sv8Q72d19/c68ghT4G9SiMIV2PAPr6d6pV4wj8MXx+f7/IE0rWHNUhcZe1vdggbsDBwewH40CjOdOca1OTjJO6aeU13vyBnSjUjaSwM8y2JufypV4EpIKQFn5c7jn6c+ZNendO+3/VdG0tW/Wp+/wB7jHxd/qmc7quhaepd0vhf5fgDtisj7uoWIjrStylkgkcKSOSR7cV7n0nq2l6xSVXST3cXXeLtezX5X4fZs43VaeppXtqK3j3CzVatYQrnHjomt2mA/wD8d0KO9LY8gkdz5CtuVKpuu5WM5TvhIdIeqPDjot1uBgWtoDxnicuvH1Uff0q7TSitsVZEZ+4zajvhngJSC3Ea4bb9fc0VzA7Rw0DY2LcwvUN0whtGSwlQ7++P7VK41vI0aw1xcNWSlNMb41sbV/w0H+P/ABHz+lEj7EZAE/qmAi6OQGy5cbg0NzkKCRubT6uuH5GU+XJKldkoVzh3VW7ZDLIe7FNy1xdrhYnbKQzAszmfEt0Hchl4Hv4yshT2fPeQD224qzCK5kDk2wdl680705s8VzU2oYmnLI0AiNEQCgLA42sRmklSgO3yoCR5qFWHJQWcEPcU6d/aAdJrI42wx
a9U3FtJH+8ohstp+oS49u/PFZ9XW0abs2TQm+KXqH0t+InpENb6YlKVqywzI0R8TIZjTTHeKkht0nKXUApylSVr24I+XOKx+ozoarSynB3cf79h2mmmU9RfPu51PPGea4uNFye5die62CStL9T2G1NM9uwANblKvC21qxNTaJp07qRqShCgoYqU6cZl2nWaCtd2bcjjBBzVWWhjNcF1aiwO3G8tsKIBGaDDo1Ldfah5at25B24aoSlOQsEjyrSWkUEowwU3XvljErXhS9grwAe1XqS9JFWdTezZvqQBJQjxOMgd6lLUWdgfJ8//AHrVKIQ9PfuM60tJ1InfZEvhUlJUUpUkAnaojnBIAOOcE1ODSd2Sjl2LFo/aI6409A+6dGad09pGwtgJZgw2XMhOP51IWgKPqQkUeVdvCWCV0uBPF/aD64W6VXSw2S55OT+8ltn35Dx/tQ/UY+/2JN0R+0Q06t9KL9pa72BWR/vlpnJmJ/FtQaUkD1ClH2qSqIkpplidLdftEdcIgt0XVVq1WHR/+FXxKVPkenhSE5P/AJc1O9yYE62+CTpvqhxcm1R7h08vJJ2ybMd7AV5FUZw4I9kKb+tRcYyGcUyOX7Z15+EmaL7ERH6haUiqw5NtwWvLXmmQwR4rY8icEAjIWeDVf0nB7oivNK3JdroT1v0r8Q+h2tQaakFIBS3NtzywqRb3/NC8dx3wscKAzxyBZ+Qk75JRitKVuhPDAWMp9D6Ef64pl4HGm6WdxKvGjkpebPB7EKHcH0P/AM0z8jgfedJquWL/AGIoj3BCtkqKOEuH39CfX1ocvjV0OsEM9X+qUq4WxliClpP2IKZltvAJdcWFduTkpSDjgZznvxXjH2p6zT6jUj02m3tX3nb+pN4v7fT5nZ9J0To/9xLnt8rABMvTcRmG7JktSIL6g482FBLclGUkJKclTaiQEpV/DtA8ySPM4UJVJSUU1NcPLaee+FJWy1zf8DrI4SQrmCKq53iLEkTRcCthDrS4iQplKkqDiCoOBLrY+XBODnByQDkUdyp0pzS22lbLy1az4bi39VzhYJxbaEBiKubURlaB4zzkmKyI8ZxDjzzZUlKxuJCVAghSc8EAHI+YH3qk5STwlFu7TSUrXWMteHbPOHgOn3HSzTHXWoioe1wKhplxAE5cmBCjkE+RIAOPJSkp7GqdeEU5Kp/5OL8Ruv0XF+6TfJYVrBHHnCUy24w+laCgTFOAkbEqU4EN88qJUlJ48h6kVlyp7G1Jf/j+FrvwsX+oW/kIoFwCLWtBQFPsNI3FKjgKUoYyMDtxkH196zKlNure+G3+Fv7k+JDi1PUmU60QQhRQkEZG9XBJA9Mc4z71UdK8FL5/RCsmjULalBIWAQvcrH8oweR7kY5p7OHH7/fYi8C60TnLZc4Vx8JMh1pXLaycHPdOR5Hit7onW9R0LVrUad4TV43xJLs/zz25MnW6KlrabhPD7Puhq1Fdpl6u7r0pe95w8AH5Qnyx7V9RdM6vR6vpIayk/vLK8S7r6f7POK+llpqjpNcfmvJxLaWI5K1YQOTg9v8A3NbsJ2RRkrs1s9uN8lqdfJZtzHzOue3klPqo/p3oyeSDQr1PffvRola27faIqe61hCEJA7knj8TRo/ECeMkQXy7aj6iSU2jTLUjTOkiMSdSOo2SpqfNEJtQylKv/AB1j1KQeMycty2wwvINxb5CK1WCz6HsiYNujIjRm8kNNkqUtXmtajypR81E5NHpUlFbYqyIt2ENsuVsMkLkQ5MxSTkJddS2j32oAJ/Ekmru1LuBbbfBK9kb0ZrYpY1DoW0TGRtaDk22oc+UH+tSMgD6moOnEle/KArr5+z40VrTSki99MYCNMaqjIU83bo7hVCuKQOWwk/wL9FJxzwU85GfqNLGrFpLInG2Ynz+gR5ENDsZ7xGVIWQ4wokYWkkfMn1ByOe3NcFXlKLcGFWUhsvZO7Hei0AUxTp9am321qPYjmnqNKSBxbJw01qVuP
GRlzGAPOtWhaVgqnYJh1BYaQEh0Z9c1tRpxsE9Rgze9cJeWoocz6c0RqKRHc2Bd11g6yhaw53HrWVWmoZE3cA5OuZrkzg/L7VSlOcle9ge6wrZ1MojxCtQV35rLqRqSldssxkiCa7gomeXvSEe5pCJl+G34Y798RmpFxYk2LYbJGyZd3ncpTjGUNoyC4v5k8AgDIyRkAmhT3ZfBJK59GtCfszuhthtjLd6ZvGrJagC5Nm3JyKhR/wCVtjZgfVSj71Y9KPgJtXA63z9mJ0BuySIkO9WR0jKHIN+Uop9CA8lfI+tR2R8ErLsGujfhOuHTi2tQtO9UNRXK2M8Jt2rWGLkyE4/hQ8gNPNAeQCikf00nHGCSuFLmhb9ZwHUtocWnuqC4Vfkk4Vjg+RqGUh7XI0ndHbQdXo1po55rQnUFALblzhM5hXRv+aPcIgIS8hXmtOx1J2qCipCcJNPKGaJg0lrI3wps99hixaibQXUxvF8Rt1IICnozmB4zWSArgKQSAtKSU5TYgf1J1109CLivGUqew4ph8IT8qlJOPmB5B8s0Ob8ciIa1P8Q0lsXM2ofY3JrBZC46slJPG9ORwr/5HNcj1jqi0mnk4S2yldL2fn6efc09JQ9eola6WfoV71ze7zqOZJnvuKcfcGHnkpSkuqxtyQkAEnAye5PJ5Oa8qVdams62oac5W7W/Tv393zm529OmqUVCGEh2ul3F3Q2qLFuc2MLdm3rYbYbUksEbipsDcpCcZO4DgLwoYJGPQoqg2qrinu+K7k/vcZ4u+MeVjsXd9+B6kwrxdA1KdaavcicpFztk92GylUhTSEFyOtSXRsOMfKrJUCDg44owlQpRcY3hGPwTSlKy3N2krxz34tZrlFhPNjazYuaYkiAVNxXZwegtR3HluW91R2ux1NbitbZKXAFclJCu2QKhX/kuUKuZKNpNqKU0sqSlaykk1jhq3OWWIS73FtskxHHLO+phCDKdlECG+pYW5gJWWm0kFC1fO4CjAwVYAIG0FaFRKpBO+1R5SWO12+UsRe67vbLTd7Ckuw8wZ0mPGZivSXokqOyDJDqlpDZTuyyoEAIWojcM898dzVCpThKTnGKabxa2b2+L3SWPwCKXcf404llHjoU1HSXJIUUBQajJUs4UvncUDuk5yUZ57nNnTTk9uXhfOTS4XZS7PGH2CKVh1bnlMcOPIDYBVIOWztbCuEpzngHsfPgVTdO7tF37fO3L/v8AiS3HVp0tpZUrKkJYytaAMDco4/uO3nQ3G9/n+hFyuLUOrcRtbbU43sRyjBJ47kd/p60BxSd5PyDdu5znKbjMokrbCFISAtRONqTjHH14zXpn2G6rHS6mpo60/hnmPjcuc9m1+L97HLdZ07qQVaK45+Q3ILt9Upe77PCR3cVwPr/7V7tT1CqfdZxzp2HNT6pMZuFESpENvsnzUfNR+tacJXVyrJZGu7CIdjcpLTyWVApS6NzaFjsdvZSh5Zzjv35q0m547A7JHEXCL9iel+IkNg4XLkq+UH0A7qPtyT6VcguLAJPyDq7kq7OK+wRCtvsqXL+UY9ucJ/U/SrqpqStLN/H+iu5NO6C3TlpfTgoutvirXyS2o5J9yBz+dHysJEOSUNM6eviAy/HvkR5IIwFKXg44waDJt8hErElQZb0OREiyC006sKcQpCtyVnAGM9/P68edBbzyEwfMP4x7Hb7H8QeqmbXbkwI58B97wiVNqedZQ6tX/KSpZyD51wvVoqOqdlyk/qRXgrvchvXg1WpOyA1LmRXwyR2GKU47gabRpctYPwGlBpeDj1q7pYTjhMTYwJ17PH8bpIPvW5eY9zuxrV58/M4e/rTyqSS4HjI6yL4uUNpVnNZEoyk7yCbjiyPnBPfzqEuCI5NkFs5H51VfIeJE4rsCqZSEZTiJG6b9ddR9J9M3q26XkLtdwubqFqujThDrCUgAhsYwkq5BV37YxgGiRm4rBNNJDjD+Kfq1CcCxry7yCP8A94tMjP13p
Oab1JDb2Hukvj46oafebTNNo1A0TgtS4KWlL9tzRQakqjZJTb7FnOnX7QvTk1tkau0hqLS61fKqbb2hPip/5iPkcSn6BZ+tFU0widy0XTbrfYepUVD+jNa27UDeATHZkAvN5HZbK8OJP1SPrU+Rw4nzWbyjZdYe2RgD7ZGAS8kj1B4V+J/KoOKZK5H3VGBBZ0hIRqFp6fp9C0ut3S2lSHoL2CEOpUPnjujOArlJBKSSkkEMns54JYeSpupZ7yVOOG8LvD+SPvN1CWnZQzwpxAJTvIxnBwTnFZWo1Lpq0sryufw/sSjTT4eSPbff3JUmZ+6+0JyGylK8EHPPB7e/avNetVXqasbysl7P93Ok6XD090rHN2W1AnxJ8a6TbeW3CouI/gCwP4RwRjgDNY6hKpTlSnTUrr8jbdlJS3NfoJ5NzbIvG1XgLQ2n7OxLGXWxySUKzwOw4z5ZwKJClL+XfN3lrh/Nf5DeraTz2x/oW6YlMR9HLWVvMamdcR92MuxQ8ZSgofu0PZHh53KJAPYjcPQGrhKWrSsnRSe9p22q3LjbPa118n5NSq7aScvvPi4bp1pGZZi3e/GJEfu8vcbowksyrc60gIJUjaE5UU5PfKiQTu5rB/gJycqGlu1Tj915jNN3w73xf2suMYLbqxgk3hP8hVB1CzdmkLjlyGzdn1RX4kRsrfEpr5w+hByQlaULUeARjuNxNBqaWVFtT+J01dN4W2WNra7ptJc/WwaNVNprCf43HO038alERq3rjMzLmwmW3bkTS2kvtk5ddCknbkqaAAyMhWdpANVK2m/hd0qybjB7XLbf4X2jZ54ld82ta/AWNTclbvlII7XqVi8KjSHQuXAlvgruIi43yCdoZBHCwdudh80kAnucutpZ0VKEcSivu3/p53e3PK7O9lwFhUi7NPD/AHYdHLkp+4JgRA0p+WlLqleIW0R0ZUks85V83PGAMZJxjFVFSUafqzvaOOLtvD3eMeb+2Qm/4tqz3HZF2Zfiu4Q2Xd2SwlzJbQng4wcEZzxxkk4zVJ0ZRkvHny3/AK7+FklvTQ+WxG8ocQlgIWsAjcQUq/vz7duKz6rth3x+/l/kd4wbuqjohvtyW1KY2qykK4UEHdgkdjx3881Z0NX0NXSrNX2yTtw/oUNVF1KMo3tg9tVtuF9YRKfaTb4uNzbK+No9duf1Vg+1fS/T/VrRU3HauyfP7+f4HntVxi7cs53q/wAKyQcIWdpBxg4W8fY+Sffz8vWuqpJJJv8Af+ijL2I8kT37y4txW1lhPG5R2pSPQDy+nc1owTk7RK0rI3i2R+5qaCnfBjtg7HZXCQD32N9zn1496uqcKaty/YrNOWXhBrYtP2xDjZkBcsp4D0x8NoH+FGQf0p/VrSwlZDbIL3CSTCnIQ2iywbe23jlSFx0E9/61f6zUJRqy7kk4rsFGmtIardkMvz3PsUJCQsqjy0rKs/4f/imUKi5Y94gxr/qLEsa5sVVxU4koLSpLiiAgHOSFZ4IHp5/pFz2/eZF2sV56n/E1pKNp+dZLJa0X+XKQpD01/lGSOSVHJUfz+tY+r6jRUXTgtzf4f7/eQbd+Cm8p5KVknyrAhF2sBl7DK9dEoUrn9avRpNggfuU7x3SAcgVpUqe1XJdxAtJIA71ZTGszrDbO/wBMVGbwSSY8sHaRVCWSfsObKgpNVJInbAtaUAkc8GgtZDRRFddaVGeU457TDGAcc96cctl8KPwOP9ZIMTVuuLo5pfQzqiY6WsCbcgnglrcCG288eIQrODtSe9WadFvMiSXkt3a+rPwrfDYHLZpeLp03aMfCclBIlPBQ773iFuLOfRQGfIUT4FgJwLB+0P6ZzlFDmpkx2U922YDiGgPf92T+ZpboMW4eLP1N6A9UJzElu5aMnXZRGxclcJuTn2LiEuZ+hBpWTHuTnboERMVKIT5CEj5E+IpSSPLBUVf3NM0OazoTimXULjtymnEFDjKkjC0nuFJOQag0xylPxCdNbXCuL
7mk7g/apoyp2zSmSfDB/oCiCtv05Ch/UsViavQ0q3xK6fs7flx+QVVJRKwIupslwfbujxZcVjetgqTjGRtwrB/HFcVqNHJPZTXHlfmbGl1EIX3u1/ARQ5dxbQBBvNvltqSoCNcUFKmyU5wFJUFcnOMp5/GsWcKLf82lJPGY9/o1b8zajKorenUT9n/rP5Cl+4B6BJfZjIvSWFsvPeOsb21JGSjkcg7Tg8/U8ihxpOM4xlL073SssO/f8/8AgI6t4SaW61r8f4O1zutyhWDwJd6Z0vAiy2pcW0PtB10LXjetshQSggckYPPHy+cKNGjUr7oUnVlJOLmnZWXCeG3fs8fXs9SrOELykqaw7Ozf0yvwz9Btm3SzPW7WFvdkQLtNHgXC3XlbmJSFoIygbMIUnjOABkn3GLVOlqIz01VRlCOYyhb4bPvnKf6L5ZjKdOXqQc1Jq0k+/wCWH9AjtN0VKjXG7Q4LdxvNhmR7hMvsCUlstQVApdYQ2VELKmyvjvwFA5wKy61HZKGnqT206sZRjCSbvNZUm0la0rfmmrZLzrXk6kfilFptrw+Va7za/wCp31VrHTi+nhAan2habwq42ZgxVMPXOEsjed38WxRW5kpVjDSAocggej0WsWv5jP4FGbvdQmuMcXSStdf1StxkVbV0vRUsqzullOUf1s7u/wAkSTA+1SY3/ddjVEVdmkvaejOykiLCW222kuOhspGCVbigbyQspBByRy1TZCX8+rf03aq7PdJNt2je/CVr4SavZqyNdSnttCNm8xXZfO36ZHNF0ZSXGXrqy+Jbqvv+e/HShthxKkBDaVoIRlRRjlWUhGMneaqujLEo02tq/lxTu2mnd2avi98KzbvZbSXqR+65Kz+8+M+PH5hXAvLqENPRJFoXa1qXHYXDC1SHkf1BAG08bux+UY/iJzWPUoRbcakZ71Zu9kk/F73/AMvwsB/UbtJW2vGP3b99x8ZvMZxxJbuhUl0IBCGgSE+2D6geXNZ0tPO1nTyr9/32JerB/dkjjfNYN6VtK5a9sx9biWmmWwofaCo5IyOSUjKvbn2o2k0j1GoikrKOXftb/PBV1NVQpP3wIXuo0udFBdQ1Hax8lva3bU+7hJyfp/avdela6ervKVlFYUV+rff5HF1aahjuMKUzL7KL7ylrBGd5HJHsPIe/avQdNTnVd3x5MmpOMFYdEww1tBH8AyEgZ2/h/nW9FRpqxntuTuOkWxzJOHHdsRo8+LJJzj2T3P6VF1orER1DyP501EhQQ5sXKkODhb/CR7hA4/PNZ9fUzStEs06UeWJfsyY6Ut4BUrsmqTm7ZYbar4Dn/bZzQ2n0piPpZCEFTqnOUAY54rRpzdOKRWmtzPnj1n6yu9Q9SPQbW6W7LHWUBaTy+rzUfb+9YmurOXw9im53fsAik7WvoK5+92RuCN7mFnfzzW1p4bivJ2BCVcSSRnFbUKRBXZpGy4cknmpSwEQq8POO9BuTFLDJQN2aHKVydhWkkD/OgMbg6NSFNk4qDjcdMXCSohI9KDsQaLsiOq6Ur8mYpCPdp9D9aQ1zo2yXDjHFPFXYzlYsbr/4iFah6W2ay2KTcLddHUGJcWVZS3GjNpSG22Fg42ryrgAFISR/NmrcqkpY7C3RUbrk1+Hj4K9UdeG27vIlM6W0eFlKrrMRuW/g4UGG8jcB23EhOeBkgioKi3keGVdl1LT0e+GT4Z9PKnXO0WjUElpaYzl41htmIU/gkIQysFvPBOEN8Acnii+nGCuwya4QnPxedLUxlxoOodIWWGQU/ZbZYg2jb6EJi4NDUorgfdflhDpr4nem0wIZVr+xlIGEoVchGwfLCXAgCpbkxJrklfTPUXT+o2R9z6gh3Hb2EO4syQPwQtWPypx7oXap0zZ9dWowb9amLpEUCUqU3hbKj/MhQ5QfdJGfPNAqUoVY7Zq5NNrgo98Q3wmah0cl7UOl35N7scZXiqaQXHpERAySVtYUVIA7qQMDzCRzXNarp0oNu
C3ReLdy3Tq5TXKK7Qpsa+sNi42iBcStJysEIeIHykJynueDwRXO1Kc9O36NSUf085s/8mvCarperBP9f0CG73O4TGxNjttRihaWX4CQW33UJI+VSUkj5h2xkc+XNZlCjRpv05tu+VLlJvum/D/bLVWVSdpwVrcru14+osiTo5uyTDkuoiT2PsMiRe4avCgK252pUsAIPITlJKeU5IwKDOnP0n6kVug9yUJK8890ufOVfmyYZTSqKcLpPD3LEfx/tjyazL/qKfcbXqY/c97ct8gWhgowFSFhKUIIUTtScFIGD8uD2FPT02kp06mj+OCmt79k228Wu+978/MBKtXbhqFtlte1e/bD/wAGj1tej/dsy+2BFtscKSLbd49oeW27NC8bElLYRvAUtB25JUc+pqUasZb6elrbqklug5pNRtzl7rYTV7WWCc1JbZV6aUIvbJRfN+MK11drHLCe2an1VYoTF4dvlvgwbU87p9q0XyMkSYkV5SSVOoABUohDRVuPzJT5gKrJq6TQ6ib08aUpSqJVHOD+GUo3wn2V3K1uG/NixGrqKaVWU0knttJZSdrN92/7D7pC0WO83O66csU+96rvcOQwzb7jEu/2VgwyEFxrJWlsJ+dxGEYyoJIUBnGfrq+qoUqes1UIUaclJyi4bnvzZ8OV8KWeFdWbLFCNKpKVKnKU5RtlSstvjm3tj8Qqt13gWFgJnwFt6JvgdEO1RUh5w7A0pSHCMhLZVtI52jOCrgA5FWhV1Er0p31FK26Tws7ldcXdrri/dLxeUo00ozX8uX3Us+Of3b3CV3VzOnU264XS1twJ86EluIm2pcCcJcUBvUlASVZJHP8ALt5AwayloparfSoVN0ISbe63dLhNtpf3v7lt140bSmrSaxb/AIFV110zZYsm5SFwLc6tsuOxWFBS3853OOLSAFKPOSefcnFAo9PlXlGjDdJJ2TfC4sknwvH6DSqOCc2lG/K8/OwJ2TVN+6iOOMWgLasrSwoqIU22FYIGVLwBgE8E55zjtXZaX7N1Zv8Al091Tu8WXzfBg6jqEGtqdor8Q8tGmoduAM2X9oWO6Y43D/1HA/LNerdH6DDQ016zTl+RzWo1nqP4OA+ter7TbGfBjW99sEYUtrClq+pIrr1GK7mZuYshWXTt1fMn72uEJ3OQ1KQlKEk+YSEgH6kk0N0oy/qHUrCiZ07vbjjcu2eBqaIk7iGJSGVjHkUrP9jQ/wCFm/6gnqK/Ati2/UV0WpiTpidaShJIXLW2GzjyCgT38j296rT0dTmLTCxrx4asMotkli4rRNYcjvpP/DcTg+x9x7jis3bONTbNWsWm043iVl+M/rQbPDb0ZaJOyXJTmY42r5m2+xTkdie351d7XMyvO3wL6lWNPjYlPFYOpyU7j9KfAYVkgcVnQj8Q265HmopOFK5rptLAg0wXWdy81rLCJLgd7agKbqnVeSURyaZ3EVVcrBl4FLjWxvnzoSd2JpWOOeKkQNm0lRFJuw6QtabOcmgNh0AIBUa6IqvAsjxSupxjfkDJsd4dm8TA25GaKoohkfomkQpAUEURRtwKwvRpZQChgjI49qi0uSSTLm9J9bLvWiLTb2tQw7LGgx0xG4Dr6WlJUlOFEAkZPGQfQ0Z11HhFiELrLK69Z9RS+o2oGmkurNntm5mC0TwewW6R6qx39MCqVSspu7H2vhAxYunk68zGIcKG9MlvrCGo7CCtbivQAcmhqSbshvTZa7pn+zM1hqmOxL1NeIWlI7nP2VLKpksD3SlSW0H6qJ9RRlTk+cE1BLksNoD9nhoDpRcDd3tSXudMW14WS1HYV3ydgSgqyfrT+ku7CRSjwiS4HQiIwtDls1XqazMD/wCm5cWnQoeXyqaIH60nS8SYRNd0Ol36MxLjB2HWN8akAcSGJDKVg+uA0BSlTclbcxrrwU966fANeJ803PS+poNxnqX4ngXiN9kU4rIOfHZCkk9/40Jz/VWRLpuX8bd/Zf2sv0DqpFNNq
1hNZ/2d98lwGrhL11a4GoeXEqatTslpvKcFIWp1KiO/O0d+1Vl0Rbdkp/D3Xn6/6Zbetbe5R+Lz+/8AJC3WboVr3pa2P9qrkTZ1rAVLtzIXCkLKjgFQSVNFQx8qsZyeeOcGp06Wim3HT38Svey/K9vdfQveutRGzqteVbn9+xHH2iPLu8oxtIpzcI//AHc03IQFxlju6sADbxgjHzcAcZ4obZwpR36n7j+J2dmvCznx475JtJ1Hto/eXw54a7+36jtYNP2e/QHFIcuF61DIZUFJdfLamJoJC1q3kDA4G5IUcJwMk1S1Op1Gmmk1GFJPsr3g+ErefDtl3dkg9DTUqsL5lN+XxLznx5z4Q6G8SbBd4Fxul6tt4n3mG5Z7qblGKxb8Jw2pQGAnlS0pz35JBFVPRhqaU6NClKEaclOG123+V3b4TduOE7hnKdGpGdSabmtsrrEfH9+RXDdRJtibPCmwo12szTj9tnsNONp1CgIytgKyNuRlHG4FeFD5DQZxcaj1FSMnCo0pRbT9F3xK3e3ObNRuvvIKswVKD+KH3Xa29Wyvrw8vOeAzVrOVo8RtSmJb9L2qVi0S7HFZWZEZ3DhU4AMY/pUAAQSggKBKhhLQw1u7R7pVZx+OM21aSxZd/mub5Tawnadd0Eq0koxlhxSynn/h48PJ5M1/dY9lNg01aZ+p4cyO4G3VR3FmP82GsuK/jKQePlBBT3OcizpeirWaj+IqtQnFq6TWfOFxf5tNPjAOtro0aeyn8Saf08Xf7eDtozotqbUb8d/VDkGOhnGGZEle4+fzNt53H/EpIFd3S6FXk2qFoJ93l/S2V+8mFU125JVXe3ZE1RdFsQURo339DjJ4ShKYZCEeoACziuu0vTqtGCh6i+kbf3ZlTrxk72/MI09K96ylrUkd2QAFbHI5HB5BA39q0VQlHlgHJPsIV9PpDUkNvahtzYPGAshf/pVgfrVhU1FXlj54IN3eMjgemz0ZO56ddXEDBKoraHEY/wDIomrCjFJNcA3fgJNIWC4291D2mrsi5Ka/4kJxSmXj9AokE+2RRsJEck6WyeLhaWJbqPD+UGQy5wtryJx54PB9Ac0B4CcjT1H0sZelbhMht+LOhx3ZEcDGVFKCopH1xj0zg+uQVaSqxs+ScJODwfDLUWqZuutVXC+zllUic6XsZzsSeUpH0BArIrYwVL3k78j/AGdPhpHljzrnq7uyVje6yFJaPlTUYpsg42yR/eXyte30ro6EUkQ9hqFXCQ92r+AfWqFbklCw9RUZUO1UJsOvJ2ltEo7VCDyO+BIGjkelG3A7HVkfPioMksDzEZBSCR3qjOQeJGLXcV1pQkENnhF8jgY96sR4KuW8BzZ7OCEgYyfSiBLWwSJYdKuPNJJb71O6Q6TY+q0QsgZawD5iqVSTLEYHn+wSgoFVZlaptRahC4rZ0K2k57/UVkOrNvktqmkiw/wwTNNdN3Z95ucUP3dw/Z4pxktp88emc8n0ro+nJSpObeblStiSSLHay+Jex6SYDQW7NnFoOBprKEJ9B6n8TV51qcW03wRUW0mQhJ6s9WtfTVvWOPHhR3M7HZLZUMeuVHKh9E4oPrTm/hiK1u4WaE/2yv8AI8K7dQShDDXiSnIMFhLbaR3+daD+iaLGnVk/ilb2sLdFLgl5Gv4FgsbTsd4LaDRWmVcFYJQkcurxgJT/AKxVvZblkNyKuah/aEaKvGpXIDT0m8tMEtLlswSmOvnko+cEp9Dg5rO1Grp0Fd8CUle1x80Z8Sln6gPrh6R0zq26zm14xabcpxtB9FL37U8c5URxQYa6FVXhFv6E1ngLepJ19crRBtLNphRlXXLElOoVtuJQwSErHgJVlxRB4BITwSTQatWtUlGEYYfN+30DxUY5bA7rh8Kz/VTp7pG69Mo9rtMawMyIrTAiLjz5aQvwdjbpKR4ZKCvLmSThQPJJr6/Tb6DVOCft8n+YWlVlCe6Us/5RVmf0C6sWqTcDfdGTro3Y4BH2a
IksLDQUCHkKQf3pTgnKMkAFQGc1yb0jUpUqMXCTe5trcu91ns/+cGjDUysnV+JJWSTt9RP0/wBA9RdRaPmTYVtk6iL7raS0pbJy2kHASk7VK9FYB3YFVf4Klra3/bQsoX4ve7+v4eLlmnrZ0qT9V3k/NrWQddAvhuufVSFFgXPVUmxogrK4zcaOlxyOR2CApQ2DsnndgfLwOKsaLT09frKn8u0WrNv+p+67+W8ZzyV5V6lKkot8ce3y/T5ErdNeh1it2kNTac6lT0ydQt3Vb0OfJbW4poJACF71A5DgKtyc9iPQVKn0vSydSNR+jNWjjCsvFlZp/j35ITr1ZRj/AFLn/n3HyL0clgrGnHYT0ZHZLExJOPccfqK6XQ9H09FbqS573v8AmUqmpk8M7QdPXTRkxD15dlwEKGwPJabW0SfLcQpP5100KMI4KDm3lj7qfQK7/pt242h77fIYHiKYS2hDu3zKdgAV9MZ96tRiouwKXxo46Thsak0kDNZEyTb074z3I3t/zIUB6d/wrivtpDWrpNWvoK0qc6fxNxdrx7r8M/Q2+iSovVRp14KSljObPseRoPhSktojNpG7O9KAAfYHzr5L1Gtr6lOderKT95N/qz12nRo04/y4pfJII2mwhaS04WVp/mbUUkH8KL0v7RdU6NPfo68orxzF/OLwUNXodNq1arBP37/iJJ+qn2ng4ooF1ZUPDmpSEucHkLxwsfWvp7pP2ofUtDDVSgk5Lt5WGebajpyo1XTTHpjqLeJd3jzVJDnhpIVGbGAsHG788dq3aPUpVaik1aP7yVZ6dRi13OfUb4qNIdKtDaldnXVkvMRXEW6CVhUh9xbWW2koPJIKtp44ABNbqqR5M94wfMvpb8LOtNZwo816Km0QVJSUuS8hahjvt/61i1U5/dIxpyeXgnS1fB4zGZT9svj6l9iGWQB/nVL+A9R5kE2JCq5/B3p6Qzj/AGkuEZWO7jSSP/41cp9PhHKkDcbkS63+CnUUTxJGnbpFv7aezP8Aw3cfqKvLTuKtF3K0qcuxXvV2jLxom6fYL3bZFsl43BuQjbuHqk9iPcU1pRxIjlYZztf/AAx9ap1uSUAktrO9Q+lZVWVizEcJUYbKrwmSYiMfBHHFG3ETj4Xhujng0S90LuO0Vz5B7VTmg6ZGDY+YV1rM6QS2CWlCgD/ajKWCusMmHQkFuW+2tXIyO9HRPlkzW6O202kJwAPSoy4DxQ/ww2sYOKpTZYihHclNoJxjGaw9VIu01fg0gqQ4Qc1mRlcsMci6lsfIognyHmavU5yWIPkFJJ5ZNmhtK2DTaIs6+oTedSvJDgZkKyxESf4N39SuR7Dk9q6Wjp9tt+ZFOU78YQz9XOuotFtlswFoagtJKF+EAj7Ss8YOP5eOB6ZNXJNUYtgL7sFb9O9a33BOjP3F0uPJIW34mA4nOQn0ArMWonRk58p8hElUW3uiKeuPxEah1lGkacacft1lO1LzJcyt8DslRH8nntHHrVmVd1FgrTe12QA9JoLN21RBiSFqbjyJbTLi090oUsAkfgTWLq4qUoQfDYoc5Prvq2+jplo/T+ndKH7msbQVHbZhnaE7SPPPJPJUrkqJJJNdKqKjBxjhIO5WaA/R1/eveoHH31reUnA3uK3KOSAOT+NElT2kYyvctq2pKYzDSQAgIGQOwAHAoLSYUANY6ljybhshOuhUELMhxs7eRg4Sc9wR39TUHTSe7wPuxYgn7/kQbzJuDRCX3XS+pRAyc9wfaq1LT7ZubWWPKpdWBC5Sha7zKuMIiNIkOqdJZO3lZJUB9STUY6ZU6r2q13ck6jlHJJQvTfUTTiJ6gk3WOA1JGMF1OOFfX/oasVaKnysjRnZEfTEOWyc282pTK0nLchv5VoP19Kqxoem90R3O+GH+m+pLkiOu2XxpmQy6nwypxAKHR6LHatSnU3YkAatwKnbc/oSQ3f8ATSlP2dPMmDu3LYGe6fVH6ijNvh8EcXugottktLkl+9QCh
i23JsvLQk4S25t+f8FZzj1zVXUxp1qFSlW+600/k07/AJBqTlGcZQ5TRHcRUtUlClbUJQAAj+UcetfCVRU9rR7fnLfPgc4cNx5RWwlz5j82e31q9oela7qs/S0lFz+Swvm+F9WVK+ro0I3rSSBfqHp/UyZsWRaLO9Mikj7VJZUlfhJ9SkHdj3xxXrn2c6B1rpdOUdVS2wbT5i//AOW/qcb1DW6evNOjK7+TX6jfqG93qFaTbtNxhN1HLb8Nkr4bZBHK1nyA/M16lpcvc1hGFXbS2x5BDpX8JVk0VcHL9qJSdR6tkuF96ZISClCzydo8v71qupKbKUKSp+7J0iW1ClobSjxdowEtjIHtiipkuchPAsDC0/72kxk443ECjRuyDseRNNyX5S/sTzciMD8wQEu8e6TzVuMXbDAtj/J6cWVEFuW9bWUuKICyyNuD6j0oxCyIy+JPoLbOqvSC7W1UFcu4xmFP29ZQC826E5G1XvjHv5021NWZCot0fdcHx5hxXob7seQ0tiQ0stuNOJ2qQoHCkkHsQcisSthgIBXZGwVYPORWJqHgtRHpyIVp7c1QU7MnbBwXAykjBzRFUI2Ga4slgDg8Gr1KSkROcV8g1OcScboj9P8AEK6N8FJjjbntjwPp5U6YF4ZLmhb14OwFWMc0WMgkUS3B1ElaB84z6UmwqyO8TUIQoFSgB71Xkrh07Dfd9TIU+AlYKayNTRc1gswmkbsagS00DvAHesv0JRLHqJizTWqEztSRWd4O3KwCf6Rn+9avT6d66uuAFeXw2JCuWoZMoqSlalLcVg88k9hXYwilJtmZKV1ggb4p9Qv6Ve07ai+C+/GcmOtJ/kBVsRn/ANK/zrN1Mt0kkTacYore7qSQp0OIUUrByFA4xVaytkHd8jfJnPTHS46srWfMnmkopLAnd5YZ9No7r75Syla3lr2oCP4iryx71idRbukgkFnBe97qTeL9pKxQ7olH26GlPjLbOQpW0Amuw0NX1qSdT71skKqlAKdB3RyHdPCUSkrWhQ/A5qzNpxTHgmmXOiXLw7P9qcOUNshaj6gJzVRJIsMA7ZanYlvn3BKUqeeUX3mlHcQhRJOB7Zz+FSlxYaKu7kJ9RAi0zW5UdKhEez3Odqv5k/rkexo1OO5Aqj2O5Gl0uJkNqQ2s/wBSeaM4XVwO4Jun2rE2KVIaeWUfaEDYVcAKHr9e1DnTurxCwnbDCm9OR7pCblMgBt4kEf0rHcVVcch28Aw3IAy2s8p7H2pttsEb9yQLvqBvQbViuUdfg2u5xNxDzmUeMnCXGxnvnIUB7n0rkeq9dn0bWUoV47qNRcpfFFp5b8xyr91+RtaXQR1lCcoO04v6NPj5PDEqNYBuZMskUeBbXdsxLRIwlKhkoz7kHj0Iryv7VfbOprqE9N0x2pS+Fys9z8pLsuVfl+x0/S+kRoyVTU/fWbdvb5sILFp9EiOhyQypCVq8RDG7kn39qzfsn9k1rdvUOoK9P+mP/l7v28Lv8uTdU6k6b9GhyuX++47ybqzDWI8dsSJOcBKR8if+te8Uo06EFSpRSS7JWX4HFycpvdJ3Zka9SYEpDipjipLZBDTBCUo9if8ALmrEZWywTjfBIo0ZZ7za275GhohvykBbqow2nPO724INE/h6VVbkrfIHvlB2I71HYJFiu64jziXDgLQsnAUk9jiqsqEoPagqnuVxbpyyXKY4W4YKucKUCEoH496NCm7kZSwPl86T3i5Wh0MXdMWcoHYsN70oPuCeauenjkBfwU96m2D4iOjc2XPtl3RfoiSXPGhxRvSB/wAmc4+hNZ7jq6LvFqSAS3LuQ9qf9ov1dnaWj2Zx62xJ8d4qXc2ohDywONikE7Qc9zj8KnHVykrNZAepJrkAdOfHp1esGr7feJeoTeo0VwqXbJLaEMPpIwUq2pz9DnirPqyfJKM5LLZGmsdau9SNfX/VL8Ni3u3ia5MVEjZ8NoqP8IJ7/XzOaztTLc7jxy7i21ktFJrBr
ZuHVwrZbDjOe+ax5OzCMwMDB470aLXciDmo2dicgce1X9NK7E0MrCflq9LkT8AHXQlQ6NOlCs+lM/Yi0EFn1A7AcSUnHnTppgleId23XiEoHir/AFohPf5Pbl1RQ0goaKs+9QY/qeAe/wC0eWHtwWpQz51VlubJqpYfouunrhHAS7tUO4zzQnG4VVELdPayXYNQQrg6tammnB4qU5JLZ4VgeZwTRKSdOSkhSlfkuZC0tFk2q33aHcIsiEFB8TUvDwn2FHKXEqPAx5juPOtV13Fk1TTRRP4iNbsa+6u3y4w3kyLeypMKK6g5SptobQoH0Ktx/GqsnuYOo7uxGxORTAjOc+9MxE3/AA125Mm9PvqGSwytxOfI8Jz+prNqpSr/ACQejyWRiKTEcbLyVLaA+dKTzk1apVJUXdFmUFJZOOpOv+ndFymIjTq5l2huNpU2GyNiCQSFq7cDnj1qxU1cUrgXaLsWCT8WNltuhUrly2JDL2NiGXQVqHBASM85NWHrKMKbqTeBWdyKOkXxl3G4dULgi9KTGjzVkwkE/KhscBk+WeMg+ZJHpWVpuouvUe/CfA90vhJN60PxJcdi52pzdaphO5CTww76fQ+X4iumpNxd+wCqlJWIPRPcYkbc4GTjJ7e1aM7NbkZ8bp7QsbvkW8W9CblHLMhobW5rSf4sfyrHn9ay5zjTk9jNCCco/EiZulGjf9r9Oz4a2HIbz6N8UunALyRkEexB59jVZVN7ZY2WRHWorZcLDLInRHoqge60EBX0PY1ZVmrgGmnYM4mmbz1B6dWJ2ywmLrO0/c5D6oj/AHcaW1j5B/MoKHA+orhPtf0qr1TRKGni3NPs0nZ82vzwsc+Df6Rqo6aruqPDXdd/3cYOn0WPcby/ZzJizG/EMiW2ykpcZVu5bV/ThXy44PGK8K6d0mrruoU4VqcoqPN/C5/Hj6nZV9TCjQl6c07+Pf8AwSxdbstK/sMTmSvhSk8bR6CvoCM1BKMfl8jjHG/Iym6xoLyoMV0vTDw662Nys/0p9PrR1NRx3BtNnVEhuK2hBSlhXknO5R/96Mmmsg37Em9PtcRo8IWea4GSG1uJU8sYIKh59uMn6VoUZK1itUTvcBdY9RIOptRvvQEFbQAZbefd2NhKc9gOTkkn8qHUqKUsDxjgVadui4clt5qW466Dy20MII9CM5IqUW1wM0iY7RcTcIDcqKtZCgQpl052qHl/r1q4pJoE1YFdTx7veo7T8CMyl08LQt3KFp/KldjZPlT8eHTKRoHqSi4mzPWuPdUlal+EQy46O+1XYnHJHfjNU6tNblJFSatKyKs5+amGH6y84rPrkoPIXwE5KfSsSoWEE1vkZbUk8kGsqrHNyVxWcYHlQk2IY9QM+KycDnvxV7TO0hNgwn5RitcYA66EqmUhHRDhTUWiLRt9pX5Kp8kdqOa3FKPJpWJJJG7ajiotDNCpt9bStyVFJ9qHYb2O7l1kKRtKs/WkkLk4OX65Ltq7aZ8n7uWvxFQ/GV4JV3zszjP4UdN2sETaEHakIz/XFIRg/CkMS50ava9NyWpiUeKjlDjecbkHvj37flXP6ut6Nbcg1J7ck13fXcibHQq0MKDyiEIS8kErWo4SMA+uPPzrLqdSnUqxp0ljvcO6l/uj7pHoNorT1qfvetr+rUeqn3t7lrgJ3NJUokqKnDgEDtkeZ4FdHKnG15vI8acVl5Y/a86o6a0l0+n2fSuiLZZHJrRjO3AoS9JLahhQCin5c57is/VVYQp+nShzi7CN2Vyqst3xJC1JVt/gAIOCOd1Z8FtS+pRly/oTboDr61EZb0zqKVxIaCm3XFfKvnACj5K47+ddR0zWVJQaqZs7E5WeGH4jR5UkIaUp1ladyHB/EPb3xXQutZc4Aqnd5EvUHqEx0uuWhrkhluekNy/ttrUsJEhshGxZByAQc4z71yfU8bZ0cSbv8/matCajibLNaS6qonotc+wpzBtcVue8Cnl11zBdx7JbwB9Kuaepw
lxbPzGlltk2mXbb1PududaakRJUVUhtDiApJQ4gqB59DmtYBaxHvUrqrP6W6Ab+4LaZ99uDCW4yWmx4UccBTznIAxwAPMnnhPOB1nqD6dp3Uisu+ey93j8P3e9o6Ea1S03hfn7f5Ic0FfbhBDst+DHYuMnKpD+7KnVEklR98k1430vTdSWrlqKU7xlzv557cs6nV1aEoqCXHgJJF1krZUiPIEdTnDj20qV74r06FOq1a6RgSnFdhbarYbdbXHIqXZG84VJYSN30GavwobYgJVLjbJsT9xYU5bb5HTIOQWrm04yc/wCNIWP0oipSf3ZfiRdRd0Rb1Ms/VLSNkk3ZrSzF5tbCSt6dbbh9sDScd1NpQlYHvjFJaeq8yf4AZ1UleKKxXT4jtWLcBiyI0YDt4bIP981GTUHZMo+tKR0tfxP9Rrc6lxi+JbWnkH7Og/5VB6ia4H3vyGVj+N3q9FcJRqFlSO5QqG3hXtwKaWuq0+AO+beWfRjpd1fgX/oVbtVSpUZ6dAtKZl3aZUkJSUoKl4GeOAcA/St/T1fVpRm/BYk9q3HTq70+018TfQmfbG3GZ9uu8ATbVPThRac272nEn1zj9RVpbZq/Zg5rcsHwrnwXbZPkw5AAfjuqZcA8lJJSf1BqhJWbRWvfI8WJOSKzNQyUAzt7e1IJPesKqw8cjtCVsd78Gqc8okO2Mo5ql3HQ1XMBSceVXKWGOCMkeG6tNbUXdXGSAH/Wa6MqmUhGcetIRlIRlIRuiosizvnjvQ7DHijkYpxCdXeiomjzypDmUhjPpSESZob5ben1rleoZmyceCW9KlKrlaUE/wD6gH8gT/lWJoob9ZH5r9ScOQ8fdVscVnnNdu1yw98AR1QL0ONDiONLbEhkyULWnAWgKKQR6jcFf+msfW/CoLyxSzgiZ1P75X/3D+gxVdPH77lVrIF6xkqfvzyc8NIS2Mewz/cmt3RR20E/N2RnyPui+tmr9B7EW25B2OjgRprSX28emDyPzrRUmhKbQpg6huWrZdxu91kqlSnlEkq/hSM8JSOyUjyArE18vjigkW5XbLD/AAo63j2GFeLaJRcmSHvEXEWSP93CEJOw5wc/MFD3H1otGezn98lmlZx23yWdtHVG8plx3WUoWwzFMJvxDhW0k7c8c4BrUjXeLIm4ruJNd9RY1n0zc7xeFFUOMwVrQnnCAQAAKq6inHURcaiun2JRm6fxeBo6S660z1iW2xpS7QVzFA7oktwMvII8ihXOD/UOPegU9HTjZRjYItQ55TD5eiruxelWa5SW7XLdQVR943tvewUMY9D6eYqytPZ24ZF1Lq5yNwtlm05cVXaLOhO2JLsiWbeSXltpwFkp7nAGePIEjvRHTjbOGiDqbc9hisPxM9IpaQy1rqda3TwFT2ypH470/wD+hUacqT4n+P8AwBdeLxwT30uvVkvznj2nUllvTS2ypEiE8gKPspG4ggjgitKMLK6GU1LhnyF6uosznVzWR04U/cH3vKMEND5A14hwE+3fHtisDVbVUltKsc8DEmMQ3x6VmOWQjWDrEBYCz5VCfxWQG1uDV68SYzD7LUp5ll4bXW23ClLg9FAHBH1qxT3LCFfB9C/2ZvWNd26aak0pc3dzenJHjxnHCMJjupUVI+gUFH8a6TSNqKgyxSzFt9j5m9Qp7F26g6lmxSPs0i5yXWinsUl1RGPwp6rTm2ipHhG9h7DntWRqQkWFcdZ2Jwax5IKhe28UISc8+dVmrsnfFx4TKy2Dk9qpOGRK413CRkHmrdOJL3Bqb8731NasMIXbkAOK6MqmfWkMZ50hzKQjPM0hzdsZVioy4IsUpQT5UJsY1WkjnFOncQmWOaKh0ZTjnhpCPRx5UhiU9FNbbc3n2rkNe71GGj90PPvVuxyrTKdVtaZlNqcPondgn8iazen/APyUx1jJONo085Jm2ovtbocqV4Oc8HBGQfwIruWrlhIjr4gZChrtEZf8MK1xIyQP5Qd7p/8A7BXPdTluq
xiuyHas2Qqk7lpPbjcfxOf8qF2aK3cjm5PfaLhJcP8AM4T+tdPSjthGPsAeWJRRRg40ugNaeWrHKl1gat3rpFiH3TvCLaZrLjgSUIdSs58gFDNJuysRjbcXoi3dCQraoEeKnBB49q3YpItN3ZHvxH31X/ZDemUlWXiyycehdT39uKG3n5EajtBlM7Df7lpO/Q7xaJbkG5Q3A4xIbPKT/mCOCDwQakmZ8JuDwfTD4f8A4s9OfEDppjSus3W7NqRABZljjw3U/wALqFH+U/wnnjOFcYVVmMlUW1l2FRTJQaYW/qONJuUcKucBaLZeo+AUzYbv7tD6T2UNqgM+aVI/pNESs8/UJa+D5b61s7dh1JebW04l9qBNfiIdSeFpbcUgKH1Cc/jXNSjsqyinwzNeQZ+xNqcC9idx5zjmi+pJK1wSdx5gRkq4xx24qjVkXKaH6LbvETkis2dSzLijdCW5oSwjaBijUm5O4KSXDA+6SvDJAzityjC5VlZHlk6l6n0zZrlabRepVut1ySUSmY6tnipPcEjnkcd61ILZlcjqT27XwDbKfmAA+lJsjyEtmRtSOOay67uEiErKglIGKy5E1gUJO5OBQeGTFni/ugkdwKBbI5tFtT1ycASn5R3NWaMd7sh0hPeLI5GeSEpBHqBVypFUkSxYiPgGuiKRmMZpCPDinEZSHM86Q53ioBOTQ5sG+RwSnGP+lVmyZq40FA84P0qSYmrjY4nCiKtLgZGtOSPcZA5xSImyE5UBUbjEu6MaxBaGcdq4vXv42Wor4Rb1BdCLclsHgkAigdNV6lwc1gkXp58TabPo5qy3q3qkSoxQWLi2d3KP4FKR3CwPlJBwoAZAI56v1klZrISnWstsgQ17rBes7tMvqt+ZYCgHAAoAICE8DtwBxWBWl6uob+X5BW7x3IB5Kyy1JWDnYggfgKPBbnFeQHBHLo2rPr51064KyNaccP7S34Omo47FXNc5We7USLC+4bMRy6CkJ3FQIxSlK2RoOzuWttFwH3TEUhQUFMsqCh5/IK3KbvTT+X6Fh8gb17kur6TTHDgNOTIzHzHGVFSl8Dz4Qafm7IVsQKpOp+eoozWONnfciSW3mXFtPNnchxtRSpJ9QRyDQajayhoN3J3tfxO9QWtMmy/fKFI8H7Oiapj/AHptrOdqXM9s8jKSQexFAlrasY27+S16kmrEYTGC6VLJKs8kqPJrNjPOQbwNzg8MpTnj3qys5K6WR5tDOQCe1Ua77GhTXcJGiGmfTisp5ZdwkDV6fG4nPlWrQiV5WQC3V/xXcA10NGO1ZKMmrjYpGPrVq46Z2iNblZoc3ZDpXCi0NDbmsquwqQ9tpOBziqDZNIVRm9yCPegzdmSSHKIykrAJqrOTsMrB7pS3Ntw3nFgDA4zV7RKycmFEjUFN0vzbKUgoKgCKuVZbmosexWYjOK37lAzFK4jw9/WnuOeY8/KlcR5TjiyGMgetAmwbdmOCGVVWckTXsauJKeDxUk7iGp4ZWT6+VW48DI5pHNSY7PcUwx3jN73BwaHJ2QxMGkGx9nZFcVrX8TZopfChL1GeAWhAPZWcUbpccXKtQC2pGchJyfrW64+QCdwym4aYjs5wEBKfyHP9qw6eZORelxY42uKzPdDcnHgrBK9x45olacqavDkVOO92Ylnasstja+xRoUZ5QyFlLYWnP1PejU9HqdQ/UnNr62IucYfCkDUtmLqYIVBcaiPNjYIiwlCSM5ynHrmtSEqmkxVTaffn8Qb/AJixj2D1zSFwhaVtk1xttMN7LSHA4D8w7ggdq55aylPUzgvvLPBYdOWxPsNqC3DSvadysElWeKsu9S1yCtEm3pnfGrhoq0OF9tx5prwHEKdG4KQSkZGcjjBGe+a6CMlFWCR+KKdwH+IbXw1Cq3adihsRbe4qS8WznLyk7UpP+FO78VmnlUSW1AK0ruy7EFSRtdNSjwUZLJ2iLIUMZ/CoTQPgIra7l0A5rMqrAaLyO72EsqUapLLsF
fcZHlBTg5HBq/FWQFRuwjs4+UVl13k0oIc57vhtHBxxVWlG7CzYDXyWopVhVdBp4IoVZAyoEnJ5rUKlzRSMnmpXJp2FURGO470ObDRCazo+TgVlV3kNEe2kBQrPk7BEOAYCGjVZyuxnhGn2nwFIVnz5p9m66IxYewrqj7rQWlj5hzzVyjK0Sx2CDplBTOvHirwTuplO8uSSVyoOe1dYZp6OaYYwUhHnfFOOYexphhbCO3+9Ankg3ZjshWcDFVGSvc5SuWyanDkZ5GRferqHXBrTjnoGTikIXwEAup+tAqPAO/xIlvSQwln864vW8s1HlA51FeUZ+PLJrV6ZFbDOrSd7AxZR4t1itnsp1IP51rV8U5NeAdJ/GkGV3cJeUcfyn9cCsKgsF6bGC/rPgNoGQkkqPv6Vo6ZZbB1HaKS7ghJ4crajwAieNDc42D5qFO3hkiTZbi/AaZ3q8NIBCNxxn1xXKQSu5WyW6jsJ5zYYgLKe5RiiU3umkwbdkMDaRkZAJ9a0mwKebM2eOBgCmjySbwMkrl8j3q/H7pWbvdjtboiScn+1U6s2KMcj5EjpQrI8qz5ybwG4N7i+r7KrHHFRpxW4dvAxJdLj6c1otWiRpZe4M7OkBCfpWFqOWacDa8uENmmoK7IzYC3hZOBjzroaCM2s8pDURgirYG9j1PI+tOTT7CxhACU1Xk8h0x/tAG0jyFZtbkKnyP8AGA3J471nTCoeVNAx8+1UU/iBydkDlzWW14B7VqUkmiMHdnOPd5DSA2lWEk9qI6Syw1yb+iTqlPoWeT3rJT21mkW6a+G5/9k="
}
},
"params": {
"score_threshold": "0.8"
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
request = clients["automl"].undeploy_model(
name=model_id
)
Explanation: Example output:
{
"payload": [
{
"annotationSpecId": "8907226159985459200",
"classification": {
"score": 1.0
},
"displayName": "daisy"
}
]
}
projects.locations.models.undeploy
Request
End of explanation
result = request.result()
Explanation: Call
End of explanation
print(MessageToJson(result))
Explanation: Response
End of explanation
# creating edge model for export
model_edge = {
"display_name": "flowers_edge_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"image_classification_model_metadata":{
"train_budget_milli_node_hours": 8000,
"model_type": "mobile-versatile-1"
},
}
print(
MessageToJson(
automl.CreateModelRequest(
parent=PARENT,
model=model_edge
).__dict__["_pb"]
)
)
Explanation: Example output:
{}
Train and export an Edge model
projects.locations.models.create
Request
End of explanation
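A quick aside on the training budget used in the cell above: AutoML expresses it in milli node hours, so the 8000 in the request corresponds to 8 node hours. A minimal sketch of the conversion:

```python
# train_budget_milli_node_hours is measured in thousandths of a node hour:
# 1000 milli node hours = 1 node hour, so 8000 is an 8 node hour budget.
train_budget_milli_node_hours = 8000
budget_node_hours = train_budget_milli_node_hours / 1000
print(budget_node_hours)  # 8.0
```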
request = clients["automl"].create_model(
parent=PARENT,
model=model_edge
)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "flowers_edge_20210226015151",
"datasetId": "ICN2833688305139187712",
"imageClassificationModelMetadata": {
"modelType": "mobile-versatile-1",
"trainBudgetMilliNodeHours": "8000"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
model_edge_id = result.name
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN8566948201909714944"
}
End of explanation
output_config = {
"model_format": "tflite",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/export/",
}
}
print(MessageToJson(
automl.ExportModelRequest(
name=model_edge_id,
output_config=output_config
).__dict__["_pb"])
)
Explanation: projects.locations.models.export
End of explanation
request = clients["automl"].export_model(
name=model_edge_id,
output_config=output_config
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/ICN8566948201909714944",
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015151/export/"
},
"modelFormat": "tflite"
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model_export_dir = output_config["gcs_destination"]["output_uri_prefix"]
! gsutil ls -r $model_export_dir
Explanation: Example output:
{}
End of explanation
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients['automl'].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients['automl'].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients['automl'].delete_model(name=model_edge_id)
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
```
gs://migration-ucaip-trainingaip-20210226015151/export/:
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/:
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/:
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/:
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/dict.txt
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/model.tflite
gs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/tflite_metadata.json
```
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
<END_TASK> |
15,632 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Transfer learning and fine-tuning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.utils.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
Step3: Show the first nine images and labels from the training set
Step4: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
Step5: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the data performance guide.
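The idea behind buffered prefetching can be shown without tf.data at all: a background producer fills a bounded queue so the consumer rarely waits on I/O. This toy generator (illustrative names, not a tf.data API) captures the pattern Dataset.prefetch applies to batches:

```python
import queue
import threading

def prefetch(source, buffer_size):
    # Toy analogue of Dataset.prefetch: a background thread fills a bounded
    # queue so the consumer can work on one item while the next is produced.
    done = object()  # sentinel marking the end of the stream
    buf = queue.Queue(maxsize=buffer_size)

    def producer():
        for item in source:
            buf.put(item)
        buf.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is done:
            return
        yield item

print(list(prefetch(iter(range(5)), buffer_size=2)))  # [0, 1, 2, 3, 4]
```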
Step6: Use data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. You can learn more about data augmentation in this tutorial.
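As a toy illustration of what a horizontal-flip augmentation does, here is a pure-Python stand-in for tf.keras.layers.RandomFlip("horizontal"); real augmentation layers operate on image tensors, but the transformation is the same mirror:

```python
import random

def flip_left_right(image):
    # Mirror an image (stored as a list of pixel rows) horizontally.
    return [row[::-1] for row in image]

def random_flip(image, rng=random):
    # During training, apply the mirror with probability 0.5.
    return flip_left_right(image) if rng.random() < 0.5 else image

print(flip_left_right([[1, 2, 3], [4, 5, 6]]))  # [[3, 2, 1], [6, 5, 4]]
```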
Step7: Note
Step8: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model.
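The rescaling itself is a simple affine map from [0, 255] to [-1, 1] (equivalently, tf.keras.layers.Rescaling(1./127.5, offset=-1)); a minimal sketch:

```python
def rescale(pixel):
    # Map a pixel value from [0, 255] into [-1, 1], the input range
    # MobileNetV2 expects.
    return pixel / 127.5 - 1.0

print(rescale(0), rescale(127.5), rescale(255))  # -1.0 0.0 1.0
```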
Step9: Note
Step10: Note
Step11: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images
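The 5x5 spatial size follows from MobileNetV2's overall output stride of 32 (the network halves the resolution five times); a quick check of the arithmetic:

```python
# MobileNetV2 halves the spatial resolution five times (2**5 = 32 overall),
# so a 160x160 input yields 160 / 32 = 5x5 feature maps with 1280 channels.
input_size = 160
output_stride = 32
spatial = input_size // output_stride
print((spatial, spatial, 1280))  # (5, 5, 1280)
```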
Step12: Feature extraction
In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Additionally, you will add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
Step13: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For more details, see the Transfer learning guide.
Step14: Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations, using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
Step15: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
Step16: Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the Keras Functional API. As previously mentioned, use training=False as our model contains a BatchNormalization layer.
Step17: Compile the model
Compile the model before training it. Since there are two classes, use the tf.keras.losses.BinaryCrossentropy loss with from_logits=True since the model provides a linear output.
Step18: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and biases.
Step19: Train the model
After training for 10 epochs, you should see ~94% accuracy on the validation set.
Step20: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor.
Step21: Note
Step22: Compile the model
As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
Step23: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
Step24: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets.
After fine tuning the model nearly reaches 98% accuracy on the validation set.
Step25: Evaluation and prediction
Finally you can verify the performance of the model on new data using test set.
Step26: And now you are all set to use this model to predict if your pet is a cat or dog. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
Explanation: Transfer learning and fine-tuning
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb?force_kitty_mode=1&force_corgi_mode=1"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, you will learn how to classify images of cats and dogs by using transfer learning from a pre-trained network.
A pre-trained model is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. You either use the pretrained model as is or use transfer learning to customize this model to a given task.
The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will effectively serve as a generic model of the visual world. You can then take advantage of these learned feature maps without having to start from scratch by training a large model on a large dataset.
In this notebook, you will try two ways to customize a pretrained model:
Feature Extraction: Use the representations learned by a previous network to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pretrained model so that you can repurpose the feature maps learned previously for the dataset.
You do not need to (re)train the entire model. The base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pretrained model is specific to the original classification task, and subsequently specific to the set of classes on which the model was trained.
Fine-Tuning: Unfreeze a few of the top layers of a frozen model base and jointly train both the newly-added classifier layers and the last layers of the base model. This allows us to "fine-tune" the higher-order feature representations in the base model in order to make them more relevant for the specific task.
You will follow the general machine learning workflow.
Examine and understand the data
Build an input pipeline, in this case using Keras ImageDataGenerator
Compose the model
Load in the pretrained base model (and pretrained weights)
Stack the classification layers on top
Train the model
Evaluate model
End of explanation
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = tf.keras.utils.image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
Explanation: Data preprocessing
Data download
In this tutorial, you will use a dataset containing several thousand images of cats and dogs. Download and extract a zip file containing the images, then create a tf.data.Dataset for training and validation using the tf.keras.utils.image_dataset_from_directory utility. You can learn more about loading images in this tutorial.
End of explanation
class_names = train_dataset.class_names
plt.figure(figsize=(10, 10))
for images, labels in train_dataset.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
Explanation: Show the first nine images and labels from the training set:
End of explanation
val_batches = tf.data.experimental.cardinality(validation_dataset)
test_dataset = validation_dataset.take(val_batches // 5)
validation_dataset = validation_dataset.skip(val_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset))
Explanation: As the original dataset doesn't contain a test set, you will create one. To do so, determine how many batches of data are available in the validation set using tf.data.experimental.cardinality, then move 20% of them to a test set.
End of explanation
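The take/skip arithmetic above can be sketched in plain Python on a list of batches (a toy stand-in for the tf.data pipeline, not the real API):

```python
def split_batches(batches, test_fraction=5):
    # Mirror dataset.take(n) / dataset.skip(n): the first
    # val_batches // 5 batches become the test set, the rest
    # stay in the validation set.
    n_test = len(batches) // test_fraction
    test = batches[:n_test]        # like validation_dataset.take(n_test)
    validation = batches[n_test:]  # like validation_dataset.skip(n_test)
    return test, validation

# With 26 validation batches, 5 move to the test set and 21 remain.
test, validation = split_batches(list(range(26)))
print(len(test), len(validation))  # 5 21
```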
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.prefetch(buffer_size=AUTOTUNE)
validation_dataset = validation_dataset.prefetch(buffer_size=AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE)
Explanation: Configure the dataset for performance
Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the data performance guide.
End of explanation
data_augmentation = tf.keras.Sequential([
tf.keras.layers.RandomFlip('horizontal'),
tf.keras.layers.RandomRotation(0.2),
])
Explanation: Use data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random, yet realistic, transformations to the training images, such as rotation and horizontal flipping. This helps expose the model to different aspects of the training data and reduce overfitting. You can learn more about data augmentation in this tutorial.
End of explanation
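As a toy illustration of one of these transformations, a horizontal flip just reverses each pixel row (the real RandomFlip layer operates on image tensors and applies the flip randomly per call):

```python
def horizontal_flip(image):
    # Flip a height x width image left-to-right by reversing each row,
    # a pure-Python sketch of what RandomFlip('horizontal') does.
    return [row[::-1] for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
print(horizontal_flip(img))  # [[3, 2, 1], [6, 5, 4]]
```

Flipping twice returns the original image, which is a handy sanity check.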
for image, _ in train_dataset.take(1):
plt.figure(figsize=(10, 10))
first_image = image[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(tf.expand_dims(first_image, 0))
plt.imshow(augmented_image[0] / 255)
plt.axis('off')
Explanation: Note: These layers are active only during training, when you call Model.fit. They are inactive when the model is used in inference mode in Model.evaluate or Model.fit.
Let's repeatedly apply these layers to the same image and see the result.
End of explanation
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
Explanation: Rescale pixel values
In a moment, you will download tf.keras.applications.MobileNetV2 for use as your base model. This model expects pixel values in [-1, 1], but at this point, the pixel values in your images are in [0, 255]. To rescale them, use the preprocessing method included with the model.
End of explanation
rescale = tf.keras.layers.Rescaling(1./127.5, offset=-1)
Explanation: Note: Alternatively, you could rescale pixel values from [0, 255] to [-1, 1] using tf.keras.layers.Rescaling.
End of explanation
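The rescaling itself is simple arithmetic; this sketch applies the same affine map as Rescaling(1./127.5, offset=-1) to a single pixel value:

```python
def rescale_pixel(p):
    # Map [0, 255] -> [-1, 1]: p / 127.5 - 1
    return p / 127.5 - 1

print(rescale_pixel(0), rescale_pixel(127.5), rescale_pixel(255))  # -1.0 0.0 1.0
```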
# Create the base model from the pre-trained model MobileNet V2
IMG_SHAPE = IMG_SIZE + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
Explanation: Note: If using other tf.keras.applications, be sure to check the API doc to determine if they expect pixels in [-1, 1] or [0, 1], or use the included preprocess_input function.
Create the base model from the pre-trained convnets
You will create the base model from the MobileNet V2 model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like jackfruit and syringe. This base of knowledge will help us classify cats and dogs from our specific dataset.
First, you need to pick which layer of MobileNet V2 you will use for feature extraction. The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice to depend on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality as compared to the final/top layer.
First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction.
End of explanation
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
Explanation: This feature extractor converts each 160x160x3 image into a 5x5x1280 block of features. Let's see what it does to an example batch of images:
End of explanation
base_model.trainable = False
Explanation: Feature extraction
In this step, you will freeze the convolutional base created from the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train the top-level classifier.
Freeze the convolutional base
It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's trainable flag to False will freeze all of them.
End of explanation
# Let's take a look at the base model architecture
base_model.summary()
Explanation: Important note about BatchNormalization layers
Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.
When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.
When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
For more details, see the Transfer learning guide.
End of explanation
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
Explanation: Add a classification head
To generate predictions from the block of features, average over the 5x5 spatial locations, using a tf.keras.layers.GlobalAveragePooling2D layer to convert the features to a single 1280-element vector per image.
End of explanation
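The pooling operation can be sketched in pure Python on a small nested-list feature map (the real layer does the same averaging on a batched tensor, e.g. 5x5x1280 -> 1280):

```python
def global_average_pool(feature_map):
    # feature_map: h x w x c nested lists; average each channel over
    # all h*w spatial positions, yielding a length-c vector.
    h, w = len(feature_map), len(feature_map[0])
    c = len(feature_map[0][0])
    return [sum(feature_map[i][j][k] for i in range(h) for j in range(w)) / (h * w)
            for k in range(c)]

# A 2x2 map with 3 channels pools to a 3-element vector.
fmap = [[[1, 0, 2], [3, 0, 2]],
        [[5, 0, 2], [7, 0, 2]]]
print(global_average_pool(fmap))  # [4.0, 0.0, 2.0]
```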
prediction_layer = tf.keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
Explanation: Apply a tf.keras.layers.Dense layer to convert these features into a single prediction per image. You don't need an activation function here because this prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0.
End of explanation
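Why the sign of the logit is enough: passing it through a sigmoid gives a probability above 0.5 exactly when the logit is positive. A minimal sketch (the function name is illustrative, not a Keras API):

```python
import math

def logit_to_class(logit, threshold=0.5):
    # A positive logit maps to probability > 0.5 (class 1),
    # a negative logit to probability < 0.5 (class 0).
    prob = 1 / (1 + math.exp(-logit))  # sigmoid
    return 1 if prob >= threshold else 0

print(logit_to_class(2.3), logit_to_class(-0.7))  # 1 0
```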
inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
Explanation: Build a model by chaining together the data augmentation, rescaling, base_model and feature extractor layers using the Keras Functional API. As previously mentioned, use training=False as our model contains a BatchNormalization layer.
End of explanation
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
Explanation: Compile the model
Compile the model before training it. Since there are two classes, use the tf.keras.losses.BinaryCrossentropy loss with from_logits=True since the model provides a linear output.
End of explanation
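What from_logits=True computes per sample can be written out by hand. A numerically stable sketch of binary cross-entropy on a raw logit (not the Keras implementation itself, but equivalent for one sample):

```python
import math

def bce_from_logit(logit, label):
    # Stable form of -label*log(sigmoid(l)) - (1-label)*log(1-sigmoid(l)):
    # max(l, 0) - l*y + log(1 + exp(-|l|))
    return max(logit, 0) - logit * label + math.log1p(math.exp(-abs(logit)))

# A logit of 0 is a 50/50 prediction, so the loss is log(2) for either label.
print(round(bce_from_logit(0.0, 1), 4))  # 0.6931
```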
len(model.trainable_variables)
Explanation: The 2.5 million parameters in MobileNet are frozen, but there are 1.2 thousand trainable parameters in the Dense layer. These are divided between two tf.Variable objects, the weights and biases.
End of explanation
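The trainable-parameter count can be checked by hand: a Dense layer holds one weight per input feature per unit, plus one bias per unit.

```python
def dense_param_count(in_features, units):
    # Weight matrix (in_features x units) plus one bias per unit.
    return in_features * units + units

# The classification head maps the 1280-element pooled vector to 1 logit.
print(dense_param_count(1280, 1))  # 1281
```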
initial_epochs = 10
loss0, accuracy0 = model.evaluate(validation_dataset)
print("initial loss: {:.2f}".format(loss0))
print("initial accuracy: {:.2f}".format(accuracy0))
history = model.fit(train_dataset,
epochs=initial_epochs,
validation_data=validation_dataset)
Explanation: Train the model
After training for 10 epochs, you should see ~94% accuracy on the validation set.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Learning curves
Let's take a look at the learning curves of the training and validation accuracy/loss when using the MobileNetV2 base model as a fixed feature extractor.
End of explanation
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
Explanation: Note: If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is because layers like tf.keras.layers.BatchNormalization and tf.keras.layers.Dropout affect accuracy during training. They are turned off when calculating validation loss.
To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.
Fine tuning
In the feature extraction experiment, you were only training a few layers on top of an MobileNetV2 base model. The weights of the pre-trained network were not updated during training.
One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.
Note: This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned.
Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning.
Un-freeze the top layers of the model
All you need to do is unfreeze the base_model and set the bottom layers to be un-trainable. Then, you should recompile the model (necessary for these changes to take effect), and resume training.
End of explanation
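The freezing loop above amounts to setting a per-layer trainable flag; a toy sketch of the bookkeeping (the 154-layer count is what this base model's summary reports in the tutorial):

```python
def freeze_below(num_layers, fine_tune_at):
    # One trainable flag per layer: everything before `fine_tune_at`
    # frozen, the rest left trainable -- mirroring the loop over
    # base_model.layers[:fine_tune_at].
    return [i >= fine_tune_at for i in range(num_layers)]

flags = freeze_below(154, 100)
print(sum(flags))  # 54 layers stay trainable
```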
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer = tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10),
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
Explanation: Compile the model
As you are training a much larger model and want to readapt the pretrained weights, it is important to use a lower learning rate at this stage. Otherwise, your model could overfit very quickly.
End of explanation
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_dataset,
epochs=total_epochs,
initial_epoch=history.epoch[-1],
validation_data=validation_dataset)
Explanation: Continue training the model
If you trained to convergence earlier, this step will improve your accuracy by a few percentage points.
End of explanation
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.8, 1])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 1.0])
plt.plot([initial_epochs-1,initial_epochs-1],
plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Let's take a look at the learning curves of the training and validation accuracy/loss when fine-tuning the last few layers of the MobileNetV2 base model and training the classifier on top of it. The validation loss is much higher than the training loss, so you may get some overfitting.
You may also get some overfitting as the new training set is relatively small and similar to the original MobileNetV2 datasets.
After fine tuning the model nearly reaches 98% accuracy on the validation set.
End of explanation
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
Explanation: Evaluation and prediction
Finally you can verify the performance of the model on new data using test set.
End of explanation
# Retrieve a batch of images from the test set
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
print('Labels:\n', label_batch)
plt.figure(figsize=(10, 10))
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image_batch[i].astype("uint8"))
plt.title(class_names[predictions[i]])
plt.axis("off")
Explanation: And now you are all set to use this model to predict if your pet is a cat or dog.
End of explanation
<END_TASK> |
15,633 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Breast cancer data set
Step1: Class frequencies
Step2: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
Step3: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
Step4: Testing data
Step5: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random random forest, also run in regression mode; In this case the probability estimates from each synthetic feature will be sort of continuous as they are probability estimates and not just zero or one things;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings.
Step6: Spambase data set
Step7: Class frequencies
Step8: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
Step9: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
Step10: Testing data
Step11: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random random forest, also run in regression mode; In this case the probability estimates from each synthetic feature will be sort of continuous as they are probability estimates and not just zero or one things;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
breast_cancer_data = pd.read_csv('data/breast-cancer-wisconsin.tsv.gz',
sep='\t',
compression='gzip')
Explanation: Breast cancer data set
End of explanation
from collections import Counter
Counter(breast_cancer_data['class'].values)
Explanation: Class frequencies
End of explanation
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
cross_val_score(RandomForestClassifier(n_estimators=100, n_jobs=-1),
breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data.loc[:, 'class'].values, cv=StratifiedKFold(n_splits=5, shuffle=True))
Explanation: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
End of explanation
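The accuracy score that cross_val_score reports for each fold of a classifier is just the fraction of matching predictions; a minimal sketch using this dataset's 2/4 class labels:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([2, 2, 4, 4, 2], [2, 4, 4, 4, 2]))  # 0.8
```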
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data['class'].values,
stratify=breast_cancer_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.swarmplot(y_train, clf.predict(X_train))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
Explanation: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(breast_cancer_data.drop('class', axis=1).values,
breast_cancer_data['class'].values,
stratify=breast_cancer_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.swarmplot(y_test, clf.predict(X_test))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
Explanation: Testing data
End of explanation
import pandas as pd
import numpy as np
from sklearn.pipeline import make_pipeline, make_union
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score, StratifiedKFold
breast_cancer_data = pd.read_csv('data/breast-cancer-wisconsin.tsv.gz',
sep='\t',
compression='gzip')
all_features = breast_cancer_data.drop('class', axis=1).values
all_classes = breast_cancer_data['class'].values
union_ops = [SelectKBest(k='all')]
for i, mwfl in enumerate(np.arange(0., 0.21, 0.01)):
union_ops.append(VotingClassifier(estimators=[('rf-mwfl={}'.format(mwfl),
RandomForestRegressor(n_estimators=100,
n_jobs=-1,
min_weight_fraction_leaf=mwfl))]))
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, min_weight_fraction_leaf=mwfl)
print('RF w/ mwfl={:0.2f} CV score: {:0.3f}'.format(
mwfl,
np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
clf = make_pipeline(make_union(*union_ops), RandomForestClassifier(n_estimators=100, n_jobs=-1))
print('Crowd machine CV score: {:0.3f}'.format(np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
Explanation: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random random forest, also run in regression mode; In this case the probability estimates from each synthetic feature will be sort of continuous as they are probability estimates and not just zero or one things;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings.
End of explanation
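The core stacking move described above — each base forest's per-sample probability becomes one synthetic feature column for the second-stage forest — can be sketched in pure Python (a minimal illustration, not the scikit-learn pipeline API used below):

```python
def stack_probabilities(base_preds):
    # base_preds: one list of per-sample probabilities per base model.
    # Transpose so each sample row holds one feature per base model.
    return [list(sample) for sample in zip(*base_preds)]

# Three base forests, four samples -> a 4 x 3 synthetic feature matrix.
preds = [[0.9, 0.1, 0.8, 0.4],
         [0.8, 0.2, 0.7, 0.5],
         [0.95, 0.05, 0.9, 0.45]]
print(stack_probabilities(preds))
```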
import pandas as pd
spambase_data = pd.read_csv('data/spambase.tsv.gz',
sep='\t',
compression='gzip')
Explanation: Spambase data set
End of explanation
from collections import Counter
Counter(spambase_data['class'].values)
Explanation: Class frequencies
End of explanation
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
cross_val_score(RandomForestClassifier(n_estimators=100, n_jobs=-1),
spambase_data.drop('class', axis=1).values,
spambase_data.loc[:, 'class'].values,
cv=StratifiedKFold(n_splits=5, shuffle=True))
Explanation: Compute the cross-validation scores
Here, the scores are accuracy on the data set.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spambase_data.drop('class', axis=1).values,
spambase_data['class'].values,
stratify=spambase_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.boxplot(y_train, clf.predict(X_train))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
Explanation: Visualize the predictions vs. actual status
Each dot corresponds to one prediction.
Training data
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spambase_data.drop('class', axis=1).values,
spambase_data['class'].values,
stratify=spambase_data['class'].values,
train_size=0.75, test_size=0.25)
clf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
plt.figure(figsize=(12, 7))
sb.boxplot(y_test, clf.predict(X_test))
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.xlabel('Actual status', fontsize=14)
plt.ylabel('Predicted probability', fontsize=14)
plt.ylim(-0.01, 1.01)
;
Explanation: Testing data
End of explanation
import pandas as pd
import numpy as np
from sklearn.pipeline import make_pipeline, make_union
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score, StratifiedKFold
spambase_data = pd.read_csv('data/spambase.tsv.gz',
sep='\t',
compression='gzip')
all_features = spambase_data.drop('class', axis=1).values
all_classes = spambase_data['class'].values
union_ops = [SelectKBest(k='all')]
for i, mwfl in enumerate(np.arange(0., 0.21, 0.01)):
union_ops.append(VotingClassifier(estimators=[('rf-mwfl={}'.format(mwfl),
RandomForestRegressor(n_estimators=100,
n_jobs=-1,
min_weight_fraction_leaf=mwfl))]))
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, min_weight_fraction_leaf=mwfl)
print('RF w/ mwfl={:0.2f} CV score: {:0.3f}'.format(
mwfl,
np.mean(cross_val_score(clf, all_features, all_classes, cv=StratifiedKFold(n_splits=5, shuffle=True)))))
clf = make_pipeline(make_union(*union_ops), RandomForestClassifier(n_estimators=100, n_jobs=-1))
print('Crowd machine CV score: {:0.3f}'.format(np.mean(cross_val_score(clf, all_features, all_classes,
cv=StratifiedKFold(n_splits=5, shuffle=True)))))
Explanation: Crowd machine
Run random forest with 15 or 20 different terminal node sizes, on the same training data, in each case getting the probability for each subject or instance;
Use the output from each as a new synthetic feature, which is then input to another (single) random forest, also run in regression mode; in this case the probability estimates from each synthetic feature will be roughly continuous, since they are probabilities rather than just zeros and ones;
Generate some simple plots for the crowd;
Compare the crowd results to some individual random forest runs, using some two or three terminal node settings.
End of explanation
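For reference, newer versions of scikit-learn (0.22+) ship this stacking idea directly as `StackingClassifier`, which feeds each base model's out-of-fold predictions to a final estimator. A hedged sketch on synthetic data — an illustration of the idea, not a drop-in replacement for the pipeline above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# A few base forests with different leaf-size settings, as in the loop above.
base_estimators = [
    ('rf_mwfl_{:0.2f}'.format(mwfl),
     RandomForestClassifier(n_estimators=20, min_weight_fraction_leaf=mwfl,
                            random_state=0))
    for mwfl in (0.0, 0.05, 0.1)
]
stack = StackingClassifier(estimators=base_estimators,
                           final_estimator=LogisticRegression(),
                           cv=3)
stack.fit(X, y)
train_acc = stack.score(X, y)
```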
<END_TASK> |
15,634 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Linear Weights Prediction
Step1: Data import and cleaning
Step2: The data are messed up; name fields contain commas in a comma-separated file so two extra columns are created.
Step3: Clean pitch type column (convert all to upper case)
Step4: Parse dates to datetime types
Step5: I'm going to discard a few pitch types
Step6: So that I can look at patterns at different scales, I will create columns for month, week and day (game).
Step7: Data exploration
We can get an idea of some of the best pitches by summing weights across pitcher and pitch type
Step8: Let's look at Corey Kluber, just to isolate one player
Step9: About 10 runs saved from his cutter over 5 months
Step10: If you sum the allowed weights by month for each pitch, it gives the impression of a trend, in some instances.
Step11: However, if you look at the per-game observed run values, by summing the weights for each game, the trends mostly disappear.
Step12: If you take this further and look at the distribution of linear weights allowed per game, you can see the underlying variability in the data. I will proceed with the analysis using the pitch-level data, as the monthly/weekly sums would gloss over the variability associated with those summaries.
Step13: Predictive modeling
The question posed suggests a time series prediction problem
Step14: I'm going to use PyMC3, an open-source Bayesian library for Python that I created many years ago, and continue to develop and maintain today. There are a variety of other Python packages I could have used instead
Step15: So, this is a flexible covariance function that is parameterized by scale and lengthscale parameters, which we will estimate from the data. I will also specify a noise parameter $\sigma$ to characterize the variation of weights allowed within a game.
We will use optimization to obtain the maximum a posteriori (MAP) estimate of the model.
Step16: Here's an estimate of the standard deviation within days, which looks reasonable compared to the empirical value of around 0.1.
Step17: The great thing about Gaussian processes is that it is trivial to predict to other points outside the dataset, so we can define a set of points that extends into September, and draw from the conditional distribution
Step18: Here we draw 1000 posterior samples from the predictive GP, to use for inference.
Step19: The plot below shows the estimated function, along with its uncertainty, which is characterized by many posterior draws from the estimated function. I've also plotted the observed mean of the daily weights allowed each day as a dashed blue line, as well as the per-pitch weights allowed themselves, for which I've specified a shading alpha so that multiple occurrences of the same weight value appear darker.
Step20: If we look at the mean of the estimates for days in September, we get
Step21: That is, an estimate wSL/C of around -1.5 runs per 100 pitches, with a credible interval of (-4.3, 1.4).
Modeling components of variation
A more comprehensive approach involves modeling the components of variation in the time series. A nice property of Gaussian processes is that covariance functions are additive, meaning that variation across different scales (in this case, temporal scales) can be modeled directly.
We can apply this here if, for example, we think there are short-term (the order of a couple games) and medium- or long-term (several weeks or months) components to the variability of particular pitches. Short term variability might involve the effects of a road trip, a minor injury, or other unmeasured factors that could come and go, and which are not particularly predictive. On the other hand, we may be more interested in the variation over a monthly time scale that may reveal the steady development of a pitch, and which may be predictive. Since this is very noisy data, this may be our best hope.
This approach involves using more informative priors, encoding information about the scales over which we expect the observed weights to vary. Here, we will set the majority of the expected variation for the short-term trend to be over a 1-5 game range (via a gamma(1, 0.75) prior), while the prior for the long-term lengthscale will cover the 20-60 day range (via a gamma(20, 0.5) prior).
It is simple to wrap all of the above in a function, so that it can be applied to other players and pitches
Step22: Here is Trevor Bauer's fastball, as another example. The prediction is smoothed relative to the simpler covariance model.
Step23: Here are the resulting predictions (mean and 95% interval) for September, shown as wSI/C
Step24: Conclusions
I am not confident that linear weights are predictive, though they are certainly useful for evaluating how a pitcher/pitch combination fared over some sufficiently long time period. Even though they are adjusted for the count, they are still confounded with many other variables that contributed to the observed outcome
Step25: The predictiveness can be characterized by both $p$, which quantifies the proportion of players that differ from the league mean, and the proportion of "skill variance" relative to the total variance
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from pymc3.gp.util import plot_gp_dist
import theano.tensor as tt
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('dark')
Explanation: Linear Weights Prediction
End of explanation
seasonal_pitch_raw = pd.read_csv('../private_data/seasonal_pitch_data.csv', encoding='utf-8')
seasonal_pitch_raw.head()
Explanation: Data import and cleaning
End of explanation
colnames = seasonal_pitch_raw.columns.copy()
seasonal_pitch_raw.iloc[:, 5] = seasonal_pitch_raw.iloc[:, 5] + seasonal_pitch_raw.iloc[:, 6]
seasonal_pitch_raw.iloc[:, 1] = seasonal_pitch_raw.iloc[:, 1] + seasonal_pitch_raw.iloc[:, 2]
seasonal_pitch = (seasonal_pitch_raw.drop(colnames[[2, 6]], axis=1)
.reset_index())
seasonal_pitch.columns = colnames
Explanation: The data are messed up; name fields contain commas in a comma-separated file so two extra columns are created.
End of explanation
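The column-arithmetic fix above is specific to this file. A more general sketch of the same repair — merging the spill-over field back into the name column whenever a row has one column too many — using only the standard library (the sample data below is illustrative, not the actual file):

```python
import csv
import io

raw = ("pitcherid,pitcher,lw\n"
       "446372,Kluber, Corey,-0.05\n"   # unquoted comma splits the name field
       "545333,Bauer, Trevor,0.02\n")

EXPECTED_COLS = 3

def repair(fields, expected=EXPECTED_COLS):
    # Re-join any extra fields into the name column (index 1).
    extra = len(fields) - expected
    if extra > 0:
        fields = [fields[0], ",".join(fields[1:2 + extra])] + fields[2 + extra:]
    return fields

rows = [repair(f) for f in csv.reader(io.StringIO(raw))]
```

Unlike the index-based fix above, this variant keeps the comma in the rejoined name; either convention works as long as it is applied consistently.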
seasonal_pitch['pi_pitch_type'] = seasonal_pitch.pi_pitch_type.str.upper()
Explanation: Clean pitch type column (convert all to upper case)
End of explanation
seasonal_pitch['date'] = pd.to_datetime(seasonal_pitch.date)
seasonal_pitch.head()
Explanation: Parse dates to datetime types
End of explanation
keep_pitches = ~seasonal_pitch.pi_pitch_type.isin(['KN', 'IB', 'XX'])  # mask of pitch types to keep
data_subset = seasonal_pitch[keep_pitches].copy()
Explanation: I'm going to discard a few pitch types: 'KN', 'IB', 'XX'
End of explanation
data_subset['month'] = data_subset.date.dt.month
data_subset['week'] = data_subset.date.dt.week
data_subset['dayofyear'] = data_subset.date.dt.dayofyear
Explanation: So that I can look at patterns at different scales, I will create columns for month, week and day (game).
End of explanation
data_subset.groupby(['pitcher', 'pi_pitch_type']).lw.sum().sort_values()
Explanation: Data exploration
We can get an idea of some of the best pitches by summing weights across pitcher and pitch type:
End of explanation
kluber_pitches = (data_subset.loc[data_subset.pitcherid==446372,
['pi_pitch_type', 'month', 'dayofyear', 'lw']]
.sort_values(by='lw'))
kluber_pitches.head()
Explanation: Let's look at Corey Kluber, just to isolate one player:
End of explanation
kluber_pitches[kluber_pitches.pi_pitch_type=='FC'].lw.sum()
Explanation: About 10 runs saved from his cutter over 5 months:
End of explanation
kluber_month_sum = kluber_pitches.groupby(['pi_pitch_type', 'month']).lw.sum().reset_index()
g = sns.factorplot(data=kluber_month_sum, col="pi_pitch_type", x="month", y="lw",
col_wrap=3);
Explanation: If you sum the allowed weights by month for each pitch, it gives the impression of a trend, in some instances.
End of explanation
kluber_game_sum = (kluber_pitches.groupby(['pi_pitch_type', 'dayofyear']).lw
.sum().reset_index())
g = sns.factorplot(data=kluber_game_sum, col="pi_pitch_type", x="dayofyear", y="lw",
col_wrap=3)
g.set_xticklabels(rotation=90);
Explanation: However, if you look at the per-game observed run values, by summing the weights for each game, the trends mostly disappear.
End of explanation
g = sns.factorplot(data=kluber_pitches, col="pi_pitch_type", x="dayofyear", y="lw",
col_wrap=3)
g.set_xticklabels(rotation=90);
Explanation: If you take this further and look at the distribution of linear weights allowed per game, you can see the underlying variability in the data. I will proceed with the analysis using the pitch-level data, as the monthly/weekly sums would gloss over the variability associated with those summaries.
End of explanation
PITCH = 'SL'
day_min = kluber_pitches.dayofyear - kluber_pitches.dayofyear.min()
day_kluber_fc, lw_kluber_fc = (kluber_pitches.assign(day=day_min)
.loc[kluber_pitches.pi_pitch_type==PITCH, ['day', 'lw']].T.values)
X = day_kluber_fc.reshape(-1,1)
y = lw_kluber_fc
Explanation: Predictive modeling
The question posed suggests a time series prediction problem: predicting the next month's linear weights allowed from the observed weights allowed in the previous 5 months. A conventional approach here might be an ARIMA model, which includes a first-order differencing term and a moving average component. I prefer instead to use a non-parametric Bayesian structural time series approach via Gaussian processes (GP).
A Gaussian process can be viewed as a probabilistic "distribution over functions", which seeks to model the covariance structure of the time series, estimating the degree to which particular observations in the time series are related to those nearby. This seems appropriate here: treating the observed linear weights allowed during each game as a set of Gaussian (this can be relaxed to a different distribution) outcomes, which are correlated with the outcomes from games before and after it. This is another way of saying we have a multivariate Gaussian model. A Gaussian process is just an infinite-dimensional Gaussian, where we may marginalize over any non-observed elements.
I prefer to build a "data-generating model" based on the observed weights allowed, rather than on the weekly or monthly summaries of the data. I don't expect this to be predictive, but with this approach we may at least be able to characterize the covariance structure and estimate how variable things might look in September.
As an example, let's look at Corey Kluber's slider, but we could easily swap in any player/pitch combination we like:
End of explanation
ls = 0.1
tau = 0.5
cov = tau * pm.gp.cov.Matern32(1, ls)
X_vals = np.linspace(0, 2, 200)[:,None]
K = cov(X_vals).eval()
plt.figure(figsize=(14,4))
plt.plot(X_vals, pm.MvNormal.dist(mu=np.zeros(K.shape[0]), cov=K).random(size=3).T);
plt.xlabel("X");
Explanation: I'm going to use PyMC3, an open-source Bayesian library for Python that I created many years ago, and continue to develop and maintain today. There are a variety of other Python packages I could have used instead: scikit-learn, Stan, GPFlow, and others. PyMC3 makes it very easy to implement GP models, letting me specify a GP in just a few lines of code.
Gaussian processes are parameterized by a mean function (instead of a mean vector in a multivariate normal) and a covariance function (in place of a covariance matrix). The form of the GP is dictated by the covariance function, which can be specified to account for different components of a time series (e.g. periodic). I will use a simple covariance function called the Matérn covariance. Here are a few samples from functions drawn from a Matérn(3/2), just to give an idea:
End of explanation
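For reference, the Matérn-3/2 kernel sampled from above has a simple closed form; a hand-rolled numpy version (not PyMC3's implementation) makes the roles of the scale and lengthscale explicit:

```python
import numpy as np

def matern32(r, ls=0.1, tau=0.5):
    # k(r) = tau * (1 + sqrt(3)|r|/ls) * exp(-sqrt(3)|r|/ls)
    a = np.sqrt(3.0) * np.abs(r) / ls
    return tau * (1.0 + a) * np.exp(-a)

# The covariance equals tau at zero separation and decays with distance.
print(matern32(0.0), matern32(0.1), matern32(1.0))
```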
with pm.Model() as kluber_model:
# Specify covariance function
ℓ = pm.Exponential("ℓ", 0.1)
η = pm.HalfCauchy("η", 1)
cov = η**2 * pm.gp.cov.Matern32(1, ℓ)
# Define marginal GP
gp = pm.gp.Marginal(cov_func=cov)
# Noise parameter
σ = pm.Uniform("σ", 0, 0.3)
# Pass data to marginal likelihood
ml = gp.marginal_likelihood("ml", X=X, y=y, noise=σ)
mp = pm.find_MAP()
Explanation: So, this is a flexible covariance function that is parameterized by scale and lengthscale parameters, which we will estimate from the data. I will also specify a noise parameter $\sigma$ to characterize the variation of weights allowed within a game.
We will use optimization to obtain the maximum a posteriori (MAP) estimate of the model.
End of explanation
mp['σ']
Explanation: Here's an estimate of the standard deviation within days, which looks reasonable compared to the empirical value of around 0.1.
End of explanation
# new values from April through September
X_new = np.linspace(0, 180, 500)[:,None]
# add the GP conditional to the model, given the new X values
with kluber_model:
f_pred = gp.conditional("f_pred", X_new)
Explanation: The great thing about Gaussian processes is that it is trivial to predict to other points outside the dataset, so we can define a set of points that extends into September, and draw from the conditional distribution:
End of explanation
with kluber_model:
pred_samples = pm.sample_ppc([mp], vars=[f_pred], samples=1000)
Explanation: Here we draw 1000 posterior samples from the predictive GP, to use for inference.
End of explanation
# plot the results
fig, axes = plt.subplots(figsize=(12,5), sharex=True)
scale = 100
# plot the samples from the gp posterior with samples and shading
plot_gp_dist(axes, pred_samples["f_pred"]*scale, X_new, palette="bone_r");
# plot the data alongside the estimates
axes.plot(X, y*scale, 'ok', ms=3, alpha=0.1, label="Observed pitch");
axes.set_ylim(-0.1*scale, 0.1*scale)
axes.set_title("Corey Kluber {}".format(PITCH))
axes.set_ylabel("Linear weight")
mean_lw = (kluber_pitches[kluber_pitches.pi_pitch_type==PITCH].groupby('dayofyear')
.lw.mean()*scale)
mean_lw.index = mean_lw.index - mean_lw.index.min()
mean_lw.plot(ax=axes, style=':', label='Empirical mean')
# axis labels and title
plt.xlabel("Day")
plt.legend()
Explanation: The plot below shows the estimated function, along with its uncertainty, which is characterized by many posterior draws from the estimated function. I've also plotted the observed mean of the daily weights allowed each day as a dashed blue line, as well as the per-pitch weights allowed themselves, for which I've specified a shading alpha so that multiple occurrences of the same weight value appear darker.
End of explanation
pred_samples['f_pred'][:, 150:].mean()
np.percentile(pred_samples['f_pred'][:, 150:], [2.5, 97.5])
Explanation: If we look at the mean of the estimates for days in September, we get:
End of explanation
player_lookup = dict(data_subset[['pitcherid', 'pitcher']].drop_duplicates().values)
def predict_weights(player_id, pitch):
player_pitches = (data_subset.loc[(data_subset.pitcherid==player_id) & (data_subset.pi_pitch_type==pitch),
['dayofyear', 'lw']]
.sort_values(by='lw'))
day_min = player_pitches.dayofyear - player_pitches.dayofyear.min()
day, lw = (player_pitches.assign(day=day_min)[['day', 'lw']].T.values)
X = day.reshape(-1,1)
y = lw
with pm.Model():
# Short-term variation
η_short = pm.HalfCauchy("η_short", beta=0.5, testval=0.1)
ℓ_short = pm.Gamma("ℓ_short", alpha=1, beta=0.75)
cov_short = η_short**2 * pm.gp.cov.Matern32(1, ℓ_short)
gp_short = pm.gp.Marginal(cov_func=cov_short)
# long term trend (1-2 month scale)
η_trend = pm.HalfCauchy("η_trend", beta=2, testval=2)
ℓ_trend = pm.Gamma("ℓ_trend", alpha=20, beta=0.5)
cov_trend = η_trend**2 * pm.gp.cov.ExpQuad(1, ℓ_trend)
gp_trend = pm.gp.Marginal(cov_func=cov_trend)
# Define marginal GP
gp = gp_trend + gp_short
# Noise parameter
σ = pm.Exponential("σ", 10)
cov_noise = pm.gp.cov.WhiteNoise(σ)
# Pass data to marginal likelihood
ml = gp.marginal_likelihood("ml", X=X, y=y, noise=cov_noise)
mp = pm.find_MAP()
X_new = np.linspace(0, 180, 500)[:,None]
f_pred = gp.conditional("f_pred", X_new)
pred_samples = pm.sample_ppc([mp], vars=[f_pred], samples=1000)
# plot the results
fig, axes = plt.subplots(figsize=(12,5), sharex=True)
scale = 100
# plot the samples from the gp posterior with samples and shading
plot_gp_dist(axes, pred_samples["f_pred"]*scale, X_new, palette="bone_r");
# plot the data alongside the estimates
axes.plot(X, y*scale, 'ok', ms=3, alpha=0.1, label="Observed pitch");
axes.set_ylim(-0.1*scale, 0.1*scale)
axes.set_title("{} {}".format(player_lookup[player_id], pitch))
axes.set_ylabel("Linear weight")
mean_lw = player_pitches.groupby('dayofyear').lw.mean()*scale
mean_lw.index = mean_lw.index - mean_lw.index.min()
mean_lw.plot(ax=axes, style=':', label='Empirical mean')
# axis labels and title
plt.xlabel("Day")
plt.legend()
return pred_samples
Explanation: That is, an estimate wSL/C of around -1.5 runs per 100 pitches, with a credible interval of (-4.3, 1.4).
Modeling components of variation
A more comprehensive approach involves modeling the components of variation in the time series. A nice property of Gaussian processes is that covariance functions are additive, meaning that variation across different scales (in this case, temporal scales) can be modeled directly.
We can apply this here if, for example, we think there are short-term (the order of a couple games) and medium- or long-term (several weeks or months) components to the variability of particular pitches. Short term variability might involve the effects of a road trip, a minor injury, or other unmeasured factors that could come and go, and which are not particularly predictive. On the other hand, we may be more interested in the variation over a monthly time scale that may reveal the steady development of a pitch, and which may be predictive. Since this is very noisy data, this may be our best hope.
This approach involves using more informative priors, encoding information about the scales over which we expect the observed weights to vary. Here, we will set the majority of the expected variation for the short-term trend to be over a 1-5 game range (via a gamma(1, 0.75) prior), while the prior for the long-term lengthscale will cover the 20-60 day range (via a gamma(20, 0.5) prior).
It is simple to wrap all of the above in a function, so that it can be applied to other players and pitches:
End of explanation
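As a sanity check on those lengthscale priors: the rate-parameterized Gamma(α, β) used by PyMC3 has mean α/β and standard deviation √α/β, so the bulk of each prior's mass can be verified with nothing but the standard library:

```python
import math

def gamma_mean_sd(alpha, beta):
    # PyMC3's Gamma(alpha, beta) uses the rate parameterization.
    return alpha / beta, math.sqrt(alpha) / beta

short_mean, short_sd = gamma_mean_sd(1, 0.75)  # short-term lengthscale prior
long_mean, long_sd = gamma_mean_sd(20, 0.5)    # long-term lengthscale prior

# Rough two-sigma ranges: a few games short term, ~22-58 days long term.
print(short_mean, short_sd)
print(long_mean - 2 * long_sd, long_mean + 2 * long_sd)
```

The long-term prior's two-sigma range of roughly 22-58 days matches the intended 20-60 day window.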
pred_samples = predict_weights(545333, 'FA')
Explanation: Here is Trevor Bauer's fastball, as another example. The prediction is smoothed relative to the simpler covariance model.
End of explanation
pred_samples['f_pred'][:, 150:].mean() * 100
np.percentile(pred_samples['f_pred'][:, 150:], [2.5, 97.5]) * 100
Explanation: Here are the resulting predictions (mean and 95% interval) for September, shown as wSI/C:
End of explanation
data_summary = (data_subset[data_subset.pi_pitch_type=='CU'].groupby(['pitcher', 'month']).lw
.agg([sum, np.size])
.reset_index()
.rename(columns={'sum': 'weight', 'size': 'n'}))
all_pitchers = data_summary.pitcher.unique()
pitcher_lookup = dict(zip(all_pitchers, np.arange(len(all_pitchers))))
data_summary['pitcher_idx'] = data_summary.pitcher.replace(pitcher_lookup)
# all_pitches = data_summary.pi_pitch_type.unique()
# pitch_lookup = dict(zip(all_pitches, np.arange(len(all_pitches))))
# data_summary['pitch_idx'] = data_summary.pi_pitch_type.replace(pitch_lookup)
data_summary['var_weight'] = data_summary['n'] / data_summary['n'].mean()
y = data_summary.weight.values
w = data_summary.var_weight.values
i = data_summary.pitcher_idx.values
with pm.Model() as hier_weights_curves:
p = pm.Beta('p', 1, 1)
v = pm.Bernoulli('v', p, shape=len(all_pitchers))
σ_a = pm.HalfCauchy('σ_a', 1)
η = pm.Normal('η', 0, 1, shape=len(all_pitchers))
α = pm.Deterministic('α', η*σ_a*v)
μ = pm.Normal('μ', 0, sd=100)
σ = pm.HalfCauchy('σ', 1)
r = pm.Deterministic('r', σ_a / (σ_a + σ))
weight_pred = pm.Normal('weight_pred', μ + α[i], w*σ, observed=y)
with hier_weights_curves:
trace = pm.sample(1000, tune=2000)
pm.energyplot(trace)
Explanation: Conclusions
I am not confident that linear weights are predictive, though they are certaintly useful for evaluating how a pitcher/pitch combination fared over some sufficiently long time period. Even though they are adjusted for the count, they are still confounded with many other variables that contributed to the observed outcome: the effects of a particular batter, the pitch combination that preceded the current pitch, the possible influence of the presence of baserunners (was he pitching from the stretch?), and more. I would roughly equate this exercise with trying to predict future stock market returns (another stochastic process) based on past performance. There is serial autocorrelation that may be sometimes predictive over a very short time period, but in general it is not predictive. As with the stock market, we may be able to characterize the temporal variability (volatility) of linear weights allowed, but little more.
As a general approach, however, I like Gaussian processes for robust time series estimation and prediction. Since it is driven by the covariance function, the uncertainty in predictions extrapolated beyond the range of the data is automatically accounted for. The degree to which today's data are predictive of tomorrow's outcome is governed by the covariance function; once these are no longer closely related, the process just reverts to the prior (i.e. what is known in the absence of data).
Addendum
Modified from the approach of McShane et al. (2011), we can quantify the predictiveness of linear weights using a hierarchical model. I will fit the pitch weights via a population model:
$$lw_{ij} \sim N(\mu + \alpha_i, w_{ij} \sigma^2)$$
where $\mu$ is the population mean and $\alpha_i$ is a random effect corresponding to player $i$ that sum to predict the linear weight for that player in month $j$.
The partial pooling is governed by the global variance $\sigma^2$, which is weighted for each player-month by the number of times the pitch was thrown relative to the average:
$$w_{ij} = \frac{n_{ij}}{\bar{n}}$$
Finally, the hierarchical random effect $\alpha_i$ is modeled as a zero-inflated mixture that hypothesizes that some subset of players are no different from the population mean for a particular pitch, while others are allowed to vary. Thus, a probability $p$ governs the proportion of players that vary according to $\alpha_i \sim N(0, \sigma_a)$ versus those that are zero (with probability $1-p$).
This model is run for any particular pitch type; I will here use the curveball.
End of explanation
pm.traceplot(trace, varnames=['p', 'r']);
pm.summary(trace, varnames=['p', 'r']).round(3)
plt.figure(figsize=(5, 16))
pm.forestplot(trace, varnames=['α'], quartiles=False, ylabels=['']);
Explanation: The predictiveness can be characterized by both $p$, which quantifies the proportion of players that differ from the league mean, and the proportion of "skill variance" relative to the total variance:
$$r = \frac{\sigma_a}{\sigma_a + \sigma}$$
From the posterior estimates below, we can see that both proportions are low (around 30%), making linear weights not particularly predictive, at least at the monthly scale.
End of explanation
<END_TASK> |
15,635 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The rad-ness of notebooks
I use notebooks more often than I use an executable .py script. This is partially because notebooks were my first major introduction to Python, but my continued use comes down to the fact that they allow me to break up the problems I'm solving into different blocks.
Either install Anaconda (which comes with jupyter notebooks), or use pip install jupyter for Python 2, or pip3 install jupyter for Python 3
Then it's as simple as typing jupyter notebook into your terminal to launch the application!
What problem are you trying to solve?
A major advantage of notebooks is that you can utilise Markdown and $\LaTeX$ to incorporate discussion and directions into your code. This will likely make it more readable for another user (or even your future self!)
In this example, we're going to work through an emcee example, where we fit a model to some data. The most common example is a linear fit to data with errors, but I'm going to change it up a little to prove you can use it for models other than straight lines. Let's examine how well we can fit the general sinusoid
Step1: Often I reserve a single notebook cell for all of my imports, similar to how you would normally import all of your libraries at the beginning of the program. I can always come back to add more. This is also usually where I configure the general look I want my plots to have, using the matplotlib.rcParams.update() function.
Step2: Generating data
Often, I would read in data from a file but I'll just generate some here for simplicity. Let's define the "true" parameters of our model and attempt to recover these from data with random errors.
Step3: Now we can generate some synthetic data. We'll need to add some noise as well.
Step4: Establishing the model
Now define the likelihood function. This is the probability of our data given our model (including its free parameters). Traditionally, we define the logarithm of the likelihood function.
$$ L_i = \frac{1}{\sqrt{2\pi \sigma_i^2}} \exp \left( -\frac{1}{2} \frac{(x_i - \mu_i)^2}{\sigma_i^2} \right)$$
Likelihood values must then be multiplied together; alternatively, their logarithms may be summed.
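Summing the log-likelihoods (with $\mu_i$ the model prediction for datum $x_i$) gives

$$ \ln L = \sum_i \ln L_i = -\frac{1}{2} \sum_i \left[ \frac{(x_i - \mu_i)^2}{\sigma_i^2} + \ln\left(2\pi\sigma_i^2\right) \right] $$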
Step5: We may wish to impose priors on our parameters. The prior represents any information you already know about your parameters. For example, perhaps we are modelling some sort of physical system, and we know that the amplitude cannot be negative. Thus, we would wish to exclude any evaluations of the likelihood here, since we know this would give an unphysical result. Let us impose the condition $A>0$. We also know that the phase must be given by $0 \le \phi < 2\pi$.
Step6: We can now define the posterior distribution, which is the product of the likelihood and the prior
Step7: Initialising emcee
The key parameters for emcee (aside from the likelihood defined earlier) are the number of dimensions and number of walkers. $n_{dim}$ is given by the number of free parameters (in our case, 3), and $n_{walkers}$ is the number of chains we'd like to generate. We'll also need to specify an initial starting position for every walker. A good approach is to pick a sensible estimate and distribute the walkers randomly around this point
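A minimal numpy sketch of that initialisation — a small Gaussian ball around a rough guess (the guess values here are illustrative, not fitted):

```python
import numpy as np

ndim, nwalkers = 3, 100
guess = np.array([1.5, 0.25, 1.0])  # rough (A, f, phi) starting estimate; illustrative
# One row per walker, jittered slightly around the guess.
pos = guess + 1e-4 * np.random.randn(nwalkers, ndim)
# These positions would then seed the sampler, e.g. (lnprob being a
# log-posterior built from the prior and likelihood described above):
# sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(t, y_alt, displacement_err))
```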
Step8: Run emcee
Step9: Plotting options
My favourite thing about notebooks is that I no longer have to run any of the previous cells in order to change the plots that I'm making. Let's examine two plotting approaches
Step10: ChainConsumer
Step11: Similarly, now that I've defined the instance of chain consumer, I can ask for statistics in a new cell without rerunning the plot!
Gelman-Rubin Statistic
This is a measure of chain convergence
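The statistic compares between-chain and within-chain variance; a minimal numpy version for a single parameter (following the standard Gelman-Rubin formula, not necessarily ChainConsumer's exact implementation):

```python
import numpy as np

def gelman_rubin(chains):
    # chains: array of shape (n_chains, n_steps) for one parameter.
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled posterior variance estimate
    return np.sqrt(var_hat / W)            # R-hat; values near 1 indicate convergence

rng = np.random.RandomState(0)
mixed = rng.randn(4, 2000)                           # four well-mixed chains
stuck = mixed + np.array([[0.], [0.], [0.], [10.]])  # one chain far from the rest
print(gelman_rubin(mixed), gelman_rubin(stuck))
```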
Step12: Parameter correlation
We can ask for a Python array of the correlation, or a LaTeX table that could be given straight to a paper!
Python Code:
%matplotlib inline
Explanation: The rad-ness of notebooks
I use notebooks more often than I use an executable .py script. This is partially because notebooks were my first major introduction to Python, but my continued use comes down to the fact that they allow me to break up the problems I'm solving into different blocks.
Either install Anaconda (which comes with jupyter notebooks), or use pip install jupyter for Python 2, or pip3 install jupyter for Python 3
Then it's as simple as typing jupyter notebook into your terminal to launch the application!
What problem are you trying to solve?
A major advantage of notebooks is that you can utilise Markdown and $\LaTeX$ to incorporate discussion and directions into your code. This will likely make it more readable for another user (or even your future self!)
In this example, we're going to work through an emcee example, where we fit a model to some data. The most common example is a linear fit to data with errors, but I'm going to change it up a little to prove you can use it for models other than straight lines. Let's examine how well we can fit the general sinusoid:
$$y(t) = A\sin(2\pi ft+ \phi),$$
where $A$ is the amplitude, $f$ is the frequency, $t$ is time, and $\phi$ is the phase.
Setting up your notebook
Commands beginning with '%' are known as magic commands in IPython. Depending on what you'd like to do, there are any number of useful magic commands. By far the most common command I use is %matplotlib inline, which incorporates plots directly into the notebook rather than opening them in a new window:
End of explanation
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import emcee
from __future__ import division
matplotlib.rcParams.update({'font.size': 16,
'xtick.major.size': 12,
'ytick.major.size': 12,
'xtick.major.width': 1,
'ytick.major.width': 1,
'ytick.minor.size': 5,
'xtick.minor.size': 5,
'axes.linewidth': 1,
'font.family': 'serif',
'font.serif': 'Times New Roman',
'text.usetex': True})
Explanation: Often I reserve a single notebook cell for all of my imports, similar to how you would normally import all of your libraries at the beginning of the program. I can always come back to add more. This is also usually where I configure the general look I want my plots to have, using the matplotlib.rcParams.update() function.
End of explanation
A_true = 2
f_true = 0.2
phi_true = np.pi/4.
param_names = ["$A$", "$f$","$\phi$"]
Explanation: Generating data
Often, I would read in data from a file but I'll just generate some here for simplicity. Let's define the "true" parameters of our model and attempt to recover these from data with random errors.
End of explanation
N = 50
t = np.sort(10*np.random.rand(N))
t_lin = np.linspace(0,10,100)
y_true = A_true * np.sin(2*np.pi*f_true*t + phi_true)
displacement_err = 0.5*np.random.rand(N)*np.sqrt(max(abs(y_true)))
y_alt = y_true + displacement_err*np.random.randn(N)
plt.errorbar(t, y_alt, yerr=displacement_err, fmt=".k")
plt.plot(t_lin, A_true * np.sin(2*np.pi*f_true*t_lin + phi_true), "-k", lw=1, alpha=0.6)
plt.xlabel("$x$")
plt.ylabel("$y$")
plt.tight_layout()
plt.show()
Explanation: Now we can generate some synthetic data. We'll need to add some noise as well.
End of explanation
def lnlike(theta, t, y, yerr):
A, f, phi = theta
model = A * np.sin(2*np.pi*f*t + phi)
inv_sigma2 = 1.0/(yerr**2)
return -0.5*(np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2) + np.log(2.*np.pi)))
Explanation: Establishing the model
Now define the likelihood function. This is the probability of our data given our model (including its free parameters). Traditionally, we define the logarithm of the likelihood function.
$$ L_i = \frac{1}{\sqrt{2\pi \sigma_i^2}} \exp \left( -\frac{1}{2} \frac{(x_i - \mu_i)^2}{\sigma_i^2} \right)$$
Likelihood values must then be multiplied together, alternatively, their logarithms may be summed:
$$ \ln(L) = -\frac{1}{2} \sum_i \left( \ln(2\pi) + \ln(\sigma_i^2) + \frac{(x_i - \mu_i)^2}{\sigma_i^2} \right) $$
End of explanation
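As a quick sanity check (my addition, not part of the original notebook), the summed log-likelihood formula above can be verified against a term-by-term sum of Gaussian log-densities using only the standard library; the two expressions are algebraically identical:

```python
import math

def lnlike_formula(y, mu, sigma):
    # -1/2 * sum( ln(2*pi) + ln(sigma_i^2) + (y_i - mu_i)^2 / sigma_i^2 )
    return -0.5 * sum(
        math.log(2.0 * math.pi) + math.log(s ** 2) + (yi - mi) ** 2 / s ** 2
        for yi, mi, s in zip(y, mu, sigma)
    )

def lnlike_product(y, mu, sigma):
    # sum_i log N(y_i | mu_i, sigma_i^2), each Gaussian written out explicitly
    total = 0.0
    for yi, mi, s in zip(y, mu, sigma):
        total += math.log(1.0 / math.sqrt(2.0 * math.pi * s ** 2))
        total += -0.5 * (yi - mi) ** 2 / s ** 2
    return total

y = [0.1, 1.2, 1.9]
mu = [0.0, 1.0, 2.0]
sigma = [0.5, 0.4, 0.3]
print(lnlike_formula(y, mu, sigma))
```

Here `mu` plays the role of the model values, exactly as in the `lnlike` function above.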
def lnprior(theta):
A, f, phi = theta
if (0 < phi) and (phi < 2.*np.pi) and (0 < A) and (0 < f):
return 0.0
else:
return -np.inf
Explanation: We may wish to impose priors on our parameters. The prior represents any information you already know about your parameters. For example, perhaps we are modelling some sort of physical system, and we know that the amplitude cannot be negative. Thus, we would wish to exclude any evaluations of the likelihood here, since we know this would give an unphysical result. Let us impose the condition $A>0$. We also know that the phase must be given by $0 \le \phi < 2\pi$.
End of explanation
def lnpost(theta, t, y, yerr):
lnp = lnprior(theta)
if not np.isfinite(lnp):
return -np.inf
return lnp + lnlike(theta, t, y, yerr)
Explanation: We can now define the posterior distribution, which is the product of the likelihood and the prior
End of explanation
ndim = 3
nwalkers = 100
initial_guess = [2, 0.2, np.pi/4.]
pos = [initial_guess + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnpost, args=(t, y_alt, displacement_err))
Explanation: Initialising emcee
The key parameters for emcee (aside from the likelihood defined earlier) are the number of dimensions and number of walkers. $n_{dim}$ is given by the number of free parameters (in our case, 3), and $n_{walkers}$ is the number of chains we'd like to generate. We'll also need to specify an initial starting position for every walker. A good approach is to pick a sensible estimate and distribute the walkers randomly around this point
End of explanation
sampler.run_mcmc(pos, 5000)
burnin = 1000
samples = sampler.chain[:, burnin:, :].reshape((-1, ndim))
Explanation: Run emcee
End of explanation
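The reshape in the cell above drops the first `burnin` steps of every walker and flattens the remaining (walker, step) samples into one long array. A tiny pure-Python illustration of the same bookkeeping (toy values, not real chain output):

```python
# 3 walkers, 4 steps each, 1 parameter; discard a burn-in of 2 steps per walker
chain = [[10, 11, 12, 13],
         [20, 21, 22, 23],
         [30, 31, 32, 33]]
burnin = 2
flat = [x for walker in chain for x in walker[burnin:]]
print(flat)  # [12, 13, 22, 23, 32, 33]
```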
import corner
fig = corner.corner(samples, truths=initial_guess, labels=param_names, verbose=True)
Explanation: Plotting options
My favourite thing about notebooks is that I no longer have to run any of the previous cells in order to change the plots that I'm making. Let's examine two plotting approaches:
Corner
End of explanation
from chainconsumer import ChainConsumer
c = ChainConsumer()
c.add_chain(samples, parameters=param_names, name="samples", walkers=nwalkers)
c.configure(statistics='cumulative', flip=True, diagonal_tick_labels=False)
fig = c.plot(figsize=2.5, truth=initial_guess)
Explanation: ChainConsumer
End of explanation
print(c.diagnostic_gelman_rubin())
Explanation: Similarly, now that I've defined the instance of chain consumer, I can ask for statistics in a new cell without rerunning the plot!
Gelman-Rubin Statistic
This is a measure of chain convergence
End of explanation
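For intuition (my sketch, not ChainConsumer's exact implementation), the Gelman-Rubin statistic $\hat R$ compares the between-chain variance $B$ to the within-chain variance $W$; values near 1 suggest the chains have converged to the same distribution:

```python
def gelman_rubin(chains):
    # chains: list of equal-length lists, one per walker/chain
    m = len(chains)               # number of chains
    n = len(chains[0])            # samples per chain
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n   # pooled variance estimate
    return (var_hat / W) ** 0.5

# two well-mixed chains drawn from the same region -> R-hat close to 1
chains = [[0.1, -0.2, 0.05, 0.3, -0.1, 0.0],
          [0.2, -0.1, 0.0, -0.3, 0.15, 0.05]]
print(gelman_rubin(chains))
```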
pythonarr = c.get_correlations()
latextab = c.get_correlation_table()
print pythonarr
print latextab
Explanation: Parameter correlation
we can ask for a python array of the correlation, or a latex table that could be given straight to a paper!
End of explanation
<END_TASK> |
15,636 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Advection (conservative form)
Scalar advection problem in conservative form with variable velocity field.
There are four Riemann solvers that can be tried out here, all described in
LeVeque (Cambridge Press, 2002)
rp-solver=1
Step1: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation.)
Step2: Create PNG files for web-browser viewing, or animation.
Step3: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below. | <ASSISTANT_TASK:>
Python Code:
!swirlcons --user:example=2 --user:rp-solver=4
Explanation: Advection (conservative form)
Scalar advection problem in conservative form with variable velocity field.
There are four Riemann solvers that can be tried out here, all described in
LeVeque (Cambridge Press, 2002)
rp-solver=1 : Q-star approach in which a $q^*$ value is defined to enforce flux continuity across the stationary wave.
rp-solver=2 : Wave-decomposition approach based on solving the Riemann problem for system of two equations.
rp-solver=3 : Edge centered velocities are used to construct classic update based on flux formulation
rp=sovler=4 : F-wave approach.
Two examples are available. In Example 1, the velocity field $u(x)$ is positive. In Example 2, the velocity field changes sign. Both velocity fields have non-zero
divergence.
Run code in serial mode (will work, even if code is compiled with MPI)
End of explanation
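For reference, here is my summary of the standard formulation from LeVeque (2002) — not text from the example itself — giving the equation being solved and the f-wave flux splitting:

```latex
% Scalar advection in conservative form with a variable velocity field u(x):
\partial_t q + \partial_x \bigl( u(x)\, q \bigr) = 0
% The f-wave approach (rp-solver=4) decomposes the flux difference directly
% into waves, rather than decomposing the jump in q:
f(Q_i) - f(Q_{i-1}) = \sum_p \beta^p\, r^p
```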
#!mpirun -n 4 swirlcons
Explanation: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation.)
End of explanation
%run make_plots.py
Explanation: Create PNG files for web-browser viewing, or animation.
End of explanation
%pylab inline
import glob
from matplotlib import image
from clawpack.visclaw.JSAnimation import IPython_display
from matplotlib import animation
figno = 0
fname = '_plots/*fig' + str(figno) + '.png'
filenames=sorted(glob.glob(fname))
fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))
def init():
im.set_data(image.imread(filenames[0]))
return im,
def animate(i):
image_i=image.imread(filenames[i])
im.set_data(image_i)
return im,
animation.FuncAnimation(fig, animate, init_func=init,
frames=len(filenames), interval=500, blit=True)
Explanation: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.
End of explanation
<END_TASK> |
15,637 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Media
Introduction
skrf supports some basic circuit simulation based on transmission line models. Network creation is accomplished through methods of the Media class, which represents a transmission line object for a given medium. Once constructed, a Media object contains the necessary properties such as propagation constant and characteristic impedance, that are needed to generate microwave networks.
This tutorial illustrates how created Networks using several different Media objects. The basic usage is,
Step1: To create a transmission line of 100um
Step2: More detailed examples illustrating how to create various kinds of Media
objects are given below. A full list of media's supported can be found in the Media API page. The network creation and connection syntax of skrf are cumbersome if you need to doing complex circuit design. skrf's synthesis capabilities lend themselves more to scripted applications such as calibration, optimization or batch processing.
Media Object Basics
Two arguments are common to all media constructors
frequency (required)
z0 (optional)
frequency is a Frequency object, and z0 is the port impedance. z0 is only needed if the port impedance is different from the media's characteristic impedance. Here is an example of how to initialize a coplanar waveguide [0] media. The instance has a 10um center conductor, gap of 5um, and substrate with relative permativity of 10.6,
Step3: For the purpose of microwave network analysis, the defining properties of a (single moded) transmission line are it's characteristic impedance and propagation constant. These properties return complex numpy.ndarray's, A port impedance is also needed when different networks are connected.
The characteristic impedance is given by a Z0 (capital Z)
Step4: The port impedance is given by z0 (lower z). Which we set to 1, just to illustrate how this works. The port impedance is used to compute impedance mismatched if circuits of different port impedance are connected.
Step5: The propagation constant is given by gamma
Step6: Lets take a look at some other Media's
Slab of Si in Freespace
A plane-wave in freespace from 10-20GHz.
Step7: Simulate a 1cm slab of Si in half-space,
Step8: Rectangular Waveguide
a WR-10 Rectangular Waveguide
Step9: The z0 argument in the Rectangular Waveguide constructor is used
to force a specific port impedance. This is commonly used to match
the port impedance to what a VNA stores in a touchstone file. Lets compare the propagation constant in waveguide to that of freespace,
Step10: Because the wave quantities are dynamic they change when the attributes
of the media change. To illustrate, plot the propagation constant of the cpw for various values of substrated permativity,
Step11: Network Synthesis
Networks are created through methods of a Media object. To create a 1-port network for a rectangular waveguide short,
Step12: Or to create a $90^{\circ}$ section of cpw line,
Step13: Building Circuits
By connecting a series of simple circuits, more complex circuits can be
made. To build a the $90^{\circ}$ delay short, in the
rectangular waveguide media defined above.
Step14: When Networks with more than 2 ports need to be connected together, use
rf.connect(). To create a two-port network for a shunted delayed open, you can create an ideal 3-way splitter (a 'tee') and connect the delayed open to one of its ports,
Step15: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
Step16: If a specific circuit is created frequently, it may make sense to
use a function to create the circuit. This can be done most quickly using lambda
Step17: A more useful example may be to create a function for a shunt-stub tuner,
that will work for any media object
Step18: This approach lends itself to design optimization.
Design Optimization
The abilities of scipy's optimizers can be used to automate network design. In this example, skrf is used to automate the single stub impedance matching network design. First, we create a 'cost' function which returns something we want to minimize, such as the reflection coefficient magnitude at band center. Then, one of scipy's minimization algorithms is used to determine the optimal parameters of the stub lengths to minimize this cost. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import skrf as rf
rf.stylely()
from skrf import Frequency
from skrf.media import CPW
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6)
cpw
Explanation: Media
Introduction
skrf supports some basic circuit simulation based on transmission line models. Network creation is accomplished through methods of the Media class, which represents a transmission line object for a given medium. Once constructed, a Media object contains the necessary properties such as propagation constant and characteristic impedance, that are needed to generate microwave networks.
This tutorial illustrates how to create Networks using several different Media objects. The basic usage is,
End of explanation
cpw.line(100*1e-6, name = '100um line')
Explanation: To create a transmission line of 100um
End of explanation
freq = Frequency(75,110,101,'ghz')
cpw = CPW(freq, w=10e-6, s=5e-6, ep_r=10.6, z0 =1)
cpw
Explanation: More detailed examples illustrating how to create various kinds of Media
objects are given below. A full list of supported media can be found in the Media API page. The network creation and connection syntax of skrf can be cumbersome if you need to do complex circuit design. skrf's synthesis capabilities lend themselves more to scripted applications such as calibration, optimization or batch processing.
Media Object Basics
Two arguments are common to all media constructors
frequency (required)
z0 (optional)
frequency is a Frequency object, and z0 is the port impedance. z0 is only needed if the port impedance is different from the media's characteristic impedance. Here is an example of how to initialize a coplanar waveguide [0] media. The instance has a 10um center conductor, gap of 5um, and substrate with relative permittivity of 10.6,
End of explanation
cpw.Z0[:3]
Explanation: For the purpose of microwave network analysis, the defining properties of a (single moded) transmission line are its characteristic impedance and propagation constant. These properties return complex numpy.ndarrays. A port impedance is also needed when different networks are connected.
The characteristic impedance is given by Z0 (capital Z)
End of explanation
cpw.z0[:3]
Explanation: The port impedance is given by z0 (lower z), which we set to 1 just to illustrate how this works. The port impedance is used to compute impedance mismatches when circuits of different port impedance are connected.
End of explanation
cpw.gamma[:3]
Explanation: The propagation constant is given by gamma
End of explanation
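For orientation (standard transmission-line definitions, not values taken from skrf): writing $\gamma = \alpha + j\beta$, a line of length $d$ attenuates by $8.686\,\alpha d$ dB (1 Np = 8.686 dB) and shifts phase by $\beta d$ radians. A minimal stdlib sketch with made-up numbers:

```python
import math

def line_loss_db(alpha_np_per_m, d_m):
    # nepers -> dB: 1 Np = 20*log10(e) ~ 8.686 dB
    return 20.0 * math.log10(math.e) * alpha_np_per_m * d_m

def line_phase_rad(beta_rad_per_m, d_m):
    # accumulated phase over the line length
    return beta_rad_per_m * d_m

print(line_loss_db(2.0, 0.01))    # 2 Np/m over 1 cm
print(line_phase_rad(5000.0, 0.01))
```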
from skrf.media import Freespace
freq = Frequency(10,20,101,'ghz')
air = Freespace(freq)
air
air.z0[:2] # 377ohm baby!
# plane wave in Si
si = Freespace(freq, ep_r = 11.2)
si.z0[:3] # ~110ohm
Explanation: Let's take a look at some other Media classes
Slab of Si in Freespace
A plane-wave in freespace from 10-20GHz.
End of explanation
slab = air.thru() ** si.line(1, 'cm') ** air.thru()
slab.plot_s_db(n=0)
Explanation: Simulate a 1cm slab of Si in half-space,
End of explanation
from skrf.media import RectangularWaveguide
freq = Frequency(75,110,101,'ghz')
wg = RectangularWaveguide(freq, a=100*rf.mil, z0=50) # see note below about z0
wg
Explanation: Rectangular Waveguide
a WR-10 Rectangular Waveguide
End of explanation
air = Freespace(freq)
from matplotlib import pyplot as plt
air.plot(air.gamma.imag, label='Freespace')
wg.plot(wg.gamma.imag, label='WR10')
plt.ylabel('Propagation Constant (rad/m)')
plt.legend()
Explanation: The z0 argument in the Rectangular Waveguide constructor is used
to force a specific port impedance. This is commonly used to match
the port impedance to what a VNA stores in a touchstone file. Let's compare the propagation constant in waveguide to that of freespace,
End of explanation
for ep_r in [9,10,11]:
cpw.ep_r = ep_r
cpw.frequency.plot(cpw.beta, label='er=%.1f'%ep_r)
plt.xlabel('Frequency [GHz]')
plt.ylabel('Propagation Constant [rad/m]')
plt.legend()
Explanation: Because the wave quantities are dynamic they change when the attributes
of the media change. To illustrate, plot the propagation constant of the cpw for various values of substrate permittivity,
End of explanation
wg.short(name = 'short')
Explanation: Network Synthesis
Networks are created through methods of a Media object. To create a 1-port network for a rectangular waveguide short,
End of explanation
cpw.line(d=90,unit='deg', name='line')
Explanation: Or to create a $90^{\circ}$ section of cpw line,
End of explanation
delay_short = wg.line(d=90,unit='deg') ** wg.short()
delay_short.name = 'delay short'
delay_short
Explanation: Building Circuits
By connecting a series of simple circuits, more complex circuits can be
made. To build the $90^{\circ}$ delay short in the
rectangular waveguide media defined above.
End of explanation
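A classic check on the circuit above (standard theory, not output from the notebook): seen through a lossless $90^{\circ}$ line, a short ($\Gamma = -1$) is transformed into an open ($\Gamma = +1$), because the load reflection coefficient is rotated by $e^{-2j\theta}$:

```python
import cmath

def input_reflection(gamma_load, theta_rad):
    # reflection coefficient of a load seen through a lossless line
    # of electrical length theta (round trip doubles the phase)
    return gamma_load * cmath.exp(-2j * theta_rad)

gamma_short = -1.0
gamma_in = input_reflection(gamma_short, cmath.pi / 2)  # 90 degrees
print(gamma_in)  # ~ (1+0j): the delay short looks like an open at this frequency
```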
tee = cpw.tee()
delay_open = cpw.delay_open(40,'deg')
shunt_open = rf.connect(tee,1,delay_open,0)
Explanation: When Networks with more than 2 ports need to be connected together, use
rf.connect(). To create a two-port network for a shunted delayed open, you can create an ideal 3-way splitter (a 'tee') and connect the delayed open to one of its ports,
End of explanation
cpw.shunt(delay_open)
Explanation: Adding networks in shunt is pretty common, so there is a Media.shunt() function to do this,
End of explanation
delay_short = lambda d: wg.line(d,'deg')**wg.short()
delay_short(90)
Explanation: If a specific circuit is created frequently, it may make sense to
use a function to create the circuit. This can be done most quickly using lambda
End of explanation
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
shunt_stub(cpw,10,90)
Explanation: A more useful example may be to create a function for a shunt-stub tuner,
that will work for any media object
End of explanation
from scipy.optimize import fmin
# the load we are trying to match
load = cpw.load(.2+.2j)
# single stub circuit generator function
def shunt_stub(med, d0, d1):
return med.line(d0,'deg')**med.shunt_delay_open(d1,'deg')
# define the cost function we want to minimize (this uses sloppy namespace)
def cost(d):
# prevent negative length lines, returning high cost
if d[0] <0 or d[1] <0:
return 1e3
return (shunt_stub(cpw,d[0],d[1]) ** load)[100].s_mag.squeeze()
# initial guess of optimal delay lengths in degrees
d0 = 120,40 # initial guess
#determine the optimal delays
d_opt = fmin(cost,(120,40))
d_opt
Explanation: This approach lends itself to design optimization.
Design Optimization
The abilities of scipy's optimizers can be used to automate network design. In this example, skrf is used to automate the single stub impedance matching network design. First, we create a 'cost' function which returns something we want to minimize, such as the reflection coefficient magnitude at band center. Then, one of scipy's minimization algorithms is used to determine the optimal parameters of the stub lengths to minimize this cost.
End of explanation
<END_TASK> |
15,638 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
mpl_toolkits
In addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to allow for matplotlib users to produce 3D plots with the same amount of simplicity as 2D plots are.
Step1: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deals with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, but with each their own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
Step2: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
Step6: And finally, as a nice teaser of what else axes_grid1 can do... | <ASSISTANT_TASK:>
Python Code:
from mpl_toolkits.mplot3d import Axes3D, axes3d
fig, ax = plt.subplots(1, 1, subplot_kw={'projection': '3d'})
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
Explanation: mpl_toolkits
In addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to let matplotlib users produce 3D plots with the same simplicity as 2D plots.
End of explanation
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure()
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols = (2, 2),
axes_pad = 0.2,
share_all=True,
label_mode = "L", # similar to "label_outer"
cbar_location = "right",
cbar_mode="single",
)
extent = (-3,4,-4,3)
for i in range(4):
im = grid[i].imshow(Z, extent=extent, interpolation="nearest")
grid.cbar_axes[0].colorbar(im)
plt.show()
Explanation: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deal with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, each with their own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
End of explanation
# %load http://matplotlib.org/mpl_examples/axes_grid/demo_parasite_axes2.py
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
if 1:
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
offset = 60
new_fixed_axis = par2.get_grid_helper().new_fixed_axis
par2.axis["right"] = new_fixed_axis(loc="right",
axes=par2,
offset=(offset, 0))
par2.axis["right"].toggle(all=True)
host.set_xlim(0, 2)
host.set_ylim(0, 2)
host.set_xlabel("Distance")
host.set_ylabel("Density")
par1.set_ylabel("Temperature")
par2.set_ylabel("Velocity")
p1, = host.plot([0, 1, 2], [0, 1, 2], label="Density")
p2, = par1.plot([0, 1, 2], [0, 3, 2], label="Temperature")
p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")
par1.set_ylim(0, 4)
par2.set_ylim(1, 65)
host.legend()
host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())
plt.draw()
plt.show()
#plt.savefig("Test")
Explanation: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
End of explanation
%load http://matplotlib.org/mpl_toolkits/axes_grid/examples/demo_floating_axes.py
from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes
import numpy as np
import mpl_toolkits.axisartist.angle_helper as angle_helper
from matplotlib.projections import PolarAxes
from mpl_toolkits.axisartist.grid_finder import FixedLocator, MaxNLocator, \
DictFormatter
def setup_axes1(fig, rect):
"""A simple one."""
tr = Affine2D().scale(2, 1).rotate_deg(30)
grid_helper = floating_axes.GridHelperCurveLinear(tr, extremes=(0, 4, 0, 4))
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
aux_ax = ax1.get_aux_axes(tr)
grid_helper.grid_finder.grid_locator1._nbins = 4
grid_helper.grid_finder.grid_locator2._nbins = 4
return ax1, aux_ax
def setup_axes2(fig, rect):
"""With custom locator and formatter.
Note that the extreme values are swapped.
"""
#tr_scale = Affine2D().scale(np.pi/180., 1.)
tr = PolarAxes.PolarTransform()
pi = np.pi
angle_ticks = [(0, r"$0$"),
(.25*pi, r"$\frac{1}{4}\pi$"),
(.5*pi, r"$\frac{1}{2}\pi$")]
grid_locator1 = FixedLocator([v for v, s in angle_ticks])
tick_formatter1 = DictFormatter(dict(angle_ticks))
grid_locator2 = MaxNLocator(2)
grid_helper = floating_axes.GridHelperCurveLinear(tr,
extremes=(.5*pi, 0, 2, 1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None,
)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder=0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
def setup_axes3(fig, rect):
"""Sometimes, things like axis_direction need to be adjusted."""
# rotate a bit for better orientation
tr_rotate = Affine2D().translate(-95, 0)
# scale degree to radians
tr_scale = Affine2D().scale(np.pi/180., 1.)
tr = tr_rotate + tr_scale + PolarAxes.PolarTransform()
grid_locator1 = angle_helper.LocatorHMS(4)
tick_formatter1 = angle_helper.FormatterHMS()
grid_locator2 = MaxNLocator(3)
ra0, ra1 = 8.*15, 14.*15
cz0, cz1 = 0, 14000
grid_helper = floating_axes.GridHelperCurveLinear(tr,
extremes=(ra0, ra1, cz0, cz1),
grid_locator1=grid_locator1,
grid_locator2=grid_locator2,
tick_formatter1=tick_formatter1,
tick_formatter2=None,
)
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
# adjust axis
ax1.axis["left"].set_axis_direction("bottom")
ax1.axis["right"].set_axis_direction("top")
ax1.axis["bottom"].set_visible(False)
ax1.axis["top"].set_axis_direction("bottom")
ax1.axis["top"].toggle(ticklabels=True, label=True)
ax1.axis["top"].major_ticklabels.set_axis_direction("top")
ax1.axis["top"].label.set_axis_direction("top")
ax1.axis["left"].label.set_text(r"cz [km$^{-1}$]")
ax1.axis["top"].label.set_text(r"$\alpha_{1950}$")
# create a parasite axes whose transData in RA, cz
aux_ax = ax1.get_aux_axes(tr)
aux_ax.patch = ax1.patch # for aux_ax to have a clip path as in ax
ax1.patch.zorder=0.9 # but this has a side effect that the patch is
# drawn twice, and possibly over some other
# artists. So, we decrease the zorder a bit to
# prevent this.
return ax1, aux_ax
if 1:
import matplotlib.pyplot as plt
fig = plt.figure(1, figsize=(8, 4))
fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)
ax1, aux_ax2 = setup_axes1(fig, 131)
aux_ax2.bar([0, 1, 2, 3], [3, 2, 1, 3])
#theta = np.random.rand(10) #*.5*np.pi
#radius = np.random.rand(10) #+1.
#aux_ax1.scatter(theta, radius)
ax2, aux_ax2 = setup_axes2(fig, 132)
theta = np.random.rand(10)*.5*np.pi
radius = np.random.rand(10)+1.
aux_ax2.scatter(theta, radius)
ax3, aux_ax3 = setup_axes3(fig, 133)
theta = (8 + np.random.rand(10)*(14-8))*15. # in degrees
radius = np.random.rand(10)*14000.
aux_ax3.scatter(theta, radius)
plt.show()
Explanation: And finally, as a nice teaser of what else axes_grid1 can do...
End of explanation
<END_TASK> |
15,639 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load data
Step1: Explore data
Step2: From scratch
Step3: With sklearn
We can choose between
BernoulliNB | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from IPython.core.display import display, HTML
display(HTML('''
<style>
.dataframe td, .dataframe th {
border: 1px solid black;
background: white;
}
.dataframe td {
text-align: left;
}
</style>
'''))
df = pd.DataFrame({
'Outlook': ['sunny', 'sunny', 'overcast', 'rain', 'rain', 'rain', 'overcast', 'sunny', 'sunny', 'rain', 'sunny', 'overcast', 'overcast', 'rain'],
'Temperature': ['hot', 'hot', 'hot', 'mild', 'cool', 'cool', 'cool', 'mild', 'cool', 'mild', 'mild', 'mild', 'hot', 'mild'],
'Humidity': ['high', 'high', 'high', 'high', 'normal', 'normal', 'normal', 'high', 'normal', 'normal', 'normal', 'high', 'normal','high'],
'Wind': ['weak', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'strong'],
'Play': ['no', 'no', 'yes', 'yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'no']
})
HTML(df.to_html(index=False))
Explanation: Load data
End of explanation
val, count = np.unique(df['Play'], return_counts=True)
n = np.sum(count)
for i,v in enumerate(val):
print('P(Play={:<3s}) = {:d}/{:d}'.format(v, count[i], n))
for column in df.drop('Play', axis=1).columns:
dftmp = pd.crosstab(df[column], df['Play'],
margins=False,
rownames=[None],colnames=[column])
dftmp.columns = 'Play=' + dftmp.columns
for i,v in enumerate(val):
dftmp.iloc[:,i] = dftmp.iloc[:,i].astype('string') + '/' + str(count[i])
display(HTML(dftmp.to_html()))
Explanation: Explore data
End of explanation
dfYes = df[df['Play'] == 'yes']
dfNo = df[df['Play'] == 'no']
nYes = len(dfYes)
nNo = len(dfNo)
print(nYes, nNo)
pYes = (dfYes['Outlook'] == 'sunny').sum()/nYes \
* (dfYes['Temperature'] == 'cool').sum()/nYes \
* (dfYes['Humidity'] == 'high').sum()/nYes \
* (dfYes['Wind'] == 'strong').sum()/nYes \
* nYes/len(df)
pYes
pNo = (dfNo['Outlook'] == 'sunny').sum()/nNo \
* (dfNo['Temperature'] == 'cool').sum()/nNo \
* (dfNo['Humidity'] == 'high').sum()/nNo \
* (dfNo['Wind'] == 'strong').sum()/nNo \
* nNo/len(df)
pNo
print('Prediction:', ('yes' if pYes > pNo else 'no'))
Explanation: From scratch
End of explanation
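Multiplying many small conditional probabilities can underflow once there are more features; a minimal sketch of the same computation in log space, with the counts read off the tables above:

```python
from math import log

def log_posterior(likelihoods, prior):
    # Sum of log-likelihoods plus log-prior; monotone in the posterior, so the
    # argmax is unchanged but the arithmetic never underflows.
    return sum(log(p) for p in likelihoods) + log(prior)

# P(sunny|yes)=2/9, P(cool|yes)=3/9, P(high|yes)=3/9, P(strong|yes)=3/9, P(yes)=9/14
score_yes = log_posterior([2/9, 3/9, 3/9, 3/9], 9/14)
# P(sunny|no)=3/5, P(cool|no)=1/5, P(high|no)=4/5, P(strong|no)=3/5, P(no)=5/14
score_no = log_posterior([3/5, 1/5, 4/5, 3/5], 5/14)
print('Prediction:', 'yes' if score_yes > score_no else 'no')
```

This reproduces the same prediction as the product form above, just in a numerically safer way.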
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.preprocessing import LabelEncoder
# Encode labels to integers
encoder = LabelEncoder()
c = {}
Y = encoder.fit_transform(df['Play'])
c['Play'] = list(encoder.classes_)
X = df.drop('Play', axis=1)
for column in X.columns:
X[column] = encoder.fit_transform(X[column])
c[column] = list(encoder.classes_)
# Pre-compute likelihood tables
model = MultinomialNB()
model.fit(X, Y)
# Predict most likely outcome
res = model.predict([[
c['Outlook'].index('sunny'),
c['Temperature'].index('cool'),
c['Humidity'].index('high'),
c['Wind'].index('strong'),
]])[0]
print('Prediction:', c['Play'][res])
'''
# Evaluate
from sklearn.metrics import accuracy_score, confusion_matrix
y_pred = model.predict(X_test)
accuracy_score(y_test, y_pred, normalize=True)
confusion_matrix(y_test, y_pred)
'''
Explanation: With sklearn
We can choose between:
BernoulliNB: if all the features are binary ({0,1})
MultinomialNB: if the data are discrete (e.g. {1,2,3})
GaussianNB: if the data are continuous (e.g. [1..5])
End of explanation
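For continuous features we would pick GaussianNB instead, which models each feature with a class-conditional normal distribution. A minimal stdlib sketch of that likelihood on made-up numbers (a hypothetical continuous temperature feature, not a column of the dataset above):

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, var):
    # Class-conditional likelihood used by Gaussian naive Bayes
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# With equal class priors, the likelihood alone decides the prediction
likelihood_yes = gaussian_pdf(21.0, mean=23.0, var=4.0)
likelihood_no = gaussian_pdf(21.0, mean=15.0, var=9.0)
print('Prediction:', 'yes' if likelihood_yes > likelihood_no else 'no')
```

sklearn's GaussianNB estimates the per-class mean and variance from the training data; here they are simply asserted.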
<END_TASK> |
15,640 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Overlap reproduction
This notebook reproduces some results from Amit's book (Attractor Neural Networks), section 4.1
Step1: Symmetric mix of attractors
According to Amit, with the Hebbian rule of this model a very particular type of spurious state appears
Step2: Then we run the network
Step3: Plotting
Step4: Here we see a symmetric mixture of three states
Effect of temperature in symmetric mixtures
As Amit discusses, we can increase the temperature to a sweet spot where the spurious states are destroyed and only the real attractors are preserved. We can try to repeat the same process with a higher temperature
Step5: Then we run the network
Step6: Plotting
Step7: We can see here that with higher noise all the other overlaps stay close to 0 and only one state, the symmetric reflection of state m0, is recalled
Effect of very high noise
Step8: Finally, if the noise is very high, only the state where all the overlaps vanish will be stable.
Step9: Then we run the network
Step10: Plotting
Python Code:
from __future__ import print_function
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from hopfield import Hopfield
%matplotlib inline
sns.set(font_scale=2.0)
Explanation: Overlap reproduction
This notebook reproduces some results from Amit's book (Attractor Neural Networks), section 4.1
End of explanation
n_dim = 400
n_store = 7
T = 0.0
prng = np.random.RandomState(seed=10000)
N = 2000
nn = Hopfield(n_dim=n_dim, T=T, prng=prng)
list_of_patterns = nn.generate_random_patterns(n_store)
nn.train(list_of_patterns)
Explanation: Symmetric mix of attractors
According to Amit, with the Hebbian rule of this model a very particular type of spurious state appears: symmetric mixtures.
Symmetric mixtures are so called because their overlaps with the intended attractors become more or less equal. In the asynchronous case only symmetric mixtures of an odd number of states are stable. We show a symmetric mixture here.
First we build the network with very low temperature (noise)
End of explanation
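The overlap m used throughout this notebook is the normalized dot product between the network state and a stored pattern. A stdlib sketch of that convention (this is an assumption about what the imported `calculate_overlap` computes, following Amit's definition):

```python
def overlap(pattern, state):
    # m = (1/N) * sum_i xi_i * s_i, with units in {-1, +1}:
    # 1 for perfect recall, -1 for the reflected state, ~0 for unrelated states.
    return sum(p * s for p, s in zip(pattern, state)) / len(pattern)

xi = [1, -1, 1, 1, -1, -1, 1, -1]
print(overlap(xi, xi))                # 1.0: perfect recall
print(overlap(xi, [-p for p in xi]))  # -1.0: the symmetric reflection
```

A symmetric mixture is then a state whose overlap with several stored patterns is the same intermediate value.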
overlaps = np.zeros((N, n_store))
for i in range(N):
nn.update_async()
overlaps[i, :] = nn.calculate_overlap()
Explanation: Then we run the network
End of explanation
# Plot this thing
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.set_xlabel('Iterations')
ax.set_ylabel('Overlap')
ax.axhline(y=0, color='k')
for pattern_n, overlap in enumerate(overlaps.T):
ax.plot(overlap, '-', label='m' +str(pattern_n))
ax.legend()
ax.set_ylim(-1.1, 1.1)
plt.show()
Explanation: Plotting
End of explanation
n_dim = 400
n_store = 7
T = 0.8
prng = np.random.RandomState(seed=10000)
N = 2000
nn = Hopfield(n_dim=n_dim, T=T, prng=prng)
list_of_patterns = nn.generate_random_patterns(n_store)
nn.train(list_of_patterns)
Explanation: Here we see a symmetric mixture of three states
Effect of temperature in symmetric mixtures
As Amit discusses, we can increase the temperature to a sweet spot where the spurious states are destroyed and only the real attractors are preserved. We can try to repeat the same process with a higher temperature
End of explanation
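Temperature enters through the stochastic update rule. Assuming Glauber dynamics, the usual choice for this model (the imported Hopfield class may implement it differently), the probability of setting a unit to +1 given its local field h is:

```python
from math import exp

def p_up(h, T):
    # Glauber rule: P(s_i = +1) = 1 / (1 + exp(-2*h/T)).
    # As T -> 0 this tends to a hard sign(h); as T grows it tends to a coin flip.
    return 1.0 / (1.0 + exp(-2.0 * h / T))

print(p_up(1.0, 0.1))  # low noise: essentially deterministic
print(p_up(1.0, 3.0))  # T = 3.0 as below: barely biased, overlaps wash out
```

This is why a moderate T destabilizes the shallow mixture states first, while a very large T destabilizes everything.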
overlaps = np.zeros((N, n_store))
for i in range(N):
nn.update_async()
overlaps[i, :] = nn.calculate_overlap()
Explanation: Then we run the network
End of explanation
# Plot this thing
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.set_xlabel('Iterations')
ax.set_ylabel('Overlap')
ax.axhline(y=0, color='k')
for pattern_n, overlap in enumerate(overlaps.T):
ax.plot(overlap, '-', label='m' +str(pattern_n))
ax.legend()
ax.set_ylim(-1.1, 1.1)
plt.show()
Explanation: Plotting
End of explanation
n_dim = 400
n_store = 7
T = 3.0
nn = Hopfield(n_dim=n_dim, T=T, prng=prng)
list_of_patterns = nn.generate_random_patterns(n_store)
nn.train(list_of_patterns)
Explanation: We can see here that with higher noise all the other overlaps stay close to 0 and only one state, the symmetric reflection of state m0, is recalled
Effect of very high noise
Finally, if the noise is very high, only the state where all the overlaps vanish will be stable.
End of explanation
overlaps = np.zeros((N, n_store))
for i in range(N):
nn.update_async()
overlaps[i, :] = nn.calculate_overlap()
Explanation: Then we run the network
End of explanation
# Plot this thing
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.set_xlabel('Iterations')
ax.set_ylabel('Overlap')
ax.axhline(y=0, color='k')
for pattern_n, overlap in enumerate(overlaps.T):
ax.plot(overlap, '-', label='m' +str(pattern_n))
ax.legend()
ax.set_ylim(-1.1, 1.1)
plt.show()
Explanation: Plotting
End of explanation
<END_TASK> |
15,641 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Online word2vec tutorial
So far, word2vec cannot increase the size of its vocabulary after initial training. To handle unknown words that are not in the word2vec vocabulary, you must retrain on the updated documents all over again.
In this tutorial, we introduce gensim's new feature, online vocabulary update. This additional feature overcomes the unknown-word problem: even after initial training, we can continuously add new vocabulary to the pre-trained word2vec model using this online feature.
Step1: Download wikipedia dump files
We use the past and the current version of wiki dump files as online training.
Step2: Convert two wikipedia dump files
To avoid a warning when converting the old version of the wikipedia dump, you should download the alternative wikicorpus.py in my repo.
Step3: Initial training
First we train word2vec using "enwiki-20101011-pages-articles.xml.bz2". After that, we update the model using "enwiki-20160820-pages-articles.xml.bz2".
Step4: The Japanese idol group "Babymetal" wasn't known worldwide in 2010, so the word "babymetal" is not in the old model's vocabulary.
Note
Step5: Online update
To use the online word2vec feature, set update=True when you call build_vocab on the new documents.
Step6: Model Comparison
By the online training, the size of the vocabulary increased by about 3 million words.
Step7: After online training, the word "babymetal" is added to the model. This word is similar to rock and metal bands.
Step8: The word "Zootopia" has come to refer to a Disney movie over the years.
In the past, the word "Zootopia" was used just for an annual summer concert put on by New York top-40 radio station Z100, so the word "zootopia" was similar to music festivals.
In 2016, Zootopia, an American 3D computer-animated comedy film, was released by Walt Disney Pictures. As a result, the word "zootopia" is now used in the context of animated films.
Python Code:
from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models.word2vec import Word2Vec, LineSentence
from pprint import pprint
from copy import deepcopy
from multiprocessing import cpu_count
Explanation: Online word2vec tutorial
So far, word2vec cannot increase the size of its vocabulary after initial training. To handle unknown words that are not in the word2vec vocabulary, you must retrain on the updated documents all over again.
In this tutorial, we introduce gensim's new feature, online vocabulary update. This additional feature overcomes the unknown-word problem: even after initial training, we can continuously add new vocabulary to the pre-trained word2vec model using this online feature.
End of explanation
%%bash
wget https://dumps.wikimedia.org/archive/2010/2010-11/enwiki/20101011/enwiki-20101011-pages-articles.xml.bz2
wget https://dumps.wikimedia.org/enwiki/20160820/enwiki-20160820-pages-articles.xml.bz2
Explanation: Download wikipedia dump files
We use the past and the current version of wiki dump files as online training.
End of explanation
old, new = [WikiCorpus('enwiki-{}-pages-articles.xml.bz2'.format(ymd)) for ymd in ['20101011', '20160820']]
def write_wiki(wiki, name, titles = []):
with open('{}.wiki'.format(name), 'wb') as f:
wiki.metadata = True
for text, (page_id, title) in wiki.get_texts():
if title not in titles:
f.write(b' '.join(text)+b'\n')
titles.append(title)
return titles
old_titles = write_wiki(old, 'old')
all_titles = write_wiki(new, 'new', old_titles)
oldwiki, newwiki = [LineSentence(f+'.wiki') for f in ['old', 'new']]
Explanation: Convert two wikipedia dump files
To avoid a warning when converting the old version of the wikipedia dump, you should download the alternative wikicorpus.py in my repo.
End of explanation
%%time
model = Word2Vec(oldwiki, min_count = 0, workers=cpu_count())
# model = Word2Vec.load('oldmodel')
oldmodel = deepcopy(model)
oldmodel.save('oldmodel')
Explanation: Initial training
First we train word2vec using "enwiki-20101011-pages-articles.xml.bz2". After that, we update the model using "enwiki-20160820-pages-articles.xml.bz2".
End of explanation
try:
print(oldmodel.most_similar('babymetal'))
except KeyError as e:
print(e)
Explanation: The Japanese idol group "Babymetal" wasn't known worldwide in 2010, so the word "babymetal" is not in the old model's vocabulary.
Note: In recent years, they became a famous idol group not only in Japan. They won many music awards and ran a world tour.
End of explanation
%%time
model.build_vocab(newwiki, update=True)
model.train(newwiki)
model.save('newmodel')
# model = Word2Vec.load('newmodel')
Explanation: Online update
To use the online word2vec feature, set update=True when you call build_vocab on the new documents.
End of explanation
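The essential effect of build_vocab(..., update=True) is that new tokens are appended after the existing ones, so the pre-trained vectors keep their positions. A hypothetical stdlib sketch of that bookkeeping (for illustration only, not gensim's actual implementation, which also handles counts and min_count pruning):

```python
def update_vocab(index2word, word2index, new_corpus):
    # Append unseen tokens; existing indices (and their vectors) are untouched
    for sentence in new_corpus:
        for word in sentence:
            if word not in word2index:
                word2index[word] = len(index2word)
                index2word.append(word)
    return index2word, word2index

i2w, w2i = ['rock', 'metal'], {'rock': 0, 'metal': 1}
update_vocab(i2w, w2i, [['babymetal', 'plays', 'metal']])
print(w2i)  # {'rock': 0, 'metal': 1, 'babymetal': 2, 'plays': 3}
```

After the vocabulary grows, the embedding matrix is extended with fresh rows for the new words, which then get trained on the new corpus.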
for m in ['oldmodel', 'model']:
print('The vocabulary size of the', m, 'is', len(eval(m).vocab))
Explanation: Model Comparison
Explanation: Model Comparison
By the online training, the size of the vocabulary increased by about 3 million words.
End of explanation
try:
pprint(model.most_similar('babymetal'))
except KeyError as e:
print(e)
Explanation: After online training, the word "babymetal" is added to the model. This word is similar to rock and metal bands.
End of explanation
w = 'zootopia'
for m in ['oldmodel', 'model']:
print('The count of the word,'+w+', is', eval(m).vocab[w].count, 'in', m)
pprint(eval(m).most_similar(w))
print('')
Explanation: The word "Zootopia" has come to refer to a Disney movie over the years.
In the past, the word "Zootopia" was used just for an annual summer concert put on by New York top-40 radio station Z100, so the word "zootopia" was similar to music festivals.
In 2016, Zootopia, an American 3D computer-animated comedy film, was released by Walt Disney Pictures. As a result, the word "zootopia" is now used in the context of animated films.
End of explanation
<END_TASK> |
15,642 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Reduce original dataset to questions
Step1: You only need to execute the setup cells once, uncomment to run. The dataset can be downloaded here.
Step2: Put them in a DataBunch
Our questions look like this now
Step3: To make it simple, we lowercase everything.
Step4: The first thing is that we will need to collate inputs and targets in a batch
Step5: Then we create a special DataBunch that uses this collate function.
Step6: And a subclass of TextList that will use this DataBunch class in the call .databunch and will use TextList to label (since our targets are other texts).
Step7: That's all we need to use the data block API!
Step8: We remove the items where one of the texts is more than 30 tokens long.
Step9: Model
Pretrained embeddings
To install fastText
Step10: We create an embedding module with the pretrained vectors and random data for the missing parts.
Step11: Free some RAM
Step12: QRNN seq2seq
Our model uses QRNNs at its base (you can use GRUs or LSTMs by adapting it a little bit). Using QRNNs requires that you have properly installed cuda (a version that matches your PyTorch install).
Step13: The model in itself consists of an encoder and a decoder
The encoder is a (quasi) recurrent neural net and we feed it our input sentence, producing an output (that we discard for now) and a hidden state. That hidden state is then given to the decoder (another RNN) which uses it in conjunction with the outputs it predicts to produce the translation. We loop until the decoder produces a padding token (or stop after 30 iterations to make sure it's not an infinite loop at the beginning of training).
Step14: Loss function
The loss pads output and target so that they are of the same size before using the usual flattened version of cross entropy. We do the same for accuracy.
Step15: Bleu metric (see dedicated notebook)
In translation, the metric usually used is BLEU; see the corresponding notebook for the details.
Step16: We load our pretrained embeddings to create the model.
Step17: So how good is our model? Let's see a few predictions.
Step18: It usually begins well, but falls back on easy words at the end of the question.
Teacher forcing
One way to help training is to help the decoder by feeding it the real targets instead of its predictions (if it starts with wrong words, it's very unlikely to give us the right translation). We do that all the time at the beginning, then progressively reduce the amount of teacher forcing.
Step19: Bidir
A second thing that might help is to use a bidirectional model for the encoder.
Step20: Attention
Attention is a technique that uses the output of our encoder | <ASSISTANT_TASK:>
Python Code:
path = Config().data_path()/'giga-fren'
Explanation: Reduce original dataset to questions
End of explanation
#! wget https://s3.amazonaws.com/fast-ai-nlp/giga-fren.tgz -P {path}
#! tar xf {path}/giga-fren.tgz -C {path}
# with open(path/'giga-fren.release2.fixed.fr') as f:
# fr = f.read().split('\n')
# with open(path/'giga-fren.release2.fixed.en') as f:
# en = f.read().split('\n')
# re_eq = re.compile('^(Wh[^?.!]+\?)')
# re_fq = re.compile('^([^?.!]+\?)')
# en_fname = path/'giga-fren.release2.fixed.en'
# fr_fname = path/'giga-fren.release2.fixed.fr'
# lines = ((re_eq.search(eq), re_fq.search(fq))
# for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
# qs = [(e.group(), f.group()) for e,f in lines if e and f]
# qs = [(q1,q2) for q1,q2 in qs]
# df = pd.DataFrame({'fr': [q[1] for q in qs], 'en': [q[0] for q in qs]}, columns = ['en', 'fr'])
# df.to_csv(path/'questions_easy.csv', index=False)
# del en, fr, lines, qs, df # free RAM or restart the nb
### fastText pre-trained word vectors https://fasttext.cc/docs/en/crawl-vectors.html
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.fr.300.bin.gz -P {path}
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.bin.gz -P {path}
#! gzip -d {path}/cc.fr.300.bin.gz
#! gzip -d {path}/cc.en.300.bin.gz
path.ls()
Explanation: You only need to execute the setup cells once, uncomment to run. The dataset can be downloaded here.
End of explanation
df = pd.read_csv(path/'questions_easy.csv')
df.head()
Explanation: Put them in a DataBunch
Our questions look like this now:
End of explanation
df['en'] = df['en'].apply(lambda x:x.lower())
df['fr'] = df['fr'].apply(lambda x:x.lower())
Explanation: To make it simple, we lowercase everything.
End of explanation
def seq2seq_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True, backwards:bool=False) -> Tuple[LongTensor, LongTensor]:
"Function that collect samples and adds padding. Flips token order if needed"
samples = to_data(samples)
max_len_x,max_len_y = max([len(s[0]) for s in samples]),max([len(s[1]) for s in samples])
res_x = torch.zeros(len(samples), max_len_x).long() + pad_idx
res_y = torch.zeros(len(samples), max_len_y).long() + pad_idx
if backwards: pad_first = not pad_first
for i,s in enumerate(samples):
if pad_first:
res_x[i,-len(s[0]):],res_y[i,-len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
else:
res_x[i,:len(s[0]):],res_y[i,:len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
if backwards: res_x,res_y = res_x.flip(1),res_y.flip(1)
return res_x,res_y
Explanation: The first thing is that we will need to collate inputs and targets in a batch: they have different lengths so we need to add padding to make the sequence length the same;
End of explanation
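Stripped of the torch tensors, the padding logic above reduces to this stdlib sketch: every sequence in the batch is padded to the longest one with pad_idx, either at the front or at the back.

```python
def pad_batch(seqs, pad_idx=1, pad_first=True):
    # Pad every sequence to the longest one in the batch
    max_len = max(len(s) for s in seqs)
    out = []
    for s in seqs:
        padding = [pad_idx] * (max_len - len(s))
        out.append(padding + list(s) if pad_first else list(s) + padding)
    return out

print(pad_batch([[5, 6, 7], [8]]))                   # [[5, 6, 7], [1, 1, 8]]
print(pad_batch([[5, 6, 7], [8]], pad_first=False))  # [[5, 6, 7], [8, 1, 1]]
```

In seq2seq_collate the inputs and the targets are each padded like this to their own maximum length, since the two languages rarely match in length.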
class Seq2SeqDataBunch(TextDataBunch):
"Create a `TextDataBunch` suitable for training an RNN classifier."
@classmethod
def create(cls, train_ds, valid_ds, test_ds=None, path:PathOrStr='.', bs:int=32, val_bs:int=None, pad_idx=1,
pad_first=False, device:torch.device=None, no_check:bool=False, backwards:bool=False, **dl_kwargs) -> DataBunch:
"Function that transform the `datasets` in a `DataBunch` for classification. Passes `**dl_kwargs` on to `DataLoader()`"
datasets = cls._init_ds(train_ds, valid_ds, test_ds)
val_bs = ifnone(val_bs, bs)
collate_fn = partial(seq2seq_collate, pad_idx=pad_idx, pad_first=pad_first, backwards=backwards)
train_sampler = SortishSampler(datasets[0].x, key=lambda t: len(datasets[0][t][0].data), bs=bs//2)
train_dl = DataLoader(datasets[0], batch_size=bs, sampler=train_sampler, drop_last=True, **dl_kwargs)
dataloaders = [train_dl]
for ds in datasets[1:]:
lengths = [len(t) for t in ds.x.items]
sampler = SortSampler(ds.x, key=lengths.__getitem__)
dataloaders.append(DataLoader(ds, batch_size=val_bs, sampler=sampler, **dl_kwargs))
return cls(*dataloaders, path=path, device=device, collate_fn=collate_fn, no_check=no_check)
Explanation: Then we create a special DataBunch that uses this collate function.
End of explanation
class Seq2SeqTextList(TextList):
_bunch = Seq2SeqDataBunch
_label_cls = TextList
Explanation: And a subclass of TextList that will use this DataBunch class in the call .databunch and will use TextList to label (since our targets are other texts).
End of explanation
src = Seq2SeqTextList.from_df(df, path = path, cols='fr').split_by_rand_pct().label_from_df(cols='en', label_cls=TextList)
np.percentile([len(o) for o in src.train.x.items] + [len(o) for o in src.valid.x.items], 90)
np.percentile([len(o) for o in src.train.y.items] + [len(o) for o in src.valid.y.items], 90)
Explanation: That's all we need to use the data block API!
End of explanation
src = src.filter_by_func(lambda x,y: len(x) > 30 or len(y) > 30)
len(src.train) + len(src.valid)
data = src.databunch()
data.save()
data = load_data(path)
data.show_batch()
Explanation: We remove the items where one of the texts is more than 30 tokens long.
End of explanation
# Installation: https://github.com/facebookresearch/fastText#building-fasttext-for-python
import fastText as ft
fr_vecs = ft.load_model(str((path/'cc.fr.300.bin')))
en_vecs = ft.load_model(str((path/'cc.en.300.bin')))
Explanation: Model
Pretrained embeddings
To install fastText:
$ git clone https://github.com/facebookresearch/fastText.git
$ cd fastText
$ pip install .
End of explanation
def create_emb(vecs, itos, em_sz=300, mult=1.):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
vec_dic = {w:vecs.get_word_vector(w) for w in vecs.get_words()}
miss = []
for i,w in enumerate(itos):
try: wgts[i] = tensor(vec_dic[w])
except: miss.append(w)
return emb
emb_enc = create_emb(fr_vecs, data.x.vocab.itos)
emb_dec = create_emb(en_vecs, data.y.vocab.itos)
torch.save(emb_enc, path/'models'/'fr_emb.pth')
torch.save(emb_dec, path/'models'/'en_emb.pth')
Explanation: We create an embedding module with the pretrained vectors and random data for the missing parts.
End of explanation
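The fill logic of create_emb, without torch: copy the pretrained vector when the word is known, otherwise keep a (random) row and record the miss. A small stdlib sketch of the same idea:

```python
import random

def fill_rows(itos, vec_dic, dim=3):
    # Known words get their pretrained vector; unknown words stay random
    rows, miss = [], []
    for w in itos:
        if w in vec_dic:
            rows.append(list(vec_dic[w]))
        else:
            rows.append([random.gauss(0, 1) for _ in range(dim)])
            miss.append(w)
    return rows, miss

rows, miss = fill_rows(['the', 'xxunk'], {'the': (0.1, 0.2, 0.3)})
print(miss)  # ['xxunk']
```

Special tokens like padding and the unknown token typically end up in the miss list and are learned from scratch during training.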
del fr_vecs
del en_vecs
Explanation: Free some RAM
End of explanation
from fastai.text.models.qrnn import QRNN, QRNNLayer
Explanation: QRNN seq2seq
Our model uses QRNNs at its base (you can use GRUs or LSTMs by adapting it a little bit). Using QRNNs requires that you have properly installed cuda (a version that matches your PyTorch install).
End of explanation
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
bs,sl = inp.size()
self.encoder.reset()
self.decoder.reset()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
outs = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
out, hid = self.decoder(emb, hid)
out = self.out(self.out_drop(out[:,0]))
outs.append(out)
dec_inp = out.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
return torch.stack(outs, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
Explanation: The model in itself consists of an encoder and a decoder
The encoder is a (quasi) recurrent neural net and we feed it our input sentence, producing an output (that we discard for now) and a hidden state. That hidden state is then given to the decoder (another RNN) which uses it in conjunction with the outputs it predicts to produce the translation. We loop until the decoder produces a padding token (or stop after 30 iterations to make sure it's not an infinite loop at the beginning of training).
End of explanation
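The decoding loop above, reduced to its control flow with a stub standing in for the decoder step (a hypothetical sketch, not the actual model):

```python
def greedy_decode(step, bos=0, pad=1, max_len=30):
    # Feed the decoder its own last prediction until it emits padding
    # or we hit the max_len safety cap.
    token, out = bos, []
    for _ in range(max_len):
        token = step(token)
        if token == pad:
            break
        out.append(token)
    return out

script = iter([5, 6, 1])  # stub decoder: emits 5, 6, then padding
print(greedy_decode(lambda t: next(script)))  # [5, 6]
```

In the real model, step is one pass through the embedding, the decoder QRNN, and the output linear layer, with argmax picking the next token.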
def seq2seq_loss(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
return CrossEntropyFlat()(out, targ)
def seq2seq_acc(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
out = out.argmax(2)
return (out==targ).float().mean()
Explanation: Loss function
The loss pads output and target so that they are of the same size before using the usual flattened version of cross entropy. We do the same for accuracy.
End of explanation
class NGram():
def __init__(self, ngram, max_n=5000): self.ngram,self.max_n = ngram,max_n
def __eq__(self, other):
if len(self.ngram) != len(other.ngram): return False
return np.all(np.array(self.ngram) == np.array(other.ngram))
def __hash__(self): return int(sum([o * self.max_n**i for i,o in enumerate(self.ngram)]))
def get_grams(x, n, max_n=5000):
return x if n==1 else [NGram(x[i:i+n], max_n=max_n) for i in range(len(x)-n+1)]
def get_correct_ngrams(pred, targ, n, max_n=5000):
pred_grams,targ_grams = get_grams(pred, n, max_n=max_n),get_grams(targ, n, max_n=max_n)
pred_cnt,targ_cnt = Counter(pred_grams),Counter(targ_grams)
return sum([min(c, targ_cnt[g]) for g,c in pred_cnt.items()]),len(pred_grams)
class CorpusBLEU(Callback):
def __init__(self, vocab_sz):
self.vocab_sz = vocab_sz
self.name = 'bleu'
def on_epoch_begin(self, **kwargs):
self.pred_len,self.targ_len,self.corrects,self.counts = 0,0,[0]*4,[0]*4
def on_batch_end(self, last_output, last_target, **kwargs):
last_output = last_output.argmax(dim=-1)
for pred,targ in zip(last_output.cpu().numpy(),last_target.cpu().numpy()):
self.pred_len += len(pred)
self.targ_len += len(targ)
for i in range(4):
c,t = get_correct_ngrams(pred, targ, i+1, max_n=self.vocab_sz)
self.corrects[i] += c
self.counts[i] += t
def on_epoch_end(self, last_metrics, **kwargs):
precs = [c/t for c,t in zip(self.corrects,self.counts)]
len_penalty = exp(1 - self.targ_len/self.pred_len) if self.pred_len < self.targ_len else 1
bleu = len_penalty * ((precs[0]*precs[1]*precs[2]*precs[3]) ** 0.25)
return add_metrics(last_metrics, bleu)
Explanation: Bleu metric (see dedicated notebook)
In translation, the metric usually used is BLEU; see the corresponding notebook for the details.
End of explanation
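The core of BLEU is the clipped n-gram precision computed by get_correct_ngrams above, combined with a brevity penalty for predictions shorter than the target. A stdlib sketch of the clipping:

```python
from collections import Counter
from math import exp

def clipped_precision(pred, targ, n):
    # Count predicted n-grams, but never credit one more often than it occurs in targ
    pred_counts = Counter(tuple(pred[i:i + n]) for i in range(len(pred) - n + 1))
    targ_counts = Counter(tuple(targ[i:i + n]) for i in range(len(targ) - n + 1))
    correct = sum(min(c, targ_counts[g]) for g, c in pred_counts.items())
    return correct, sum(pred_counts.values())

c, t = clipped_precision(['the', 'the', 'cat'], ['the', 'cat', 'sat'], 1)
print(c, t)  # 2 3: the second "the" is clipped
# Brevity penalty when the prediction (length 3) is shorter than the target (length 4):
print(exp(1 - 4 / 3))
```

CorpusBLEU accumulates these counts over the whole corpus for n = 1..4, then takes the geometric mean of the four precisions times the length penalty.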
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))])
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
Explanation: We load our pretrained embeddings to create the model.
End of explanation
def get_predictions(learn, ds_type=DatasetType.Valid):
learn.model.eval()
inputs, targets, outputs = [],[],[]
with torch.no_grad():
for xb,yb in progress_bar(learn.dl(ds_type)):
out = learn.model(xb)
for x,y,z in zip(xb,yb,out):
inputs.append(learn.data.train_ds.x.reconstruct(x))
targets.append(learn.data.train_ds.y.reconstruct(y))
outputs.append(learn.data.train_ds.y.reconstruct(z.argmax(1)))
return inputs, targets, outputs
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
Explanation: So how good is our model? Let's see a few predictions.
End of explanation
class TeacherForcing(LearnerCallback):
def __init__(self, learn, end_epoch):
super().__init__(learn)
self.end_epoch = end_epoch
def on_batch_begin(self, last_input, last_target, train, **kwargs):
if train: return {'last_input': [last_input, last_target]}
def on_epoch_begin(self, epoch, **kwargs):
self.learn.model.pr_force = 1 - 0.5 * epoch/self.end_epoch
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700],targets[700],outputs[700]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
#get_bleu(learn)
Explanation: It usually begins well, but falls back on easy words at the end of the question.
Teacher forcing
One way to help training is to help the decoder by feeding it the real targets instead of its predictions (if it starts with wrong words, it's very unlikely to give us the right translation). We do that all the time at the beginning, then progressively reduce the amount of teacher forcing.
End of explanation
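The schedule implemented by the TeacherForcing callback above is linear: the forcing probability starts at 1 and is annealed down to 0.5 by end_epoch.

```python
def teacher_forcing_prob(epoch, end_epoch):
    # pr_force = 1 - 0.5 * epoch / end_epoch, as set in on_epoch_begin above
    return 1 - 0.5 * epoch / end_epoch

print([teacher_forcing_prob(e, 8) for e in range(0, 9, 4)])  # [1.0, 0.75, 0.5]
```

So early epochs are almost always fed the ground-truth tokens, and by the end of training the decoder runs on its own predictions about half the time.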
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4001], targets[4001], outputs[4001]
#get_bleu(learn)
Explanation: Bidir
A second thing that might help is to use a bidirectional model for the encoder.
End of explanation
def init_param(*sz): return nn.Parameter(torch.randn(sz)/math.sqrt(sz[0]))
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
emb_sz = emb_dec.weight.size(1)
self.decoder = QRNN(emb_sz + 2*n_hid, emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_sz, emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data #Try tying
self.enc_att = nn.Linear(2*n_hid, emb_sz, bias=False)
self.hid_att = nn.Linear(emb_sz, emb_sz)
self.V = init_param(emb_sz)
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
enc_att = self.enc_att(enc_out)
for i in range(self.max_len):
hid_att = self.hid_att(hid[-1])
u = torch.tanh(enc_att + hid_att[:,None])
attn_wgts = F.softmax(u @ self.V, 1)
ctx = (attn_wgts[...,None] * enc_out).sum(1)
emb = self.emb_dec(dec_inp)
outp, hid = self.decoder(torch.cat([emb, ctx], 1)[:,None], hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 3e-3)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4002], targets[4002], outputs[4002]
Explanation: Attention
Attention is a technique that uses the output of our encoder: instead of discarding it entirely, we use it with our hidden state to pay attention to specific words in the input sentence for the predictions in the output sentence. Specifically, we compute attention weights, then feed the decoder, in addition to its regular input, a linear combination of the encoder outputs weighted by those attention weights.
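For a single decoding step, the weight computation can be sketched in plain NumPy (toy sizes; in the model above the projections play the role of the learned layers enc_att and hid_att and the scoring vector V):

```python
import numpy as np

sl, n_hid, emb_sz = 5, 4, 3            # toy sizes: source length, encoder hidden, embedding
enc_out = np.random.randn(sl, n_hid)   # one encoder output per source token
dec_hid = np.random.randn(emb_sz)      # current decoder hidden state
Wa = np.random.randn(n_hid, emb_sz)    # plays the role of enc_att
Ua = np.random.randn(emb_sz, emb_sz)   # plays the role of hid_att
V = np.random.randn(emb_sz)            # scoring vector

u = np.tanh(enc_out @ Wa + dec_hid @ Ua)      # (sl, emb_sz)
scores = u @ V                                # one score per source position
attn = np.exp(scores) / np.exp(scores).sum()  # softmax over source positions
ctx = attn @ enc_out                          # context: weighted sum of encoder outputs
```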
End of explanation
<END_TASK> |
15,643 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Let's take a look at the original images that you took
Step1: This image is not science-ready yet...
Dark image
Step2: Why is this?
Another interesting feature of CCD cameras is that the chips do not respond equally to the same light intensity. For example, if you illuminate the camera with uniform light and take an exposure (this is called a flat image), the resulting image is not uniform.
Step3: Let's create a better image!
Step4: Compare to the original!
Step5: Reduce the rest of images (in principle we should take a different bias image for each filter) because the CCD has different sensitivity at different wavelengths
Step6: An example from SDSS
Step7: If you want to know more about Jupyter | <ASSISTANT_TASK:>
Python Code:
science_image_path_g = 'data/seo_m66_g-band_180s_apagul_1.fits' #Type the path to your image
sci_g = fits.open(science_image_path_g)
sci_im_g = sci_g[0].data  # reuse the already opened file instead of opening it twice
plt.imshow(sci_im_g,cmap='gray', vmax=1800, norm=matplotlib.colors.LogNorm())
plt.colorbar()
Explanation: Let's take a look at the original images that you took
End of explanation
dark_image_path='data/dark.fits' #Type the path to your dark image
drk_im = fits.open(dark_image_path)[0].data
plt.imshow(drk_im,cmap='gray', vmax=2000)
plt.colorbar()
bias_image_path = 'data/bias.fits' #Type the path to your bias image
bias_image = fits.open(bias_image_path)[0].data
plt.imshow(bias_image, cmap='gray')
plt.colorbar()
plt.hist(drk_im.flatten());
plt.yscale('log')
plt.xlabel('Output counts')
plt.ylabel('Number of pixels')
Explanation: This image is not science-ready yet...
Dark image: If you take a shot with the shutter closed (i.e., no light/photons entering the camera), you still get a non-zero image.
End of explanation
flat_image_path = 'data/FLAT_g-band_2016-10-06_bin1_id5908.fits' #Type the path to your flat image here
flat_image = fits.open(flat_image_path)[0].data
#You can try cmap='hot' or cmap='jet' to see how it changes
plt.imshow(flat_image, cmap='gray')
plt.colorbar()
plt.hist(flat_image.flatten())
def reduce_image(sci_im, drk_im, flat_im, bias_im, filter_dark=True):
    from scipy.stats import mode
    from skimage.morphology import disk
    from skimage.filters import rank
    # First part: subtract the bias level from the dark image
    drk_im = drk_im - bias_im
    # The next part is optional and averages the dark image in a 10 pixel radius
    # to get rid of salt/pepper noise
    if filter_dark:
        selem = disk(10)  # we are going to perform averages in 10 pixel radius disks
        selem2 = disk(4)
        drk_im = rank.mean(drk_im, selem=selem)  # average to remove salt-and-pepper noise
        flat_im = rank.mean(flat_im, selem=selem2)
    # Second part: make every pixel have the same sensitivity
    # flat_im = (flat_im - drk_im)/mode(flat_im - drk_im, axis=None)[0]  # most common pixel value will equal 1
    flat_im = (flat_im - drk_im)/np.median(flat_im - drk_im)
    # Values are lower than 1 where the CCD is less sensitive and higher than 1 where it is more sensitive
    sci_im = (sci_im - drk_im)/flat_im
    return sci_im
Explanation: Why is this?
Another interesting feature of CCD cameras is that the chips do not respond equally to the same light intensity. For example, if you illuminate the camera with uniform light and take an exposure (this is called a flat image), the resulting image is not uniform.
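The arithmetic of the flat-field correction can be sketched with toy NumPy arrays (the numbers are invented; real data also needs the dark/bias handling from reduce_image above):

```python
import numpy as np

# Toy 2x2 "images": the flat says pixel (0, 0) is twice as sensitive as
# the rest, so its science counts must be divided by 2.
sci = np.array([[200., 100.],
                [100., 100.]])
flat = np.array([[2., 1.],
                 [1., 1.]])

flat_norm = flat / np.median(flat)  # normalise so a typical pixel is ~1
corrected = sci / flat_norm
print(corrected)  # all pixels end up at 100
```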
End of explanation
new_sci_image_g = reduce_image(sci_im_g,drk_im,flat_image,bias_image, filter_dark=False)
plt.imshow(new_sci_image_g, cmap='gray', vmax=4000, vmin=50, norm=matplotlib.colors.LogNorm())
plt.colorbar()
Explanation: Let's create a better image!
End of explanation
fig, ax = plt.subplots(nrows=1,ncols=3,figsize=(10,8))
ax[0].imshow(sci_im_g,cmap='gray',vmax=1800, norm=matplotlib.colors.LogNorm())
ax[0].set_title('Before reduction')
ax[1].imshow(new_sci_image_g,cmap='gray',vmax=2000, vmin=50, norm=matplotlib.colors.LogNorm())
ax[1].set_title('After reduction')
ax[2].imshow(sci_im_g-new_sci_image_g,cmap='gray', vmax=1050, vmin=1000)
ax[2].set_title('Difference')
science_image_path_r = 'data/seo_m66_r_180s_apagul_1.fits'
sci_im_r = fits.open(science_image_path_r)[0].data
science_image_path_i = 'data/seo_m66_i-band_180s_apagul_1.fits'
sci_im_i = fits.open(science_image_path_i)[0].data
flat_r = fits.open('data/FLAT_r-band_2016-10-06_bin1_id5906.fits')[0].data
flat_i = fits.open('data/FLAT_i-band_2016-10-06_bin1_id5907.fits')[0].data
Explanation: Compare to the original!
End of explanation
new_sci_image_r = reduce_image(sci_im_r,drk_im,flat_r,bias_image)
new_sci_image_i = reduce_image(sci_im_i,drk_im,flat_i,bias_image)
Explanation: Reduce the rest of images (in principle we should take a different bias image for each filter) because the CCD has different sensitivity at different wavelengths
End of explanation
# Read in the three images downloaded from here:
# g: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-g-001737-5-0039.fits.bz2
# r: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-r-001737-5-0039.fits.bz2
# i: http://dr13.sdss.org/sas/dr13/eboss/photoObj/frames/301/1737/5/frame-i-001737-5-0039.fits.bz2
g = fits.open('data/frame-g-001737-5-0039.fits.bz2')[0]
r = fits.open('data/frame-r-001737-5-0039.fits.bz2')[0]
i = fits.open('data/frame-i-001737-5-0039.fits.bz2')[0]
# remap r and i onto g
r_new, r_mask = reproject_interp(r, g.header)
i_new, i_mask = reproject_interp(i, g.header)
# zero out the unmapped values
i_new[np.logical_not(i_mask)] = 0
r_new[np.logical_not(r_mask)] = 0
# red=i, green=r, blue=g
# make a file with the default scaling
rgb_default = make_lupton_rgb(i_new, r_new, g.data, filename="ngc6976-default.jpeg")
# this scaling is very similar to the one used in Lupton et al. (2004)
rgb = make_lupton_rgb(i_new, r_new, g.data, Q=10, stretch=0.5, filename="ngc6976.jpeg")
plt.imshow(rgb)
Explanation: An example from SDSS:
End of explanation
positions = [(550., 600.), (450., 500.)] #Change it and include the position of an object in your image
apertures = CircularAperture(positions, r=20.)
phot_table = aperture_photometry(new_sci_image_g, apertures)
print(phot_table)
Explanation: If you want to know more about Jupyter:
https://github.com/fjaviersanchez/JupyterTutorial/blob/master/TutorialJupyter.ipynb
Aperture photometry
Astronomers use the magnitude scale to characterize the brightness of an object. With the magnitude scale you quantify the brightness of an object by comparing it with other objects. Astronomers have agreed to use "Vega" as the zero magnitude point (like the freezing point for water is the zero-point for the Celsius temperature scale). The magnitude scale goes "backwards" in the sense that brighter objects have smaller magnitude. For example the Sun has magnitude -27, the full Moon -13, and Venus -5.
How can we measure magnitudes from an image?
A first approach is to use an object whose magnitude we know, called a "standard", and refer the rest of the objects in an image to it.
But what do you use to count the total brightness of an object?
Use the brightest pixel?
Add the brightness in a certain radius?
Count only the pixels which belong to each object?
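As a sketch of that first approach, referring an object to a standard of known magnitude, the flux-to-magnitude conversion follows Pogson's relation $m - m_{\rm ref} = -2.5\log_{10}(F/F_{\rm ref})$; the fluxes below are invented:

```python
import math

def magnitude(flux, flux_ref, mag_ref):
    # Pogson's relation: a factor of 100 in flux is exactly 5 magnitudes.
    return mag_ref - 2.5 * math.log10(flux / flux_ref)

# An object 100x fainter than a magnitude-0 standard has magnitude 5.
print(magnitude(1.0, 100.0, 0.0))  # 5.0
```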
End of explanation
<END_TASK> |
15,644 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Marginalized Gaussian Mixture Model
Author
Step1: Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
Step2: A natural parameterization of the Gaussian mixture model is as the latent variable model
$$
\begin{align}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
z\ |\ \boldsymbol{w}
& \sim \textrm{Cat}(\boldsymbol{w}) \\
x\ |\ z
& \sim N(\mu_z, \tau^{-1}_z).
\end{align}
$$
An implementation of this parameterization in PyMC3 is available here. A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.
An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is
$$
\begin{align}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
f(x\ |\ \boldsymbol{w})
& = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),
\end{align}
$$
where
$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$
is the probability density function of the normal distribution.
Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the Stan community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the Stan User's Guide and Reference Manual.
PyMC3 supports marginalized Gaussian mixture models through its NormalMixture class. (It also supports marginalized general mixture models through its Mixture class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
Step3: We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
Step4: We can also sample from the model's posterior predictive distribution, as follows.
Step5: We see that the posterior predictive samples have a distribution quite close to that of the observed data. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
SEED = 383561
np.random.seed(SEED) # from random.org, for reproducibility
Explanation: Marginalized Gaussian Mixture Model
Author: Austin Rochford
End of explanation
N = 1000
W = np.array([0.35, 0.4, 0.25])
MU = np.array([0., 2., 5.])
SIGMA = np.array([0.5, 0.5, 1.])
component = np.random.choice(MU.size, size=N, p=W)
x = np.random.normal(MU[component], SIGMA[component], size=N)
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True, lw=0);
Explanation: Gaussian mixtures are a flexible class of models for data that exhibits subpopulation heterogeneity. A toy example of such a data set is shown below.
End of explanation
with pm.Model() as model:
w = pm.Dirichlet('w', np.ones_like(W))
mu = pm.Normal('mu', 0., 10., shape=W.size)
tau = pm.Gamma('tau', 1., 1., shape=W.size)
x_obs = pm.NormalMixture('x_obs', w, mu, tau=tau, observed=x)
with model:
step = pm.Metropolis()
trace_ = pm.sample(20000, step, random_seed=SEED)
trace = trace_[10000::10]
Explanation: A natural parameterization of the Gaussian mixture model is as the latent variable model
$$
\begin{align}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
z\ |\ \boldsymbol{w}
& \sim \textrm{Cat}(\boldsymbol{w}) \\
x\ |\ z
& \sim N(\mu_z, \tau^{-1}_z).
\end{align}
$$
An implementation of this parameterization in PyMC3 is available here. A drawback of this parameterization is that its posterior relies on sampling the discrete latent variable $z$. This reliance can cause slow mixing and ineffective exploration of the tails of the distribution.
An alternative, equivalent parameterization that addresses these problems is to marginalize over $z$. The marginalized model is
$$
\begin{align}
\mu_1, \ldots, \mu_K
& \sim N(0, \sigma^2) \\
\tau_1, \ldots, \tau_K
& \sim \textrm{Gamma}(a, b) \\
\boldsymbol{w}
& \sim \textrm{Dir}(\boldsymbol{\alpha}) \\
f(x\ |\ \boldsymbol{w})
& = \sum_{i = 1}^K w_i\ N(x\ |\ \mu_i, \tau^{-1}_i),
\end{align}
$$
where
$$N(x\ |\ \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{1}{2 \sigma^2} (x - \mu)^2\right)$$
is the probability density function of the normal distribution.
Marginalizing $z$ out of the model generally leads to faster mixing and better exploration of the tails of the posterior distribution. Marginalization over discrete parameters is a common trick in the Stan community, since Stan does not support sampling from discrete distributions. For further details on marginalization and several worked examples, see the Stan User's Guide and Reference Manual.
PyMC3 supports marginalized Gaussian mixture models through its NormalMixture class. (It also supports marginalized general mixture models through its Mixture class.) Below we specify and fit a marginalized Gaussian mixture model to this data in PyMC3.
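Under the hood, NormalMixture evaluates essentially the log of the weighted sum of component densities. A NumPy sketch for one observation (the tau values here correspond to the simulation's SIGMA via $\tau = 1/\sigma^2$):

```python
import numpy as np

def mixture_logp(x, w, mu, tau):
    # log f(x | w) = logsumexp_i [ log w_i + log N(x | mu_i, 1/tau_i) ]
    log_comp = (np.log(w)
                + 0.5 * np.log(tau / (2 * np.pi))
                - 0.5 * tau * (x - mu) ** 2)
    m = log_comp.max()  # stabilised log-sum-exp
    return m + np.log(np.exp(log_comp - m).sum())

w = np.array([0.35, 0.4, 0.25])
mu = np.array([0., 2., 5.])
tau = 1.0 / np.array([0.5, 0.5, 1.]) ** 2  # precisions from SIGMA
print(mixture_logp(2.0, w, mu, tau))
```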
End of explanation
pm.traceplot(trace, varnames=['w', 'mu']);
Explanation: We see in the following plot that the posterior distribution on the weights and the component means has captured the true value quite well.
End of explanation
with model:
ppc_trace = pm.sample_ppc(trace, 5000, random_seed=SEED)
Explanation: We can also sample from the model's posterior predictive distribution, as follows.
End of explanation
fig, ax = plt.subplots(figsize=(8, 6))
ax.hist(x, bins=30, normed=True,
histtype='step', lw=2,
label='Observed data');
ax.hist(ppc_trace['x_obs'], bins=30, normed=True,
histtype='step', lw=2,
label='Posterior predictive distribution');
ax.legend(loc=1);
Explanation: We see that the posterior predictive samples have a distribution quite close to that of the observed data.
End of explanation
<END_TASK> |
15,645 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Hands-on 1
Step1: Preprocessing Workflow Structure
So let's get our hands dirty. First things first, it's always good to know which interfaces you want to use in your workflow and in which order you want to execute them. For the preprocessing workflow, I recommend that we use the following nodes
Step2: Note
Step3: Gunzip
I've already created the Gunzip node as a template for the other nodes. Also, we've specified an in_file here so that we can directly test the nodes without worrying about the Input/Output data stream to the workflow. This will be taken care of in a later section.
Step4: Drop Dummy Scans
The functional images of this dataset were recorded with 4 dummy scans at the beginning (see the corresponding publication). But those dummy scans were not yet taken out from the functional images.
To better illustrate this, let's plot the time course of a random voxel of the just defined func_file
Step5: In the figure above, we see that at the very beginning there are extreme values, which hint at the fact that steady state wasn't reached yet. Therefore, we want to exclude the dummy scans from the original data. This can be achieved with FSL's ExtractROI.
Step6: This ExtractROI node can now be connected to the gunzip_func node from above. To do this, we use the following command
Step7: Slice Time Correction
Now on to the next step: let's use SPM's SliceTiming to correct for the slice-wise acquisition of the volumes. As a reminder, the tutorial dataset was recorded...
- with a time repetition (TR) of 2.5 seconds
- with 30 slices per volume
- in an interleaved fashion, i.e. slice order is [1, 3, 5, 7, ..., 2, 4, 6, ..., 30]
- with a time acquisition (TA) of 2.4167 seconds, i.e. TR-(TR/num_slices)
Step8: Now the next step is to connect the SliceTiming node to the rest of the workflow, i.e. the ExtractROI node.
Step9: Motion Correction
To correct for motion in the scanner, we will be using FSL's MCFLIRT.
Step10: Connect the MCFLIRT node to the rest of the workflow.
Step11: Artifact Detection
We will use the really cool and useful ArtifactDetection tool from Nipype to detect motion and intensity outliers in the functional images. The interface is initiated as follows
Step12: The parameters above mean the following
Step13: Segmentation of anatomical image
Now let's work on the anatomical image. In particular, let's use SPM's NewSegment to create probability maps for the gray matter, white matter tissue and CSF.
Step14: We will again be using a Gunzip node to unzip the anatomical image that we then want to use as input to the segmentation node. We again also need to specify the anatomical image that we want to use in this case. As before, this will later also be handled directly by the Input/Output stream.
Step15: Now we can connect the NewSegment node to the rest of the workflow.
Step16: Compute Coregistration Matrix
As a next step, we will make sure that the functional images are coregistered to the anatomical image. For this, we will use FSL's FLIRT function. As we just created a white matter probability map, we can use this together with the Boundary-Based Registration (BBR) cost function to optimize the image coregistration. As some helpful notes...
- use a degree of freedom of 6
- specify the cost function as bbr
- use the schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch'
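Putting those notes together, the node initiation could look like this (a sketch, runnable only in an environment with Nipype and FSL installed; the node name and the NIFTI output type are choices, not requirements):

```python
from nipype import Node
from nipype.interfaces.fsl import FLIRT

coreg = Node(FLIRT(dof=6,
                   cost='bbr',
                   schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch',
                   output_type='NIFTI'),
             name="coreg")
```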
Step17: As mentioned above, the bbr routine can use the subject-specific white matter probability map to guide the coregistration. But for this, we need to create a binary mask out of the WM probability map. This can easily be done by FSL's Threshold interface.
Step18: Now, to select the WM probability map that the NewSegment node created, we need some helper function, because the output field partial_volume_files from the segmentation node gives us a list of files, i.e. [[GM_prob], [WM_prob], [], [], [], []]. Therefore, using the following function, we can select the WM probability map from this list.
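A minimal version of such a helper (the name get_wm and the exact index are assumptions; double-check which entry of your segmentation output actually holds the WM map):

```python
def get_wm(files):
    # Hypothetical helper: assumes the WM probability map is the second
    # entry, as in [[GM_prob], [WM_prob], ...], and unwraps the inner list.
    return files[1][0]

seg_files = [['gm_prob.nii'], ['wm_prob.nii'], [], [], [], []]
print(get_wm(seg_files))  # wm_prob.nii
```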
Step19: Now we can just connect this Threshold node to the coregistration node from above.
Step20: Apply Coregistration Matrix to functional image
Now that we know the coregistration matrix to correctly overlay the functional mean image on the subject-specific anatomy, we need to apply the coregistration to the whole time series. This can be achieved with FSL's FLIRT as follows
Step21: <span style="color
Step22: Smoothing
Next step is image smoothing. The most simple way to do this is to use FSL's or SPM's Smooth function. But for learning purposes, let's use FSL's SUSAN workflow as it is implemented in Nipype. Note that this time, we are importing a workflow instead of an interface.
Step23: If you type create_susan_smooth? you can see how to specify the input variables to the susan workflow. In particular, they are...
- fwhm
Step24: Create Binary Mask
There are many possible approaches to masking your functional images: not masking at all, using a simple brain mask, or using a mask that only considers a certain kind of brain tissue, e.g. gray matter.
For the current example, we want to create a dilated gray matter mask. For this purpose we need to
Step25: The second and third step can luckily be done with just one node. We can take almost the same Threshold node as above. We just need to add another additional argument
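Conceptually, the thresholding-plus-dilation can be sketched on a toy 2-D array (real masks are 3-D and the interface handles this for you; the values below are invented):

```python
import numpy as np

prob = np.array([[0.1, 0.6, 0.2],
                 [0.7, 0.9, 0.1],
                 [0.2, 0.1, 0.0]])

mask = prob > 0.5        # binarise the probability map
dil = mask.copy()        # dilate by one voxel (4-neighbourhood)
dil[1:, :] |= mask[:-1, :]
dil[:-1, :] |= mask[1:, :]
dil[:, 1:] |= mask[:, :-1]
dil[:, :-1] |= mask[:, 1:]

print(mask.sum(), dil.sum())  # 3 8
```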
Step26: Now we can connect the resample and the gray matter mask node to the segmentation node and each other.
Step27: This should do the trick.
Apply the binary mask
Now we can connect this dilated gray matter mask to the susan node, as well as actually applying this to the resulting smoothed images.
Step28: To apply the mask to the smoothed functional images, we will use FSL's ApplyMask interface.
Step29: Important
Step30: Remove linear trends in functional images
Last but not least, let's use Nipype's TSNR module to remove linear and quadratic trends in the smoothed functional images. For this, you only have to specify the regress_poly parameter in the node initiation.
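The idea behind regress_poly can be sketched on a single voxel time series in NumPy (toy data): fit a polynomial up to the given degree and keep the residuals plus the mean:

```python
import numpy as np

t = np.arange(100, dtype=float)
signal = np.sin(t / 5.0)               # the fluctuation we care about
drift = 0.05 * t + 0.001 * t ** 2      # linear + quadratic scanner drift
data = signal + drift

coefs = np.polyfit(t, data, deg=2)     # regress_poly=2: fit up to a quadratic
trend = np.polyval(coefs, t)
detrended = data - trend + data.mean()  # remove the trend, keep the mean level

# The drift-dominated range collapses back towards the sine's range.
print(np.ptp(data), np.ptp(detrended))
```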
Step31: Datainput with SelectFiles and iterables
This is all well and good. But so far we still had to specify the input values for gunzip_anat and gunzip_func ourselves. How can we scale this up to multiple subjects and/or multiple functional images and make the workflow take the input directly from the BIDS dataset?
For this, we need SelectFiles and iterables! It's rather simple, specify a template and fill-up the placeholder variables.
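For this dataset the templates could look roughly like this (a sketch: the keys and placeholder names are assumptions based on the paths used earlier); SelectFiles fills the {}-placeholders just like str.format does:

```python
templates = {
    'anat': 'sub-{subject_id}/ses-{ses_name}/anat/'
            'sub-{subject_id}_ses-{ses_name}_T1w.nii.gz',
    'func': 'sub-{subject_id}/ses-{ses_name}/func/'
            'sub-{subject_id}_ses-{ses_name}_task-{task_name}_bold.nii.gz',
}

# This is essentially what SelectFiles does with the iterable values:
path = templates['func'].format(subject_id='07', ses_name='test',
                                task_name='fingerfootlips')
print(path)
```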
Step32: Now we can specify over which subjects the workflow should iterate. To test the workflow, let's still just look at subject 7.
Step33: Visualize the workflow
Now that we're done, let's look at the workflow that we just created.
Step34: Run the Workflow
Now we are ready to run the workflow! Be careful about the n_procs parameter if you run a workflow in 'MultiProc' mode. n_procs specifies the number of jobs/cores your computer will use to run the workflow. If this number is too high your computer will try to execute too many things at once and will most likely crash.
Note
Step35: Inspect output
What did we actually do? Let's look at all the data that was created.
Step36: But what did we do specifically? Well, let's investigate.
Motion Correction and Artifact Detection
How much did the subject move in the scanner and were there any outliers in the functional images?
Step37: The motion parameters seem to look ok. What about the detection of artifacts?
Step38: Which volumes are problematic?
Step39: Masks and Probability maps
Let's see what all the masks and probability maps look like. For this, we will use nilearn's plot_anat function.
Step40: First, let's look at the tissue probability maps.
Step41: And what does the gray matter mask that we used on the functional images look like?
Step42: Functional Image transformations
Let's also investigate the transformation that we applied to the functional images.
Step43: That's all nice and beautiful, but what did smoothing and detrending actually do to the data?
Step44: Data output with DataSink
The results look fine, but we don't need all those temporary files. So let's use Datasink to keep only those files that we actually need for the 1st and 2nd level analysis.
Step45: Now the next step is to specify all the output that we want to keep in our output folder output. Make sure to keep
Step46: Run the workflow
After adding the datasink folder, let's run the preprocessing workflow again.
Step47: Let's look now at the output of this datasink folder.
Step48: Much better! But we're still not there yet. There are many unnecessary file specifiers that we can get rid of. To do so, we can use DataSink's substitutions parameter. For this, we create a list of tuples
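Each tuple is a (pattern, replacement) pair applied to the output paths; conceptually it is just a chain of string replacements (the example patterns below are invented):

```python
substitutions = [('_subject_id_', 'sub-'),
                 ('_fwhm_', 'fwhm-')]

def apply_substitutions(path, subs):
    # What DataSink effectively does with its `substitutions` list.
    for pattern, replacement in subs:
        path = path.replace(pattern, replacement)
    return path

print(apply_substitutions('preproc/_subject_id_07/_fwhm_4/bold.nii',
                          substitutions))  # preproc/sub-07/fwhm-4/bold.nii
```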
Step49: Before we run the preprocessing workflow again, let's first delete the current output folder
Step50: Run Preprocessing workflow on 6 right-handed subjects
Perfect! Now let's run the whole workflow for right-handed subjects. For this, you just need to change the subject_list variable and run again the places where this variable is used (i.e. sf.iterables and in the DataSink substitutions).
Step51: Now we can run the workflow again, this time for all right handed subjects in parallel. | <ASSISTANT_TASK:>
Python Code:
%%bash
datalad get -J 4 -d /data/ds000114 \
/data/ds000114/sub-0[234789]/ses-test/anat/sub-0[234789]_ses-test_T1w.nii.gz \
/data/ds000114/sub-0[234789]/ses-test/func/*fingerfootlips*
Explanation: Hands-on 1: How to create a fMRI preprocessing workflow
The purpose of this section is that you set up a complete fMRI analysis workflow yourself, so that in the end you are able to perform the analysis from A to Z, i.e. from preprocessing to group analysis. This section will cover the preprocessing part, and the section Hands-on 2: Analysis will handle the analysis part.
We will use this opportunity to show you some nice additional interfaces/nodes that might not be relevant to your usual analysis. But it's always nice to know that they exist. And hopefully, this will encourage you to investigate all other interfaces that Nipype can bring to the tip of your finger.
Preparation
Before we can start with anything we first need to download the data. For this hands-on, we will only use the right-handed subjects 2-4 and 7-9. This can be done very quickly with the following datalad command.
Note: This might take a while, as datalad needs to download ~200MB of data
End of explanation
from nilearn import plotting
%matplotlib inline
# Get the Node and Workflow object
from nipype import Node, Workflow
# Specify which SPM to use
from nipype.interfaces.matlab import MatlabCommand
MatlabCommand.set_default_paths('/opt/spm12-r7219/spm12_mcr/spm12')
Explanation: Preprocessing Workflow Structure
So let's get our hands dirty. First things first, it's always good to know which interfaces you want to use in your workflow and in which order you want to execute them. For the preprocessing workflow, I recommend that we use the following nodes:
1. Gunzip (Nipype)
2. Drop Dummy Scans (FSL)
3. Slice Time Correction (SPM)
4. Motion Correction (SPM)
5. Artifact Detection
6. Segmentation (SPM)
7. Coregistration (FSL)
8. Smoothing (FSL)
9. Apply Binary Mask (FSL)
10. Remove Linear Trends (Nipype)
Note: This workflow might be overkill concerning data manipulation, but it hopefully serves as a good Nipype exercise.
Imports
It's always best to have all relevant module imports at the beginning of your script. So let's import what we most certainly need.
End of explanation
# Create the workflow here
# Hint: use 'base_dir' to specify where to store the working directory
preproc = Workflow(name='work_preproc', base_dir='/output/')
Explanation: Note: Ideally you would also put the imports of all the interfaces that you use here at the top. But as we will develop the workflow step by step, we can also import the relevant modules as we go.
Create Nodes and Workflow connections
Let's create all the nodes that we need! Make sure to specify all relevant inputs and keep in mind which ones you later on need to connect in your pipeline.
Workflow
We recommend creating the workflow and establishing all its connections at a later place in your script. This helps to keep everything nicely together. But for this hands-on example, it makes sense to establish the connections between the nodes as we go.
And for this, we first need to create a workflow:
End of explanation
from nipype.algorithms.misc import Gunzip
# Specify example input file
func_file = '/data/ds000114/sub-07/ses-test/func/sub-07_ses-test_task-fingerfootlips_bold.nii.gz'
# Initiate Gunzip node
gunzip_func = Node(Gunzip(in_file=func_file), name='gunzip_func')
Explanation: Gunzip
I've already created the Gunzip node as a template for the other nodes. Also, we've specified an in_file here so that we can directly test the nodes without worrying about the Input/Output data stream to the workflow. This will be taken care of in a later section.
End of explanation
import nibabel as nb
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(nb.load(func_file).get_fdata()[32, 32, 15, :]);
Explanation: Drop Dummy Scans
The functional images of this dataset were recorded with 4 dummy scans at the beginning (see the corresponding publication). But those dummy scans were not yet taken out from the functional images.
To better illustrate this, let's plot the time course of a random voxel of the just defined func_file:
End of explanation
from nipype.interfaces.fsl import ExtractROI
extract = Node(ExtractROI(t_min=4, t_size=-1, output_type='NIFTI'),
name="extract")
Explanation: In the figure above, we see that at the very beginning there are extreme values, which hint at the fact that steady state wasn't reached yet. Therefore, we want to exclude the dummy scans from the original data. This can be achieved with FSL's ExtractROI.
End of explanation
preproc.connect([(gunzip_func, extract, [('out_file', 'in_file')])])
Explanation: This ExtractROI node can now be connected to the gunzip_func node from above. To do this, we use the following command:
End of explanation
from nipype.interfaces.spm import SliceTiming
slice_order = list(range(1, 31, 2)) + list(range(2, 31, 2))
print(slice_order)
# Initiate SliceTiming node here
slicetime = Node(SliceTiming(num_slices=30,
ref_slice=15,
slice_order=slice_order,
time_repetition=2.5,
time_acquisition=2.5-(2.5/30)),
name='slicetime')
Explanation: Slice Time Correction
Now on to the next step: let's use SPM's SliceTiming to correct for the slice-wise acquisition of the volumes. As a reminder, the tutorial dataset was recorded...
- with a time repetition (TR) of 2.5 seconds
- with 30 slices per volume
- in an interleaved fashion, i.e. slice order is [1, 3, 5, 7, ..., 2, 4, 6, ..., 30]
- with a time acquisition (TA) of 2.4167 seconds, i.e. TR-(TR/num_slices)
End of explanation
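The TA bullet above follows directly from TR and the slice count; here is a quick plain-Python sanity check (no Nipype needed) of both the interleaved slice order and the TA value used in the node:

```python
# Acquisition parameters from the bullets above
TR = 2.5          # time repetition in seconds
num_slices = 30

# Interleaved acquisition: odd slices first, then even slices
slice_order = list(range(1, num_slices + 1, 2)) + list(range(2, num_slices + 1, 2))

# TA = TR - (TR / num_slices)
TA = TR - (TR / num_slices)

print(slice_order[:5], slice_order[-1])  # [1, 3, 5, 7, 9] 30
print(round(TA, 4))                      # 2.4167
```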
# Connect SliceTiming node to the other nodes here
preproc.connect([(extract, slicetime, [('roi_file', 'in_files')])])
Explanation: Now the next step is to connect the SliceTiming node to the rest of the workflow, i.e. the ExtractROI node.
End of explanation
from nipype.interfaces.fsl import MCFLIRT
# Initate MCFLIRT node here
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name="mcflirt")
Explanation: Motion Correction
To correct for motion in the scanner, we will be using FSL's MCFLIRT.
End of explanation
# Connect MCFLIRT node to the other nodes here
preproc.connect([(slicetime, mcflirt, [('timecorrected_files', 'in_file')])])
Explanation: Connect the MCFLIRT node to the rest of the workflow.
End of explanation
from nipype.algorithms.rapidart import ArtifactDetect
art = Node(ArtifactDetect(norm_threshold=2,
zintensity_threshold=3,
mask_type='spm_global',
parameter_source='FSL',
use_differences=[True, False],
plot_type='svg'),
name="art")
Explanation: Artifact Detection
We will use the really cool and useful ArtifactDetect tool from Nipype to detect motion and intensity outliers in the functional images. The interface is initiated as follows:
End of explanation
preproc.connect([(mcflirt, art, [('out_file', 'realigned_files'),
('par_file', 'realignment_parameters')])
])
Explanation: The parameters above mean the following:
- norm_threshold - Threshold to use to detect motion-related outliers when composite motion is being used
- zintensity_threshold - Intensity Z-threshold use to detection images that deviate from the mean
- mask_type - Type of mask that should be used to mask the functional data. spm_global uses an spm_global like calculation to determine the brain mask
- parameter_source - Source of movement parameters
- use_differences - If you want to use differences between successive motion (first element) and intensity parameter (second element) estimates in order to determine outliers
And this is how you connect this node to the rest of the workflow:
End of explanation
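To build intuition for the zintensity_threshold parameter, here is a toy plain-Python sketch of z-score-based outlier flagging on a hypothetical global-intensity series. This is only an illustration of the idea, not ArtifactDetect's actual algorithm:

```python
# Toy global-intensity series with one spike (hypothetical numbers)
values = [100.0] * 10 + [160.0] + [100.0] * 9

mean = sum(values) / len(values)  # 103.0
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

zintensity_threshold = 3
outliers = [i for i, v in enumerate(values)
            if abs(v - mean) / std > zintensity_threshold]
print(outliers)  # [10]
```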
from nipype.interfaces.spm import NewSegment
# Use the following tissue specification to get a GM and WM probability map
tpm_img ='/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
tissue1 = ((tpm_img, 1), 1, (True,False), (False, False))
tissue2 = ((tpm_img, 2), 1, (True,False), (False, False))
tissue3 = ((tpm_img, 3), 2, (True,False), (False, False))
tissue4 = ((tpm_img, 4), 3, (False,False), (False, False))
tissue5 = ((tpm_img, 5), 4, (False,False), (False, False))
tissue6 = ((tpm_img, 6), 2, (False,False), (False, False))
tissues = [tissue1, tissue2, tissue3, tissue4, tissue5, tissue6]
# Initiate NewSegment node here
segment = Node(NewSegment(tissues=tissues), name='segment')
Explanation: Segmentation of anatomical image
Now let's work on the anatomical image. In particular, let's use SPM's NewSegment to create probability maps for the gray matter, white matter, and CSF tissue.
End of explanation
# Specify example input file
anat_file = '/data/ds000114/sub-07/ses-test/anat/sub-07_ses-test_T1w.nii.gz'
# Initiate Gunzip node
gunzip_anat = Node(Gunzip(in_file=anat_file), name='gunzip_anat')
Explanation: We will again be using a Gunzip node to unzip the anatomical image that we then want to use as input to the segmentation node. We also need to specify the anatomical image that we want to use in this case. As before, this will later be handled directly by the Input/Output stream.
End of explanation
# Connect NewSegment node to the other nodes here
preproc.connect([(gunzip_anat, segment, [('out_file', 'channel_files')])])
Explanation: Now we can connect the NewSegment node to the rest of the workflow.
End of explanation
from nipype.interfaces.fsl import FLIRT
# Initiate FLIRT node here
coreg = Node(FLIRT(dof=6,
cost='bbr',
schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch',
output_type='NIFTI'),
name="coreg")
# Connect FLIRT node to the other nodes here
preproc.connect([(gunzip_anat, coreg, [('out_file', 'reference')]),
(mcflirt, coreg, [('mean_img', 'in_file')])
])
Explanation: Compute Coregistration Matrix
As a next step, we will make sure that the functional images are coregistered to the anatomical image. For this, we will use FSL's FLIRT function. As we just created a white matter probability map, we can use this together with the Boundary-Based Registration (BBR) cost function to optimize the image coregistration. As some helpful notes...
- use a degree of freedom of 6
- specify the cost function as bbr
- use the schedule='/usr/share/fsl/5.0/etc/flirtsch/bbr.sch'
End of explanation
from nipype.interfaces.fsl import Threshold
# Threshold - Threshold WM probability image
threshold_WM = Node(Threshold(thresh=0.5,
args='-bin',
output_type='NIFTI'),
name="threshold_WM")
Explanation: As mentioned above, the bbr routine can use the subject-specific white matter probability map to guide the coregistration. But for this, we need to create a binary mask out of the WM probability map. This can easily be done by FSL's Threshold interface.
End of explanation
# Select WM segmentation file from segmentation output
def get_wm(files):
return files[1][0]
# Connecting the segmentation node with the threshold node
preproc.connect([(segment, threshold_WM, [(('native_class_images', get_wm),
'in_file')])])
Explanation: Now, to select the WM probability map that the NewSegment node created, we need a small helper function. The output field native_class_images from the segmentation node gives us a list of file lists, one per tissue class, i.e. [[GM_prob], [WM_prob], [CSF_prob], ...]. Therefore, using the following function, we can select only the WM probability map, i.e. the second element of this list.
End of explanation
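To see what the get_wm helper does, here is a tiny plain-Python demonstration with hypothetical file names (the real entries are produced by NewSegment):

```python
# Hypothetical NewSegment-style output: one file list per tissue class
native_class_images = [['c1_T1w.nii'],  # gray matter
                       ['c2_T1w.nii'],  # white matter
                       ['c3_T1w.nii']]  # CSF

def get_wm(files):
    # Pick the white matter entry (second list) and unwrap the single file
    return files[1][0]

wm_file = get_wm(native_class_images)
print(wm_file)  # c2_T1w.nii
```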
# Connect Threshold node to coregistration node above here
preproc.connect([(threshold_WM, coreg, [('out_file', 'wm_seg')])])
Explanation: Now we can just connect this Threshold node to the coregistration node from above.
End of explanation
# Specify the isometric voxel resolution you want after coregistration
desired_voxel_iso = 4
# Apply coregistration warp to functional images
applywarp = Node(FLIRT(interp='spline',
apply_isoxfm=desired_voxel_iso,
output_type='NIFTI'),
name="applywarp")
Explanation: Apply Coregistration Matrix to functional image
Now that we know the coregistration matrix to correctly overlay the functional mean image on the subject-specific anatomy, we need to apply the coregistration to the whole time series. This can be achieved with FSL's FLIRT as follows:
End of explanation
# Connecting the ApplyWarp node to all the other nodes
preproc.connect([(mcflirt, applywarp, [('out_file', 'in_file')]),
(coreg, applywarp, [('out_matrix_file', 'in_matrix_file')]),
(gunzip_anat, applywarp, [('out_file', 'reference')])
])
Explanation: <span style="color:red">Important</span>: As you can see above, we also specified a variable desired_voxel_iso. This is very important at this stage, otherwise FLIRT will transform your functional images to the resolution of the anatomical image, which will dramatically increase the file size (e.g. to 1-10 GB per file). If you don't want to change the voxel resolution, use the additional parameter no_resample=True. Importantly, for this to work, you still need to define apply_isoxfm.
End of explanation
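As a sketch of the alternative mentioned above, these are the keyword arguments (collected here as a plain dict for illustration, not an actual Nipype node) one would pass to FLIRT to keep the native voxel resolution:

```python
# Hypothetical parameter set for an apply-warp FLIRT node that keeps
# the native voxel resolution instead of resampling to 4 mm isotropic
applywarp_keep_res = dict(interp='spline',
                          apply_isoxfm=4,     # still needs to be defined for no_resample to work
                          no_resample=True,   # keep the input voxel resolution
                          output_type='NIFTI')
print(sorted(applywarp_keep_res))
```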
from niflow.nipype1.workflows.fmri.fsl.preprocess import create_susan_smooth
Explanation: Smoothing
The next step is image smoothing. The simplest way to do this is to use FSL's or SPM's Smooth function. But for learning purposes, let's use FSL's SUSAN workflow as it is implemented in Nipype. Note that this time, we are importing a workflow instead of an interface.
End of explanation
# Initiate SUSAN workflow here
susan = create_susan_smooth(name='susan')
susan.inputs.inputnode.fwhm = 4
# Connect Threshold node to coregistration node above here
preproc.connect([(applywarp, susan, [('out_file', 'inputnode.in_files')])])
Explanation: If you type create_susan_smooth? you can see how to specify the input variables to the susan workflow. In particular, they are...
- fwhm: set this value to 4 (or whichever value you want)
- mask_file: will be created in a later step
- in_file: will be handled while connection to other nodes in the preproc workflow
End of explanation
from nipype.interfaces.fsl import FLIRT
# Initiate resample node
resample = Node(FLIRT(apply_isoxfm=desired_voxel_iso,
output_type='NIFTI'),
name="resample")
Explanation: Create Binary Mask
There are many possible approaches to masking your functional images: not masking at all, using a simple brain mask, or using a mask that only considers a certain kind of brain tissue, e.g. gray matter.
For the current example, we want to create a dilated gray matter mask. For this purpose we need to:
1. Resample the gray matter probability map to the same resolution as the functional images
2. Threshold this resampled probability map at a specific value
3. Dilate this mask by some voxels to make the mask less conservative and more inclusive
The first step can be done in many ways (e.g. using FreeSurfer's mri_convert or nibabel), but in our case, we will use FSL's FLIRT. The trick is to use the probability map as both the input file and the reference file.
End of explanation
from nipype.interfaces.fsl import Threshold
# Threshold - Threshold GM probability image
mask_GM = Node(Threshold(thresh=0.5,
args='-bin -dilF',
output_type='NIFTI'),
name="mask_GM")
# Select GM segmentation file from segmentation output
def get_gm(files):
return files[0][0]
Explanation: The second and third step can luckily be done with just one node. We can take almost the same Threshold node as above; we just need to add one additional argument: -dilF, which applies a maximum filter over all voxels.
End of explanation
preproc.connect([(segment, resample, [(('native_class_images', get_gm), 'in_file'),
(('native_class_images', get_gm), 'reference')
]),
(resample, mask_GM, [('out_file', 'in_file')])
])
Explanation: Now we can connect the resample and the gray matter mask node to the segmentation node and each other.
End of explanation
# Connect gray matter Mask node to the susan workflow here
preproc.connect([(mask_GM, susan, [('out_file', 'inputnode.mask_file')])])
Explanation: This should do the trick.
Apply the binary mask
Now we can connect this dilated gray matter mask to the susan node, as well as actually apply it to the resulting smoothed images.
End of explanation
from nipype.interfaces.fsl import ApplyMask
Explanation: To apply the mask to the smoothed functional images, we will use FSL's ApplyMask interface.
End of explanation
from nipype import MapNode
# Initiate ApplyMask node here
mask_func = MapNode(ApplyMask(output_type='NIFTI'),
name="mask_func",
iterfield=["in_file"])
# Connect smoothed susan output file to ApplyMask node here
preproc.connect([(susan, mask_func, [('outputnode.smoothed_files', 'in_file')]),
(mask_GM, mask_func, [('out_file', 'mask_file')])
])
Explanation: Important: The susan workflow gives out a list of files, i.e. [smoothed_func.nii], instead of just the filename directly. If we used a normal Node for ApplyMask, this would lead to the following error:
TraitError: The 'in_file' trait of an ApplyMaskInput instance must be an existing file name, but a value of ['/output/work_preproc/susan/smooth/mapflow/_smooth0/asub-07_ses-test_task-fingerfootlips_bold_mcf_flirt_smooth.nii.gz'] <class 'list'> was specified.
To prevent this, we will be using a MapNode and specify in_file as its iterfield. This way, the node is capable of handling a list of inputs, as it knows that it has to apply itself iteratively to each element of the input list.
End of explanation
from nipype.algorithms.confounds import TSNR
# Initiate TSNR node here
detrend = Node(TSNR(regress_poly=2), name="detrend")
# Connect the detrend node to the other nodes here
preproc.connect([(mask_func, detrend, [('out_file', 'in_file')])])
Explanation: Remove linear trends in functional images
Last but not least. Let's use Nipype's TSNR module to remove linear and quadratic trends in the functionally smoothed images. For this, you only have to specify the regress_poly parameter in the node initiation.
End of explanation
# Import the SelectFiles
from nipype import SelectFiles
# String template with {}-based strings
templates = {'anat': 'sub-{subject_id}/ses-{ses_id}/anat/'
'sub-{subject_id}_ses-test_T1w.nii.gz',
'func': 'sub-{subject_id}/ses-{ses_id}/func/'
'sub-{subject_id}_ses-{ses_id}_task-{task_id}_bold.nii.gz'}
# Create SelectFiles node
sf = Node(SelectFiles(templates,
base_directory='/data/ds000114',
sort_filelist=True),
name='selectfiles')
sf.inputs.ses_id='test'
sf.inputs.task_id='fingerfootlips'
Explanation: Data input with SelectFiles and iterables
This is all nice and well. But so far we still had to specify the input values for gunzip_anat and gunzip_func ourselves. How can we scale this up to multiple subjects and/or multiple functional images and make the workflow take the input directly from the BIDS dataset?
For this, we need SelectFiles and iterables! It's rather simple: specify a template and fill in the placeholder variables.
End of explanation
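Under the hood, SelectFiles templates use {}-style placeholders, so the filling step can be illustrated with plain str.format:

```python
# SelectFiles-style template, filled with plain str.format for illustration
template = ('sub-{subject_id}/ses-{ses_id}/func/'
            'sub-{subject_id}_ses-{ses_id}_task-{task_id}_bold.nii.gz')
path = template.format(subject_id='07', ses_id='test', task_id='fingerfootlips')
print(path)
# sub-07/ses-test/func/sub-07_ses-test_task-fingerfootlips_bold.nii.gz
```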
subject_list = ['07']
sf.iterables = [('subject_id', subject_list)]
# Connect SelectFiles node to the other nodes here
preproc.connect([(sf, gunzip_anat, [('anat', 'in_file')]),
(sf, gunzip_func, [('func', 'in_file')])])
Explanation: Now we can specify over which subjects the workflow should iterate. To test the workflow, let's still just look at subject 7.
End of explanation
# Create preproc output graph
preproc.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename='/output/work_preproc/graph.png', width=750)
Explanation: Visualize the workflow
Now that we're done. Let's look at the workflow that we just created.
End of explanation
preproc.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the Workflow
Now we are ready to run the workflow! Be careful about the n_procs parameter if you run a workflow in 'MultiProc' mode. n_procs specifies the number of jobs/cores your computer will use to run the workflow. If this number is too high your computer will try to execute too many things at once and will most likely crash.
Note: If you're using a Docker container and FLIRT fails to run without any good reason, you might need to change memory settings in the Docker preferences (6 GB should be enough for this workflow).
End of explanation
!tree /output/work_preproc/ -I '*js|*json|*pklz|_report|*dot|*html|*txt|*.m'
Explanation: Inspect output
What did we actually do? Let's look at all the data that was created.
End of explanation
%matplotlib inline
# Plot the motion paramters
import numpy as np
import matplotlib.pyplot as plt
par = np.loadtxt('/output/work_preproc/_subject_id_07/mcflirt/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz.par')
fig, axes = plt.subplots(2, 1, figsize=(15, 5))
axes[0].set_ylabel('rotation (radians)')
axes[0].plot(par[0:, :3])
axes[1].plot(par[0:, 3:])
axes[1].set_xlabel('time (TR)')
axes[1].set_ylabel('translation (mm)');
Explanation: But what did we do specifically? Well, let's investigate.
Motion Correction and Artifact Detection
How much did the subject move in the scanner and were there any outliers in the functional images?
End of explanation
# Showing the artifact detection output
from IPython.display import SVG
SVG(filename='/output/work_preproc/_subject_id_07/art/'
'plot.asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.svg')
Explanation: The motion parameters seem to look OK. What about the detection of artifacts?
End of explanation
outliers = np.loadtxt('/output/work_preproc/_subject_id_07/art/'
'art.asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_outliers.txt')
list(outliers.astype('int'))
Explanation: Which volumes are problematic?
End of explanation
from nilearn import image as nli
from nilearn.plotting import plot_stat_map
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
Explanation: Masks and Probability maps
Let's see what all the masks and probability maps look like. For this, we will use nilearn's plot_stat_map function.
End of explanation
anat = output + 'gunzip_anat/sub-07_ses-test_T1w.nii'
plot_stat_map(
output + 'segment/c1sub-07_ses-test_T1w.nii', title='GM prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
plot_stat_map(
output + 'segment/c2sub-07_ses-test_T1w.nii', title='WM prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
plot_stat_map(
output + 'segment/c3sub-07_ses-test_T1w.nii', title='CSF prob. map', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
Explanation: First, let's look at the tissue probability maps.
End of explanation
plot_stat_map(
output + 'mask_GM/c1sub-07_ses-test_T1w_flirt_thresh.nii', title='dilated GM Mask', cmap=plt.cm.magma,
threshold=0.5, bg_img=anat, display_mode='z', cut_coords=range(-35, 15, 10), dim=-1);
Explanation: And what does the gray matter mask that we used on the functional images look like?
End of explanation
from nilearn import image as nli
from nilearn.plotting import plot_epi
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
plot_epi(output + 'mcflirt/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz_mean_reg.nii.gz',
title='Motion Corrected mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
mean = nli.mean_img(output + 'applywarp/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt.nii')
plot_epi(mean, title='Coregistred mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
mean = nli.mean_img('/output/work_preproc/susan/_subject_id_07/smooth/mapflow/_smooth0/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth.nii.gz')
plot_epi(mean, title='Smoothed mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
mean = nli.mean_img(output + 'mask_func/mapflow/_mask_func0/'
'asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth_masked.nii')
plot_epi(mean, title='Masked mean image', display_mode='z', cut_coords=range(-40, 21, 15),
cmap=plt.cm.viridis);
plot_epi(output + 'detrend/mean.nii.gz', title='Detrended mean image', display_mode='z',
cut_coords=range(-40, 21, 15), cmap=plt.cm.viridis);
Explanation: Functional Image transformations
Let's also investigate the transformation that we applied to the functional images.
End of explanation
import nibabel as nb
%matplotlib inline
output = '/output/work_preproc/_subject_id_07/'
# Load the relevant datasets
mc = nb.load(output + 'applywarp/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt.nii')
smooth = nb.load('/output/work_preproc/susan/_subject_id_07/smooth/mapflow/'
'_smooth0/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf_flirt_smooth.nii.gz')
detrended_data = nb.load(output + 'detrend/detrend.nii.gz')
# Plot a representative voxel
x, y, z = 32, 34, 43
fig = plt.figure(figsize=(12, 4))
plt.plot(mc.get_fdata()[x, y, z, :])
plt.plot(smooth.get_fdata()[x, y, z, :])
plt.plot(detrended_data.get_fdata()[x, y, z, :])
plt.legend(['motion corrected', 'smoothed', 'detrended']);
Explanation: That's all nice and beautiful, but what did smoothing and detrending actually do to the data?
End of explanation
from nipype.interfaces.io import DataSink
# Initiate the datasink node
output_folder = 'datasink_handson'
datasink = Node(DataSink(base_directory='/output/',
container=output_folder),
name="datasink")
Explanation: Data output with DataSink
The results look fine, but we don't need all those temporary files. So let's use DataSink to keep only those files that we actually need for the 1st- and 2nd-level analysis.
End of explanation
# Connect nodes to datasink here
preproc.connect([(art, datasink, [('outlier_files', 'preproc.@outlier_files'),
('plot_files', 'preproc.@plot_files')]),
(mcflirt, datasink, [('par_file', 'preproc.@par')]),
(detrend, datasink, [('detrended_file', 'preproc.@func')]),
])
Explanation: Now the next step is to specify all the output that we want to keep in our output folder output. Make sure to keep:
- from the artifact detection node the outlier file as well as the outlier plot
- from the motion correction node the motion parameters
- from the last node, the detrended functional image
End of explanation
preproc.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the workflow
After adding the datasink folder, let's run the preprocessing workflow again.
End of explanation
!tree /output/datasink_handson -I '*js|*json|*pklz|_report|*dot|*html|*txt|*.m'
Explanation: Let's look now at the output of this datasink folder.
End of explanation
## Use the following substitutions for the DataSink output
substitutions = [('asub', 'sub'),
('_ses-test_task-fingerfootlips_bold_roi_mcf', ''),
('.nii.gz.par', '.par'),
]
# To get rid of the folder '_subject_id_07' and renaming detrend
substitutions += [('_subject_id_%s/detrend' % s,
'_subject_id_%s/sub-%s_detrend' % (s, s)) for s in subject_list]
substitutions += [('_subject_id_%s/' % s, '') for s in subject_list]
datasink.inputs.substitutions = substitutions
Explanation: Much better! But we're still not there yet. There are many unnecessary file specifiers that we can get rid of. To do so, we can use DataSink's substitutions parameter. For this, we create a list of tuples: on the left, the string that we want to replace, and on the right, what we want to replace it with.
End of explanation
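Conceptually, DataSink applies these tuples one after another to each output path, much like chained string replacements. A plain-Python sketch (not the actual DataSink implementation):

```python
# The explicit substitutions from above, applied in order to an example path
substitutions = [('asub', 'sub'),
                 ('_ses-test_task-fingerfootlips_bold_roi_mcf', ''),
                 ('.nii.gz.par', '.par')]

path = '_subject_id_07/mcflirt/asub-07_ses-test_task-fingerfootlips_bold_roi_mcf.nii.gz.par'
for old, new in substitutions:
    path = path.replace(old, new)

print(path)  # _subject_id_07/mcflirt/sub-07.par
```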
# Deletes the current output folder
!rm -rf /output/datasink_handson
# Runs the preprocessing workflow again, this time with substitutions
preproc.run('MultiProc', plugin_args={'n_procs': 4})
!tree /output/datasink_handson -I '*js|*json|*pklz|_report|*dot|*html|*.m'
Explanation: Before we run the preprocessing workflow again, let's first delete the current output folder:
End of explanation
# Update 'subject_list' and its dependencies here
subject_list = ['02', '03', '04', '07', '08', '09']
sf.iterables = [('subject_id', subject_list)]
# To get rid of the folder '_subject_id_02' and renaming detrend
substitutions += [('_subject_id_%s/detrend' % s,
'_subject_id_%s/sub-%s_detrend' % (s, s)) for s in subject_list]
substitutions += [('_subject_id_%s/' % s, '') for s in subject_list]
datasink.inputs.substitutions = substitutions
Explanation: Run Preprocessing workflow on 6 right-handed subjects
Perfect! Now let's run the whole workflow for the right-handed subjects. For this, you just need to change the subject_list variable and rerun the places where this variable is used (i.e. sf.iterables and the DataSink substitutions).
End of explanation
# Runs the preprocessing workflow again, this time with substitutions
preproc.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Now we can run the workflow again, this time for all right handed subjects in parallel.
End of explanation
<END_TASK> |
15,646 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Transfer learning with TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: An ImageNet classifier
Because we use a classifier model pre-trained on the ImageNet benchmark dataset, no initial training is required!
Download the classifier
Select a pre-trained <a href="https
Step3: Run it on a single image
Download a single image to try the model on.
Step4: Add a batch dimension and pass the image to the model.
Step5: The result is a 1001-element vector of logits, rating the probability of each class for the image.
This means the top class ID can be found with argmax
Step6: Decode the predictions
Take the predicted_class ID (such as 653) and fetch the ImageNet dataset labels to decode the predictions.
Step7: Simple transfer learning
But what if you want to create a custom classifier using your own dataset that has classes not included in the original ImageNet dataset (the dataset the pre-trained model was trained on)?
To do that, you can:
Select a pre-trained model from TensorFlow Hub.
Retrain the top (last) layer to recognize the classes from your custom dataset.
Dataset
In this example, you will use the TensorFlow flowers dataset.
Step8: First, load this data into the model using the image data off disk with tf.keras.utils.image_dataset_from_directory, which will generate a tf.data.Dataset.
Step9: The flowers dataset has five classes.
Step10: Second, because TensorFlow Hub's convention for image models is to expect float inputs in the [0, 1] range, use the tf.keras.layers.Rescaling preprocessing layer to achieve this.
Note
Step11: Third, finish the input pipeline by using buffered prefetching with Dataset.prefetch, so you can yield the data from disk without I/O blocking issues.
These are some of the most important tf.data methods you should use when loading data. Interested readers can learn more about them, as well as how to cache data to disk and other techniques, in the Better performance with the tf.data API guide.
Step12: Run the classifier on a batch of images
Now, run the classifier on an image batch.
Step13: Check how these predictions line up with the images.
Step14: Note
Step15: Create the feature extractor by wrapping the pre-trained model as a Keras layer with hub.KerasLayer. Use the trainable=False argument to freeze the variables, so that the training only modifies the new classifier layer.
Step16: The feature extractor returns a 1280-long vector for each image (the image batch size remains at 32 in this example).
Step17: Attach a classification head
To complete the model, wrap the feature extractor layer in a tf.keras.Sequential model and add a fully-connected layer for classification.
Step18: Train the model
Use Model.compile to configure the training process and add a tf.keras.callbacks.TensorBoard callback to create and store logs.
Step19: Now, use the Model.fit method to train the model.
To keep this example short, you will train for just 10 epochs. To visualize the training progress in TensorBoard later, create and store logs with a TensorBoard callback.
Step20: Start TensorBoard to view how the metrics change with each epoch and to track other scalar values.
Step21: <!-- <img class="tfo-display-only-on-site" src="https
Step22: Plot the results
Step23: Export your model
Now that you've trained the model, export it as a SavedModel for reuse later.
Step24: Confirm that you can reload the SavedModel and that the model is able to output the same results.
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import numpy as np
import time
import PIL.Image as Image
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import datetime
%load_ext tensorboard
Explanation: Transfer learning with TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/transfer_learning_with_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/transfer_learning_with_hub.ipynb" class="_active_edit_href"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
<td><a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" class="_active_edit_href"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
TensorFlow Hub is a repository of pre-trained TensorFlow models.
This tutorial demonstrates how to:
Use models from TensorFlow Hub with tf.keras.
Use an image classification model from TensorFlow Hub.
Do simple transfer learning to fine-tune a model for your own image classes.
Setup
End of explanation
mobilenet_v2 ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"
classifier_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))
])
Explanation: An ImageNet classifier
You'll start by using a classifier model pre-trained on the ImageNet benchmark dataset, so no initial training is required!
Download the classifier
Select a <a href="https://arxiv.org/abs/1801.04381" class="external">MobileNetV2</a> pre-trained model from TensorFlow Hub and wrap it as a Keras layer with hub.KerasLayer. Any <a href="https://tfhub.dev/s?q=tf2&module-type=image-classification/" class="external">compatible image classifier model</a> from TensorFlow Hub will work here, including the examples provided in the drop-down below.
End of explanation
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
Explanation: Run it on a single image
Download a single image to try the model on.
End of explanation
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
Explanation: Add a batch dimension and pass the image to the model.
End of explanation
predicted_class = tf.math.argmax(result[0], axis=-1)
predicted_class
Explanation: The result is a 1001-element vector of logits, rating the probability of each class for the image.
This means the top class ID can be found with tf.math.argmax:
End of explanation
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
Explanation: Decode the predictions
Take the predicted_class ID (such as 653) and fetch the ImageNet dataset labels to decode the predictions:
End of explanation
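The label lookup above boils down to an argmax over the logits followed by an index into the label array. A plain-Python sketch with hypothetical logits and labels (the real ones come from the classifier and the downloaded ImageNetLabels.txt):

```python
# Hypothetical logits and labels for illustration only
logits = [0.1, 2.5, -1.0, 0.3]
labels = ['background', 'military uniform', 'daisy', 'mortarboard']

predicted_class = max(range(len(logits)), key=lambda i: logits[i])  # argmax
predicted_label = labels[predicted_class]
print(predicted_class, predicted_label)  # 1 military uniform
```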
data_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
Explanation: Simple transfer learning
But what if you want to create a custom classifier using your own dataset that has classes not included in the original ImageNet dataset (the dataset the pre-trained model was trained on)?
To do that, you can:
Select a pre-trained model from TensorFlow Hub.
Retrain the top (last) layer to recognize the classes from your custom dataset.
Dataset
In this example, you will use the TensorFlow flowers dataset:
End of explanation
batch_size = 32
img_height = 224
img_width = 224
train_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
val_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
Explanation: First, load this data into the model using the image data off disk with tf.keras.utils.image_dataset_from_directory, which will generate a tf.data.Dataset:
End of explanation
class_names = np.array(train_ds.class_names)
print(class_names)
Explanation: The flowers dataset has five classes:
End of explanation
normalization_layer = tf.keras.layers.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
Explanation: Second, because TensorFlow Hub's convention for image models is to expect float inputs in the [0, 1] range, use the tf.keras.layers.Rescaling preprocessing layer to achieve this.
Note: You could also include the tf.keras.layers.Rescaling layer inside the model. Refer to the Working with preprocessing layers guide for a discussion of the tradeoffs.
End of explanation
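The Rescaling(1./255) layer simply scales every pixel value by 1/255, mapping the usual [0, 255] range onto [0, 1]. A minimal plain-Python sketch of the same mapping:

```python
# Pixel values in the usual 8-bit range
pixels = [0, 127.5, 255]

# Divide by 255, the same scaling tf.keras.layers.Rescaling(1./255) performs
rescaled = [p / 255.0 for p in pixels]
print(rescaled)  # [0.0, 0.5, 1.0]
```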
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
Explanation: Third, finish the input pipeline by using buffered prefetching with Dataset.prefetch, so you can yield the data from disk without I/O blocking issues.
These are some of the most important tf.data methods you should use when loading data. Interested readers can learn more about them, as well as how to cache data to disk and other techniques, in the Better performance with the tf.data API guide.
End of explanation
result_batch = classifier.predict(train_ds)
predicted_class_names = imagenet_labels[tf.math.argmax(result_batch, axis=-1)]
predicted_class_names
Explanation: Run the classifier on a batch of images
Now, run the classifier on an image batch.
End of explanation
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
Explanation: Check how these predictions line up with the images.
End of explanation
mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
inception_v3 = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
Explanation: Note: all images are licensed CC-BY; creators are listed in the LICENSE.txt file.
The results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except for "daisy").
Download the headless model
TensorFlow Hub also distributes models without the top classification layer. These can be used to easily perform transfer learning.
Select a <a href="https://arxiv.org/abs/1801.04381" class="external">MobileNetV2</a> pre-trained model <a href="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" class="external">from TensorFlow Hub</a>. Any <a href="https://tfhub.dev/s?module-type=image-feature-vector&q=tf2" class="external">compatible image classifier model</a> from TensorFlow Hub will work here, including the examples provided in the drop-down below.
End of explanation
feature_extractor_layer = hub.KerasLayer(
feature_extractor_model,
input_shape=(224, 224, 3),
trainable=False)
Explanation: Create the feature extractor by wrapping the pre-trained model as a Keras layer with hub.KerasLayer. Use the trainable=False argument to freeze the variables, so that the training only modifies the new classifier layer.
End of explanation
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
Explanation: The feature extractor returns a 1280-long vector for each image (the image batch size remains at 32 in this example).
End of explanation
num_classes = len(class_names)
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(num_classes)
])
model.summary()
predictions = model(image_batch)
predictions.shape
Explanation: Attach a classification head
To complete the model, wrap the feature extractor layer in a tf.keras.Sequential model and add a fully-connected layer for classification.
End of explanation
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir,
histogram_freq=1) # Enable histogram computation for every epoch.
Explanation: Train the model
Use Model.compile to configure the training process and add a tf.keras.callbacks.TensorBoard callback to create and store logs.
End of explanation
NUM_EPOCHS = 10
history = model.fit(train_ds,
validation_data=val_ds,
epochs=NUM_EPOCHS,
callbacks=tensorboard_callback)
Explanation: Now use the Model.fit method to train the model.
To keep this example short, you'll train for just 10 epochs. To visualize the training progress in TensorBoard later, create and store logs with the TensorBoard callback.
End of explanation
%tensorboard --logdir logs/fit
Explanation: Start TensorBoard to view how the metrics change with each epoch and to track other scalar values.
End of explanation
predicted_batch = model.predict(image_batch)
predicted_id = tf.math.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
print(predicted_label_batch)
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/images/tensorboard_transfer_learning_with_hub.png?raw=1"/> -->
Check the predictions
Obtain the ordered list of class names from the model predictions.
End of explanation
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
Explanation: Plot the result
End of explanation
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path)
export_path
Explanation: Export the model
Now that you've trained the model, export it as a SavedModel for reusing it later.
End of explanation
reloaded = tf.keras.models.load_model(export_path)
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max()
reloaded_predicted_id = tf.math.argmax(reloaded_result_batch, axis=-1)
reloaded_predicted_label_batch = class_names[reloaded_predicted_id]
print(reloaded_predicted_label_batch)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(reloaded_predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
Explanation: Confirm that you can reload the SavedModel and that the model outputs the same results.
End of explanation
<END_TASK> |
15,647 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
Step1: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder: path_to_folder
Step2: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
Step3: We use the DailyAgg class from the analysis module (with agg='min') to obtain a dataframe with daily minima for each sensor.
Step4: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
Step5: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
Explanation: Demo caching
This notebook shows how caching of daily results is organised. First we show the low-level approach, then a high-level function is used.
Low-level approach
End of explanation
cache_water = caching.Cache(variable='water_daily_min')
df_cache = cache_water.get(sensors=hp.get_sensors(sensortype='water'))
charts.plot(df_cache.ix[-8:], stock=True, show='inline')
Explanation: We demonstrate the caching for the minimal daily water consumption (should be close to zero unless there is a water leak). We create a cache object by specifying what we like to store and retrieve through this object. The cached data is saved as a single csv per sensor in a folder specified in the opengrid.cfg. Add the path to a folder where you want these csv-files to be stored as follows to your opengrid.cfg
[data]
folder: path_to_folder
End of explanation
hp.sync_tmpos()
start = pd.Timestamp('now') - pd.Timedelta(weeks=1)
df_water = hp.get_data(sensortype='water', head=start, )
df_water.info()
Explanation: If this is the first time you run this demo, no cached data will be found, and you get an empty graph.
Let's store some results in this cache. We start from the water consumption of last week.
End of explanation
daily_min = analysis.DailyAgg(df_water, agg='min').result
daily_min.info()
daily_min
cache_water.update(daily_min)
Explanation: We use the DailyAgg class from the analysis module (with agg='min') to obtain a dataframe with daily minima for each sensor.
End of explanation
sensors = hp.get_sensors(sensortype='water') # sensor objects
charts.plot(cache_water.get(sensors=sensors, start=start, end=None), show='inline', stock=True)
Explanation: Now we can get the daily water minima from the cache directly. Pass a start or end date to limit the returned dataframe.
End of explanation
import pandas as pd
from opengrid.library import misc
from opengrid.library import houseprint
from opengrid.library import caching
from opengrid.library import analysis
import charts
hp = houseprint.Houseprint()
#hp.sync_tmpos()
sensors = hp.get_sensors(sensortype='water')
caching.cache_results(hp=hp, sensors=sensors, resultname='water_daily_min', AnalysisClass=analysis.DailyAgg, agg='min')
cache = caching.Cache('water_daily_min')
daily_min = cache.get(sensors = sensors, start = '20151201')
charts.plot(daily_min, stock=True, show='inline')
Explanation: A high-level cache function
The caching of daily results is very similar for all kinds of results. Therefore, a high-level function is defined that can be parametrised to cache a lot of different things.
End of explanation
<END_TASK> |
15,648 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Note
Step1: Now we move onto the ISPAQ-specific ones. Because of the directory structure and where this example lives, we need to add the main ispaq directory to our path. Then we will be able to import the ISPAQ modules.
Step2: With the modules imported, we now need to set up some variables that will be required to run the ISPAQ code. This includes a logger and an arguments class that contains the fields from the preference file.
Step3: Those will now be used to create a userRequest, which will then be used to create a concierge object. The concierge object will need to later be passed into the method that actually calculates the PDFs.
Step4: Now that we've handled that, we can calculate the PDFs. First, we move into the directory that contains the database, since that's where the ISPAQ code expects us to be. Then we call on the calculate_PDF method, which will return dataframes that contain
Step5: With it in a dataframe, you can now do what you want with it! Manipulate it how you want.
Below I call on the plot_PDF function to plot it up and save the figure to the pdf_dir specified above. The plot will show up below when done. | <ASSISTANT_TASK:>
Python Code:
import sys
import os
import logging
import pandas as pd
from obspy import UTCDateTime
Explanation: Note:
In this directory, there are two examples using PDFs: Example 3 - Plot PDF for a station, and Example 4 - Calculate PDFs from PSDs. These two examples are provided in order to highlight different ways that you can use the PDF and PSD values that ISPAQ generates.
To be specific, the difference between the two examples are:
Example 3 - Plot PDF for a station:
Example 3 uses PDFs that already exist in the ISPAQ example database. This means that they have been calculated using an ispaq.py command with the --output db --db_name ispaq_example.db option.
This is a great way to do it, especially if you plan to run the PSDs and PDFs at the same time, say on some sort of regular schedule. In that case, you might as well calculate both in the same command and store them both in the ISPAQ database for later retrieval.
Additionally, we have tried to make it simple to calculate PDFs in ISPAQ for cases where you already have PSDs for the time span you are interested in. For example, PDFs calculation does not require seismic data since it instead reads in existing PSDs. That means that if you, the user, have been calculating daily PSDs for the past year, you don’t need to load a year’s worth of data to calculate a year-long PDF - you can just use the existing PSDs! By calculating that year-long PDF using ISPAQ, it will be saved to either the database or the csv file and you will be able to retrieve it later.
Example 4 - Calculate PDFs from PSDs:
Example 4 will calculate PDFs on the fly, meaning that they do not need to exist in the ISPAQ metric database, nor will they be saved to the ISPAQ metric database.
Why would you want to do this if you can simply use an ispaq.py command to calculate and save the PDFs in the database? Here are a couple possible reasons:
1) You may want to calculate PDFs on an arbitrary timeframe but don't feel the need to save the PDF values, say if you are just poking around at or investigating changes in the data and don't want to clutter the database.
2) To prevent the ISPAQ database from growing too complicated, the pdf table in the ISPAQ database is very simple and PDFs values are stored with the start and end times used to calculate that particular PDF. If you calculate daily PDFs for a week and then additionally calculate a week-long PDF, the database will store 8 PDFs - one for each day in the week, and one that spans the entire week. This means that, even if you have used ISPAQ to calculate your arbitrary time frame, you must know the specific start and end times of the PDF that you are looking to retrieve. If you look for a time range using less-than and greater-than (<>) instead of equals-to (==) then you risk retrieving multiple PDFs, including ones that you did not intend. By using this on-the-fly method, you bypass this risk since PSDs are stored by the individual PSD (usually an hour span, can vary depending on the sample rate of the data), and only those PSDs that are needed to calculate the PDF are retrieved.
Both methods are valid and can be useful in different situations.
Example 4 - Calculate PDFs from PSDs
The intent of this series of Jupyter Notebooks is to demonstrate how metrics can be retrieved from the ISPAQ example sqlite database and provide some ideas on how to use or plot those metrics.
In this example, we will use the pdf_aggregator.py script to calculate PDFs on-the-fly. The calculate_pdfs method in the pdf_aggregator will look for existing PSDs, calculate the PDFs, and then return the PDF values in a dataframe. It also returns dataframes of the modes, minimums, and maximums.
To generate PDFs in this example, corrected PSD values must already exist. If they do not yet exist, then you can run them via:
./run_ispaq.py -M psd_corrected -S ANMO --starttime 2020-10-01 --endtime 2020-10-16 --output db --db_name ispaq_example.db
This example is slightly more complicated because it is tying into the ISPAQ code directly. That means that we need to import multiple functions from various ISPAQ scripts, and we need the proper arguments available for each of those.
First we import the simple ones:
End of explanation
path_parent = os.path.dirname(os.getcwd())
sys.path.insert(1, f'{path_parent}/ispaq/')
import concierge
from user_request import UserRequest
import PDF_aggregator
Explanation: Now we move onto the ISPAQ-specific ones. Because of the directory structure and where this example lives, we need to add the main ispaq directory to our path. Then we will be able to import the ISPAQ modules.
End of explanation
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
class args:
db_name = 'ispaq_example.db'
starttime = UTCDateTime('2020-10-01')
endtime = UTCDateTime('2021-10-15')
metrics = 'pdf'
stations = 'IU.ANMO.00.BHZ.M' # The "stations" must refer to a single target, including the quality code (N.S.L.C.Q)
preferences_file = f'{path_parent}/preference_files/default.txt'
station_url = 'IRIS'
dataselect_url = 'IRIS'
event_url = 'IRIS'
resp_dir = ''
output = 'db'
csv_dir = f'{path_parent}/csv/'
sncl_format = 'N.S.L.C.'
sigfigs = 6
pdf_type = 'plot'
pdf_interval = 'aggregated'
plot_include = ''
pdf_dir = f'{path_parent}/pdfs/'
psd_dir = f'{path_parent}/psds/'
Explanation: With the modules imported, we now need to set up some variables that will be required to run the ISPAQ code. This includes a logger and an arguments class that contains the fields from the preference file.
End of explanation
user_request = UserRequest(args, logger=logger)
concierge = concierge.Concierge(user_request, logger=logger)
print(concierge, logger)
print(concierge.logger)
Explanation: Those will now be used to create a userRequest, which will then be used to create a concierge object. The concierge object will need to later be passed into the method that actually calculates the PDFs.
End of explanation
os.chdir(path_parent)
[pdfDF,modesDF, maxDF, minDF] = PDF_aggregator.calculate_PDF(pd.DataFrame(), args.stations, args.starttime, args.endtime, concierge)
print(pdfDF)
Explanation: Now that we've handled that, we can calculate the PDFs. First, we move into the directory that contains the database, since that's where the ISPAQ code expects us to be. Then we call on the calculate_PDF method, which will return dataframes that contain: PDF values, modes, maximums, and minimums. This may take a few minutes.
End of explanation
PDF_aggregator.plot_PDF(args.stations, args.starttime, args.endtime, pdfDF, modesDF, maxDF, minDF, concierge)
Explanation: With it in a dataframe, you can now do what you want with it! Manipulate it how you want.
Below I call on the plot_PDF function to plot it up and save the figure to the pdf_dir specified above. The plot will show up below when done.
End of explanation
<END_TASK> |
15,649 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python examples and notes for Machine Learning for Computational Linguistics
(C) 2017-2019 by Damir Cavar
Download
Step1: If we specify a threshold $t$ that seperates spam from ham, with $t = 5$ in our example, the decision for spam or ham could be coded in Python as
Step2: In the code example above we define $x$ and $w$ as vectors of the same length. The computation of the result could be achieved even easier by making use of linear algebra and calculating the dot-product of $x$ and $w$
Step3: We can use a trick to manipulate the data to be rooted in the origin of an extended coordiante system. We can add a new dimension by adding a new virtual test result $x_0 = 1$ and a corresponding weight $w_0 = -t$. This way the decision boundary $t$ can be moved to $0$
Step4: This kind of transformation of the vector space is usefull for other purposes as well. More on that later.
Creating and Using an SVM Classifier
The following example is inspired and partially taken from the page Linear SVC Machine learning SVM example with Python.
To start learning and classifying, we will need to import some Python modules in addition to the ones above
Step5: We will use two features that represent the axis on a graph. The samples are tuples taken from the ordered arrays $x$ and $y$, that is, the $i^{th}$ sample is $X_i = (x_i, y_i)$, sample $X_1 = (1,2)$, sample $X_2 = (5, 8)$, and so on.
Step6: We can plot the datapoints now
Step7: We can create an array of features, that is, we convert the coordinates in the $x$ and $y$ feature arrays above to an array of tuples that represent the datapoints or features of samples
Step8: Assuming two classes represented by $0$ and $1$, we can encode the assignment of the datapoints in $X$ to classes $0$ or $1$ by using a vector with the class labels in the order of the samples in $X$. The $i^{th}$ datapoint of $X$ is assigned to the $i^{th}$ class label in $y$.
Step9: We define a classifier as a linear Support Vector Classifier using the svm module of Scikit-learn
Step10: We train the classifier using our features in X and the labels in y
Step11: We can now create a new sample and ask the classifier for a guess to which class this sample belongs. Note that in the following code we generate a numpy array from the features $[0.58, 0.76]$. This array needs to be reshaped to an array the contains one element, an array with a set of sample features.
Step12: Instead of using the reshape() function, we could have also defined the sample directly as an array with a sample feature array
Step13: The following code will visualize the data and the identified hyperplane that separates the two classes. | <ASSISTANT_TASK:>
Python Code:
x = (1, 0)
w = (4, 4)
result = 0
for e in range(len(x)):
result += x[e] * w[e]
print(result)
Explanation: Python examples and notes for Machine Learning for Computational Linguistics
(C) 2017-2019 by Damir Cavar
Download: This and various other Jupyter notebooks are available from my GitHub repo.
Version: 1.1, September 2019
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a tutorial related to the discussion of SpamAssassin in the textbook Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach.
This tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the Computational Linguistics Program of the Department of Linguistics at Indiana University.
SpamAssassin
The linear classifier can be described as follows. A test $x$ returns $1$ (for true), if it succedes, otherwise it returns $0$. The $i^{th}$ test in the set of tests $x$ is refered to as $x_i$. The weight of the $i^{th}$ test is denoted as $w_i$.
The total score of test results for a specific e-mail can be expressed as the sum of the products of $n$ test results and corresponding weights, that is $\sum_{i=1}^{n} w_i x_i$.
If we assume two tests $x_1$ and $x_2$ with the corresponding weights $w_1 = 4$ and $w_2 = 4$, for some e-mail $e_1$ the tests could result in two positives $x_1 = 1$ and $x_2 = 1$. The computation of the equation above for the results can be coded in Python in the following way:
End of explanation
t = 5
if result >= t:
print("spam", result)
else:
print("ham", result)
Explanation: If we specify a threshold $t$ that seperates spam from ham, with $t = 5$ in our example, the decision for spam or ham could be coded in Python as:
End of explanation
import numpy
wn = [4, 4]
xn = [1, 1]
numpy.dot(wn, xn)
Explanation: In the code example above we define $x$ and $w$ as vectors of the same length. The computation of the result could be achieved even easier by making use of linear algebra and calculating the dot-product of $x$ and $w$:
End of explanation
x0 = (1, 1, 1)
w0 = (-t, 4, 4)
numpy.dot(w0, x0)
Explanation: We can use a trick to manipulate the data to be rooted in the origin of an extended coordiante system. We can add a new dimension by adding a new virtual test result $x_0 = 1$ and a corresponding weight $w_0 = -t$. This way the decision boundary $t$ can be moved to $0$:
End of explanation
import matplotlib.pyplot
from matplotlib import style
style.use("ggplot")
from sklearn import svm
Explanation: This kind of transformation of the vector space is usefull for other purposes as well. More on that later.
Creating and Using an SVM Classifier
The following example is inspired and partially taken from the page Linear SVC Machine learning SVM example with Python.
To start learning and classifying, we will need to import some Python modules in addition to the ones above:
End of explanation
x = [1, 5, 1.5, 8, 1, 9]
y = [2, 8, 1.8, 8, 0.6, 11]
Explanation: We will use two features that represent the axis on a graph. The samples are tuples taken from the ordered arrays $x$ and $y$, that is, the $i^{th}$ sample is $X_i = (x_i, y_i)$, sample $X_1 = (1,2)$, sample $X_2 = (5, 8)$, and so on.
End of explanation
matplotlib.pyplot.scatter(x,y)
matplotlib.pyplot.show()
Explanation: We can plot the datapoints now:
End of explanation
X = numpy.array([[1,2],
[5,8],
[1.5,1.8],
[8,8],
[1,0.6],
[9,11]])
Explanation: We can create an array of features, that is, we convert the coordinates in the $x$ and $y$ feature arrays above to an array of tuples that represent the datapoints or features of samples:
End of explanation
y = [0,1,0,1,0,1]
Explanation: Assuming two classes represented by $0$ and $1$, we can encode the assignment of the datapoints in $X$ to classes $0$ or $1$ by using a vector with the class labels in the order of the samples in $X$. The $i^{th}$ datapoint of $X$ is assigned to the $i^{th}$ class label in $y$.
End of explanation
classifier = svm.SVC(kernel='linear')
Explanation: We define a classifier as a linear Support Vector Classifier using the svm module of Scikit-learn:
End of explanation
classifier.fit(X,y)
Explanation: We train the classifier using our features in X and the labels in y:
End of explanation
sample = numpy.array([0.58,0.76]).reshape(1,-1)
print("Sample:", sample)
print(" Class:", classifier.predict(sample))
Explanation: We can now create a new sample and ask the classifier for a guess to which class this sample belongs. Note that in the following code we generate a numpy array from the features $[0.58, 0.76]$. This array needs to be reshaped to an array the contains one element, an array with a set of sample features.
End of explanation
sample = numpy.array( [ [0.58,0.76] ] )
Explanation: Instead of using the reshape() function, we could have also defined the sample directly as an array with a sample feature array:
End of explanation
w = classifier.coef_[0]
print(w)
a = -w[0] / w[1]
xx = numpy.linspace(0,12)
yy = a * xx - classifier.intercept_[0] / w[1]
h0 = matplotlib.pyplot.plot(xx, yy, 'k-', label="non weighted div")
matplotlib.pyplot.scatter(X[:, 0], X[:, 1], c = y)
matplotlib.pyplot.legend()
matplotlib.pyplot.show()
Explanation: The following code will visualize the data and the identified hyperplane that separates the two classes.
End of explanation
<END_TASK> |
15,650 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I'd like to convert a torch tensor to pandas dataframe but by using pd.DataFrame I'm getting a dataframe filled with tensors instead of numeric values. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import torch
import pandas as pd
x = load_data()
px = pd.DataFrame(x.numpy())
<END_TASK> |
15,651 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PWC-Net-large model training (with cyclical learning rate schedule)
In this notebook, we
Step2: TODO
Step3: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
Step4: Configure the training
Step5: Train the model | <ASSISTANT_TASK:>
Python Code:
pwcnet_train.ipynb
PWC-Net model training.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
Tensorboard:
[win] tensorboard --logdir=E:\\repos\\tf-optflow\\tfoptflow\\pwcnet-lg-6-2-cyclic-chairsthingsmix
[ubu] tensorboard --logdir=/media/EDrive/repos/tf-optflow/tfoptflow/pwcnet-lg-6-2-cyclic-chairsthingsmix
from __future__ import absolute_import, division, print_function
import sys
from copy import deepcopy
from dataset_base import _DEFAULT_DS_TRAIN_OPTIONS
from dataset_flyingchairs import FlyingChairsDataset
from dataset_flyingthings3d import FlyingThings3DHalfResDataset
from dataset_mixer import MixedDataset
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TRAIN_OPTIONS
Explanation: PWC-Net-large model training (with cyclical learning rate schedule)
In this notebook, we:
- Use a PWC-Net-large model (with dense and residual connections), 6 level pyramid, uspample level 2 by 4 as the final flow prediction
- Train the model on a mix of the FlyingChairs and FlyingThings3DHalfRes dataset using a Cyclic<sub>short</sub> schedule of our own
- The Cyclic<sub>short</sub> schedule oscillates between 5e-04 and 1e-05 for 200,000 steps
Below, look for TODO references and customize this notebook based on your own machine setup.
Reference
[2018a]<a name="2018a"></a> Sun et al. 2018. PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. [arXiv] [web] [PyTorch (Official)] [Caffe (Official)]
End of explanation
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
_DATASET_ROOT = 'E:/datasets/'
else:
_DATASET_ROOT = '/media/EDrive/datasets/'
_FLYINGCHAIRS_ROOT = _DATASET_ROOT + 'FlyingChairs_release'
_FLYINGTHINGS3DHALFRES_ROOT = _DATASET_ROOT + 'FlyingThings3D_HalfRes'
# TODO: You MUST adjust the settings below based on the number of GPU(s) used for training
# Set controller device and devices
# A one-gpu setup would be something like controller='/device:GPU:0' and gpu_devices=['/device:GPU:0']
# Here, we use a dual-GPU setup, as shown below
gpu_devices = ['/device:GPU:0', '/device:GPU:1']
controller = '/device:CPU:0'
# TODO: You MUST adjust this setting below based on the amount of memory on your GPU(s)
# Batch size
batch_size = 8
Explanation: TODO: Set this first!
End of explanation
# TODO: You MUST set the batch size based on the capabilities of your GPU(s)
# Load train dataset
ds_opts = deepcopy(_DEFAULT_DS_TRAIN_OPTIONS)
ds_opts['in_memory'] = False # Too many samples to keep in memory at once, so don't preload them
ds_opts['aug_type'] = 'heavy' # Apply all supported augmentations
ds_opts['batch_size'] = batch_size * len(gpu_devices) # Use a multiple of 8; here, 16 for dual-GPU mode (Titan X & 1080 Ti)
ds_opts['crop_preproc'] = (256, 448) # Crop to a smaller input size
ds1 = FlyingChairsDataset(mode='train_with_val', ds_root=_FLYINGCHAIRS_ROOT, options=ds_opts)
ds_opts['type'] = 'into_future'
ds2 = FlyingThings3DHalfResDataset(mode='train_with_val', ds_root=_FLYINGTHINGS3DHALFRES_ROOT, options=ds_opts)
ds = MixedDataset(mode='train_with_val', datasets=[ds1, ds2], options=ds_opts)
# Display dataset configuration
ds.print_config()
Explanation: Pre-train on FlyingChairs+FlyingThings3DHalfRes mix
Load the dataset
End of explanation
# Start from the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TRAIN_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_dir'] = './pwcnet-lg-6-2-cyclic-chairsthingsmix/'
nn_opts['batch_size'] = ds_opts['batch_size']
nn_opts['x_shape'] = [2, ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 3]
nn_opts['y_shape'] = [ds_opts['crop_preproc'][0], ds_opts['crop_preproc'][1], 2]
nn_opts['use_tf_data'] = True # Use tf.data reader
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller
# Use the PWC-Net-large model in quarter-resolution mode
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2
# Set the learning rate schedule. This schedule is for a single GPU using a batch size of 8.
# Below, we adjust the schedule to the size of the batch and the number of GPUs.
nn_opts['lr_policy'] = 'cyclic'
nn_opts['cyclic_lr_max'] = 5e-04 # Anything higher will generate NaNs
nn_opts['cyclic_lr_base'] = 1e-05
nn_opts['cyclic_lr_stepsize'] = 20000
nn_opts['max_steps'] = 200000
# Below, we adjust the schedule to the size of the batch and our number of GPUs (2).
nn_opts['cyclic_lr_stepsize'] /= len(gpu_devices)
nn_opts['max_steps'] /= len(gpu_devices)
nn_opts['cyclic_lr_stepsize'] = int(nn_opts['cyclic_lr_stepsize'] / (float(ds_opts['batch_size']) / 8))
nn_opts['max_steps'] = int(nn_opts['max_steps'] / (float(ds_opts['batch_size']) / 8))
# Instantiate the model and display the model configuration
nn = ModelPWCNet(mode='train_with_val', options=nn_opts, dataset=ds)
nn.print_config()
Explanation: Configure the training
End of explanation
# Train the model
nn.train()
Explanation: Train the model
End of explanation
<END_TASK> |
15,652 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
XCS Tutorial
This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page.
What is XCS?
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by
Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system, to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Terminology
Being both a reinforcement learning algorithm and an evolutionary algorithm, XCS requires an understanding of terms pertaining to both.
Situation
A situation is just another term for an input received by the classifier.
Action
An action is an output produced by the classifier.
Scenario
A series of situations, each of which the algorithm must respond to in order with an appropriate action in order to maximize the total reward received for each action. A scenario may be dynamic, meaning that later training cycles can be affected by earlier actions, or static, meaning that each training cycle is independent of the actions that came before it.
Classifier Rule
A classifier rule, sometimes referred to as just a rule or a classifier, is a pairing between a condition, describing which situations can be matched, and a suggested action. Each classifier rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule, as well, but these are the most visible ones.)
Classifier Set
Also referred to as the population, this is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
Match Set
The match set is the set of rules which match against the current situation.
Action Set
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
Reward
The reward is a floating point value which acts as the signal the algorithm attempts to maximize. There are three types of reward that are commonly mentioned with respect to temporal difference learning algorithms. The immediate reward (aka raw reward) is the original, unaltered reward value returned by the scenario in response to each action. The expected future reward is the estimated payoff for later reward cycles, specifically excluding the current one; the prediction of the action set on the next reward cycle acts in this role in the canonical XCS algorithm. The payoff or combined reward is the combined sum of the immediate reward, plus the discounted expected future reward. (Discounted means the value is multiplied by a non-negative coefficient whose value is less than 1, which causes the algorithm to value immediate reward more highly than reward received later on.) The term reward, when used alone, is generally used to mean the immediate reward.
Prediction
A prediction is an estimate by a classifier rule or an action set as to the payoff expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
Fitness
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selection of which rules are to act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, the fitness is actually based on the accuracy of each rule's reward prediction, as opposed to its size. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction size helps XCS find rules that describe the problem in a stable, predictable way.
Installation
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, the standard CPython distribution comes with the package installation tool, pip, as part of the standard distribution. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward
Step1: Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
Step2: ```
INFO
Step3: The XCSAlgorithm class contains the actual XCS algorithm implementation. The ClassifierSet class is used to represent the algorithm's state, in the form of a set of classifier rules. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). ScenarioObserver is a wrapper for scenarios which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 10,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
Step4: Next, we'll create the algorithm which will be used to manage the classifier set and learn the mapping defined by the problem we have selected
Step5: The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
Step6: Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a lower value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. (This is not strictly necessary, since the scenario will inform the algorithm that reward chaining should not be used, but it is useful to highlight this fact.) We have also elected to turn on GA and action set subsumption, which help the system to converge to the minimal effective rule set more quickly in some types of scenarios.
Next, we create the classifier set
Step7: The algorithm does the work for us, initializing the classifier set as it deems appropriate for the scenario we have provided. It provides the classifier set with the possible actions that can be taken in the given scenario; these are necessary for the classifier set to perform covering operations when the algorithm determines that the classifiers in the population provide insufficient coverage for a particular situation. (Covering is the addition to the population of a randomly generated classifier rule whose condition matches the current situation.)
And finally, this is where all the magic happens
Step8: We pass the scenario to the classifier set and ask it to run to learn the appropriate input/output mapping. It executes training cycles until the scenario dictates that training should stop. Note that if you wish to see the progress as the algorithm interacts with the scenario, you will need to set the logging level to INFO, as described in the previous section, before calling the run() method.
Now we can observe the fruits of our labors.
Step9: ```
10001#10100 => True
Time Stamp
Step10: Defining New Scenario Types
To define a new scenario type, inherit from the Scenario abstract class defined in the xcs.scenarios submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
Step11: We defined a new class, HaystackProblem, to represent this test case, which inherits from Scenario to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
Step12: The input_size is saved as a member for later use. Likewise, the value of training_cycles was saved in two places
Step13: The implementations for the property and the methods other than sense() and execute() will be trivial, so let's start with those
Step14: Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among inputs on each run.
Step15: The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
Step16: Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback to make the problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low value otherwise. Thus we will return a 1 when the action is the value of the needle bit, and a 0 otherwise. We must also make sure to decrement the remaining cycles to prevent the problem from running indefinitely.
Step17: We have now defined all of the methods that Scenario requires. Let's give it a test run.
Step18: ```
INFO
Step19: ```
INFO | <ASSISTANT_TASK:>
Python Code:
import logging
logging.root.setLevel(logging.INFO)
Explanation: XCS Tutorial
This is the official tutorial for the xcs package for Python 3. You can find the latest release and get updates on the project's status at the project home page.
What is XCS?
XCS is a Python 3 implementation of the XCS algorithm as described in the 2001 paper, An Algorithmic Description of XCS, by
Martin Butz and Stewart Wilson. XCS is a type of Learning Classifier System (LCS), a machine learning algorithm that utilizes a genetic algorithm acting on a rule-based system, to solve a reinforcement learning problem.
In its canonical form, XCS accepts a fixed-width string of bits as its input, and attempts to select the best action from a predetermined list of choices using an evolving set of rules that match inputs and offer appropriate suggestions. It then receives a reward signal indicating the quality of its decision, which it uses to adjust the rule set that was used to make the decision. This process is subsequently repeated, allowing the algorithm to evaluate the changes it has already made and further refine the rule set.
A key feature of XCS is that, unlike many other machine learning algorithms, it not only learns the optimal input/output mapping, but also produces a minimal set of rules for describing that mapping. This is a big advantage over other learning algorithms such as neural networks whose models are largely opaque to human analysis, making XCS an important tool in any data scientist's tool belt.
The XCS library provides not only an implementation of the standard XCS algorithm, but a set of interfaces which together constitute a framework for implementing and experimenting with other LCS variants. Future plans for the XCS library include continued expansion of the tool set with additional algorithms, and refinement of the interface to support reinforcement learning algorithms in general.
Terminology
Being both a reinforcement learning algorithm and an evolutionary algorithm, XCS requires an understanding of terms pertaining to both.
Situation
A situation is just another term for an input received by the classifier.
Action
An action is an output produced by the classifier.
Scenario
A series of situations, each of which the algorithm must respond to in order with an appropriate action in order to maximize the total reward received for each action. A scenario may be dynamic, meaning that later training cycles can be affected by earlier actions, or static, meaning that each training cycle is independent of the actions that came before it.
Classifier Rule
A classifier rule, sometimes referred to as just a rule or a classifier, is a pairing between a condition, describing which situations can be matched, and a suggested action. Each classifier rule has an associated prediction indicating the expected reward if the suggested action is taken when the condition matches the situation, a fitness indicating its suitability for reproduction and continued use in the population, and a numerosity value which indicates the number of (virtual) instances of the rule in the population. (There are other parameters associated with each rule, as well, but these are the most visible ones.)
Classifier Set
Also referred to as the population, this is the collection of all rules currently used and tracked by the classifier. The genetic algorithm operates on this set of rules over time to optimize them for accuracy and generality in their descriptiveness of the problem space. Note that the population is virtual, meaning that if the same rule has multiple copies in the population, it is represented only once, with an associated numerosity value to indicate the number of virtual instances of the rule in the population.
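The numerosity bookkeeping described above can be sketched with a plain counter (an illustration only; the rule strings here are made up, and the real library stores richer rule objects):

```python
from collections import Counter

# Sketch of a virtual population: an identical rule is stored once,
# with a numerosity count instead of duplicate copies.
population = Counter()
for rule in ['01#=>True', '01#=>True', '1#0=>False']:
    population[rule] += 1

print(population['01#=>True'])  # numerosity of the duplicated rule: 2
```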
Match Set
The match set is the set of rules which match against the current situation.
Action Set
The action set is the set of rules which match against the current situation and recommend the selected action. Thus the action set is a subset of the match set. In fact, the match set can be seen as a collection of mutually exclusive and competing action sets, from which only one is to be selected.
Reward
The reward is a floating point value which acts as the signal the algorithm attempts to maximize. There are three types of reward that are commonly mentioned with respect to temporal difference learning algorithms. The immediate reward (aka raw reward) is the original, unaltered reward value returned by the scenario in response to each action. The expected future reward is the estimated payoff for later reward cycles, specifically excluding the current one; the prediction of the action set on the next reward cycle acts in this role in the canonical XCS algorithm. The payoff or combined reward is the combined sum of the immediate reward, plus the discounted expected future reward. (Discounted means the value is multiplied by a non-negative coefficient whose value is less than 1, which causes the algorithm to value immediate reward more highly than reward received later on.) The term reward, when used alone, is generally used to mean the immediate reward.
Prediction
A prediction is an estimate by a classifier rule or an action set as to the payoff expected to be received by taking the suggested action in the given situation. The prediction of an action set is formed by taking the fitness-weighted average of the predictions made by the individual rules within it.
Fitness
Fitness is another floating point value similar in function to the reward, except that in this case it is an internal signal defined by the algorithm itself, which is then used as a guide for selection of which rules are to act as parents to the next generation. Each rule in the population has its own associated fitness value. In XCS, as opposed to strength-based LCS variants such as ZCS, the fitness is actually based on the accuracy of each rule's reward prediction, as opposed to its size. Thus a rule with a very low expected reward can have a high fitness provided it is accurate in its prediction of low reward, whereas a rule with very high expected reward may have low fitness because the reward it receives varies widely from one reward cycle to the next. Using reward prediction accuracy instead of reward prediction size helps XCS find rules that describe the problem in a stable, predictable way.
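The fitness-weighted average used for action set predictions can be illustrated with made-up rule values (a sketch, not the library's internal code):

```python
# Each pair is (fitness, prediction) for one rule in a hypothetical
# action set; the numbers are invented for illustration.
rules = [(0.9, 1000.0), (0.5, 800.0), (0.1, 200.0)]

total_fitness = sum(fitness for fitness, _ in rules)
set_prediction = sum(fitness * prediction for fitness, prediction in rules) / total_fitness
print(set_prediction)  # 880.0
```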
Installation
To install xcs, you will of course need a Python 3 interpreter. The latest version of the standard CPython distribution is available for download from the Python Software Foundation, or if you prefer a download that comes with a long list of top-notch machine learning and scientific computing packages already built for you, I recommend Anaconda from Continuum Analytics.
Starting with Python 3.4, the standard CPython distribution comes with the package installation tool, pip, as part of the standard distribution. Anaconda comes with pip regardless of the Python version. If you have pip, installation of xcs is straightforward:
pip install xcs
If all goes as planned, you should see a message like this:
Successfully installed xcs-1.0.0
If for some reason you are unable to use pip, you can still install xcs manually. The latest release can be found here or here. Download the zip file, unpack it, and cd into the directory. Then run:
python setup.py install
You should see a message indicating that the package was successfully installed.
Testing the Newly Installed Package
Let's start things off with a quick test, to verify that everything has been installed properly. First, fire up the Python interpreter. We'll set up Python's built-in logging system so we can see the test's progress.
End of explanation
import xcs
xcs.test()
Explanation: Then we import the xcs module and run the built-in test() function. By default, the test() function runs the canonical XCS algorithm on the 11-bit (3-bit address) MUX problem for 10,000 steps.
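For reference, the multiplexer function behind the MUX problem uses the first bits as an address selecting one of the remaining data bits; a sketch under that assumption (not the library's implementation):

```python
def mux(bits, address_size=3):
    # The first `address_size` bits form a binary address that selects
    # one of the remaining data bits as the correct output.
    address = int(''.join(str(int(b)) for b in bits[:address_size]), 2)
    return bits[address_size + address]

# Address bits 101 (= 5) select data bit 5, i.e. position 8 overall.
print(mux([1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0]))  # 1
```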
End of explanation
from xcs import XCSAlgorithm
from xcs.scenarios import MUXProblem, ScenarioObserver
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.57000
INFO:xcs.scenarios:Steps completed: 200
INFO:xcs.scenarios:Average reward per step: 0.58500
.
.
.
001#0###### => False
Time Stamp: 9980
Average Reward: 1.0
Error: 0.0
Fitness: 0.8161150828153352
Experience: 236
Action Set Size: 25.03847865419106
Numerosity: 9
11#######11 => True
Time Stamp: 9994
Average Reward: 1.0
Error: 0.0
Fitness: 0.9749473121531844
Experience: 428
Action Set Size: 20.685392494947063
Numerosity: 11
INFO:xcs:Total time: 15.05068 seconds
```
Your results may vary somewhat from what is shown here. XCS relies on randomization to discover new rules, so unless you set the random seed with random.seed(), each run will be different.
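Since the library draws on Python's random module, seeding it before a run makes results repeatable (a sketch; worth verifying against the installed version):

```python
import random

# Seeding the global random generator makes two runs draw identical values.
random.seed(42)
first = random.randrange(100)
random.seed(42)
second = random.randrange(100)
print(first == second)  # True
```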
Usage
Now we'll run through a quick demo of how to use existing algorithms and problems. This is essentially the same code that appears in the test() function we called above.
First, we're going to need to import a few things:
End of explanation
scenario = ScenarioObserver(MUXProblem(50000))
Explanation: The XCSAlgorithm class contains the actual XCS algorithm implementation. The ClassifierSet class is used to represent the algorithm's state, in the form of a set of classifier rules. MUXProblem is the classic multiplexer problem, which defaults to 3 address bits (11 bits total). ScenarioObserver is a wrapper for scenarios which logs the inputs, actions, and rewards as the algorithm attempts to solve the problem.
Now that we've imported the necessary tools, we can define the actual problem, telling it to give the algorithm 10,000 reward cycles to attempt to learn the appropriate input/output mapping, and wrapping it with an observer so we can see the algorithm's progress.
End of explanation
algorithm = XCSAlgorithm()
Explanation: Next, we'll create the algorithm which will be used to manage the classifier set and learn the mapping defined by the problem we have selected:
End of explanation
algorithm.exploration_probability = .1
algorithm.discount_factor = 0
algorithm.do_ga_subsumption = True
algorithm.do_action_set_subsumption = True
Explanation: The algorithm's parameters are set to appropriate defaults for most problems, but it is straightforward to modify them if it becomes necessary.
End of explanation
model = algorithm.new_model(scenario)
Explanation: Here we have selected an exploration probability of .1, which will sacrifice most (9 out of 10) learning opportunities in favor of taking advantage of what has already been learned so far. This makes sense in a real-time learning environment; a lower value is more appropriate in cases where the classifier is being trained in advance or is being used simply to learn a minimal rule set. The discount factor is set to 0, since future rewards are not affected at all by the currently selected action. (This is not strictly necessary, since the scenario will inform the algorithm that reward chaining should not be used, but it is useful to highlight this fact.) We have also elected to turn on GA and action set subsumption, which help the system to converge to the minimal effective rule set more quickly in some types of scenarios.
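The exploration/exploitation trade-off described above amounts to an epsilon-greedy choice; a minimal sketch, assuming predictions are kept in a dict (the names here are illustrative, not the library's API):

```python
import random

def select_action(predictions, exploration_probability=0.1):
    # With probability 0.1, pick a random action (explore); otherwise
    # take the action with the highest predicted payoff (exploit).
    if random.random() < exploration_probability:
        return random.choice(list(predictions))
    return max(predictions, key=predictions.get)

# With exploration disabled, the choice is purely greedy.
print(select_action({True: 0.9, False: 0.2}, exploration_probability=0.0))  # True
```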
Next, we create the classifier set:
End of explanation
model.run(scenario, learn=True)
Explanation: The algorithm does the work for us, initializing the classifier set as it deems appropriate for the scenario we have provided. It provides the classifier set with the possible actions that can be taken in the given scenario; these are necessary for the classifier set to perform covering operations when the algorithm determines that the classifiers in the population provide insufficient coverage for a particular situation. (Covering is the addition to the population of a randomly generated classifier rule whose condition matches the current situation.)
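The covering operation mentioned in passing can be sketched as follows (a simplified illustration; the wildcard probability and string representation are assumptions, not the library's internals):

```python
import random

def cover(situation, wildcard_probability=0.33):
    # Build a random ternary condition ('0', '1', or '#') that is
    # guaranteed to match the given situation: each position either
    # copies the situation's bit or becomes a wildcard.
    return ''.join(
        '#' if random.random() < wildcard_probability else bit
        for bit in situation
    )

condition = cover('10110')
# Every non-wildcard position agrees with the situation by construction.
print(all(c == '#' or c == s for c, s in zip(condition, '10110')))  # True
```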
And finally, this is where all the magic happens:
End of explanation
print(model)
Explanation: We pass the scenario to the classifier set and ask it to run to learn the appropriate input/output mapping. It executes training cycles until the scenario dictates that training should stop. Note that if you wish to see the progress as the algorithm interacts with the scenario, you will need to set the logging level to INFO, as described in the previous section, before calling the run() method.
Now we can observe the fruits of our labors.
End of explanation
print(len(model))
for rule in model:
if rule.fitness > .5 and rule.experience >= 10:
print(rule.condition, '=>', rule.action, ' [%.5f]' % rule.fitness)
Explanation: ```
10001#10100 => True
Time Stamp: 41601
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
00#00100#00 => True
Time Stamp: 48589
Average Reward: 1e-05
Error: 1e-05
Fitness: 1e-05
Experience: 0
Action Set Size: 1
Numerosity: 1
.
.
.
1111######1 => True
Time Stamp: 49968
Average Reward: 1.0
Error: 0.0
Fitness: 0.9654542879926405
Experience: 131
Action Set Size: 27.598176294274904
Numerosity: 10
010##1##### => True
Time Stamp: 49962
Average Reward: 1.0
Error: 0.0
Fitness: 0.8516265524887351
Experience: 1257
Action Set Size: 27.21325456027306
Numerosity: 13
```
This gives us a printout of each classifier rule, in the form condition => action, followed by various stats about the rule pertaining to the algorithm we selected. The classifier set can also be accessed as an iterable container:
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
pass
Explanation: Defining New Scenario Types
To define a new scenario type, inherit from the Scenario abstract class defined in the xcs.scenarios submodule. Suppose, as an example, that we wish to test the algorithm's ability to find a single important input bit from among a large number of irrelevant input bits.
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
Explanation: We defined a new class, HaystackProblem, to represent this test case, which inherits from Scenario to ensure that we cannot instantiate the problem until the appropriate methods have been implemented.
Now let's define an __init__ method for this class. We'll need a parameter, training_cycles, to determine how many reward cycles the algorithm has to identify the "needle", and another parameter, input_size, to determine how big the "haystack" is.
End of explanation
problem = HaystackProblem()
Explanation: The input_size is saved as a member for later use. Likewise, the value of training_cycles was saved in two places: the remaining_cycles member, which tells the instance how many training cycles remain for the current run, and the initial_training_cycles member, which the instance will use to reset remaining_cycles to the original value for a new run.
We also defined the possible_actions member, which we set to (True, False). This is the value we will return when the algorithm asks for the possible actions. We will expect the algorithm to return True when the needle bit is set, and False when the needle bit is clear, in order to indicate that it has correctly identified the needle's location.
Now let's define some methods for the class. The Scenario base class defines several abstract methods, and one abstract property:
* is_dynamic is a property with a Boolean value that indicates whether the actions from one reward cycle can affect the rewards or situations of later reward cycles.
* get_possible_actions() is a method that should return the actions the algorithm can take.
* reset() should restart the problem for a new run.
* sense() should return a new input (the "situation").
* execute(action) should accept an action from among those returned by get_possible_actions(), in response to the last situation that was returned by sense(). It should then return a reward value indicating how well the algorithm is doing at responding correctly to each situation.
* more() should return a Boolean value to indicate whether the algorithm has remaining reward cycles in which to learn.
The abstract methods and the property must each be defined, or we will get a TypeError when we attempt to instantiate the class:
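To make the failure concrete, here is a self-contained sketch using Python's abc module to mirror the Scenario interface (the real base class lives in xcs.scenarios; this stand-in is only for illustration):

```python
from abc import ABC, abstractmethod

class Scenario(ABC):  # stand-in mirroring the abstract interface
    @property
    @abstractmethod
    def is_dynamic(self): ...
    @abstractmethod
    def get_possible_actions(self): ...
    @abstractmethod
    def reset(self): ...
    @abstractmethod
    def sense(self): ...
    @abstractmethod
    def execute(self, action): ...
    @abstractmethod
    def more(self): ...

class HaystackProblem(Scenario):
    pass

try:
    HaystackProblem()
except TypeError:
    print('TypeError: abstract methods are still undefined')
```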
End of explanation
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
def more(self):
return self.remaining_cycles > 0
Explanation: The implementations for the property and the methods other than sense() and execute() will be trivial, so let's start with those:
End of explanation
import random
from xcs.scenarios import Scenario
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
Explanation: Now we are going to get into the meat of the problem. We want to give the algorithm a random string of bits of size input_size and have it pick out the location of the needle bit through trial and error, by telling us what it thinks the value of the needle bit is. For this to be a useful test, the needle bit needs to be in a fixed location, which we have not yet defined. Let's choose a random bit from among inputs on each run.
End of explanation
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
Explanation: The sense() method is going to create a string of random bits of size input_size and return it. But first it will pick out the value of the needle bit, located at needle_index, and store it in a new member, needle_value, so that execute(action) will know what the correct value for action is.
End of explanation
import random
from xcs.scenarios import Scenario
from xcs.bitstrings import BitString
class HaystackProblem(Scenario):
def __init__(self, training_cycles=1000, input_size=500):
self.input_size = input_size
self.possible_actions = (True, False)
self.initial_training_cycles = training_cycles
self.remaining_cycles = training_cycles
self.needle_index = random.randrange(input_size)
self.needle_value = None
@property
def is_dynamic(self):
return False
def get_possible_actions(self):
return self.possible_actions
def reset(self):
self.remaining_cycles = self.initial_training_cycles
self.needle_index = random.randrange(self.input_size)
def more(self):
return self.remaining_cycles > 0
def sense(self):
haystack = BitString.random(self.input_size)
self.needle_value = haystack[self.needle_index]
return haystack
def execute(self, action):
self.remaining_cycles -= 1
return action == self.needle_value
Explanation: Now we need to define the execute(action) method. In order to give the algorithm appropriate feedback that makes the problem solvable, we should return a high reward when it guesses the correct value for the needle bit, and a low reward otherwise. Thus we will return a 1 when the action equals the value of the needle bit, and a 0 otherwise. We must also make sure to decrement the remaining cycles to prevent the scenario from running indefinitely.
End of explanation
import logging
import xcs
from xcs.scenarios import ScenarioObserver
# Setup logging so we can see the test run as it progresses.
logging.root.setLevel(logging.INFO)
# Create the scenario instance
problem = HaystackProblem()
# Wrap the scenario instance in an observer so progress gets logged,
# and pass it on to the test() function.
xcs.test(scenario=ScenarioObserver(problem))
Explanation: We have now defined all of the methods that Scenario requires. Let's give it a test run.
End of explanation
problem = HaystackProblem(training_cycles=10000, input_size=100)
xcs.test(scenario=ScenarioObserver(problem))
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.55000
.
.
.
INFO:xcs.scenarios:Steps completed: 900
INFO:xcs.scenarios:Average reward per step: 0.51667
INFO:xcs.scenarios:Steps completed: 1000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 1000
INFO:xcs.scenarios:Total reward received: 509.00000
INFO:xcs.scenarios:Average reward per step: 0.50900
INFO:xcs:Classifiers:
010#11110##001###01#101001#00#1##100110##11#111#00#00#1#10#10#1110#100110#1#1100#10#111#1011100###1#1##1#0#1##011#1#0#0##1011010011#0#0101#00#01#0#0##01101##100#00010111##111010#100110##1101110##11#01110##1#0#110#000#010#1011##10#00#0#101011#000000##11#00#1#0110#0110100010##0100011#1#0###11#110#0###1##0100##1#11#1##101####111011#01#110101011001#110110#011111##1#0##1010#011000101001#10#10#0#00##1#110##1011100#1111##01#00#11#010001100#10####01###010001###1##1110#10####100#0#01#0#10##100####1110#00 => False
Time Stamp: 169
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
11##101#1###11101#0010####01#111##100011010###10##01#1100#010#11##01011#00##0#0#1001111#0#11011100010100101#1#1#01#0001000##101100###11#1#1111011110010#01010#101010###010##010##001#1#10#1001##0#1101111##0#0#0#1#11#01011000####111#1#1##10110##1###1#1#00#110##00000#11101110010###01#0#11#1###1#1#01#100110####11##0000#01#0#0011#01##10#100##00##010111##0#1#100#0##10#01000000001#00##1#11001#1011##1##1100011#1###01#####0#0111111#00#1101101##101#01#101#11##001#0000#1011#01#0#11#0#0#0##0#1010#0#01110110# => False
Time Stamp: 254
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
10010010010110#1#01###000100##0#0##0###01#1#1#100101#01#110#0##011#0100#0#1111001##01010##0#1#01011110#0#100110#00##1100##1011##1##0#0####111##111##000##01#001##110##10#01#0#1#00#110#100#10#1#0#1100#010#110##1011##1110#0#01#00#011#0001110#1110#0110111#0#101#01#101#00#0#1110100#1##0#101101#1###11#11###001100010###0#111101##1#111#111010#1##0011##00111000##11110#0#01#0#0#0#1#0#110000###00110##10001001011111#001101#11#111##01#0#1#10#1##000######0110##01#1#010#011#11#001##10111#1101#0#1001##011#10 => True
Time Stamp: 996
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
0101#0010100011#11##1100##001001###010#111001#####111001#1011#1100#1111#00101111#0#1011##1#1###00001011001#10##00###101##011111##1#00#1011001###10001###11####1##1#01#0#1#0#11100001110##11#001001#01#####0110#011011#0#111#1111##0#110111001#100#011111100110#11####0##01#100#11#1000#10#00#00#0#0#1##0100#100#11###01#1100##1###000##01#10#0#0001#0100#10#1#001#11####1001#110#0##11#0#0100#010##0#011100##11#0#11101#000000010#00101#0#0#11110#0010#1100#11#01#11##10#10#10#1100#1#00#0100#10#1##10#00011010100#0 => True
Time Stamp: 998
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 2.65542 seconds
```
Hmm, the classifier set didn't do so hot. Maybe we've found a weakness in the algorithm, or maybe some different parameter settings will improve its performance. Let's reduce the size of the haystack and give it more reward cycles so we can see whether it's learning at all.
End of explanation
problem = HaystackProblem(training_cycles=10000, input_size=500)
algorithm = xcs.XCSAlgorithm()
# Default parameter settings in test()
algorithm.exploration_probability = .1
# Modified parameter settings
algorithm.ga_threshold = 1
algorithm.crossover_probability = .5
algorithm.do_action_set_subsumption = True
algorithm.do_ga_subsumption = False
algorithm.wildcard_probability = .998
algorithm.deletion_threshold = 1
algorithm.mutation_probability = .002
xcs.test(algorithm, scenario=ScenarioObserver(problem))
Explanation: ```
INFO:xcs.scenarios:Possible actions:
INFO:xcs.scenarios: False
INFO:xcs.scenarios: True
INFO:xcs.scenarios:Steps completed: 0
INFO:xcs.scenarios:Average reward per step: 0.00000
INFO:xcs.scenarios:Steps completed: 100
INFO:xcs.scenarios:Average reward per step: 0.47000
.
.
.
INFO:xcs.scenarios:Steps completed: 9900
INFO:xcs.scenarios:Average reward per step: 0.49222
INFO:xcs.scenarios:Steps completed: 10000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs.scenarios:Run completed.
INFO:xcs.scenarios:Total steps: 10000
INFO:xcs.scenarios:Total reward received: 4921.00000
INFO:xcs.scenarios:Average reward per step: 0.49210
INFO:xcs:Classifiers:
11#1001##0110000#101####001010##111111#1110#00#0100#11100#1###0110110####11#011##0#0#1###011#1#11001 => False
Time Stamp: 9771
Average Reward: 1.0
Error: 0.0
Fitness: 8.5e-07
Experience: 0
Action Set Size: 1
Numerosity: 1
00001100##1010#01111101001#0###0#10#10#11###10#1#0#0#11#11010111111#0#01#111#0#100#00#10000111##000 => False
Time Stamp: 8972
Average Reward: 0.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
.
.
.
100#0010010###0#1001#1#0100##0#1##101#011#0#0101110#1111#11#000##0#1#0##001#1110##001011###1001##01# => True
Time Stamp: 9993
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
10#100##110##00#001##0#100100#00#1110##100##1#1##1111###00#0#1#1##00#010##00011#10#1#11##0#0#01100#0 => False
Time Stamp: 9997
Average Reward: 1.0
Error: 0.0
Fitness: 0.15000850000000002
Experience: 1
Action Set Size: 1.0
Numerosity: 1
INFO:xcs:Total time: 21.50882 seconds
```
It appears the algorithm isn't learning at all, at least not at a visible rate. But after a few rounds of playing with the parameter values, it becomes apparent that with the correct settings and sufficient training cycles, it is possible for the algorithm to handle the new scenario.
End of explanation
<END_TASK> |
15,653 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
MIMO Least Squares Detection
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: We want to transmit $x$ over a MIMO channel $H\in \mathbb{R}^{k \times n}$.
The receiver measures $y$, which is the result of
$y=Hx$. At the receiver side, we have channel state information (CSI) and therefore know $H$.
Specify the simulation paramters.
You can vary $k$ (number of receive antennas) but leave $n$ (number of transmit antennas) fixed
if you want to get a graphical output.
Step2: Now, we want to estimate $\boldsymbol{x}$ by using a Least-Square Detector
Step3: Plots
Step4: Now we use Newton's method. It reaches the minimum in one step,
because the objective function is quadratic (Least-Square).
Step5: A limitation of the transmit signal energy is known.
$\boldsymbol{x}^T\boldsymbol{x} \leq 1$.
We add this information as a constraint to the problem with the use
of a Lagrange multiplier.
Use gradient descent direction to find the optimal $\boldsymbol{x}$ of the new constrained
problem.
Step6: Plots | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: MIMO Least Squares Detection
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* Toy example of MIMO Detection with constrained least-squares
* Implementation of constrained least squares via gradient descent
End of explanation
n = 2 # Number of TX antennas. Leave n fixed to 2!
k = 3 # Number of RX antennas.You can play around with k.
x = np.random.rand(n) # Transmit data (random).
x = x/np.linalg.norm(x) * np.random.rand() # Normalize x to a transmit energy in [0,1].
H = np.random.randn(k, n) # MIMO channel (random).
y = np.dot(H, x) # Apply channel to data.
print("x =",x)
Explanation: We want to transmit $x$ over a MIMO channel $H\in \mathbb{R}^{k \times n}$.
The receiver measures $y$, which is the result of
$y=Hx$. At the receiver side, we have channel state information (CSI) and therefore know $H$.
Specify the simulation paramters.
You can vary $k$ (number of receive antennas) but leave $n$ (number of transmit antennas) fixed
if you want to get a graphical output.
End of explanation
delta = 1e-9 # Threshold for stopping criterion.
epsilon = 1e-4 # Step length.
max_iter = 100000
# Initial guess.
init_xg = np.random.rand(*x.shape)*1.4
xg = init_xg
# Gradient descent line search.
points = []
while len(points) < max_iter:
points.append(xg)
grad = 2*H.T.dot(H).dot(xg)-2*np.dot(H.T,y) # Calc gradient at current position.
if np.linalg.norm(grad) < delta:
break
xg = xg - 2*epsilon*grad
print("xg =",xg)
Explanation: Now, we want to estimate $\boldsymbol{x}$ by using a Least-Square Detector:
$\min\limits_{\boldsymbol{x}} ||\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y}||_2^2$.
This is a minimization problem.
The first approach is a line search with gradient descent direction and fixed step length.
End of explanation
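Before trusting the hand-coded gradient $2\boldsymbol{H}^T(\boldsymbol{H}\boldsymbol{x}-\boldsymbol{y})$ used in the loop above, it is worth checking it against central finite differences. The snippet below is an illustrative, self-contained sketch; the names (H_fd, y_fd, x0) are ad hoc and not part of the notebook:

```python
import numpy as np

# Finite-difference check of the least-squares gradient 2 H^T (H x - y).
rng = np.random.default_rng(0)
H_fd = rng.standard_normal((3, 2))
y_fd = rng.standard_normal(3)
x0 = rng.standard_normal(2)

def f(v):
    return np.linalg.norm(H_fd @ v - y_fd) ** 2

analytic = 2 * H_fd.T @ (H_fd @ x0 - y_fd)

h = 1e-6
numeric = np.zeros(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    # Central difference approximates df/dx_i to O(h^2).
    numeric[i] = (f(x0 + e) - f(x0 - e)) / (2 * h)
```

If the two vectors agree to several decimal places, the analytic gradient used by the line search is implemented correctly.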
def obj_func(mesh):
return np.linalg.norm(np.tensordot(H, mesh, axes=1)-y[:, np.newaxis, np.newaxis], axis=0)**2
# Least-squares objective, evaluated over a whole mesh via a batched matrix multiplication.
x_grid = np.arange(-1.5, 1.5, 0.02)
y_grid = np.arange(-1.5, 1.5, 0.02)
X, Y = np.meshgrid(x_grid, y_grid)
fZ = obj_func([X, Y])
# Line search trajectory.
trajectory_x = [points[i][0] for i in range(len(points))]
trajectory_y = [points[i][1] for i in range(len(points))]
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
params= {'text.latex.preamble' : [r'\usepackage{amsmath}']}
plt.rcParams.update(params)
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 15})
plt.subplot(121)
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.plot(x[0], x[1], marker='x',color='r',markersize=12, markeredgewidth=2)
plt.plot(init_xg[0],init_xg[1], marker='x',color='g',markersize=12, markeredgewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),[np.linalg.norm(p-x) for p in points])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$\Vert \boldsymbol{x}^{(i)}-\boldsymbol{x}\Vert_2$")
plt.show()
Explanation: Plots:
* [left subplot]: The function and the trajectory of the line search.
The minimum at $x$ is marked with a red cross and
the first guess with a green cross.
* [right subplot]: The Euclidean distance of the trajectory
to the minimum at each iteration.
End of explanation
xh = np.linalg.inv(H.T.dot(H)).dot(H.T).dot(y)
print('xh = ', xh)
Explanation: Now we use Newton's method. Because the least-squares objective is quadratic, a single Newton step from any starting point lands exactly on the minimum; in fact it reduces to the closed-form normal-equation solution $\hat{\boldsymbol{x}} = (\boldsymbol{H}^T\boldsymbol{H})^{-1}\boldsymbol{H}^T\boldsymbol{y}$ computed below.
End of explanation
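As a quick sanity check (not part of the original notebook), the normal-equation solution can be compared with NumPy's built-in least-squares solver on a small synthetic channel; H_demo and friends are illustrative names:

```python
import numpy as np

rng = np.random.default_rng(42)
H_demo = rng.standard_normal((3, 2))   # k=3 RX antennas, n=2 TX antennas
x_demo = rng.random(2)
y_demo = H_demo @ x_demo               # noiseless observation

# One "Newton step" = closed-form normal-equation solution.
xh_demo = np.linalg.inv(H_demo.T @ H_demo) @ H_demo.T @ y_demo

# Reference solution from NumPy's least-squares solver.
xh_ref, *_ = np.linalg.lstsq(H_demo, y_demo, rcond=None)
```

In the noiseless, overdetermined case both recover the transmitted vector exactly (up to floating-point error).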
max_iter = 100000
lam = 5 # Init value for lambda.
init_xg = np.random.rand(*x.shape)*1.4 # Initial guess.
xg = init_xg
points = []
while len(points) < max_iter:
points.append(xg)
xg = np.linalg.inv(H.T.dot(H)+lam*np.identity(n)).dot(H.T).dot(y)
lam = lam - epsilon*(1-xg.T.dot(xg))
if np.abs(1-xg.T.dot(xg)) < delta or lam < delta:
break
print(xg)
Explanation: The transmit signal energy is known to be limited: $\boldsymbol{x}^T\boldsymbol{x} \leq 1$. We add this information to the problem as a constraint via a Lagrange multiplier $\lambda$: in each iteration, $\boldsymbol{x}$ is updated in closed form from the ridge-regularized normal equations, and $\lambda$ is adjusted with a gradient step on the constraint violation until either $\boldsymbol{x}^T\boldsymbol{x} \approx 1$ or $\lambda$ shrinks to zero (meaning the constraint is inactive).
End of explanation
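The primal-dual update above can be checked on a small deterministic example in which the unconstrained optimum clearly violates $\boldsymbol{x}^T\boldsymbol{x} \leq 1$, so the constraint must become active. This is a hedged sketch with made-up data (H_c, x_true), not the notebook's random channel:

```python
import numpy as np

H_c = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
x_true = np.array([1.5, 1.2])      # norm > 1: the energy constraint is active
y_c = H_c @ x_true

lam, eps = 5.0, 1e-3
xg_c = np.zeros(2)
for _ in range(50000):
    # Primal update: ridge-regularized normal equations.
    xg_c = np.linalg.inv(H_c.T @ H_c + lam * np.eye(2)) @ H_c.T @ y_c
    # Dual update: raise lambda if x^T x > 1, lower it otherwise.
    lam = max(lam - eps * (1.0 - xg_c @ xg_c), 0.0)
```

At convergence the iterate sits on the constraint boundary and $\lambda$ settles at the value that makes the ridge solution have unit norm.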
trajectory_x = [points[i][0] for i in range(len(points))]
trajectory_y = [points[i][1] for i in range(len(points))]
x_grid = np.arange(-1.5, 1.5, 0.02)
y_grid = np.arange(-1.5, 1.5, 0.02)
X, Y = np.meshgrid(x_grid, y_grid)
fZ = obj_func([X, Y])
plt.figure(1,figsize=(15,6))
plt.subplot(121)
fig = plt.gcf()
ax = fig.gca()
plt.rcParams.update({'font.size': 14})
plt.contourf(X,Y,fZ,levels=20)
plt.colorbar()
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
circle = plt.Circle((0,0),radius=1, fill=False, color='r')
ax.add_artist(circle)
plt.plot(trajectory_x, trajectory_y,marker='.',color='w',linewidth=2)
plt.plot(x[0],x[1], marker='x',color='r',markersize=12, markeredgewidth=2)
plt.plot(init_xg[0],init_xg[1], marker='x',color='g',markersize=12, markeredgewidth=2)
plt.subplot(122)
plt.plot(range(0,len(points)),[np.linalg.norm(p-x) for p in points])
plt.grid(True)
plt.xlabel("Step $i$")
plt.ylabel(r"$\Vert f(\boldsymbol{x}^{(i)})-\boldsymbol{x}\Vert$")
plt.show()
Explanation: Plots:
* [left subplot]: The function and the trajectory of the line search.
The minimum at $x$ is marked with a red cross and
the first guess with a green cross. The constraint boundary $\boldsymbol{x}^T\boldsymbol{x} = 1$ is drawn as a red circle.
* [right subplot]: The Euclidean distance of the trajectory
to the minimum at each iteration.
End of explanation
<END_TASK> |
15,654 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = dict(enumerate(vocab, 1))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
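As a minimal, self-contained illustration of the property the two dictionaries must satisfy (the tiny vocabulary below is made up, and the ids here start at 0 purely for the example):

```python
from collections import Counter

words = "moe gets a call from homer about moe".split()

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)   # most frequent first
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for word, ii in vocab_to_int.items()}

# Every word must survive a word -> id -> word round trip.
restored = [int_to_vocab[vocab_to_int[w]] for w in words]
```

If the round trip ever changed a word, the two dictionaries would be inconsistent and decoding generated ids back into script text would fail.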
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokens_dict = dict([('.','||Period||'), (',','||Comma||'), ('"', '||Quotation_Mark||'),
(';', '||Semicolon||'), ('!', '||Exclamation_Mark||'), ('?', '||Question_Mark||'),
('(', '||Left_Parentheses||'), (')', '||Right_Parentheses||'),
('--', '||Dash||'), ('\n', '||Return||')])
return tokens_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word; instead of using the token "dash", try something like "||dash||".
End of explanation
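To see why this tokenization helps, here is how such a dictionary would typically be applied during preprocessing. This is a hedged sketch; the real replacement logic lives in the course's helper module and may differ in detail:

```python
token_dict = {'.': '||Period||', ',': '||Comma||', '!': '||Exclamation_Mark||'}

line = "Sorry, Homer. Bye!"
for symbol, token in token_dict.items():
    # Pad the token with spaces so it splits off as its own "word".
    line = line.replace(symbol, ' {} '.format(token))
words = line.lower().split()
```

Without the padding, "bye" and "bye!" would end up as two unrelated vocabulary entries.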
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32,[None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
rnn_layer = 2
# Use a distinct BasicLSTMCell instance per layer; reusing one object
# via [lstm] * rnn_layer causes unintended weight sharing in TF >= 1.1.
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(rnn_layer)]
cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(inputs=outputs,num_outputs=vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits,final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = int_text[1: n_batches * batch_size * seq_length]
ydata.append(int_text[0])
ydata = np.array(ydata)
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
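The worked example above can be verified with a compact NumPy sketch. This mirrors the batching scheme just described but is a stand-alone illustration, not necessarily identical to your own implementation:

```python
import numpy as np

def get_batches_demo(int_text, batch_size, seq_length):
    # Drop the ragged tail so only full batches remain.
    n_batches = len(int_text) // (batch_size * seq_length)
    xdata = np.array(int_text[:n_batches * batch_size * seq_length])
    # Targets are the inputs shifted by one; the very last target
    # wraps around to the first input value.
    ydata = np.roll(xdata, -1)
    ydata[-1] = int_text[0]
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
    return np.array(list(zip(x_batches, y_batches)))

batches = get_batches_demo(list(range(1, 21)), 3, 2)
```

The resulting array has shape (number of batches, 2, batch size, sequence length) and reproduces the three batches listed above, including the wrap-around target 1 at the very end.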
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 100
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return (inputs, initial_state, final_state, probs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilites of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
# TODO: Implement Function
prob = list(probabilities)
word_id= prob.index(max(prob))
return int_to_vocab[word_id]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
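The argmax-style `pick_word` above always takes the most likely word, which tends to make generated text loop on high-frequency words. A hedged alternative is to sample the next word in proportion to its probability; `pick_word_sampled` is a name introduced here for illustration, not part of the project's unit tests:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # sample proportionally instead of always taking the argmax
    probs = np.asarray(probabilities, dtype=np.float64)
    probs = probs / probs.sum()  # renormalize to guard against float drift
    word_id = np.random.choice(len(probs), p=probs)
    return int_to_vocab[word_id]
```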
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
<END_TASK> |
15,655 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python for Everyone!<br/>Oregon Curriculum Network
VPython inside Jupyter Notebooks
The Vector, Edge and Polyhedron types
The Vector class below is but a thin wrapper around VPython's built-in vector type. One might wonder, why bother? Why not just use vpython.vector and be done with it? Also, if wanting to reimplement, why not just subclass instead of wrap? All good questions.
A primary motivation is to keep the Vector and Edge types somewhat aloof from vpython's vector and more welded to vpython's cylinder instead. We want vectors and edges to materialize as cylinders quite easily.
So whether we subclass, or wrap, we want our vectors to have the ability to self-draw.
The three basis vectors must be negated to give all six spokes of the XYZ apparatus. Here's an opportunity to test our __neg__ operator then.
The overall plan is to have an XYZ "jack" floating in space, around which two tetrahedrons will be drawn, with a common center, as twins.
Their edges will intersect as at the respective face centers of the six-faced, twelve-edged hexahedron, our "duo-tet" cube (implied, but could be hard-wired as a next Polyhedron instance, just give it the six faces).
A lot of this wrapper code is about turning vpython.vectors into lists for feeding to Vector, which expects three separate arguments. A star in front of an iterable accomplishes the feat of exploding it into the separate arguments required.
Note that vector operations, including negation, always return fresh vectors. Even color has not been made a mutable property, but maybe could be.
Step3: Even though the top code cell contains no instructions to draw, Vpython's way of integrating into Jupyter Notebook seems to be by adding a scene right after the first code cell. Look below for the code that made all of the above happen. Yes, that's a bit strange. | <ASSISTANT_TASK:>
Python Code:
from vpython import *
class Vector:
def __init__(self, x, y, z):
self.v = vector(x, y, z)
def __add__(self, other):
v_sum = self.v + other.v
return Vector(*v_sum.value)
def __neg__(self):
return Vector(*((-self.v).value))
def __sub__(self, other):
V = (self + (-other))
return Vector(*V.v.value)
def __mul__(self, scalar):
V = scalar * self.v
return Vector(*V.value)
def norm(self):
v = norm(self.v)
return Vector(*v.value)
def length(self):
return mag(self.v)
def draw(self):
self.the_cyl = cylinder(pos=vector(0,0,0), axis=self.v, radius=0.1)
self.the_cyl.color = color.cyan
XBASIS = Vector(1,0,0)
YBASIS = Vector(0,1,0)
ZBASIS = Vector(0,0,1)
XNEG = -XBASIS
YNEG = -YBASIS
ZNEG = -ZBASIS
XYZ = [XBASIS, XNEG, YBASIS, YNEG, ZBASIS, ZNEG]
sphere(pos=vector(0,0,0), color = color.orange, radius=0.2)
for radial in XYZ:
radial.draw()
Explanation: Python for Everyone!<br/>Oregon Curriculum Network
VPython inside Jupyter Notebooks
The Vector, Edge and Polyhedron types
The Vector class below is but a thin wrapper around VPython's built-in vector type. One might wonder, why bother? Why not just use vpython.vector and be done with it? Also, if wanting to reimplement, why not just subclass instead of wrap? All good questions.
A primary motivation is to keep the Vector and Edge types somewhat aloof from vpython's vector and more welded to vpython's cylinder instead. We want vectors and edges to materialize as cylinders quite easily.
So whether we subclass, or wrap, we want our vectors to have the ability to self-draw.
The three basis vectors must be negated to give all six spokes of the XYZ apparatus. Here's an opportunity to test our __neg__ operator then.
The overall plan is to have an XYZ "jack" floating in space, around which two tetrahedrons will be drawn, with a common center, as twins.
Their edges will intersect as at the respective face centers of the six-faced, twelve-edged hexahedron, our "duo-tet" cube (implied, but could be hard-wired as a next Polyhedron instance, just give it the six faces).
A lot of this wrapper code is about turning vpython.vectors into lists for feeding to Vector, which expects three separate arguments. A star in front of an iterable accomplishes the feat of exploding it into the separate arguments required.
Note that vector operations, including negation, always return fresh vectors. Even color has not been made a mutable property, but maybe could be.
End of explanation
class Edge:
def __init__(self, v0, v1):
self.v0 = v0
self.v1 = v1
def draw(self):
        """cylinder wants a starting point, and a direction vector"""
pointer = (self.v1 - self.v0)
direction_v = norm(pointer) * pointer.length() # normalize then stretch
self.the_cyl = cylinder(pos = self.v0.v, axis=direction_v.v, radius=0.1)
self.the_cyl.color = color.green
class Polyhedron:
def __init__(self, faces, corners):
self.faces = faces
self.corners = corners
self.edges = self._get_edges()
def _get_edges(self):
        """
        take a list of face-tuples and distill
        all the unique edges,
        e.g. ((1,2,3)) => ((1,2),(2,3),(1,3))
        e.g. icosahedron has 20 faces and 30 unique edges
        ( = cubocta 24 + tetra's 6 edges to squares per
        jitterbug)
        """
uniqueset = set()
for f in self.faces:
edgetries = zip(f, f[1:]+ (f[0],))
for e in edgetries:
e = tuple(sorted(e)) # keeps out dupes
uniqueset.add(e)
return tuple(uniqueset)
def draw(self):
for edge in self.edges:
the_edge = Edge(Vector(*self.corners[edge[0]]),
Vector(*self.corners[edge[1]]))
the_edge.draw()
the_verts = \
{ 'A': (0.35355339059327373, 0.35355339059327373, 0.35355339059327373),
'B': (-0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'C': (-0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'D': (0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'E': (-0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'F': (0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'G': (0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'H': (-0.35355339059327373, 0.35355339059327373, 0.35355339059327373)}
the_faces = (('A','B','C'),('A','C','D'),('A','D','B'),('B','C','D'))
other_faces = (('E','F','G'), ('E','G','H'),('E','H','F'),('F','G','H'))
tetrahedron = Polyhedron(the_faces, the_verts)
inv_tetrahedron = Polyhedron(other_faces, the_verts)
print(tetrahedron._get_edges())
print(inv_tetrahedron._get_edges())
tetrahedron.draw()
inv_tetrahedron.draw()
Explanation: Even though the top code cell contains no instructions to draw, Vpython's way of integrating into Jupyter Notebook seems to be by adding a scene right after the first code cell. Look below for the code that made all of the above happen. Yes, that's a bit strange.
End of explanation
<END_TASK> |
15,656 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Performance Overview
Here, we will examine the performance of FNGS as a function of time on several datasets. These investigations were performed on a 4-core machine (4 threads) with a 4.0 GHz processor, using the version of FNGS in ndmg/eric-dev-gkiar-fmri on 03/27.
Step1: BNU 1
Step2: HNU Dataset
Step3: DC1 Dataset
Step4: NKI 1 | <ASSISTANT_TASK:>
Python Code:
%%script false
## disklog.sh
#!/bin/bash -e
# run this in the background with nohup ./disklog.sh > disk.txt &
#
while true; do
echo "$(du -s $1 | awk '{print $1}')"
sleep 30
done
##cpulog.sh
import psutil
import time
import argparse
def cpulog(outfile):
with open(outfile, 'w') as outf:
while(True):
cores = psutil.cpu_percent(percpu=True)
corestr = ",".join([str(core) for core in cores])
outf.write(corestr + '\n')
outf.flush()
time.sleep(1) # delay for 1 second
def main():
parser = argparse.ArgumentParser()
parser.add_argument('outfile', help='the file to write core usage to.')
args = parser.parse_args()
cpulog(args.outfile)
if __name__ == "__main__":
main()
## memlog.sh
#!/bin/bash -e
# run this in the background with nohup ./memlog.sh > mem.txt &
#
while true; do
echo "$(free -m | grep buffers/cache | awk '{print $3}')"
sleep 1
done
## runonesub.sh
# A function for generating memory and cpu summaries for fngs pipeline.
#
# Usage: ./generate_statistics.sh /path/to/rest /path/to/anat /path/to/output
rm -rf $3
mkdir $3
./memlog.sh > ${3}/mem.txt &
memkey=$!
python cpulog.py ${3}/cpu.txt &
cpukey=$!
./disklog.sh $3 > ${3}/disk.txt &
diskkey=$!
res=2mm
atlas="/FNGS_server/atlases/atlas/MNI152_T1-${res}.nii.gz"
atlas_brain="/FNGS_server/atlases/atlas/MNI152_T1-${res}_brain.nii.gz"
atlas_mask="/FNGS_server/atlases/mask/MNI152_T1-${res}_brain_mask.nii.gz"
lv_mask="/FNGS_server/atlases/mask/HarvOx_lv_thr25-${res}.nii.gz"
label="/FNGS_server/atlases/label/desikan-${res}.nii.gz"
exec 4<$1
exec 5<$2
fngs_pipeline $1 $2 $atlas $atlas_brain $atlas_mask $lv_mask $3 none $label --fmt graphml
kill $memkey $cpukey $diskkey
%matplotlib inline
import numpy as np
import re
import matplotlib.pyplot as plt
from IPython.display import Image, display
def memory_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('memory usage in GB')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Memory Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
def cpu_function(infile, dataset):
with open(infile, 'r') as cpuf:
lines = cpuf.readlines()
testar = [re.split(',',line.strip()) for line in lines][0:-1]
corear = np.zeros((len(testar), len(testar[0])))
for i in range(0, len(testar)):
corear[i,:] = np.array([float(cpu) for cpu in testar[i]])
fig=plt.figure()
ax = fig.add_subplot(111)
lines = [ax.plot(corear[:,i], '--', label='cpu '+ str(i),
alpha=0.5)[0] for i in range(0, corear.shape[1])]
total = corear.sum(axis=1)
lines.append(ax.plot(total, label='all cores')[0])
labels = [h.get_label() for h in lines]
fig.legend(handles=lines, labels=labels, loc='lower right', prop={'size':6})
ax.set_ylabel('CPU usage (%)')
ax.set_ylim([0, max(total)+10])
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Processor Usage; max = %.1f per; mean = %.1f per' % (max(total), np.mean(total)))
return fig
def disk_function(infile, dataset):
with open(infile, 'r') as disk:
lines = disk.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('Disk usage GB')
ax.set_xlabel('Time (30 s)')
ax.set_title(dataset + ' Disk Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
Explanation: Performance Overview
Here, we will examine the performance of FNGS as a function of time on several datasets. These investigations were performed on a 4-core machine (4 threads) with a 4.0 GHz processor, using the version of FNGS in ndmg/eric-dev-gkiar-fmri on 03/27.
End of explanation
memfig = memory_function('/data/BNU_sub/BNU_single/mem.txt', 'BNU 1 single')
diskfig = disk_function('/data/BNU_sub/BNU_single/disk.txt', 'BNU 1 single')
cpufig = cpu_function('/data/BNU_sub/BNU_single/cpu.txt', 'BNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: BNU 1
End of explanation
memfig = memory_function('/data/HNU_sub/HNU_single/mem.txt', 'HNU 1 single')
diskfig = disk_function('/data/HNU_sub/HNU_single/disk.txt', 'HNU 1 single')
cpufig = cpu_function('/data/HNU_sub/HNU_single/cpu.txt', 'HNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: HNU Dataset
End of explanation
memfig = memory_function('/data/DC_sub/DC_single/mem.txt', 'DC 1 single')
diskfig = disk_function('/data/DC_sub/DC_single/disk.txt', 'DC 1 single')
cpufig = cpu_function('/data/DC_sub/DC_single/cpu.txt', 'DC 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: DC1 Dataset
End of explanation
memfig = memory_function('/data/NKI_sub/NKI_single/mem.txt', 'NKI 1 single')
diskfig = disk_function('/data/NKI_sub/NKI_single/disk.txt', 'NKI 1 single')
cpufig = cpu_function('/data/NKI_sub/NKI_single/cpu.txt', 'NKI 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: NKI 1
End of explanation
<END_TASK> |
15,657 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lesson 13 - Validation
Step1: Create a decision tree classifier (just use the default parameters), train it on all the data. Print out the accuracy.
THIS IS AN OVERFIT TREE, DO NOT TRUST THIS NUMBER! Nonetheless,
- what’s the accuracy?
Step2: Now you’ll add in training and testing, so that you get a trustworthy accuracy number. Use the train_test_split validation available in sklearn.cross_validation; hold out 30% of the data for testing and set the random_state parameter to 42 (random_state controls which points go into the training set and which are used for testing; setting it to 42 means we know exactly which events are in which set, and can check the results you get).
- What’s your updated accuracy? | <ASSISTANT_TASK:>
Python Code:
import pickle
import sys
sys.path.append("../tools/")
from feature_format import featureFormat, targetFeatureSplit
data_dict = pickle.load(open("../final_project/final_project_dataset.pkl", "r") )
### first element is our labels, any added elements are predictor
### features. Keep this the same for the mini-project, but you'll
### have a different feature list when you do the final project.
features_list = ["poi", "salary"]
data = featureFormat(data_dict, features_list)
labels, features = targetFeatureSplit(data)
print len(labels), len(features)
Explanation: Lesson 13 - Validation
End of explanation
from sklearn import tree
from time import time
def submitAcc(features, labels):
return clf.score(features, labels)
clf = tree.DecisionTreeClassifier()
t0 = time()
clf.fit(features, labels)
print("done in %0.3fs" % (time() - t0))
pred = clf.predict(features)
print "Classifier with accuracy %.2f%%" % (submitAcc(features, labels))
Explanation: Create a decision tree classifier (just use the default parameters), train it on all the data. Print out the accuracy.
THIS IS AN OVERFIT TREE, DO NOT TRUST THIS NUMBER! Nonetheless,
- what’s the accuracy?
End of explanation
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, labels, test_size=0.30, random_state=42)
print len(X_train), len(y_train)
print len(X_test), len(y_test)
clf = tree.DecisionTreeClassifier()
t0 = time()
clf.fit(X_train, y_train)
print("done in %0.3fs" % (time() - t0))
pred = clf.predict(X_test)
print "Classifier with accuracy %.2f%%" % (submitAcc(X_test, y_test))
Explanation: Now you’ll add in training and testing, so that you get a trustworthy accuracy number. Use the train_test_split validation available in sklearn.cross_validation; hold out 30% of the data for testing and set the random_state parameter to 42 (random_state controls which points go into the training set and which are used for testing; setting it to 42 means we know exactly which events are in which set, and can check the results you get).
- What’s your updated accuracy?
End of explanation
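Beyond a single train/test split, k-fold cross-validation averages the score over several held-out folds. Below is a sketch on synthetic data; note that in modern scikit-learn these utilities live in sklearn.model_selection rather than sklearn.cross_validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(42)
X = rng.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # toy, roughly separable labels

clf = DecisionTreeClassifier(random_state=42)
scores = cross_val_score(clf, X, y, cv=5)  # accuracy on each of 5 folds
print("mean accuracy: %.2f" % scores.mean())
```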
<END_TASK> |
15,658 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<div style='background-image
Step1: Exercise 1
Define a python function call "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$
Step2: Exercise 2
Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact?
Step3: Exercise 3
Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both, the analytical and numerical derivatives together with the difference error. | <ASSISTANT_TASK:>
Python Code:
# This is a configuration step for the exercise. Please run it before calculating the derivative!
import numpy as np
import matplotlib.pyplot as plt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Numerical derivatives based on a derivative matrix</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
Fabian Linder (@fablindner)
Heiner Igel (@heinerigel)
David Vargas (@dvargas)
Basic Equations
Calculating a derivative using the differentation theorem of the Fourier Transform is in the mathematical sense a convolution of the function $f(x)$ with $ik$, where $k$ is the wavenumber and $i$ the imaginary unit. This can also be formulated as a matrix-vector product involving so-called Toeplitz matrices. An elegant (but inefficient) way of performing a derivative operation on a space-dependent function described on the Chebyshev collocation points is by defining a derivative matrix $D_{ij}$
$$ D_{ij} \ = \ \frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = 0} $$
$$ D_{ij} \ = \ -\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N} $$
$$ D_{ij} \ = \ -\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1} $$
$$ D_{ij} \ = \ \frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j =
0,1,...,N}$$
where $N+1$ is the number of Chebyshev collocation points $ \ x_i = cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as
$$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$
$$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$
This differentiation matrix allows us to write the derivative of the function $f_i = f(x_i)$ (possibly depending on time) simply as
$$\partial_x f_i = D_{ij} \ f_j$$
where the right-hand side is a matrix-vector product, and the Einstein summation convention applies.
End of explanation
#################################################################
# IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE!
#################################################################
Explanation: Exercise 1
Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$
End of explanation
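One possible solution sketch for the stub above, following the formulas in the introduction (a correct implementation differentiates polynomials up to degree N exactly):

```python
import numpy as np

def get_cheby_matrix(nx):
    # Chebyshev collocation points x_i = cos(i*pi/N), i = 0..N
    x = np.cos(np.pi * np.arange(nx + 1) / nx)
    c = np.ones(nx + 1)
    c[0] = c[nx] = 2.0
    D = np.zeros((nx + 1, nx + 1))
    for i in range(nx + 1):
        for j in range(nx + 1):
            if i != j:
                D[i, j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
            elif 0 < i < nx:
                D[i, i] = -0.5 * x[i] / (1.0 - x[i] ** 2)
    D[0, 0] = (2.0 * nx ** 2 + 1.0) / 6.0
    D[nx, nx] = -(2.0 * nx ** 2 + 1.0) / 6.0
    return D
```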
#################################################################
# IMPLEMENT YOUR SOLUTION HERE!
#################################################################
Explanation: Exercise 2
Calculate the numerical derivative by applying the differentiation matrix $D_{ij}$. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the Chebyshev collocation points. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact?
End of explanation
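A self-contained sketch for this exercise (it rebuilds a small differentiation matrix inline so it runs on its own; with the get_cheby_matrix of Exercise 1 defined, you would use that instead). The error is not exactly zero because the Gaussian is not a finite polynomial, and it grows as the wavenumber content rises, i.e. as the width s shrinks relative to the grid resolution:

```python
import numpy as np

def cheby_matrix(nx):
    # inline rebuild of the Chebyshev derivative matrix (cf. Exercise 1)
    x = np.cos(np.pi * np.arange(nx + 1) / nx)
    c = np.ones(nx + 1)
    c[0] = c[nx] = 2.0
    D = np.zeros((nx + 1, nx + 1))
    for i in range(nx + 1):
        for j in range(nx + 1):
            if i != j:
                D[i, j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    D -= np.diag(D.sum(axis=1))  # diagonal via negative row sums (D of a constant is 0)
    return D, x

nx = 64
D, x = cheby_matrix(nx)
s = 0.2                                   # Gaussian width; smaller s -> larger error
f = np.exp(-x**2 / s**2)
df_ana = -2.0 * x / s**2 * f              # analytical derivative
df_num = D @ f                            # numerical derivative
max_err = np.max(np.abs(df_num - df_ana))
```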
#################################################################
# PLOT YOUR SOLUTION HERE!
#################################################################
Explanation: Exercise 3
Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both, the analytical and numerical derivatives together with the difference error.
End of explanation
<END_TASK> |
15,659 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Interactive UMAP visualization with Graphistry for scale and explainability
UMAP is a great algorithm for clustering rich data. It is commonly used for turning data with many columns into more approachable 2D/3D visualizations, finding clusters, and making cluster classifiers. You may be familiar with earlier dimensionality-reduction algorithms like PCA, K-Means, and t-SNE that UMAP generally improves upon. As great as UMAP is, unfortunately, traditional UMAP results are still hard to understand for questions like what elements are in a cluster or why, and hard to interact with as you tweak their many settings. This notebook shows how to use Graphistry and graphs to quickly understand and interact with the results.
We demonstrate working with the leading CPU implementation of UMAP, umap-learn. You may also enjoy our end-to-end GPU tutorial that uses the GPU-accelerated RAPIDS cuML implementation of UMAP, though it currently takes an extra step of having to manually compute the k-nn.
For an ongoing example, we use a security event log (IPs, timestamps, counts, alert names, ...), and we've seen similarly great results for areas like fraud, genomics, and misinformation
Step1: Prep 2
Step2: Prep 3
Step3: Prep 4
Step4: UMAP
UMAP has many options -- run help(umap.UMAP) for details. It returns two things we use
Step5: Visualize 1
Step6: Fascinatingly, when Graphistry's force-directed graph layout algorithm reuses UMAP's inferred edge connectivities, the layout does not significantly change from what UMAP computes. Try hitting the "play" button in the tool to see for yourself! That means the graph-based intuitions for subsequent interactions, such as removing key nodes/edges and reclustering, should be consistent.
Visualize 2 | <ASSISTANT_TASK:>
Python Code:
# Already installed in Graphistry & RAPIDS distros
# ! pip install --user umap-learn
# ! pip install --user graphistry
import graphistry, pandas as pd, umap
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
Explanation: Interactive UMAP visualization with Graphistry for scale and explainability
UMAP is a great algorithm for clustering rich data. It is commonly used for turning data with many columns into more approachable 2D/3D visualizations, finding clusters, and making cluster classifiers. You may be familiar with earlier dimensionality-reduction algorithms like PCA, K-Means, and t-SNE that UMAP generally improves upon. As great as UMAP is, unfortunately, traditional UMAP results are still hard to understand for questions like what elements are in a cluster or why, and hard to interact with as you tweak their many settings. This notebook shows how to use Graphistry and graphs to quickly understand and interact with the results.
We demonstrate working with the leading CPU implementation of UMAP, umap-learn. You may also enjoy our end-to-end GPU tutorial that uses the GPU-accelerated RAPIDS cuML implementation of UMAP, though it currently takes an extra step of having to manually compute the k-nn.
For an ongoing example, we use a security event log (IPs, timestamps, counts, alert names, ...), and we've seen similarly great results for areas like fraud, genomics, and misinformation:
Prep 1: Install
Prep 2: Load and clean data
Prep 3: Featurization
Prep 4: Normalize & weight
UMAP
Visualize 1: UMAP as a graph
Visualize 2: Explaining UMAP connections
Prep 1: Install
Install umap and graphistry if you have not already
If you are not running a graphistry server, you can use a free Hub account via the username/password option
End of explanation
df = pd.read_csv('../../data/honeypot.csv')
df['victimPort'] = df['victimPort'].astype('uint32')
df['time(max)'] = pd.to_datetime(df['time(max)'] * 1000 * 1000 * 1000)
df['time(min)'] = pd.to_datetime(df['time(min)'] * 1000 * 1000 * 1000)
print(df.info())
df.sample(5)
Explanation: Prep 2: Load and clean data
UMAP works with most tabular data. You can use it with rows that have strings, numbers, dates and more!
The below small example is server logs of security honeypots getting hacked.
End of explanation
dummmies = [
pd.get_dummies(df[c], prefix=f'{c}_oh')
for c in ['victimIP', 'victimPort', 'vulnName']
]
encoded_ips = ([
df[[]].assign(
attackerIP_a = df['attackerIP'].str.extract("^(\d+)\.").astype('uint8'),
attackerIP_b = df['attackerIP'].str.extract("^\d+\.(\d+)\.").astype('uint8'),
attackerIP_c = df['attackerIP'].str.extract("^\d+\.\d+\.(\d+)\.").astype('uint8'),
attackerIP_d = df['attackerIP'].str.extract("^\d+\.\d+\.\d+\.(\d+)$").astype('uint8'),
victimIP_a = df['victimIP'].str.extract("^(\d+)\.").astype('uint8'),
victimIP_b = df['victimIP'].str.extract("^\d+\.(\d+)\.").astype('uint8'),
victimIP_c = df['victimIP'].str.extract("^\d+\.\d+\.(\d+)\.").astype('uint8'),
victimIP_d = df['victimIP'].str.extract("^\d+\.\d+\.\d+\.(\d+)$").astype('uint8')
)
])
orig_continuous = [
df[['victimPort', 'count', 'time(max)', 'time(min)']].assign(
duration=df['time(max)'] - df['time(min)']
)
]
df2 = pd.concat(encoded_ips + dummmies + orig_continuous, axis=1)
print('new shape:', df2.info())
df2.sample(5)
Explanation: Prep 3: Featurization
UMAP operates on numeric columns, so we create a new table of numeric values using several common feature encodings:
* Replace categorical values like specific IPs, ports, and alert names with many one-hot encoded columns. Ex: For column "victimIP", and many columns like "victimIP_oh_127.0.0.1" whose values are 0/1
* Component columns: Split IPs like "172.31.13.124" into parts like "172" vs "31" in case there are phenomena like coordinated IP ranges
* Compute derived and entangled columns, like augmenting the min/max times of an alert being seen with the duration (max - min)
While the original data only had 7 columns, the new one has 33. We've worked with 10K+ columns in GPU-accelerated use cases.
You may benefit from using libraries to streamline the normalization. We only use pandas calls to be clear, and in a way that is directly translatable to cuDF for automatic GPU acceleration on bigger workloads.
End of explanation
df3 = df2.copy()
for c in df3:
#print(c)
df3[c] = ((df3[c] - df3[c].min())/(df3[c].max() - df3[c].min())).fillna(0)
print(df3.info())
df3.sample(5)
Explanation: Prep 4: Normalize & weight
Once you have numeric data, UMAP is still sensitive to how you normalize each column. We do a simple conversion of each column to values between 0-1.
Fancier normalizations of some columns might also try to adjust for aspects like the distribution. Likewise, you can try increasing specific columns to being 0-10 to increase their relative weight.
End of explanation
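A minimal sketch of the same min-max scaling plus an optional per-column weight (the toy data and the 10x factor are arbitrary illustrations):

```python
import pandas as pd

df3 = pd.DataFrame({'count': [1, 5, 9], 'victimPort': [22, 80, 443]})

# Min-max scale every column to [0, 1]; fillna(0) guards constant columns
for c in df3:
    rng = df3[c].max() - df3[c].min()
    df3[c] = ((df3[c] - df3[c].min()) / rng).fillna(0)

# Optionally up-weight a column you believe matters more (arbitrary 10x here)
df3['count'] = df3['count'] * 10

print(df3)
```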
# see help(umap.UMAP)
umap_options = {
'n_components': 2,
'metric': 'euclidean'
}
%%time
embedding = umap.UMAP(**umap_options).fit(df3)
embedding
%%time
coo = embedding.graph_.tocoo()
print('coo lens', len(coo.row), len(coo.col), len(coo.data))
print(coo.row[0:5], coo.col[0:5], coo.data[0:5])
weighted_edges_df = pd.DataFrame({
's': coo.row,
'd': coo.col,
'w': coo.data
})
weighted_edges_df.sample(3)
nodes_df = pd.concat([
df,
pd.DataFrame(embedding.embedding_).rename(columns={0: 'x', 1: 'y'})
], axis=1)
nodes_df['x'] = nodes_df['x'] * 100
nodes_df['y'] = nodes_df['y'] * 100
nodes_df = nodes_df.reset_index().rename(columns={'index': 'n'})
print(nodes_df.info())
nodes_df.sample(5)
Explanation: UMAP
UMAP has many options -- run help(umap.UMAP) for details. It returns two things we use:
* An (x,y) position pair for each record
* A weighted edgelist (sparse matrix) listing higher-value similarities between records
We enrich our original data frame with the x/y positions and create a new one with the edges. Note that we throw away most of the features: we'll get explainable summaries later.
End of explanation
# Most of the settings are optional and can be changed on-the-fly in the UI
g = (
graphistry
.nodes(nodes_df, 'n')
.edges(weighted_edges_df, 's', 'd')
.bind(point_x='x', point_y='y', edge_weight='w')
.settings(url_params={'play': 0, 'edgeInfluence': 5})
.encode_edge_color('w', ['maroon', 'pink', 'white'], as_continuous=True)
.encode_point_size('count')
)
g.plot()
Explanation: Visualize 1: Interactive UMAP using graphs
We first use UMAP data for an interactive graph visualization you can inspect and manipulate
Nodes: Represents the original records
Position: From the UMAP embedding
Size: Bind to the original 'count' column
Color: Use Graphistry's default to autoinfer a community label based on edges (below)
Edges: Shows UMAP's inferred connectivities (correlations)
Color: Edge weight, cold to hot
Weights: From UMAP's inferred connectivities
You can think of UMAP's edges as being the most important pairwise weighted votes stating "these records should be close together". In force-directed graph layout algorithms, they act as elastic springs that prevent the nodes from drifting too far apart.
The visualization lets you interactively explore phenomena like coloring by alert name and time and drilling into specific clusters. As UMAP is fairly fast here (and can be faster via the cuML flow), we have both a fast visual interaction loop and decently fast coding loop.
End of explanation
#triple: src_node_EDGE_dst_node
edge_triples = (g
._edges
.merge(g._nodes, left_on=g._source, right_on=g._node)
.rename(columns={c: f'src_{c}' for c in g._nodes})
.merge(g._nodes, left_on=g._destination, right_on=g._node)
.rename(columns={c: f'dst_{c}' for c in g._nodes})
)
#print(edge_triples.columns)
equivs = []
for c in g._nodes:
equiv = edge_triples[ edge_triples[f'src_{c}'] == edge_triples[f'dst_{c}'] ]
if len(equiv) > 0:
equiv = equiv[[g._source, g._destination]].assign(
type=c,
match_val=edge_triples[f'src_{c}'],
w=0.1)
equiv[c] = edge_triples[f'src_{c}']
print('adding', c, len(equiv))
equivs.append(equiv)
else:
print('no hits on col', c)
equivs_df = pd.concat(equivs)
equivs_df['match_val'] = equivs_df['match_val'].astype(str) # ensure arrow works
#equivs_df.sample(10)
edges2 = pd.concat([g._edges.assign(type='umap', match_val='1'), equivs_df])
g2 = (g
.edges(edges2)
#.edges(edges2[edges2['type'] == 'attackerIP'])
.bind(edge_label='match_val')
.encode_edge_color('type', categorical_mapping={
'umap': 'grey',
'victimIP': 'blue',
'attackerIP': 'lightblue',
'victimPort': 'green',
'vulnName': 'yellow',
'count': 'white'
})
)
g2.plot()
print(g2._edges.info())
g2._edges.sample(3)
Explanation: Fascinatingly, when Graphistry's force-directed graph layout algorithm reuses UMAP's inferred edge connectivities, the layout does not significantly change from what UMAP computes. Try hitting the "play" button in the tool to see for yourself! That means the graph-based intuitions for subsequent interactions, such as removing key nodes/edges and reclustering, should be consistent.
Visualize 2: Explainable UMAP connections
When nodes have many features, even having UMAP's edges showing their nearest neighbors does not clarify which attributes best explain why they are being clustered. For example, the primary partitioning largely follows alert name, but those split into subclusters with interesting designs based on secondary factors like IP address and time.
To visually clarify which features a pair of nodes have in common, we add edges between them, one for each attribute in common. The more common features between two nodes, the more edges. This is similar to how graphistry.hypergraph(df, ...)['graph'].plot() works. For simplicity, the below algorithm computes new edges for exact feature matches.
For initial intuition, we color the edges based on the type -- IP, alert name, etc. -- but it may also make sense to color them by specific values, like a particular IP address or alert name.
End of explanation
<END_TASK> |
15,660 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Template for test
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation; however, no benchmarks are available yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Note
Step2: Y Phosphorylation
Step3: T Phosphorylation | <ASSISTANT_TASK:>
Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
Explanation: Template for test
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "S")
del x
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation; however, no benchmarks are available yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Note: SMOTEENN seems to perform best
End of explanation
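For intuition, the simplest of these imbalance functions, random under-sampling, can be sketched in plain NumPy (toy synthetic labels; real runs use imbalanced-learn's implementations of SMOTEENN, ADASYN, etc.):

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic imbalanced labels: 90 negatives, 10 positives
y = np.array([0] * 90 + [1] * 10)
X = rng.normal(size=(100, 4))

# Random under-sampling: drop majority rows until the classes balance
pos = np.flatnonzero(y == 1)
neg = rng.choice(np.flatnonzero(y == 0), size=len(pos), replace=False)
keep = np.concatenate([neg, pos])

X_bal, y_bal = X[keep], y[keep]
print(np.bincount(y_bal))
```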
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "Y")
del x
Explanation: Y Phosphorylation
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark("Data/Benchmarks/phos_stripped.csv", "T")
del x
Explanation: T Phosphorylation
End of explanation
<END_TASK> |
15,661 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Advanced Sounding
Plot a sounding using MetPy with more advanced features.
Beyond just plotting data, this uses calculations from metpy.calc to find the lifted
condensation level (LCL) and the profile of a surface-based parcel. The area between the
ambient profile and the parcel profile is colored as well.
Step1: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
Step2: We will pull the data out of the example dataset into individual variables and
assign units.
Step3: Create a new figure. The dimensions here give a good aspect ratio. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
Explanation: Advanced Sounding
Plot a sounding using MetPy with more advanced features.
Beyond just plotting data, this uses calculations from metpy.calc to find the lifted
condensation level (LCL) and the profile of a surface-based parcel. The area between the
ambient profile and the parcel profile is colored as well.
End of explanation
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all'
).reset_index(drop=True)
Explanation: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
End of explanation
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
End of explanation
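For reference, the u/v decomposition computed above follows the meteorological convention (direction is where the wind blows from); a plain-math sketch, assuming that convention:

```python
import math

def wind_components(speed, direction_deg):
    # Meteorological convention: direction is where the wind comes FROM,
    # so u = -speed*sin(dir), v = -speed*cos(dir)
    rad = math.radians(direction_deg)
    return -speed * math.sin(rad), -speed * math.cos(rad)

u, v = wind_components(10.0, 270.0)  # wind from the west
print(round(u, 6), round(v, 6))
```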
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 115, 100)
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot.
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot. Because `p`'s first value is
# ~1000 mb and its last value is ~250 mb, the `0` index is selected for
# `p`, `T`, and `Td` to lift the parcel from the surface. If `p` was inverted,
# i.e. start from low value, 250 mb, to a high value, 1000 mb, the `-1` index
# should be selected.
lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Shade areas of CAPE and CIN
skew.shade_cin(p, T, prof)
skew.shade_cape(p, T, prof)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
Explanation: Create a new figure. The dimensions here give a good aspect ratio.
End of explanation
<END_TASK> |
15,662 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = { w : i for i, w in enumerate(vocab, 0)}
int_to_vocab = dict(enumerate(vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
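A quick sanity check of the lookup tables on a toy word list (the function body is repeated here only so the example is self-contained; the most frequent word gets id 0 and the two dicts invert each other):

```python
from collections import Counter

def create_lookup_tables(text):
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {w: i for i, w in enumerate(vocab)}
    int_to_vocab = dict(enumerate(vocab))
    return vocab_to_int, int_to_vocab

words = ['moe', 'homer', 'moe', 'bart', 'moe', 'homer']
v2i, i2v = create_lookup_tables(words)

assert v2i['moe'] == 0                         # most frequent word, lowest id
assert all(i2v[v2i[w]] == w for w in words)    # round-trip
print(v2i)
```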
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_dict = {'.' : "||Period||", ',' : "||Comma||", '"' : "||Quotation_Mark||",\
';' : "||Semicolon||", '!': "||Exclamation_Mark||", '?': "||Question_Mark||", \
'(' : "||Left_Parentheses||", ')' : "||Right_Parentheses||", '--' : "||Dash||", '\n' : "||Return||"}
return token_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs_ = tf.placeholder(tf.int32, shape=[None, None], name='input')
targets_ = tf.placeholder(tf.int32, shape=[None, None], name='targets')
learn_rate_ = tf.placeholder(tf.float32, shape=None, name='learning_rate')
return (inputs_, targets_, learn_rate_)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm])
initial_state = tf.identity(cell.zero_state(batch_size, tf.int32), name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
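Conceptually, tf.nn.embedding_lookup is just row indexing into the embedding matrix; a NumPy sketch of the same operation:

```python
import numpy as np

vocab_size, embed_dim = 5, 3
rng = np.random.RandomState(0)
embedding = rng.uniform(-1, 1, size=(vocab_size, embed_dim))

input_ids = np.array([[0, 2], [4, 4]])   # a batch of word-id sequences
embedded = embedding[input_ids]          # shape (batch, seq_len, embed_dim)

print(embedded.shape)
```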
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, fs = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(fs, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
rnn, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None, \
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),\
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
num_batches = int(len(int_text) / (batch_size * seq_length))
num_words = num_batches * batch_size * seq_length
input_data = np.array(int_text[:num_words])
target_data = np.array(int_text[1:num_words+1])
input_batches = np.split(input_data.reshape(batch_size, -1), num_batches, 1)
target_batches = np.split(target_data.reshape(batch_size, -1), num_batches, 1)
#last target value in the last batch is the first input value of the first batch
#print (batches)
target_batches[-1][-1][-1]=input_batches[0][0][0]
return np.array(list(zip(input_batches, target_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# Number of Epochs
num_epochs = 20
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name('input:0')
InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
    """Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
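The argmax rule above always returns the single most probable word, which tends to make generated scripts repetitive. A common alternative is to sample the next word from the predicted distribution. The sketch below is an illustrative addition (not part of the original project code); it assumes `probabilities` is a 1-D array over word ids `0..len(int_to_vocab)-1` that sums to 1:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # Sample a word id according to the predicted distribution
    # instead of always taking the argmax.
    chosen = np.random.choice(len(int_to_vocab), p=probabilities)
    return int_to_vocab[chosen]
```

With a sharply peaked distribution this behaves like argmax; with a flatter one it introduces variety into the generated script.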
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0, dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
<END_TASK> |
15,663 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Pandas
Pandas is a very powerful package for tabular data analysis and processing, bringing with it highly efficient data structures and operations
Step1: Series
Series are one-dimensional structures, like a 1-dimensional Numpy array
Step2: The data in a Series can carry an index, enabling optimized access to the data
Step3: Moreover, the index can be used to give semantics to a Series' data, also allowing elements to be accessed by the assigned index label
Step4: Series also have a transformation method, as presented in the previous lecture. This method is called apply
Step5: DataFrame
A DataFrame is a table in which each column is a Series. Like a Series, a DataFrame has an index, but here the index refers to an entire row, i.e., to the element at that position in every column
Step6: For export purposes, a DataFrame can be rendered in several formats
Step7: While in a Series we use square brackets ([]) to access the element at a given index, in a DataFrame the operator refers to a Series, allowing us to access it, overwrite it, or add a new one
Step8: A DataFrame can also be transposed, i.e., its column labels become the indices and its indices become the new columns
Step9: We can also sort the rows of a DataFrame by one of its columns
Step10: Importing a real dataset
Kaggle platform - Data Science competitions
Titanic
Step11: The pandas bracket operator ([]) can also be used for filtering: given a condition (or predicate), it returns only the DataFrame rows that satisfy the predicate.
Note
Step12: A DataFrame also lets us run counts over the values present in its Series, allowing us to analyze the occurrence of certain categorical data
Step13: Por último mas nunca menos importante, podemos agrupar as linhas do DataFrame a partir de uma coluna e operar sobre os grupos criados | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
Explanation: Pandas
Pandas is a very powerful package for tabular data analysis and processing, bringing with it highly efficient data structures and operations
End of explanation
a = pd.Series([20, 50, 190, 11, 76])
a
Explanation: Series
Series are one-dimensional structures, like a 1-dimensional Numpy array
End of explanation
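Beyond construction and printing, a Series also supports vectorized arithmetic: an operation with a scalar is broadcast over every element with no explicit loop. A small illustrative sketch (my addition, reusing the same sample values):

```python
import pandas as pd

s = pd.Series([20, 50, 190, 11, 76])
doubled = s * 2    # elementwise multiplication
shifted = s + 100  # elementwise addition
print(doubled.tolist())  # [40, 100, 380, 22, 152]
```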
dados = [20, 50, 190, 11, 76]
rotulos = ['a', 'b', 'c', 'd', 'e']
b = pd.Series(dados, index=rotulos)
b
Explanation: The data in a Series can carry an index, enabling optimized access to the data
End of explanation
print(a[2])
print(b[2])
print(b['c'])
Explanation: Moreover, the index can be used to give semantics to a Series' data, also allowing elements to be accessed by the assigned index label
End of explanation
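The cell above mixes a positional lookup (`b[2]`) with a label lookup (`b['c']`), which gets ambiguous once the index itself is numeric. pandas provides `.iloc` for strictly positional access and `.loc` for strictly label-based access; a short sketch added here for clarity (not in the original notebook):

```python
import pandas as pd

b = pd.Series([20, 50, 190, 11, 76], index=['a', 'b', 'c', 'd', 'e'])
print(b.iloc[2])   # positional: third element -> 190
print(b.loc['c'])  # label-based: index 'c'    -> 190
```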
a.apply(lambda x: 2*x)
Explanation: Series also have a transformation method, as presented in the previous lecture. This method is called apply: it receives a function that is applied to every element of the Series, returning a new Series with the results
End of explanation
matriz = np.array([[1, 2, 3], [4, 5, 6]])
nomes_linhas = ['L1', 'L2']
nomes_cols = ['C1', 'C2', 'C3']
df = pd.DataFrame(matriz, index=nomes_linhas, columns=nomes_cols)
df
Explanation: DataFrame
A DataFrame is a table in which each column is a Series. Like a Series, a DataFrame has an index, but here the index refers to an entire row, i.e., to the element at that position in every column
End of explanation
print(df.to_latex()) # latex
print(df.to_csv(index=False)) #csv
print(df.to_json()) # JSON
print(df.to_html()) #HTML
Explanation: For export purposes, a DataFrame can be rendered in several formats
End of explanation
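Each exporter above has a read counterpart (`pd.read_csv`, `pd.read_json`, ...), so a DataFrame can round-trip through a serialized format. A quick sketch using an in-memory CSV (my addition, toy data):

```python
import io
import pandas as pd

df = pd.DataFrame({'C1': [1, 4], 'C2': [2, 5]})
csv_text = df.to_csv(index=False)          # serialize to CSV text
df2 = pd.read_csv(io.StringIO(csv_text))   # parse it back
print(df2.equals(df))                      # True: round-trip preserved the data
```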
df['C3']
df['C4'] = [1, 0]
df
df['C4'] = [4, 7]
df
Explanation: While in a Series we use square brackets ([]) to access the element at a given index, in a DataFrame the operator refers to a Series, allowing us to access it, overwrite it, or add a new one
End of explanation
df.transpose()
Explanation: A DataFrame can also be transposed, i.e., its column labels become the indices and its indices become the new columns
End of explanation
df.sort_values(by='C4', ascending=False)
Explanation: We can also sort the rows of a DataFrame by one of its columns
End of explanation
df = pd.read_csv('titanic.csv')
df.head() # first 5 rows
df.tail() # last 5 rows
df.columns # the dataframe's columns
df.describe() # computes statistics for every numeric column of the DataFrame
Explanation: Importing a real dataset
Kaggle platform - Data Science competitions
Titanic: Machine Learning from Disaster
It provides various pieces of information about the passengers, such as age, sex, cabin, and ticket fare, among others.
pandas has predefined functions for reading some file formats.
End of explanation
df[df.Sex == "female"]
Explanation: The pandas bracket operator ([]) can also be used for filtering: given a condition (or predicate), it returns only the DataFrame rows that satisfy the predicate.
Note: writing a predicate for this operator is not always as simple as an ordinary Python predicate
End of explanation
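The note above deserves an example: inside the bracket filter you cannot use Python's `and`/`or`/`not`; you must use the elementwise operators `&`, `|`, `~`, with each condition wrapped in parentheses. A minimal sketch on a toy frame (the Titanic column names are reused here only for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['female', 'male', 'female'],
                   'Age': [22, 35, 58]})
# Parentheses are required: & binds tighter than the comparison operators.
subset = df[(df.Sex == 'female') & (df.Age > 30)]
print(subset)
```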
df['Sex'].value_counts()
df['Survived'].apply(lambda s: "Yes" if s == 1 else "No").value_counts()
Explanation: A DataFrame also lets us run counts over the values present in its Series, allowing us to analyze the occurrence of certain categorical data
End of explanation
df.groupby('Sex')['Survived'].value_counts()
Explanation: Last but not least, we can group the rows of a DataFrame by one of its columns and operate on the resulting groups
End of explanation
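Besides value_counts within each group, groupby also supports per-group aggregation functions such as mean, sum, or agg. A small sketch with toy data (my addition, not the Titanic file):

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['female', 'male', 'female', 'male'],
                   'Age': [22, 35, 58, 41]})
mean_age = df.groupby('Sex')['Age'].mean()
print(mean_age)  # female 40.0, male 38.0
```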
<END_TASK> |
15,664 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<p>
<img src="http
Step1:
Step2:
Step3: | <ASSISTANT_TASK:>
Python Code:
from itertools import repeat
from sympy import *
#from type_system import *
%run ../../src/commons.py
%run ./type-system.py
Explanation: <p>
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg"
alt="UniFI logo" style="float: left; width: 20%; height: 20%;">
<div align="right">
Massimo Nocentini<br>
</div>
</p>
<br>
<div align="center">
<b>Abstract</b><br>
In this document we collect a naive <i>type system</i> based on sets.
</div>
End of explanation
init_printing()
x,y,m,n,t,z = symbols('x y m n t z', commutative=True)
alpha, beta, gamma, eta = symbols(r'\alpha \beta \gamma \eta', commutative=True)
f,g = Function('f'), Function('g')
Explanation:
End of explanation
bin_tree_gfs = bin_tree(tyvar(alpha)[z]).gf()
bin_tree_gfs
bin_tree_gf = bin_tree_gfs[0]
bin_tree_gf.series(z, n=10, kernel='ordinary')
bin_tree_gf.series(z, n=10, kernel='catalan')
occupancy(bin_tree_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z)
Explanation:
End of explanation
bin_tree_of_boolean_gfs = bin_tree(maybe(tyvar(alpha))[z]).gf()
bin_tree_of_boolean_gfs
bin_tree_of_boolean_gf = bin_tree_of_boolean_gfs[0]
occupancy(bin_tree_of_boolean_gf, syms=[alpha], objects='unlike', boxes='unlike').series(z,n=6, kernel='ordinary')
Explanation:
End of explanation
<END_TASK> |
15,665 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Goal
Trying varying levels of bandwidth and DBL scaling with pre-fractionation abundances ('DBL-comm')
Varying parameters
bandwidth (bw)
0.2, 0.6, 1
diffusive boundary layer (DBL) scaling (DBL scaling by abundance)
0.15, 0.2, 0.25
This notebook is the same as rep3_DBL-comm_bw but more narrow parameter ranges
Init
Step1: BD min/max
Step2: Nestly
assuming fragments already simulated
Step3: Notes
Errors due to memory limitations
re-ran these simulations
Comparing to empirical data
correlation/regression analyses of metrics on community composition
Step4: Shannon index
Step5: BD spans
Step6: correlograms (jaccard ~ BD)
Step7: Summary plots for all simulations
Step8: Shannon
Step9: BD span
Step10: correlogram | <ASSISTANT_TASK:>
Python Code:
import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
Explanation: Goal
Trying varying levels of bandwidth and DBL scaling with pre-fractionation abundances ('DBL-comm')
Varying parameters
bandwidth (bw)
0.2, 0.6, 1
diffusive boundary layer (DBL) scaling (DBL scaling by abundance)
0.15, 0.2, 0.25
This notebook is the same as rep3_DBL-comm_bw but more narrow parameter ranges
Init
End of explanation
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
Explanation: BD min/max
End of explanation
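The R cell above maps G+C content to buoyant density with the linear relation BD = GC/100 × 0.098 + 1.66, plus a maximum 13C shift of 0.036. A small Python helper restating the same arithmetic, so the conversion is reusable outside the R cell (an illustrative addition):

```python
def gc_to_bd(gc_percent, c13_shift=0.0):
    """Convert %G+C to CsCl buoyant density (g/ml).

    Uses the notebook's linear relation: BD = GC/100 * 0.098 + 1.66.
    c13_shift adds an isotope-incorporation offset (at most 0.036 here).
    """
    return gc_percent / 100.0 * 0.098 + 1.66 + c13_shift
```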
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'rep4_DBL-comm_bw_HR')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile = '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags_kde_parsed.pkl'
commFile = '/home/nick/notebook/SIPSim/dev/fullCyc/fullCyc_12C-Con_trm_comm.txt'
# empirical data for validation
emp_shan_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_shan.txt'
emp_BDspan_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_trm_BD-span.txt'
emp_corr_file = '/home/nick/notebook/SIPSim/dev/fullCyc_trim/SIP-core_unk_trm_corr.txt'
nreps = 4
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('DBL_scaling', [0.15, 0.2, 0.25])
nest.add('bandwidth', [0.2, 0.6, 1])
nest.add('rep', [x + 1 for x in xrange(nreps)])
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [6], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('commFile', [commFile], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
echo '#-- SIPSim pipeline --#'
echo '# shuffling taxa in comm file'
comm_shuffle_taxa.r {commFile} > comm.txt
echo '# adding diffusion'
SIPSim diffusion \
{fragFile} \
--bw {bandwidth} \
--np {np} \
> ampFrags_KDE_dif.pkl
echo '# adding DBL contamination; abundance-weighted smearing'
SIPSim DBL \
ampFrags_KDE_dif.pkl \
--comm comm.txt \
--commx {DBL_scaling} \
--bw {bandwidth} \
--np {np} \
> ampFrags_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
ampFrags_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm comm.txt \
--bw {bandwidth} \
--np {np} \
> ampFrags_KDE_dif_DBL_inc.pkl
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
comm.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
ampFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
#-- w/ PCR simulation --#
echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
#-- w/out PCR simulation --#
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}.txt \
> OTU_abs{abs}_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_sub.txt \
> OTU_abs{abs}_sub_meta.txt
#-- making summary tables --#
# PCR
shannon_calc.r OTU_abs{abs}_PCR_sub.txt > OTU_abs{abs}_PCR_sub_shan.txt
BD_span_calc.r OTU_abs{abs}_PCR_sub.txt comm.txt > OTU_abs{abs}_PCR_sub_BD-span.txt
correlogram_make.r OTU_abs{abs}_PCR_sub.txt > OTU_abs{abs}_PCR_sub_corr.txt
# no PCR
shannon_calc.r OTU_abs{abs}_sub.txt > OTU_abs{abs}_sub_shan.txt
BD_span_calc.r OTU_abs{abs}_sub.txt comm.txt > OTU_abs{abs}_sub_BD-span.txt
correlogram_make.r OTU_abs{abs}_sub.txt > OTU_abs{abs}_sub_corr.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d rep4_DBL-comm_bw_HR --log-file log.txt -j 3
Explanation: Nestly
assuming fragments already simulated
End of explanation
%%R
# function for loading dataset files
load.data.files = function(sim.files, emp.file){
# loading
## simulations
df = list()
for(x in sim.files){
# simulation
tmp = read.delim(x, sep='\t')
xx = strsplit(x, '/')[[1]]
tmp$DBL_scale = xx[10] %>% as.numeric
tmp$bw = xx[11] %>% as.numeric
tmp$SIM_rep = xx[12] %>% as.numeric
tmp$dataset = 'Simulation'
df[[x]] = tmp
# emperical (matched for each simulation)
if(xx[12] %>% as.numeric == 1){
tmp = read.delim(emp.file, sep='\t')
tmp$DBL_scale = xx[10] %>% as.numeric
tmp$bw = xx[11] %>% as.numeric
tmp$SIM_rep = 1
tmp$dataset = 'Emperical'
xy = paste0(x, '_EMP')
df[[xy]] = tmp
}
}
df = do.call(rbind, df) %>% as.data.frame
rownames(df) = 1:nrow(df)
# return
return(df)
}
Explanation: Notes
Errors due to memory limitations
re-ran these simulations
Comparing to empirical data
correlation/regression analyses of metrics on community composition
End of explanation
sim_shan_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_shan.txt"
print len(sim_shan_files)
print emp_shan_file
%%R -i sim_shan_files -i emp_shan_file
df.shan = load.data.files(sim_shan_files, emp_shan_file)
df.shan %>% tail(n=3)
%%R -w 800 -h 600
# summarizing
df.shan.s = df.shan %>%
group_by(dataset, bw, DBL_scale, BD_bin = ntile(Buoyant_density, 24)) %>%
summarize(mean_shannon = mean(shannon),
sd_shannon = sd(shannon),
mean_BD = mean(Buoyant_density))
ggplot(df.shan.s, aes(mean_BD, mean_shannon, color=dataset,
ymin=mean_shannon-sd_shannon, ymax=mean_shannon+sd_shannon)) +
geom_pointrange() +
facet_grid(DBL_scale ~ bw) +
labs(x='Buoyant density (binned; 24 bins)', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 650 -h 600
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
#calc.spearman = function(x){
# cor(x[,'shannon.x'], x['shannon.y'], method='spearman')[1,1]
#}
calc.pearson = function(x){
cor(x[,'shannon.x'], x['shannon.y'], method='pearson')[1,1]
}
df.shan.corr = inner_join(df.shan.bin, df.shan.bin, c('BD_bin' = 'BD_bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# plotting
ggplot(df.shan.corr, aes(dataset.x, dataset.y, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt), color='white', size=6) +
scale_fill_gradient(low='black', high='red') +
labs(title='Shannon index') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16)
)
Explanation: Shannon index
End of explanation
sim_BDspan_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_BD-span.txt"
print len(sim_BDspan_files)
print emp_BDspan_file
%%R -i sim_BDspan_files -i emp_BDspan_file
df.BDspan = load.data.files(sim_BDspan_files, emp_BDspan_file)
df.BDspan %>% head
%%R -w 700 -h 600
# plotting
ggplot(df.BDspan, aes(mean_preFrac_abund, BD_range_perc, fill=dataset)) +
geom_hex(alpha=0.5) +
scale_x_log10() +
facet_grid(DBL_scale ~ bw) +
labs(x='Pre-fractionation abundance', y='BD span') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -i sim_BDspan_files -i emp_BDspan_file
# binning by pre-fractionation abundances
n.tile = 20
df.BDspan = df.BDspan %>%
group_by(dataset, library, DBL_scale, bw, preFrac_abund_bin = ntile(mean_preFrac_abund, n.tile)) %>%
summarize(mean_preFrac_abund = mean(mean_preFrac_abund),
var_BD_range = var(BD_range),
sd_BD_range = sd(BD_range))
df.BDspan %>% tail(n=3)
%%R -w 650 -h 600
calc.spearman = function(x){
cor(x[,'var_BD_range.x'], x['var_BD_range.y'], method='spearman')[1,1]
}
df.BDspan.corr = inner_join(df.BDspan, df.BDspan, c('preFrac_abund_bin' = 'preFrac_abund_bin',
'DBL_scale' = 'DBL_scale',
'bw' = 'bw')) %>%
group_by(DBL_scale, bw, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.spearman)) %>%
unnest(spearman = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(spearman_txt = round(spearman, 2))
# plotting
ggplot(df.BDspan.corr, aes(dataset.x, dataset.y, fill=spearman)) +
geom_tile() +
geom_text(aes(label=spearman_txt), color='white', size=6) +
scale_fill_gradient(low='black', high='red') +
labs(title='BD span') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16)
)
Explanation: BD spans
End of explanation
sim_corr_files = !find $buildDir -name "OTU_abs1e9_PCR_sub_corr.txt"
print len(sim_corr_files)
print emp_corr_file
%%R -i sim_corr_files -i emp_corr_file
df.corr = load.data.files(sim_corr_files, emp_corr_file)
# binning
df.corr = df.corr %>%
filter(!is.na(Mantel.corr)) %>%
group_by(DBL_scale, bw, dataset, library, class.index.bin = ntile(class.index, 12))
df.corr %>% tail(n=3) %>% as.data.frame
%%R -w 800 -h 600
# plotting
df.corr.s = df.corr %>%
group_by(DBL_scale, bw, dataset, class.index.bin) %>%
summarize(mean_Mantel.corr = mean(Mantel.corr),
sd_Mantel.corr = sd(Mantel.corr),
mean_class.index = mean(class.index))
ggplot(df.corr.s, aes(mean_class.index, mean_Mantel.corr, color=dataset,
ymin=mean_Mantel.corr-sd_Mantel.corr,
ymax=mean_Mantel.corr+sd_Mantel.corr)) +
geom_pointrange() +
labs(x='Class index (binned; 12 bins)', y='Mantel correlation coef.') +
facet_grid(DBL_scale ~ bw) +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 700 -h 600
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
calc.pearson = function(x){
cor(x[,'Mantel.corr.x'], x['Mantel.corr.y'], method='pearson')[1,1]
}
df.corr.lm = inner_join(df.corr, df.corr, c('class.index.bin' = 'class.index.bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# plotting
ggplot(df.corr.lm, aes(dataset.x, dataset.y, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt), color='white', size=6) +
scale_fill_gradient(low='black', high='red') +
labs(title='Beta diversity correlogram') +
facet_grid(DBL_scale ~ bw) +
theme(
text = element_text(size=16)
)
Explanation: correlograms (jaccard ~ BD)
End of explanation
course_data_dir = "/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep4_DBL-comm_bw/"
Explanation: Summary plots for all simulations
End of explanation
sim_shan_files1 = !find $course_data_dir -name "OTU_abs1e9_PCR_sub_shan.txt"
to_rm = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep4_DBL-comm_bw/0.2/0.6'
sim_shan_files1 = [x for x in sim_shan_files1 if not x.startswith(to_rm)]
sim_shan_files2 = !find $buildDir -name "OTU_abs1e9_PCR_sub_shan.txt"
sim_shan_files = sim_shan_files1 + sim_shan_files2
print len(sim_shan_files)
%%R -i sim_shan_files -i emp_shan_file
df.shan = load.data.files(sim_shan_files, emp_shan_file)
df.shan %>% tail(n=3)
%%R -h 300
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
calc.pearson = function(x){
cor(x[,'shannon.x'], x['shannon.y'], method='pearson')[1,1]
}
df.shan.corr = inner_join(df.shan.bin, df.shan.bin, c('BD_bin' = 'BD_bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# getting emperical-emperical corr
emp.val = df.shan.corr %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(pearson)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
# filtering
df.shan.corr.f = df.shan.corr %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(pearson > emp.val, 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
df.shan.corr.f %>% head(n=3)
# plotting
ggplot(df.shan.corr.f, aes(DBL_scale,bw, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt,fontface=gt_emp), color='white', size=6) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Pearson', low='black', high='red') +
labs(title='Shannon index', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=16)
)
Explanation: Shannon
End of explanation
sim_BDspan_files1 = !find $course_data_dir -name "OTU_abs1e9_PCR_sub_BD-span.txt"
to_rm = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep4_DBL-comm_bw/0.2/0.6'
sim_BDspan_files1 = [x for x in sim_BDspan_files1 if not x.startswith(to_rm)]
sim_BDspan_files2 = !find $buildDir -name "OTU_abs1e9_PCR_sub_BD-span.txt"
sim_BDspan_files = sim_BDspan_files1 + sim_BDspan_files2
print len(sim_BDspan_files)
%%R -i sim_BDspan_files -i emp_BDspan_file
df.BDspan = load.data.files(sim_BDspan_files, emp_BDspan_file)
df.BDspan %>% head(n=3)
%%R
# binning by pre-fractionation abundances
n.tile = 20
df.BDspan = df.BDspan %>%
group_by(dataset, library, DBL_scale, bw, preFrac_abund_bin = ntile(mean_preFrac_abund, n.tile)) %>%
summarize(mean_preFrac_abund = mean(mean_preFrac_abund),
var_BD_range = var(BD_range),
sd_BD_range = sd(BD_range))
df.BDspan %>% tail(n=3)
%%R -h 300
calc.spearman = function(x){
cor(x[,'var_BD_range.x'], x['var_BD_range.y'], method='spearman')[1,1]
}
df.BDspan.corr = inner_join(df.BDspan, df.BDspan, c('preFrac_abund_bin' = 'preFrac_abund_bin',
'DBL_scale' = 'DBL_scale',
'bw' = 'bw')) %>%
group_by(DBL_scale, bw, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.spearman)) %>%
unnest(spearman = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(spearman_txt = round(spearman, 2))
# getting emperical-emperical corr
emp.val = df.BDspan.corr %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(spearman, na.rm=TRUE)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
# filtering
df.BDspan.corr.f = df.BDspan.corr %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(spearman > emp.val, 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
# plotting
ggplot(df.BDspan.corr.f, aes(DBL_scale, bw, fill=spearman)) +
geom_tile() +
geom_text(aes(label=spearman_txt, fontface=gt_emp), color='white', size=6) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Spearman', low='black', high='red') +
labs(title='BD span', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=16)
)
Explanation: BD span
End of explanation
sim_corr_files1 = !find $course_data_dir -name "OTU_abs1e9_PCR_sub_corr.txt"
to_rm = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/rep4_DBL-comm_bw/0.2/0.6'
sim_corr_files1 = [x for x in sim_corr_files1 if not x.startswith(to_rm)]
sim_corr_files2 = !find $buildDir -name "OTU_abs1e9_PCR_sub_corr.txt"
sim_corr_files = sim_corr_files1 + sim_corr_files2
print len(sim_corr_files)
%%R -i sim_corr_files -i emp_corr_file
df.corr = load.data.files(sim_corr_files, emp_corr_file)
# binning
df.corr = df.corr %>%
filter(!is.na(Mantel.corr)) %>%
group_by(DBL_scale, bw, dataset, library, class.index.bin = ntile(class.index, 12))
df.corr %>% tail(n=3) %>% as.data.frame
%%R -h 300
# pairwise correlations for each dataset
df.shan.bin = df.shan %>%
group_by(BD_bin = ntile(Buoyant_density, 24))
calc.pearson = function(x){
cor(x[,'Mantel.corr.x'], x['Mantel.corr.y'], method='pearson')[1,1]
}
df.corr.lm = inner_join(df.corr, df.corr, c('class.index.bin' = 'class.index.bin',
'bw' = 'bw',
'DBL_scale' = 'DBL_scale')) %>%
group_by(bw, DBL_scale, dataset.x, dataset.y) %>%
nest() %>%
mutate(model = purrr::map(data, calc.pearson)) %>%
unnest(pearson = model %>% purrr::map(function(x) x)) %>%
ungroup() %>%
select(-data, -model) %>%
mutate(pearson_txt = round(pearson, 2))
# getting emperical-emperical corr
emp.val = df.corr.lm %>%
filter((dataset.x == 'Emperical' &
dataset.y == 'Emperical')) %>%
group_by() %>%
summarize(max_value = max(pearson)) %>%
ungroup() %>%
select(max_value) %>% as.matrix %>% as.vector
emp.val = emp.val[1]
print(emp.val)
# filtering
df.corr.lm.f = df.corr.lm %>%
filter((dataset.x == 'Simulation' &
dataset.y == 'Emperical')) %>%
mutate(DBL_scale = DBL_scale %>% as.character,
bw = bw %>% as.character,
gt_emp = ifelse(pearson >= emp.val, 'bold.italic', 'plain')) %>%
complete(DBL_scale, bw)
df.corr.lm.f %>% head(n=3)
# plotting
ggplot(df.corr.lm.f, aes(DBL_scale,bw, fill=pearson)) +
geom_tile() +
geom_text(aes(label=pearson_txt,fontface=gt_emp), color='white', size=6) +
scale_color_manual(values=c('white', 'black')) +
scale_fill_gradient('Pearson', low='black', high='red') +
labs(title='Beta diversity correlogram', x='DBL scaling', y='KDE Bandwidth') +
theme(
text = element_text(size=16)
)
Explanation: correlogram
End of explanation
<END_TASK> |
15,666 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in ~mne.io.Raw objects.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the ~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory
Step1: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a ~mne.io.Raw object using
Step2: It may not be obvious when viewing this tutorial online, but by default, the
~mne.io.Raw.plot method generates an interactive plot window with
several useful features
Step3: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of ~mne.io.Raw.plot_psd for full details)
Step4: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
~mne.io.Raw.plot_psd_topo
Step5: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
~mne.io.Raw.plot_psd on that channel).
By default, ~mne.io.Raw.plot_psd_topo will show only the MEG
channels if MEG channels are present; if only EEG channels are found, they
will be plotted instead
Step6: Plotting sensor locations from Raw objects
The channel locations in a ~mne.io.Raw object can be easily plotted
with the ~mne.io.Raw.plot_sensors method. A brief example is shown
here; notice that channels in raw.info['bads'] are plotted in red. More
details and additional examples are given in the tutorial
tut-sensor-locations.
Step7: Plotting projectors from Raw objects
As seen in the output of mne.io.read_raw_fif above, there are :term:projectors <projector> included in the example ~mne.io.Raw file; these can be visualized with the ~mne.io.Raw.plot_projs_topomap method.
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Built-in plotting methods for Raw objects
This tutorial shows how to plot continuous data as a time series, how to plot
the spectral density of continuous data, and how to plot the sensor locations
and projectors stored in ~mne.io.Raw objects.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the ~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
raw.plot()
Explanation: We've seen in a previous tutorial <tut-raw-class> how to plot data
from a ~mne.io.Raw object using :doc:matplotlib <matplotlib:index>,
but ~mne.io.Raw objects also have several built-in plotting methods:
~mne.io.Raw.plot
~mne.io.Raw.plot_psd
~mne.io.Raw.plot_psd_topo
~mne.io.Raw.plot_sensors
~mne.io.Raw.plot_projs_topomap
The first three are discussed here in detail; the last two are shown briefly
and covered in-depth in other tutorials.
Interactive data browsing with Raw.plot()
The ~mne.io.Raw.plot method of ~mne.io.Raw objects provides
a versatile interface for exploring continuous data. For interactive viewing
and data quality checking, it can be called with no additional parameters:
End of explanation
raw.plot_psd(average=True)
Explanation: It may not be obvious when viewing this tutorial online, but by default, the
~mne.io.Raw.plot method generates an interactive plot window with
several useful features:
It spaces the channels equally along the y-axis.
20 channels are shown by default; you can scroll through the channels
using the :kbd:↑ and :kbd:↓ arrow keys, or by clicking on the
colored scroll bar on the right edge of the plot.
The number of visible channels can be adjusted by the n_channels
parameter, or changed interactively using :kbd:page up and :kbd:page
down keys.
You can toggle the display to "butterfly" mode (superimposing all
channels of the same type on top of one another) by pressing :kbd:b,
or start in butterfly mode by passing the butterfly=True parameter.
It shows the first 10 seconds of the ~mne.io.Raw object.
You can shorten or lengthen the window length using :kbd:home and
:kbd:end keys, or start with a specific window duration by passing the
duration parameter.
You can scroll in the time domain using the :kbd:← and
:kbd:→ arrow keys, or start at a specific point by passing the
start parameter. Scrolling using :kbd:shift:kbd:→ or
:kbd:shift:kbd:← scrolls a full window width at a time.
It allows clicking on channels to mark/unmark as "bad".
When the plot window is closed, the ~mne.io.Raw object's
info attribute will be updated, adding or removing the newly
(un)marked channels to/from the ~mne.Info object's bads
field (A.K.A. raw.info['bads']).
.. TODO: discuss annotation snapping in the below bullets
It allows interactive :term:annotation <annotations> of the raw data.
This allows you to mark time spans that should be excluded from future
computations due to large movement artifacts, line noise, or other
distortions of the signal. Annotation mode is entered by pressing
:kbd:a. See annotations-tutorial for details.
It automatically applies any :term:projectors <projector> before plotting
the data.
These can be enabled/disabled interactively by clicking the Proj
button at the lower right corner of the plot window, or disabled by
default by passing the proj=False parameter. See
tut-projectors-background for more info on projectors.
These and other keyboard shortcuts are listed in the Help window, accessed
through the Help button at the lower left corner of the plot window.
Other plot properties (such as color of the channel traces, channel order and
grouping, simultaneous plotting of :term:events, scaling, clipping,
filtering, etc.) can also be adjusted through parameters passed to the
~mne.io.Raw.plot method; see the docstring for details.
Plotting spectral density of continuous data
To visualize the frequency content of continuous data, the ~mne.io.Raw
object provides a ~mne.io.Raw.plot_psd to plot the spectral density_ of
the data.
End of explanation
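For intuition about what a spectral-density plot shows, here is a bare NumPy periodogram of a synthetic signal (the 250 Hz sampling rate and the 10 Hz sine are made up for illustration; MNE's plot_psd uses a Welch-style estimate, not this raw FFT):

```python
import numpy as np

fs = 250.0                                # hypothetical sampling rate in Hz
t = np.arange(0, 2, 1 / fs)               # 2 seconds of samples
x = np.sin(2 * np.pi * 10 * t)            # a pure 10 Hz oscillation
power = np.abs(np.fft.rfft(x)) ** 2       # unnormalized periodogram
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
peak_freq = freqs[np.argmax(power)]       # dominant frequency of the signal
print(peak_freq)  # 10.0
```

The spectral power concentrates at the frequency of the underlying oscillation, which is exactly what the per-channel PSD curves above summarize for real recordings.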
midline = ['EEG 002', 'EEG 012', 'EEG 030', 'EEG 048', 'EEG 058', 'EEG 060']
raw.plot_psd(picks=midline)
Explanation: If the data have been filtered, vertical dashed lines will automatically
indicate filter boundaries. The spectrum for each channel type is drawn in
its own subplot; here we've passed the average=True parameter to get a
summary for each channel type, but it is also possible to plot each channel
individually, with options for how the spectrum should be computed,
color-coding the channels by location, and more. For example, here is a plot
of just a few sensors (specified with the picks parameter), color-coded
by spatial location (via the spatial_colors parameter, see the
documentation of ~mne.io.Raw.plot_psd for full details):
End of explanation
raw.plot_psd_topo()
Explanation: Alternatively, you can plot the PSD for every sensor on its own axes, with
the axes arranged spatially to correspond to sensor locations in space, using
~mne.io.Raw.plot_psd_topo:
End of explanation
raw.copy().pick_types(meg=False, eeg=True).plot_psd_topo()
Explanation: This plot is also interactive; hovering over each "thumbnail" plot will
display the channel name in the bottom left of the plot window, and clicking
on a thumbnail plot will create a second figure showing a larger version of
the selected channel's spectral density (as if you had called
~mne.io.Raw.plot_psd on that channel).
By default, ~mne.io.Raw.plot_psd_topo will show only the MEG
channels if MEG channels are present; if only EEG channels are found, they
will be plotted instead:
End of explanation
raw.plot_sensors(ch_type='eeg')
Explanation: Plotting sensor locations from Raw objects
The channel locations in a ~mne.io.Raw object can be easily plotted
with the ~mne.io.Raw.plot_sensors method. A brief example is shown
here; notice that channels in raw.info['bads'] are plotted in red. More
details and additional examples are given in the tutorial
tut-sensor-locations.
End of explanation
raw.plot_projs_topomap(colorbar=True)
Explanation: Plotting projectors from Raw objects
As seen in the output of mne.io.read_raw_fif above, there are
:term:projectors <projector> included in the example ~mne.io.Raw
file (representing environmental noise in the signal, so it can later be
"projected out" during preprocessing). You can visualize these projectors
using the ~mne.io.Raw.plot_projs_topomap method. By default it will
show one figure per channel type for which projectors are present, and each
figure will have one subplot per projector. The three projectors in this file
were only computed for magnetometers, so one figure with three subplots is
generated. More details on working with and plotting projectors are given in
tut-projectors-background and tut-artifact-ssp.
End of explanation
<END_TASK> |
15,667 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Running the Solow Model
First, the model needs to be initialized.
Step1: Then, after a log folder is created, if it doesn't exist, the logger will be initialized. The log files will contain information about how the simulation went.
Step2: Before initializing the model, let's inspect the parameters that are used to initialize it.
Step3: First, the environment will be initialized. It contains the model parameters and variables. The environment also contains a list of agents. In the Solow model, the environment contains one bank, one firm, and one household. Finally, it also contains initialized transactions for these agents
Step4: Next up, the runner object is created. As the name suggests, the runner object handles the running of the model. But that is not all: it also initializes the updater.
Step5: Now you can actually run the model. The agent-based model is simulated using Monte Carlo methods, which means that the model will be run multiple times (num_simulations). For every simulation run, the environment and runner will be (re-)initialized. Then, the runner will run the Solow model. During the run, the runner will write data about capital to a csv file specified within the Measurement class and print out all the transactions happening and the books of firm and household at the end of each sweep.
Python Code:
environment_directory = "environments/"
identifier = "test_all_methods"
log_directory = "log/"
Explanation: Running the Solow Model
First, the model needs to be initialized.
End of explanation
if not os.path.exists('log'):
os.makedirs('log')
logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %H:%M:%S',
filename=log_directory + identifier + ".log", level=logging.INFO)
logging.info('START logging for run: %s', environment_directory + identifier + ".xml")
Explanation: Then, after a log folder is created, if it doesn't exist, the logger will be initialized. The log files will contain information about how the simulation went.
End of explanation
#tree = ET.parse('environments/test_all_methods.xml')
tree = ET.parse('environments/solow_parameters.xml')
root = tree.getroot()
# all item attributes
print('All parameters: ')
for elem in root:
print(elem.attrib)
Explanation: Before initializing the model, let's inspect the parameters that are used to initialize it.
End of explanation
environment = Environment(environment_directory, identifier)
Explanation: First, the environment will be initialized. It contains the model parameters and variables. The environment also contains a list of agents. In the Solow model, the environment contains one bank, one firm, and one household. Finally, it also contains initialized transactions for these agents
End of explanation
runner = Runner(environment)
Explanation: Next up, the runner object is created. As the name suggests, the runner object handles the running of the model. But that is not all: it also initializes the updater.
End of explanation
for i in range(int(environment.num_simulations)):
logging.info(' STARTED with run %s', str(i))
environment.initialize(environment_directory, identifier)
runner.initialize(environment)
runner.do_run(environment)
logging.info(' DONE')
Explanation: Now you can actually run the model. The agent-based model is simulated using Monte Carlo methods, which means that the model will be run multiple times (num_simulations). For every simulation run, the environment and runner will be (re-)initialized. Then, the runner will run the Solow model. During the run, the runner will write data about capital to a csv file specified within the Measurement class and print out all the transactions happening and the books of firm and household at the end of each sweep.
End of explanation
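For background, the agent-based model is named after the textbook Solow growth model, whose capital-accumulation rule can be sketched in a few lines (the savings rate, capital share, and depreciation values below are illustrative and are not taken from solow_parameters.xml):

```python
# k_{t+1} = s * k_t**alpha + (1 - delta) * k_t   (Cobb-Douglas production f(k) = k**alpha)
s, alpha, delta = 0.3, 0.3, 0.1   # hypothetical savings rate, capital share, depreciation
k = 1.0                            # initial capital stock
for _ in range(1000):
    k = s * k ** alpha + (1 - delta) * k

# Analytic steady state: delta*k = s*k**alpha  =>  k* = (s/delta)**(1/(1-alpha))
k_star = (s / delta) ** (1 / (1 - alpha))
```

Iterating the difference equation drives capital toward the analytic steady state, which is the behavior the ABM's capital measurements can be compared against.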
<END_TASK> |
15,668 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tutorial - Migrating from Catalyst
Incrementally adding fastai goodness to your Catalyst training
Catalyst code
We're going to use the MNIST training code from Catalyst's README (as at August 2020), converted to a module.
Note
Step1: To use it in fastai, we first convert the Catalyst dict into a DataLoaders object
Step2: Using callbacks
In the Catalyst code, a training loop is defined manually, which is where the input tensor is flattened. In fastai, there's no need to define your own training loop - you can insert your own code into any part of the training process by using a callback, which can even modify data, gradients, the loss function, or anything else in the training loop
Step3: The Catalyst example also modifies the training loop to add metrics, but you can pass these directly to your Learner in fastai
Step4: You can now fit your model. fastai supports many schedulers. We recommend using 1cycle
Step5: As you can see, migrating from Catalyst allowed us to replace 17 lines of code (in CustomRunner) with just 3 lines, and doesn't require you to change any of your existing data pipelines, optimizers, loss functions, models, etc. Once you've made this change, you can then benefit from fastai's rich set of callbacks, transforms, visualizations, and so forth.
Note that fastai is very different from Catalyst, in that it is much more than just a training loop (although we're only using the training loop in this example) - it is a complete framework including GPU-accelerated transformations, end-to-end inference, integrated applications for vision, text, tabular, and collaborative filtering, and so forth. You can use any part of the framework on its own, or combine them together, as described in the fastai paper.
Changing the model
Instead of using callbacks, in this case you can also simply change the model. Here we pull the view() out of the training loop, and into the model, using fastai's Flatten layer
Step6: We can now create a Learner and train without using any callbacks | <ASSISTANT_TASK:>
Python Code:
from migrating_catalyst import *
Explanation: Tutorial - Migrating from Catalyst
Incrementally adding fastai goodness to your Catalyst training
Catalyst code
We're going to use the MNIST training code from Catalyst's README (as at August 2020), converted to a module.
Note: The source script for migrating_catalyst is in the examples subdirectory of this folder if you checked out the fastai repo from git, or can be downloaded from here if you're using an online viewer such as Colab.
End of explanation
data = DataLoaders(loaders['train'], loaders['valid']).cuda()
Explanation: To use it in fastai, we first convert the Catalyst dict into a DataLoaders object:
End of explanation
@before_batch_cb
def cb(self, xb, yb): return (xb[0].view(xb[0].size(0), -1),),yb
Explanation: Using callbacks
In the Catalyst code, a training loop is defined manually, which is where the input tensor is flattened. In fastai, there's no need to define your own training loop - you can insert your own code into any part of the training process by using a callback, which can even modify data, gradients, the loss function, or anything else in the training loop:
End of explanation
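The flattening done by the callback is just a reshape of each batch from (N, C, H, W) to (N, C*H*W); the NumPy equivalent, with a made-up batch shape for illustration:

```python
import numpy as np

xb = np.zeros((32, 1, 28, 28))       # hypothetical MNIST-like batch: N, C, H, W
flat = xb.reshape(xb.shape[0], -1)   # same effect as tensor.view(N, -1)
print(flat.shape)  # (32, 784)
```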
metrics=[accuracy,top_k_accuracy]
learn = Learner(data, model, loss_func=F.cross_entropy, opt_func=Adam,
metrics=metrics, cbs=cb)
Explanation: The Catalyst example also modifies the training loop to add metrics, but you can pass these directly to your Learner in fastai:
End of explanation
learn.fit_one_cycle(1, 0.02)
Explanation: You can now fit your model. fastai supports many schedulers. We recommend using 1cycle:
End of explanation
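To illustrate the shape of a one-cycle schedule, here is a simplified linear warm-up/anneal sketch (fastai's actual implementation differs, e.g. it uses cosine annealing and also schedules momentum):

```python
def one_cycle_lr(step, total_steps, max_lr, pct_start=0.3):
    # Linear ramp up to max_lr, then linear ramp back down (simplified).
    up = int(total_steps * pct_start)
    if step < up:
        return max_lr * step / up
    return max_lr * (1 - (step - up) / (total_steps - up))

lrs = [one_cycle_lr(s, total_steps=100, max_lr=0.02) for s in range(100)]
print(max(lrs), lrs.index(max(lrs)))  # 0.02 30
```

The learning rate rises to its peak partway through training and then decays, which is the "1cycle" pattern fit_one_cycle applies.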
model = nn.Sequential(
Flatten(),
torch.nn.Linear(28 * 28, 10))
Explanation: As you can see, migrating from Catalyst allowed us to replace 17 lines of code (in CustomRunner) with just 3 lines, and doesn't require you to change any of your existing data pipelines, optimizers, loss functions, models, etc. Once you've made this change, you can then benefit from fastai's rich set of callbacks, transforms, visualizations, and so forth.
Note that fastai is very different from Catalyst, in that it is much more than just a training loop (although we're only using the training loop in this example) - it is a complete framework including GPU-accelerated transformations, end-to-end inference, integrated applications for vision, text, tabular, and collaborative filtering, and so forth. You can use any part of the framework on its own, or combine them together, as described in the fastai paper.
Changing the model
Instead of using callbacks, in this case you can also simply change the model. Here we pull the view() out of the training loop, and into the model, using fastai's Flatten layer:
End of explanation
learn = Learner(data, model, loss_func=F.cross_entropy, opt_func=Adam, metrics=metrics)
learn.fit_one_cycle(1, 0.02)
Explanation: We can now create a Learner and train without using any callbacks:
End of explanation
<END_TASK> |
15,669 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
TensorFlow Tutorial #04
Save & Restore
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.
This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.
Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.
This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step4: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step5: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
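The one-hot-to-integer conversion is a single argmax along the class axis, as a small NumPy example shows:

```python
import numpy as np

labels = np.array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0],   # one-hot encoding of digit 3
                   [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]])  # one-hot encoding of digit 1
cls = np.argmax(labels, axis=1)                      # index of the single 1 per row
print(cls)  # [3 1]
```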
Step6: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
Step7: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step8: Plot a few images to see if data is correct
Step9: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step10: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
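The effect of reshaping with -1 inferring the first dimension can be seen with dummy NumPy data (the real code applies the same reshape to the TensorFlow placeholder):

```python
import numpy as np

flat = np.zeros((128, 784))            # 128 flattened 28x28 grayscale images
imgs = flat.reshape(-1, 28, 28, 1)     # -1 lets the batch dimension be inferred
print(imgs.shape)  # (128, 28, 28, 1)
```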
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step13: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow; see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
Step14: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Step15: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
Step16: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like
Step17: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step18: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
Step19: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step20: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
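With a toy vector, the cast-then-average computation looks like this:

```python
import numpy as np

correct = np.array([True, False, True, True])
accuracy = np.mean(correct.astype(np.float32))   # False -> 0.0, True -> 1.0
print(accuracy)  # 0.75
```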
Step21: Saver
In order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the optimize()-function.
Step22: The saved files are often called checkpoints because they may be written at regular intervals during optimization.
This is the directory used for saving and retrieving the data.
Step23: Create the directory if it does not exist.
Step24: This is the path for the checkpoint-file.
Step25: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step26: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
Step27: Execute the function now to initialize the variables.
Step28: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step29: The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
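The bookkeeping behind this stopping rule can be sketched in plain Python (a simplified illustration with a made-up accuracy sequence and a small patience value, not the tutorial's actual optimize() function):

```python
def early_stop_iteration(accuracies, patience):
    best_acc, last_improvement = float("-inf"), 0
    for it, acc in enumerate(accuracies):
        if acc > best_acc:
            best_acc, last_improvement = acc, it   # a checkpoint would be saved here
        if it - last_improvement > patience:
            return it                              # abort: no recent improvement
    return len(accuracies) - 1

# Accuracy improves until iteration 2, then plateaus; with patience 2 we stop at 5.
stop = early_stop_iteration([0.1, 0.5, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9], patience=2)
print(stop)  # 5
```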
Step30: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
Step31: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step32: Helper-function to plot confusion matrix
Step33: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
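The batching pattern itself is independent of TensorFlow; stripped to a skeleton, with predict_fn standing in for the session run (a hypothetical helper, not part of the tutorial code):

```python
def predict_in_batches(num_images, batch_size, predict_fn):
    predictions = []
    i = 0
    while i < num_images:
        j = min(i + batch_size, num_images)    # end index of the current batch
        predictions.extend(predict_fn(i, j))   # classify images[i:j]
        i = j
    return predictions

# Stand-in predict_fn that just echoes the indices it was asked to classify.
out = predict_in_batches(10, batch_size=4, predict_fn=lambda i, j: list(range(i, j)))
print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Each batch holds at most batch_size images, so peak memory use stays bounded no matter how large the test-set is.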
Step34: Calculate the predicted class for the test-set.
Step35: Calculate the predicted class for the validation-set.
Step36: Helper-functions for the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4
Step37: Calculate the classification accuracy on the validation-set.
Step38: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Step39: Helper-function for plotting convolutional weights
Step40: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
Step41: The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
Step42: Perform 10,000 optimization iterations
We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.
An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
Step43: The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.
But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.
The mean and standard deviation has also changed slightly, so the optimized weights must be different.
Step44: Initialize Variables Again
Re-initialize all the variables of the neural network with random values.
Step45: This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
Step46: The convolutional weights should now be different from the weights shown above.
Step47: Restore Best Variables
Re-load all the variables that were saved to file during optimization.
Step48: The classification accuracy is high again when using the variables that were previously saved.
Note that the classification accuracy may be slightly higher or lower than that reported above. The variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables.
Step49: The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
Step50: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Explanation: TensorFlow Tutorial #04
Save & Restore
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates how to save and restore the variables of a Neural Network. During optimization we save the variables of the neural network whenever its classification accuracy has improved on the validation-set. The optimization is aborted when there has been no improvement for 1000 iterations. We then reload the variables that performed best on the validation-set.
This strategy is called Early Stopping. It is used to avoid overfitting of the neural network. This occurs when the neural network is being trained for too long so it starts to learn the noise of the training-set, which causes the neural network to mis-classify new images.
Overfitting is not really a problem for the neural network used in this tutorial on the MNIST data-set for recognizing hand-written digits. But this tutorial demonstrates the idea of Early Stopping.
This builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text in this tutorial is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
import os
# Use PrettyTensor to simplify Neural Network construction.
import prettytensor as pt
Explanation: Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets: the training-set is used for optimization, the validation-set for Early Stopping, and the test-set for the final evaluation.
End of explanation
data.test.cls = np.argmax(data.test.labels, axis=1)
data.validation.cls = np.argmax(data.validation.labels, axis=1)
Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.
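As a tiny stand-alone illustration (using a hypothetical 3-class labels array rather than the MNIST arrays themselves), np.argmax along axis 1 recovers the class number from each One-Hot row:

```python
import numpy as np

# Hypothetical One-Hot encoded labels for 4 samples and 3 classes.
labels = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])

# axis=1 finds the index of the 1 in each row, i.e. the class number.
cls = np.argmax(labels, axis=1)
print(cls)  # [1 0 2 1]
```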
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the convolutional network.
A loss measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.argmax(y_true, dimension=1)
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
x_pretty = pt.wrap(x_image)
Explanation: Neural Network
This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.
The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.
End of explanation
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=10, labels=y_true)
Explanation: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
End of explanation
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'weights' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('weights')
return variable
Explanation: Getting the Weights
Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow.
We used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). Pretty Tensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.
The implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
End of explanation
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
Explanation: Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Explanation: Optimization Method
Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.
It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
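The same calculation can be sketched outside TensorFlow with NumPy; the correctness vector below is made up for illustration:

```python
import numpy as np

# Hypothetical correctness vector for 4 predictions.
correct = np.array([True, False, True, True])

# Casting booleans to float turns True/False into 1.0/0.0,
# so the mean is exactly the fraction of correct predictions.
acc = np.mean(correct.astype(np.float32))
print(acc)  # 0.75
```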
End of explanation
saver = tf.train.Saver()
Explanation: Saver
In order to save the variables of the neural network, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below in the optimize()-function.
End of explanation
save_dir = 'checkpoints/'
Explanation: The saved files are often called checkpoints because they may be written at regular intervals during optimization.
This is the directory used for saving and retrieving the data.
End of explanation
if not os.path.exists(save_dir):
os.makedirs(save_dir)
Explanation: Create the directory if it does not exist.
End of explanation
save_path = save_dir + 'best_validation'
Explanation: This is the path for the checkpoint-file.
End of explanation
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
def init_variables():
session.run(tf.initialize_all_variables())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it again below.
End of explanation
init_variables()
Explanation: Execute the function now to initialize the variables.
End of explanation
train_batch_size = 64
Explanation: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
# Best validation accuracy seen so far.
best_validation_accuracy = 0.0
# Iteration-number for last improvement to validation accuracy.
last_improvement = 0
# Stop optimization if no improvement found in this many iterations.
require_improvement = 1000
Explanation: The classification accuracy for the validation-set will be calculated for every 100 iterations of the optimization function below. The optimization will be stopped if the validation accuracy has not been improved in 1000 iterations. We need a few variables to keep track of this.
End of explanation
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variables rather than local copies.
global total_iterations
global best_validation_accuracy
global last_improvement
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Increase the total number of iterations performed.
# It is easier to update it in each iteration because
# we need this number several times in the following.
total_iterations += 1
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations and after last iteration.
if (total_iterations % 100 == 0) or (i == (num_iterations - 1)):
# Calculate the accuracy on the training-batch.
acc_train = session.run(accuracy, feed_dict=feed_dict_train)
# Calculate the accuracy on the validation-set.
# The function returns 2 values but we only need the first.
acc_validation, _ = validation_accuracy()
# If validation accuracy is an improvement over best-known.
if acc_validation > best_validation_accuracy:
# Update the best-known validation accuracy.
best_validation_accuracy = acc_validation
# Set the iteration for the last improvement to current.
last_improvement = total_iterations
# Save all variables of the TensorFlow graph to file.
saver.save(sess=session, save_path=save_path)
# A string to be printed below, shows improvement found.
improved_str = '*'
else:
# An empty string to be printed below.
# Shows that no improvement was found.
improved_str = ''
# Status-message for printing.
msg = "Iter: {0:>6}, Train-Batch Accuracy: {1:>6.1%}, Validation Acc: {2:>6.1%} {3}"
# Print it.
print(msg.format(i + 1, acc_train, acc_validation, improved_str))
# If no improvement found in the required number of iterations.
if total_iterations - last_improvement > require_improvement:
print("No improvement found in a while, stopping optimization.")
# Break out from the for-loop.
break
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations where the validation accuracy is also calculated and saved to a file if it is an improvement.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function to plot confusion matrix
End of explanation
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(images, labels, cls_true):
# Number of images.
num_images = len(images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: images[i:j, :],
y_true: labels[i:j, :]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
Explanation: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
End of explanation
def predict_cls_test():
return predict_cls(images = data.test.images,
labels = data.test.labels,
cls_true = data.test.cls)
Explanation: Calculate the predicted class for the test-set.
End of explanation
def predict_cls_validation():
return predict_cls(images = data.validation.images,
labels = data.validation.labels,
cls_true = data.validation.cls)
Explanation: Calculate the predicted class for the validation-set.
End of explanation
def cls_accuracy(correct):
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / len(correct)
return acc, correct_sum
Explanation: Helper-functions for the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. cls_accuracy([True, True, False, False, False]) = 2/5 = 0.4
End of explanation
def validation_accuracy():
# Get the array of booleans whether the classifications are correct
# for the validation-set.
# The function returns two values but we only need the first.
correct, _ = predict_cls_validation()
# Calculate the classification accuracy and return it.
return cls_accuracy(correct)
Explanation: Calculate the classification accuracy on the validation-set.
End of explanation
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = cls_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
End of explanation
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Print mean and standard deviation.
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w.mean(), w.std()))
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting convolutional weights
End of explanation
print_test_accuracy()
Explanation: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights are random, but it can be difficult to see any difference from the optimized weights that are shown below. The mean and standard deviation is shown so we can see whether there is a difference.
End of explanation
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Perform 10,000 optimization iterations
We now perform 10,000 optimization iterations and abort the optimization if no improvement is found on the validation-set in 1000 iterations.
An asterisk * is shown if the classification accuracy on the validation-set is an improvement.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights have now been optimized. Compare these to the random weights shown above. They appear to be almost identical. In fact, I first thought there was a bug in the program because the weights look identical before and after optimization.
But try and save the images and compare them side-by-side (you can just right-click the image to save it). You will notice very small differences before and after optimization.
The mean and standard deviation has also changed slightly, so the optimized weights must be different.
End of explanation
init_variables()
Explanation: Initialize Variables Again
Re-initialize all the variables of the neural network with random values.
End of explanation
print_test_accuracy()
Explanation: This means the neural network classifies the images completely randomly again, so the classification accuracy is very poor because it is like random guesses.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights should now be different from the weights shown above.
End of explanation
saver.restore(sess=session, save_path=save_path)
Explanation: Restore Best Variables
Re-load all the variables that were saved to file during optimization.
End of explanation
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: The classification accuracy is high again when using the variables that were previously saved.
Note that the classification accuracy may be slightly higher or lower than that reported above, because the variables in the file were chosen to maximize the classification accuracy on the validation-set, but the optimization actually continued for another 1000 iterations after saving those variables, so we are reporting the results for two slightly different sets of variables. Sometimes this leads to slightly better or worse performance on the test-set.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: The convolutional weights should be nearly identical to those shown above, although not completely identical because the weights shown above had 1000 optimization iterations more.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation
<END_TASK> |
15,670 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Categorical Data
Categoricals are a pandas data type, which correspond to categorical variables in statistics
Step1: Change data type
change data type for "Grade" column to category
documentation for astype()
Step2: Rename the categories
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace)
Step3: Values in data frame have not changed
Step4: tabulate Department, Name, and YearsOfService, by Grade | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Begin/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
employees_df
Explanation: Categorical Data
Categoricals are a pandas data type, which correspond to categorical variables in statistics: a variable, which can take
on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social
class, blood types, country affiliations, observation time or ratings via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or
‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, ...) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical
order of the values.
documentation: http://pandas.pydata.org/pandas-docs/stable/categorical.html
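As a small illustration of the last point (the data here is made up): sorting an ordered categorical follows the declared category order, not lexical order.

```python
import pandas as pd

# Hypothetical ordered categorical: category order is low < medium < high.
sizes = pd.Categorical(["medium", "high", "low"],
                       categories=["low", "medium", "high"],
                       ordered=True)

# Sorting follows the declared category order
# (lexically the order would be: high, low, medium).
print(pd.Series(sizes).sort_values().tolist())  # ['low', 'medium', 'high']
```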
End of explanation
employees_df["Grade"] = employees_df["Grade"].astype("category")
Explanation: Change data type
change data type for "Grade" column to category
documentation for astype(): http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html
End of explanation
employees_df["Grade"].cat.categories = ["excellent", "good", "acceptable", "poor", "unacceptable"]
Explanation: Rename the categories
Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace)
End of explanation
employees_df
Explanation: Values in data frame have not changed
End of explanation
employees_df.groupby('Grade').count()
Explanation: tabulate Department, Name, and YearsOfService, by Grade
End of explanation
<END_TASK> |
15,671 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<img style="float
Step1: Let us take a sneak peek at the data
Step2: What is the size of the dataset?
Step3: Now we see that there are different models of hard disks, let us list them
<img style="float
Step4: let us see how many models are there in total
Step5: <img style="float
Step6: Sort and plot
Step7: Question 2. Find failures for a hard disk models
Step8: Question 3. How do you compute failure rate for a model
Now let us express the failure / total number of hard disks as ratio. This will give us an understanding of models and their failure behavior
To get that data, instead of computing again, we can join the 2 data frames that were previously computed
and compute the ratio
Step9: let us see in total hard disks for a model, how many failed and how many did not
Step10: now let us compute the ratio of failure number/total_hard_disk of hard disk
Step11: The higher the ratio value, the more prone the model is to failure
Step12: Now we know which models fail the most, let us introduce a new feature in our analysis, capacity.
We are going feature by feature; as we add more features that add value to the outcome, we see how our understanding of the data starts to change.
Let us look at the capacity
Step13: Question 4. Given a model and capacity bytes, what does failure count look like
Step14: Looking at this chart can you tell what is not being represented right?
We have repeated entries for the same capacity, and this really does not give us insight into the relation between capacity data and the models.
Step15: We see that for some models and their respective capacities we do not have a fail count, let's fill it with 0
Step16: This heat map gives us a better understanding of model, capacity vs failure
Step17: The above charts give us an explanation of which models failed the most, which models had the most hard disks running, the ratio of hard disk
Step18: Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
Step19: <img style="float
Step20: Now what can we do with this data? Is this useful? What can I generate from the above data that gives
me a little more insight?
We can compute the average time to failure and the average success running time for each capacity
Step21: Question 7. How about using hours (SMART_9) column now and co-relate it with failure
Step22: Now, for a given hard disk and capacity, we want to know how long the hard disk ran
Step23: Question 8. Given the data, identify the model and capacity of the hard disk to buy based on how long it runs
Step24: Let us convert bytes to gigabytes and round it to the nearest number
Step25: The above visualization is confusing, as the bars reflect a combination of failure and hours counts
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize']=15,10
df = pd.read_csv('data/data.csv')
Explanation: <img style="float:center" src="img/explore.jpg" width=300/>
Exploring the data
When we look at spreadsheets or large amounts of data, its hard for us to understand what is really happening. But when we visually interpret the data then everything starts making sense.
<img style="float::left" src="img/explore-reason.png" />
Question 1. Find the total number of hard disks for a given model
Question 2. Find total failures for hard disk models
Question 3. How do you compute failure rate for a model
Question 4. Given a model and capacity bytes, what does failure count look like
Question 5. Let us count how many days each hard disk ran
Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
Question 7. How about using hours (SMART_9) column now and co-relate it with failure
Question 8. Given the data , identify the model and capacity of the hard disk to buy based on how long it runs
Step by step approach
First let us look at our data
End of explanation
df.head()
Explanation: Let us take a sneak peek at the data
End of explanation
df.shape
Explanation: What is the size of the dataset?
End of explanation
df_model = pd.DataFrame(df.model.unique(),columns=['model'])
df_model.head()
df_model.count()[0]
Explanation: Now we see that there are different models of hard disks, let us list them
<img style="float:center" src="img/distinct.gif" />
End of explanation
print "Total number of distinct models : "+ str(df_model.count()[0])
# Exerice 1: Find the distinct number of serial numbers
# Exercise 2: Find the distinct number of capacity bytes
Explanation: Let us see how many models there are in total
End of explanation
df_model_serial = pd.DataFrame(df.groupby(['model']).serial.nunique())
df_model_serial.head()
df_model_serial = df_model_serial.reset_index()
df_model_serial.head()
df_model_serial.columns = ['model','total_HD']
df_model_serial.head(39)
df_model_serial.plot(kind="barh",x="model",y="total_HD")
Explanation: <img style="float:center" src="img/group-by.gif" />
Question 1. Find the total number of hard disks for a given model
Now let us see how many hard disks there are for each model and visualize it.
We see that a serial number represents a hard disk and is related to a model, i.e. multiple serial numbers belong to one type of model
End of explanation
df_model_serial.sort_values(by='total_HD',inplace=True)
df_model_serial.plot(kind="barh",x="model",y="total_HD")
#Exercise 3: Find the count of different capacity bytes for a model and plot with and without sorting
Explanation: Sort and plot
End of explanation
df_fail = pd.DataFrame(df.groupby('model').failure.sum())
df_fail.head()
df_fail = df_fail.reset_index()
df_fail.head()
df_fail.plot(kind="barh",x="model",y="failure",figsize=(18,10))
# Exercise 4 : sort the above data frame and plot it
Explanation: Question 2. Find failures for hard disk models
End of explanation
merged_df = df_model_serial.merge(df_fail,how='inner',on='model')
merged_df.head()
Explanation: Question 3. How do you compute failure rate for a model
Now let us express the failure / total number of hard disks as a ratio. This will give us an understanding of the models and their failure behavior
To get that data, instead of computing again, we can join the 2 data frames that were previously computed
and compute the ratio
End of explanation
merged_df['success'] = merged_df.total_HD - merged_df.failure
merged_df.head()
merged_df.plot(kind="bar",x="model",y=["failure","success"],subplots=True)
Explanation: Of the total hard disks for a model, let us see how many failed and how many did not
End of explanation
merged_df['ratio_failure'] = merged_df.failure / merged_df.total_HD
merged_df.head(25)
merged_df.sort_values(by="ratio_failure",ascending=False,inplace=True)
merged_df.head()
merged_df.plot(kind="bar",x="model",y="ratio_failure")
Explanation: now let us compute the ratio of failure number/total_hard_disk of hard disk
End of explanation
#Exercise: Find ratio of success and plot it
#Exercise : Plot multiple bar charts comparing ratio of success and failure
Explanation: The higher the ratio value, the more prone the model is to failure
End of explanation
df_capacity = pd.DataFrame(df.capacity.unique(),columns=['capacity'])
df_capacity.head()
df_capacity.shape
#Exercise : For a given capacity bytes, find the total number of failures and plot it
Explanation: Now we know which models fail the most, let us introduce a new feature in our analysis, capacity.
We are going feature by feature; as we add more features that add value to the outcome, we see how our understanding of the data starts to change.
Let us look at the capacity
End of explanation
df_fail_mod_cap = pd.DataFrame(df.groupby(['model','capacity']).failure.sum())
df_fail_mod_cap.head()
df_fail_mod_cap = df_fail_mod_cap.reset_index()
df_fail_mod_cap.head(25)
df_fail_mod_cap.plot(x="capacity",y="failure",kind="bar",figsize=(20,5))
Explanation: Question 4. Given a model and capacity bytes, what does failure count look like
End of explanation
df_fail_mod_cap.head()
df_fail_mod_cap_pivot = df_fail_mod_cap.pivot("model","capacity","failure")
df_fail_mod_cap_pivot.head()
Explanation: Looking at this chart can you tell what is not being represented right?
We have repeated entries for the same capacity, and this really does not give us insight into the relation between capacity data and the models.
End of explanation
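To make the reshaping concrete — a minimal sketch on made-up model/capacity values (not the real drive data): pivot turns the long failure table into a model × capacity grid, leaving NaN wherever a combination never occurred, which is why a fillna(0) step follows before the heatmap.

```python
import pandas as pd

# Hypothetical long-format failure counts (one row per model/capacity pair)
long_df = pd.DataFrame({"model":    ["A", "A", "B"],
                        "capacity": [100, 200, 100],
                        "failure":  [3, 1, 2]})

wide = long_df.pivot(index="model", columns="capacity", values="failure")
# Model B never appeared with capacity 200, so that cell is NaN
wide = wide.fillna(0)
print(wide)
```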
df_fail_mod_cap_pivot.fillna(0, inplace=True)
df_fail_mod_cap_pivot.head()
sns.heatmap(df_fail_mod_cap_pivot)
Explanation: We see that for some models and their respective capacities we do not have a fail count; let's fill it with 0
End of explanation
#Exercise : Find count of success for a model with different capacities and plot it
Explanation: This heat map gives us a better understanding of model, capacity vs failure
End of explanation
df_days = pd.DataFrame(df.groupby(['capacity','serial']).date.count())
df_days = df_days.reset_index()
df_days.head()
df_days.columns = ['capacity','serial','total_days']
df_days.head()
df_days.capacity.value_counts()
df_days.shape
df_days_pivot = df_days.pivot('capacity','serial','total_days')
df_days_pivot.head()
df_days_pivot.fillna(0,inplace=True)
df_days_pivot.head()
# Exercise : Visualize the above dataframe
Explanation: The above charts explain which models failed the most, which models had the most hard disks running, the hard disk : failure rate ratio, and, for a given capacity of a model, what the failure count looks like
<img style="float:center" src="img/explore-clock.png" width=150/>
Hard disk data is time series data, so let us start using time
Question 5. Let us count how many days each hard disk ran
End of explanation
df_fail_days = pd.DataFrame(df[['capacity','serial','failure']].loc[df['failure'] == 1 ])
df_fail_days.head()
Explanation: Question 6. Find the average running time for failed hard disks and average running time for hard disks that have not failed
End of explanation
df_fail_count = df_days.merge(df_fail_days,how="left",on=['capacity','serial'])
df_fail_count.head()
df_fail_count.fillna(0,inplace=True)
df_fail_count.head()
df_fail_count.dtypes
g = sns.FacetGrid(df_fail_count, col="failure",hue='failure',size=5,aspect=1.5)
g.map_dataframe(plt.scatter,x='capacity',y='total_days')
Explanation: <img style="float:center" src="img/sql-joins.jpg"/>
Now let us merge the previous data frame, which had the serial number and count of days, with the failure data
End of explanation
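The join pattern described here can be sketched on toy data (hypothetical serial numbers, not the real drives): a left merge keeps every row from the days table and leaves NaN where no failure row matches, which is why a fillna(0) follows.

```python
import pandas as pd

days = pd.DataFrame({"serial": ["s1", "s2", "s3"],
                     "total_days": [10, 25, 7]})
fails = pd.DataFrame({"serial": ["s2"], "failure": [1]})

# Left join: every drive is kept; drives without a failure row get NaN
merged = days.merge(fails, how="left", on="serial")
merged["failure"] = merged["failure"].fillna(0)
print(merged)
```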
df_fail_count_avg = pd.DataFrame(df_fail_count.groupby(['capacity','failure']).total_days.mean())
df_fail_count_avg.head()
df_fail_count_avg = df_fail_count_avg.reset_index()
df_fail_count_avg.head()
df_fail_count_avg_pivot = df_fail_count_avg.pivot('capacity','failure','total_days')
df_fail_count_avg_pivot.head()
df_fail_count_avg_pivot.plot(kind="bar")
Explanation: Now what can we do with this data? Is this useful? What can I generate from the above data that gives
me a little more insight?
We can compute the average time to failure and the average success running time for each capacity
End of explanation
df_hours = df[['serial','capacity','failure','smart_9']]
df_hours.head()
df_hours.shape
Explanation: Question 7. How about using hours (SMART_9) column now and co-relate it with failure
End of explanation
df_hours_max = pd.DataFrame(df_hours.groupby(['serial','capacity']).smart_9.max())
df_hours_max.head()
df_hours_max.shape
df_hours_max = df_hours_max.reset_index()
df_hours_max_merge = df_hours_max.merge(df_hours,on=['serial','capacity','smart_9'],how='inner')
df_hours_max_merge.head()
df_hours_max_merge_pivot = pd.pivot_table(df_hours_max_merge,index='capacity',columns='failure',values='smart_9'
,aggfunc='mean')
df_hours_max_merge_pivot.head()
df_hours_max_merge_pivot.plot(kind='bar')
Explanation: Now, for a given hard disk and capacity, we want to know how long the hard disk ran
End of explanation
df_model_capacity_hours = df[['model','capacity','failure','smart_9']]
df_model_capacity_hours.head()
Explanation: Question 8. Given the data, identify the model and capacity of the hard disk to buy based on how long it runs
End of explanation
df_model_capacity_hours.capacity = df_model_capacity_hours.capacity / 1024 ** 3
df_model_capacity_hours.head()
df_model_capacity_hours.capacity = df_model_capacity_hours.capacity.astype(np.int64)
df_model_capacity_hours.head()
df_model_capacity_hours_pivot = pd.pivot_table(data=df_model_capacity_hours,index='model',columns=['failure','capacity'],
values='smart_9',aggfunc='mean')
df_model_capacity_hours_pivot.head()
df_model_capacity_hours_pivot.fillna(0,inplace=True)
df_model_capacity_hours_pivot.head()
df_model_capacity_hours_pivot.plot(kind="barh")
Explanation: Let us convert bytes to gigabytes and round it to the nearest number
End of explanation
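One detail worth noting, sketched on made-up numbers: astype(np.int64) truncates toward zero rather than rounding, so a 1.9 GB value becomes 1; if rounding to the nearest gigabyte is actually intended, call round() before casting.

```python
import numpy as np
import pandas as pd

gb = 1024 ** 3
cap = pd.Series([1.9 * gb, 2.1 * gb])

truncated = (cap / gb).astype(np.int64)        # truncation, as in the cell above
rounded = (cap / gb).round().astype(np.int64)  # explicit rounding

print(truncated.tolist())  # [1, 2]
print(rounded.tolist())    # [2, 2]
```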
sns.heatmap(df_model_capacity_hours_pivot)
Explanation: The above visualization is confusing, as the bars reflect a combination of failure and hours counts
End of explanation
<END_TASK> |
15,672 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al.'s mTRF toolbox in
MATLAB
Step1: Load the data from the publication
First we will load the data collected in
Step2: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
Step3: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
Step4: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the for the
Step5: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
Step6: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5_ from | <ASSISTANT_TASK:>
Python Code:
# Authors: Chris Holdgraf <[email protected]>
# Eric Larson <[email protected]>
# Nicolas Barascud <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
Explanation: Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al.'s mTRF toolbox in
MATLAB :footcite:CrosseEtAl2016. We will show how the
:class:mne.decoding.ReceptiveField class
can perform a similar function along with scikit-learn. We will first fit a
linear encoding model using the continuously-varying speech envelope to predict
activity of a 128 channel EEG system. Then, we will take the reverse approach
and try to predict the speech envelope from the EEG (known in the literature
as a decoding model, or simply stimulus reconstruction).
End of explanation
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage('biosemi128')
info = mne.create_info(montage.ch_names, sfreq, 'eeg').set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
Explanation: Load the data from the publication
First we will load the data collected in :footcite:CrosseEtAl2016.
In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
End of explanation
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
Explanation: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
End of explanation
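Under the hood, an encoding model of this kind regresses the EEG on time-lagged copies of the stimulus. A rough hand-rolled sketch of building such a lag (design) matrix — toy numbers, not what ReceptiveField literally does internally:

```python
import numpy as np

def lag_matrix(x, n_lags):
    """Stack x delayed by 0 .. n_lags-1 samples into columns (zero-padded)."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

stim = np.array([1.0, 2.0, 3.0, 4.0])
X = lag_matrix(stim, 2)
print(X)  # column 0 is the stimulus itself, column 1 is it delayed by one sample
```

A (regularized) linear regression of each EEG channel on these columns then yields one coefficient per delay, which is the receptive field.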
# Print mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
:footcite:CrosseEtAl2016.
End of explanation
# We use the same lags as in :footcite:`CrosseEtAl2016`. Negative lags now
# index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and :footcite:`HaufeEtAl2014`).
# We'll use a ridge regression estimator with an alpha value similar to
# Crosse et al.
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
Explanation: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the for the
:class:mne.decoding.ReceptiveField class as we try to predict the stimulus
activity from the EEG data. This is known in the literature as a decoding, or
stimulus reconstruction model :footcite:CrosseEtAl2016.
A decoding model aims to find the
relationship between the speech signal and a time-delayed version of the EEG.
This can be useful as we exploit all of the available neural data in a
multivariate context, compared to the encoding case which treats each M/EEG
channel as an independent feature. Therefore, decoding models might provide a
better quality of fit (at the expense of not controlling for stimulus
covariance), especially for low SNR stimuli such as speech.
End of explanation
y_pred = sr.predict(Y[test])
time = np.linspace(0, 2., 5 * int(sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
color='grey', lw=2, ls='--')
ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
Explanation: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
End of explanation
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5_ from
:footcite:CrosseEtAl2016. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure :footcite:HaufeEtAl2014, which have a more
straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction).
End of explanation
<END_TASK> |
15,673 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Setup data directory
Step1: Download database files
Step2: download a small test dataset
ATT | <ASSISTANT_TASK:>
Python Code:
cd /usr/local/notebooks
mkdir -p ./data
cd ./data
Explanation: Setup data directory
End of explanation
!wget https://s3.amazonaws.com/ssusearchdb/SSUsearch_db.tgz
!tar -xzvf SSUsearch_db.tgz
Explanation: Download database files
End of explanation
!wget https://s3.amazonaws.com/ssusearchdb/test.tgz
!tar -xzvf test.tgz
ls test/data/
Explanation: download a small test dataset
ATT: for real (larger) dataset, make sure there is enough disk space.
End of explanation
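Before pulling a larger dataset, free space can be checked from Python with the standard library — a small sketch (the 10 GB threshold is an arbitrary example, not a requirement of SSUsearch):

```python
import shutil

free_gb = shutil.disk_usage(".").free / 1024 ** 3
print("free space: %.1f GB" % free_gb)
if free_gb < 10:  # arbitrary example threshold
    print("warning: consider freeing up disk space before downloading")
```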
<END_TASK> |
15,674 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Minimizing a <span style="font-variant
Step1: The function cart_prod(A, B) computes the Cartesian product $A \times B$ of the sets $A$ and $B$ where $A \times B$ is defined as follows
Step2: The function separate takes four arguments
Step3: Given a state p and a Partition of the set of all states, the function find_equivalence_class(p, Partition) returns the equivalence class of p, i.e. it returns the set from Partition that contains p.
Step4: The function reachable(q0, Σ, 𝛿) takes three arguments
Step5: The function all_separable(Q, A, Σ, 𝛿) takes four arguments
Step6: The function minimize(F) takes a deterministic
<span style="font-variant | <ASSISTANT_TASK:>
Python Code:
def arb(M):
for x in M:
return x
assert False, 'Error: arb called with empty set!'
Explanation: Minimizing a <span style="font-variant:small-caps;">Fsm</span>
The function arb(M) takes a non-empty set M as its argument and returns an arbitrary element from this set.
The set M is not changed.
End of explanation
def cart_prod(A, B):
return { (x, y) for x in A for y in B }
Explanation: The function cart_prod(A, B) computes the Cartesian product $A \times B$ of the sets $A$ and $B$ where $A \times B$ is defined as follows:
$$ A \times B := { (x, y) \mid x \in A \wedge y \in B }. $$
End of explanation
def separate(Pairs, States, Σ, 𝛿):
Result = { (q1, q2) for q1 in States
for q2 in States
for c in Σ
if (𝛿[q1, c], 𝛿[q2, c]) in Pairs
}
return Result
Explanation: The function separate takes four arguments:
- Pairs is a set of pairs of states from some given <span style="font-variant:small-caps;">Fsm</span> $F$.
If $(p_1, p_2) \in \texttt{Pairs}$, then $p_1$ and $p_2$ are known to be separable.
- States is the set of all states of the <span style="font-variant:small-caps;">Fsm</span> $F$,
- Σ is the alphabet of the <span style="font-variant:small-caps;">Fsm</span> $F$.
- 𝛿 is the transition function of the <span style="font-variant:small-caps;">Fsm</span> $F$.
The function separate(Pairs, States, Σ, 𝛿) computes the set of pairs of states $(q_1, q_2)$ that are separable because there is some character $c \in \Sigma$ such that
$$\delta(q_1,c) = p_1, \quad \delta(q_2,c) = p_2, \quad \textrm{and} \quad (p_1, p_2) \in \texttt{Pairs}. $$
End of explanation
def find_equivalence_class(p, Partition):
return arb({ C for C in Partition if p in C })
Explanation: Given a state p and a Partition of the set of all states, the function find_equivalence_class(p, Partition) returns the equivalence class of p, i.e. it returns the set from Partition that contains p.
End of explanation
def reachable(q0, Σ, 𝛿):
Result = { q0 }
while True:
NewStates = { 𝛿[p, c] for p in Result for c in Σ }
if NewStates <= Result:
return Result
Result |= NewStates
Explanation: The function reachable(q0, Σ, 𝛿) takes three arguments:
* q0 is the start state of an Fsm,
* Σ is the alphabet.
* 𝛿 is the transition function. The transition function is assumed to be complete. 𝛿 is represented as a dictionary.
It returns the set of all states that can be reached from the start state q0 by reading strings of characters from Σ.
End of explanation
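A tiny self-contained run of this fixpoint computation, on a hypothetical three-state automaton where state 'r' cannot be reached from 'p' (ASCII names stand in for the Σ/𝛿 identifiers above):

```python
def reachable(q0, sigma, delta):
    result = {q0}
    while True:
        new_states = {delta[p, c] for p in result for c in sigma}
        if new_states <= result:
            return result
        result |= new_states

sigma = {'a', 'b'}
delta = {('p', 'a'): 'q', ('p', 'b'): 'p',
         ('q', 'a'): 'q', ('q', 'b'): 'p',
         ('r', 'a'): 'p', ('r', 'b'): 'r'}

print(reachable('p', sigma, delta))  # {'p', 'q'}: 'r' is never reached
```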
def all_separable(Q, A, Σ, 𝛿):
Separable = cart_prod(Q - A, A) | cart_prod(A, Q - A)
while True:
NewPairs = separate(Separable, Q, Σ, 𝛿)
if NewPairs <= Separable:
return Separable
Separable |= NewPairs
Explanation: The function all_separable(Q, A, Σ, 𝛿) takes four arguments:
* Q is the set of states of the Fsm.
* A is the set of all accepting states,
* Σ is the alphabet.
* 𝛿 is the transition function.
𝛿 is represented as a dictionary.
The function computes the set of all Pairs (p, q) such that p and q are separable, i.e. all pairs such that
$$ \exists s \in \Sigma^*: \bigl(\delta^*(p, s) \in A \wedge \delta^*(q,s) \not\in A\bigr) \vee
   \bigl(\delta^*(p, s) \not\in A \wedge \delta^*(q,s) \in A\bigr).
$$
End of explanation
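To see the fixpoint in action, here is a self-contained toy run mirroring the definitions above (a hypothetical three-state DFA over {'a'} whose only accepting state is 'f'): states 'p' and 'q' come out separable because reading 'a' from 'q' accepts while reading 'a' from 'p' does not.

```python
def cart_prod(A, B):
    return {(x, y) for x in A for y in B}

def separate(pairs, states, sigma, delta):
    return {(q1, q2) for q1 in states for q2 in states
            for c in sigma if (delta[q1, c], delta[q2, c]) in pairs}

def all_separable(Q, A, sigma, delta):
    # start from (non-accepting, accepting) pairs, then iterate to a fixpoint
    separable = cart_prod(Q - A, A) | cart_prod(A, Q - A)
    while True:
        new_pairs = separate(separable, Q, sigma, delta)
        if new_pairs <= separable:
            return separable
        separable |= new_pairs

Q = {'p', 'q', 'f'}
A = {'f'}                 # only 'f' accepts
sigma = {'a'}
delta = {('p', 'a'): 'p', ('q', 'a'): 'f', ('f', 'a'): 'f'}

sep = all_separable(Q, A, sigma, delta)
print(('p', 'q') in sep)  # True
```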
def minimize(F):
Q, Σ, 𝛿, q0, A = F
Q = reachable(q0, Σ, 𝛿)
Separable = all_separable(Q, A, Σ, 𝛿)
Equivalent = cart_prod(Q, Q) - Separable
EquivClasses = { frozenset({ p for p in Q if (p, q) in Equivalent })
for q in Q
}
newQ0 = arb({ M for M in EquivClasses if q0 in M })
newAccept = { M for M in EquivClasses if arb(M) in A }
newDelta = {}
    for q in Q:
        # compute the class of q up front so both branches below can use it
        classOfQ = find_equivalence_class(q, EquivClasses)
        for c in Σ:
            p = 𝛿.get((q, c))
            if p != None:
                classOfP = find_equivalence_class(p, EquivClasses)
                newDelta[(classOfQ, c)] = classOfP
            else:
                newDelta[(classOfQ, c)] = frozenset()
return EquivClasses, Σ, newDelta, newQ0, newAccept
Explanation: The function minimize(F) takes a deterministic
<span style="font-variant:small-caps;">Fsm</span> F as its input.
Here F is a 5-tuple of the form
$$ F = (Q, \Sigma, \delta, q_0, A) $$
The algorithm performs the following steps:
1. All unreachable states are eliminated.
2. All accepting states are separated form all non-accepting states.
3. States are separated as long as possible.
Two states $p_1$ and $p_2$ are separable if there is a character
$c \in \Sigma$ such that
$$\delta(p_1,c) = q_1, \quad \delta(p_2,c) = q_2, \quad \textrm{and} \quad
\mbox{$q_1$ and $q_2$ are separable.}
$$
4. States that are not separable are equivalent and are therefore identified and grouped
in equivalence classes. The states in an equivalence class are then identified.
End of explanation
<END_TASK> |
15,675 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None,real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, shape=(None,z_dim), name='inputs_z')
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform best with a $tanh$ activation on its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
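As a quick sanity check of the leaky ReLU formula above, here is a tiny NumPy-only illustration (separate from the graph code; the names here are just for the demo):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # f(x) = max(alpha * x, x): identity for x >= 0, slope alpha for x < 0
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
out = leaky_relu(x)
# negative inputs are scaled down by alpha; positive inputs pass through unchanged
```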
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1,h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=(tf.ones_like(d_logits_real) * (1 - smooth))))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.zeros_like(d_logits_fake))))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=(tf.ones_like(d_logits_fake))))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
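To make the smoothing concrete, here is a small NumPy-only sketch of the smoothed labels and the numerically stable formula that tf.nn.sigmoid_cross_entropy_with_logits implements; it is an illustration only and is not part of the graph:

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # stable form of -y*log(sigmoid(z)) - (1-y)*log(1-sigmoid(z)):
    # max(z, 0) - z*y + log(1 + exp(-|z|))
    z, y = logits, labels
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

smooth = 0.1
logits = np.array([2.0, -1.0, 0.5])
hard = np.ones_like(logits)          # "real" labels
smoothed = hard * (1 - smooth)       # 0.9 instead of 1.0
loss_hard = sigmoid_cross_entropy(logits, hard).mean()
loss_smooth = sigmoid_cross_entropy(logits, smoothed).mean()
```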
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
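The name-prefix split described above is plain Python; a toy sketch with stand-in objects (not real TensorFlow variables) shows the pattern:

```python
class FakeVar:
    # stand-in for a tf.Variable; real variables expose a scoped name string
    # such as 'generator/dense/kernel:0'
    def __init__(self, name):
        self.name = name

t_vars = [FakeVar('generator/w0:0'), FakeVar('generator/b0:0'),
          FakeVar('discriminator/w0:0'), FakeVar('discriminator/b0:0')]
g_vars = [v for v in t_vars if v.name.startswith('generator')]
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]
```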
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
<END_TASK> |
15,676 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
exportByFeat(img, fc, prop, folder, name, scale, dataType, **kwargs)
Step1: FeatureCollection
Step2: Image
Step3: Execute | <ASSISTANT_TASK:>
Python Code:
import ee
ee.Initialize()
from geetools import batch
Explanation: exportByFeat(img, fc, prop, folder, name, scale, dataType, **kwargs):
Export an image clipped by features (Polygons). You can use the same arguments as the original function ee.batch.export.image.toDrive
Parameters
img: image to clip
fc: feature collection
prop: name of the property of the features to paste in the image
folder: same as ee.Export
name: name of the resulting image. If None uses image's ID
scale: same as ee.Export. Default to 1000
dataType: as downloaded images must have the same data type in all
bands, you have to set it here. Can be one of: "float", "double", "int",
"Uint8", "Int8" or a casting function like ee.Image.toFloat
kwargs: keyword arguments that will be passed to ee.batch.export.image.toDrive
Return a list of all tasks (for further processing/checking)
End of explanation
p1 = ee.Geometry.Point([-71,-42])
p2 = ee.Geometry.Point([-71,-43])
p3 = ee.Geometry.Point([-71,-44])
feat1 = ee.Feature(p1.buffer(1000), {'site': 1})
feat2 = ee.Feature(p2.buffer(1000), {'site': 2})
feat3 = ee.Feature(p3.buffer(1000), {'site': 3})
fc = ee.FeatureCollection([feat1, feat2, feat3])
Explanation: FeatureCollection
End of explanation
collection = ee.ImageCollection('COPERNICUS/S2').filterBounds(fc.geometry())
image = collection.mosaic()
Explanation: Image
End of explanation
task = batch.Export.image.toDriveByFeature(
image,
collection=fc,
folder='tools_exportbyfeat',
namePattern='test {site}',
scale=10,
dataType='float',
verbose=True
)
Explanation: Execute
End of explanation
<END_TASK> |
15,677 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Mixed NB
gnb
Step1: training MultiNB & parameter tuning
cat_X => countvec
Step2: X_counts로 cv했을때
alpha
Step3: X_tfidf로 cv했을때
alpha
Step4: Tuning & Improvement
Step5: Retraining with new parameters & 1sigma rule
Step7: Gaussian & Multinomial NB fitting
gnb
Step8: Final Test
Step9: score | <ASSISTANT_TASK:>
Python Code:
df = pd.read_csv('../resource/final_df3.csv')
sample = df.title
y = df['rating(y)'].values
real_X = df[['avg_rating']].values
cat_X = df.text.fillna("").values
Explanation: Mixed NB
gnb : uses only the single 'avg_rating' feature
mnb : alpha re-tuned by CV, since the text features changed (commas and whitespace removed)
ngram_range : (1, 2)
tfidf : true
sub_alpha : 0.3
score(mae) : 0.7674
End of explanation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
count_vect = CountVectorizer()
X_counts = count_vect.fit_transform(cat_X)
tfidf_vect = TfidfVectorizer()
X_tfidf = tfidf_vect.fit_transform(cat_X)
from sklearn.cross_validation import StratifiedKFold
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import mean_absolute_error
Explanation: training MultiNB & parameter tuning
cat_X => countvec
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 2, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_counts[train_idx]
y_train = y[train_idx]
X_test = X_counts[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 2)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: CV results with X_counts
alpha : 0.74
score : 0.819739769701
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 1, 0.01):
mnb = MultinomialNB(alpha = a)
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = X_tfidf[train_idx]
y_train = y[train_idx]
X_test = X_tfidf[test_idx]
y_test = y[test_idx]
mnb.fit(X_train, y_train)
y_pred = mnb.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 1)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: CV results with X_tfidf
alpha : 0.23
score : 0.791257638511
End of explanation
from sklearn.pipeline import Pipeline
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
from sklearn.grid_search import GridSearchCV
parameters = {
'vect__ngram_range': [(1, 1), (1, 2), (1, 3), (1, 4), ],
'tfidf__use_idf' : [True, False],
'clf__alpha' : np.arange(0, 1, 0.01),
}
gs_clf = GridSearchCV(text_clf, parameters, cv=5, scoring='mean_absolute_error', n_jobs=-1)
gs_clf = gs_clf.fit(cat_X, y)
best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
print("{name}: {best}".format(
name=param_name, best=best_parameters[param_name]
))
print("="*25)
print('score :', score)
Explanation: Tuning & Improvement
End of explanation
cv = StratifiedKFold(y, n_folds=5, random_state=51)
i_range = []
score_range = []
sigma = []
for a in np.arange(0, 0.45, 0.01):
text_clf = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2))),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=a)),
])
scores = np.zeros(5)
for i, (train_idx, test_idx) in enumerate(cv):
X_train = cat_X[train_idx]
y_train = y[train_idx]
X_test = cat_X[test_idx]
y_test = y[test_idx]
text_clf.fit(X_train, y_train)
y_pred = text_clf.predict(X_test)
scores[i] = mean_absolute_error(y_test, y_pred)
i_range.append(a)
score_range.append(np.mean(scores))
sigma.append(np.std(scores))
best_idx = np.argmin(score_range)
best_alpha = i_range[best_idx]
best_score = score_range[best_idx]
sigma
plt.figure(figsize = (15, 5))
plt.plot(i_range, score_range)
plt.plot(i_range, np.array(score_range) + sigma, 'b--')
plt.plot(i_range, np.array(score_range) - sigma, 'b--')
plt.axhline(best_score + sigma[best_idx], linestyle=':', color='r')
plt.axvline(best_alpha, linestyle=':', color='r')
def find_nearest(array, value):
idx = (np.abs(array-value)).argmin()
return idx
sub_alpha = i_range[find_nearest(score_range, best_score+sigma[best_idx])]
sub_score = best_score+sigma[best_idx]
plt.scatter(sub_alpha, sub_score, s=100, c='red')
plt.xlim(0, 0.45)
plt.ylabel('CV score(mae)')
plt.xlabel('alpha')
print("best alpha : ", best_alpha)
print("best score : ", best_score)
print(' 1-sigma : ', round(sigma[best_idx], 4))
print('='*25)
print("sub_opt alpha : ", sub_alpha)
print("sub_opt score : ", sub_score)
Explanation: Retraining with new parameters & 1sigma rule
End of explanation
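The 1-sigma rule used here can be isolated into a tiny helper: take the best mean CV score, then pick the alpha whose score lands nearest to best_score + best_sigma (a simpler, nearly-as-good model). The function and numbers below are illustrative only:

```python
import numpy as np

def one_sigma_pick(alphas, scores, sigmas):
    # index of the best (lowest) mean score, then the alpha whose score
    # is closest to best_score + its standard deviation
    scores = np.asarray(scores)
    best = int(np.argmin(scores))
    target = scores[best] + sigmas[best]
    sub = int(np.argmin(np.abs(scores - target)))
    return alphas[best], alphas[sub]

alphas = [0.0, 0.1, 0.2, 0.3, 0.4]
scores = [0.90, 0.80, 0.76, 0.79, 0.85]
sigmas = [0.02, 0.02, 0.03, 0.02, 0.02]
best_alpha, sub_alpha = one_sigma_pick(alphas, scores, sigmas)
```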
from sklearn.naive_bayes import GaussianNB, MultinomialNB
gnb = GaussianNB()
mnb = Pipeline([
('vect', CountVectorizer(ngram_range=(1, 2),)),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB(alpha=0.3)),
])
gnb.fit(real_X, y)
gnb_pred = gnb.predict(real_X)
gnb_prob = gnb.predict_proba(real_X)
mnb.fit(cat_X, y)
mnb_pred = mnb.predict(cat_X)
mnb_prob = mnb.predict_proba(cat_X)
mix_prob = np.multiply(gnb_prob, mnb_prob)
mix_prob.shape
def softmax(w, t=1.0):
Calculate the softmax of a list of numbers w.
Parameters
----------
w : list of numbers
t : float
Return
------
a list of the same length as w of non-negative numbers
Examples
--------
>>> softmax([0.1, 0.2])
array([ 0.47502081, 0.52497919])
>>> softmax([-0.1, 0.2])
array([ 0.42555748, 0.57444252])
>>> softmax([0.9, -10])
array([ 9.99981542e-01, 1.84578933e-05])
>>> softmax([0, 10])
array([ 4.53978687e-05, 9.99954602e-01])
w = np.array(w)
e = np.exp((w - np.max(w)) / t)  # shift by max(w) to avoid overflow; cancels after normalization
dist = e / np.sum(e)
return dist
mix_prob_softmax = np.zeros((544, 5))
for i in range(544):
mix_prob_softmax[i] = softmax(mix_prob[i])
mix_prob_softmax
np.sum(mix_prob_softmax[0])
mix_pred = np.zeros(544, )
for i in range(544):
mix_pred[i] = np.argmax(mix_prob_softmax[i])
mix_pred += 1 # ratings are integers from 1 to 5, while argmax is 0-based
mix_pred
Explanation: Gaussian & Multinomial NB fitting
gnb : uses only the single 'avg_rating' feature
mnb : alpha re-tuned by CV, since the text features changed (commas and whitespace removed)
ngram_range : (1, 2)
tfidf : true
sub_alpha : 0.3
score(mae) : 0.7674
End of explanation
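The cell above multiplies the two predict_proba outputs and then pushes the products through a softmax. A more conventional way to combine posteriors from two models is to multiply and renormalize each row directly, sketched below with made-up numbers (and if both models share the same class prior, dividing one copy of the prior out first would be more principled):

```python
import numpy as np

def combine_posteriors(p1, p2):
    # element-wise product of two per-class posterior arrays,
    # renormalized so each row sums to 1 again
    prod = p1 * p2
    return prod / prod.sum(axis=1, keepdims=True)

p_gnb = np.array([[0.1, 0.2, 0.3, 0.2, 0.2]])
p_mnb = np.array([[0.05, 0.1, 0.6, 0.15, 0.1]])
p_mix = combine_posteriors(p_gnb, p_mnb)
pred = p_mix.argmax(axis=1) + 1   # ratings run from 1 to 5
```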
test_df = pd.read_excel('../resource/test_df.xlsx')
test_sample = test_df.title
test_y = test_df['my_rating'].values
test_real_X = test_df[['avg_rating']].values
test_cat_X = test_df.text
test_watcha_y = test_df['watcha_rating'].values
gnb_test_pred = gnb.predict(test_real_X)
gnb_test_prob = gnb.predict_proba(test_real_X)
mnb_test_pred = mnb.predict(test_cat_X)
mnb_test_prob = mnb.predict_proba(test_cat_X)
mix_test_prob = np.multiply(gnb_test_prob, mnb_test_prob)
mix_test_prob_softmax = np.zeros((12, 5))
for i in range(12):
mix_test_prob_softmax[i] = softmax(mix_test_prob[i])
mix_test_prob_softmax
np.sum(mix_test_prob_softmax[0])
mix_test_pred = np.zeros(12, )
for i in range(12):
mix_test_pred[i] = np.argmax(mix_test_prob_softmax[i])
mix_test_pred += 1 # ratings are integers from 1 to 5, while argmax is 0-based
mix_test_pred
test_df['predict'] = mix_test_pred
test_df
Explanation: Final Test
End of explanation
mix_score = mean_absolute_error(mix_test_pred, test_y)
watcha_score = mean_absolute_error(test_watcha_y, test_y)
print('mix_score :', mix_score)
print('watcha_score :', watcha_score)
# round watcha_rating to the nearest integer and measure the score again
test_watchar_round_y = np.round(test_watcha_y,)
mean_absolute_error(test_watchar_round_y, test_y)
Explanation: score
End of explanation
<END_TASK> |
15,678 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load data
Step1: Build topology
Step2: Calculate shortest path for every pair of computational nodes
Helper function for caching of results
It allows the program to save calculated tables or other objects
or load them from disk if they are already there
Step3: Actually do the work
Step10: Calculate feature lists of these paths
We also add a new column to paths table here
Helper functions
Step11: Test helper functions
Step13: Do the work | <ASSISTANT_TASK:>
Python Code:
nodes = pd.read_pickle("cachenodes.pkl")
edges = pd.read_pickle("edges.pkl")
comp_nodes = pd.read_pickle("comp_nodes.pkl")
Explanation: Load data
End of explanation
def build_topology(nodes, edges):
topology = nx.Graph()
# add all nodes
for index, row in nodes.iterrows():
node_name = row["name"]
node_attributes = row.drop(["name"]).to_dict()
topology.add_node(node_name, attr_dict=node_attributes)
# add all edges
for index, row in edges.iterrows():
node1_name = row["node1"]
node2_name = row["node2"]
edge_attributes = row.drop(["node1", "node2"]).to_dict()
topology.add_edge(node1_name, node2_name, attr_dict=edge_attributes)
return topology
topology = build_topology(nodes, edges)
Explanation: Build topology
End of explanation
from libcrap.core import calcsave_or_load
from functools import partial
pd_diskcache = partial(calcsave_or_load, load_func=pd.read_pickle, save_func=pd.to_pickle)
Explanation: Calculate shortest path for every pair of computational nodes
Helper function for caching of results
It allows the program to save calculated tables or other objects
or load them from disk if they are already there
End of explanation
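The `calcsave_or_load` helper comes from the external `libcrap` package and is not shown here; as a rough sketch of the idea (the name, signature and defaults below are my assumptions, not libcrap's actual implementation), such a decorator just loads a pickled result if the file exists and otherwise computes and saves it:

```python
import os
import pickle
from functools import wraps

def _pickle_load(path):
    with open(path, "rb") as f:
        return pickle.load(f)

def _pickle_save(obj, path):
    with open(path, "wb") as f:
        pickle.dump(obj, f)

def calcsave_or_load_sketch(path, load_func=_pickle_load, save_func=_pickle_save):
    # minimal stand-in: load `path` if it exists, otherwise compute and save.
    # save_func takes (obj, path), matching pd.to_pickle's argument order.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if os.path.exists(path):
                return load_func(path)
            result = func(*args, **kwargs)
            save_func(result, path)
            return result
        return wrapper
    return decorator
```

With this shape, swapping in `load_func=pd.read_pickle, save_func=pd.to_pickle` caches DataFrames exactly as the notebook does.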
import itertools
@pd_diskcache("paths.pkl")
def find_comp_to_comp_shortest_paths(topology, comp_nodes):
paths_ugly = nx.all_pairs_shortest_path(topology)
# calculates shortest paths and stores them in a dict of dicts
# build a table with all computational node pairs
# they are not duplicated
# if there is ("n48001", "n49419") then there is no ("n49419", "n48001") pair
comp_node_pairs = pd.DataFrame.from_records(
itertools.chain.from_iterable(
[(node1, node2) for node2 in comp_nodes.iloc[index:]]
for (index, node1) in comp_nodes.iteritems()
),
columns=["node1", "node2"]
)
# write shortest paths to this table
comp_node_pairs["shortest_path"] = comp_node_pairs.apply(
lambda row: paths_ugly[row.loc["node1"]][row.loc["node2"]],
axis=1
)
return comp_node_pairs
# shortest paths between all computational nodes
paths = find_comp_to_comp_shortest_paths(topology, comp_nodes)
Explanation: Actually do the work
End of explanation
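networkx does the heavy lifting above; for intuition, the unweighted shortest path between two nodes is just breadth-first search. A plain-Python sketch of the same idea (not what networkx actually runs internally):

```python
from collections import deque

def bfs_shortest_path(adjacency, src, dst):
    # adjacency: dict mapping node -> iterable of neighbour nodes
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # walk the parent chain back to src and reverse it
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in adjacency.get(node, ()):
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None  # dst unreachable from src
```

`nx.all_pairs_shortest_path` effectively repeats this search from every node, which is why caching the result to disk pays off on a large topology.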
def interleave(it1, it2):
    """Alternate items from it1 and it2, keeping any leftover tail.

    >>> list(interleave([1, 2, 3, 4], ["a", "b", "c"]))
    [1, 'a', 2, 'b', 3, 'c', 4]
    """
    return (
        item for item
        in itertools.chain.from_iterable(itertools.zip_longest(it1, it2))
        if item is not None)
def get_node_features(topology, node):
    """Returns node features as a tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("kek", attr_dict={"a": 1, "b": "lol"})
    >>> get_node_features(topology, "kek")
    (('a', 1), ('b', 'lol'))
    """
    return tuple(topology.node[node].items())
def get_edge_features(topology, node1, node2):
    """Returns features of an edge as tuple of tuples.

    >>> topology = nx.Graph()
    >>> topology.add_node("a1")
    >>> topology.add_node("b1")
    >>> topology.add_edge("a1", "b1", attr_dict={"foo": "bar", "shim": "sham"})
    >>> get_edge_features(topology, "a1", "b1")
    (('foo', 'bar'), ('shim', 'sham'))
    """
    return tuple(topology.edge[node1][node2].items())
def maybe_reverse(l):
    """Takes list or tuple and reverses it, or not.

    Using maybe_reverse on some list and on its reversed version will
    yield the same result.

    >>> maybe_reverse([1, 2, 3])
    [1, 2, 3]
    >>> maybe_reverse([3, 2, 1])
    [1, 2, 3]
    >>> maybe_reverse(('a', 'b', 'c'))
    ('a', 'b', 'c')
    >>> maybe_reverse(('c', 'b', 'a'))
    ('a', 'b', 'c')
    """
    if type(l) == list:
        constructor = list
    elif type(l) == tuple:
        constructor = tuple
    else:
        raise TypeError("can only take list or tuple arguments")
    reversed_l = constructor(reversed(l))
    if str(l) <= str(reversed_l):
        return l
    return reversed_l
def get_features_of_path(topology, path):
    """Returns features of path as a tuple of tuples of tuples.

    The list of features will be normalized, so that
    this function returns the same features in the same order for
    path (A, B, C, D) and for path (D, C, B, A).
    """
    nodes_features = (get_node_features(topology, node) for node in path)
    edges_features = (get_edge_features(topology, node1, node2)
                      for (node1, node2) in zip(path[:-1], path[1:]))
    return maybe_reverse(tuple(interleave(nodes_features, edges_features)))
def df_loc_by_sequence(df, sequence):
    """Use this instead of `df.loc[sequence]`.

    Pandas df gets confused by tuples and possibly by other
    sequences. If you do `df.loc[(1, 2)]`, it will look for 1
    or 2 in df's index instead of looking for the tuple itself.
    You can use df.xs to overcome this problem. Or use this
    function which hides the ugliness.

    Also see this
    [stackoverflow question](https://goo.gl/emtjB8)
    for a better description of the problem.
    """
    return df.xs(sequence)
Explanation: Calculate feature lists of these paths
We also add a new column to paths table here
Helper functions
End of explanation
import doctest
def test_get_node_features():
doctest.run_docstring_examples(get_node_features, globals())
assert get_node_features(topology, "КГК.48.0.3") == (("type_", "switch"),)
def test_get_edge_features():
doctest.run_docstring_examples(get_edge_features, globals())
correct_result = (("connection_type", "backplane"),)
result1 = get_edge_features(topology, "КГК.48.0.3", "n48022")
result2 = get_edge_features(topology, "n48022", "КГК.48.0.3")
assert result1 == correct_result == result2
doctest.run_docstring_examples(interleave, globals())
test_get_node_features()
test_get_edge_features()
doctest.run_docstring_examples(maybe_reverse, globals())
Explanation: Test helper functions
End of explanation
@pd_diskcache("classes.pkl")
def list_path_classes(topology, paths):
unique_features_classes = frozenset(
get_features_of_path(topology, path)
for path in paths["shortest_path"]
)
return pd.DataFrame.from_records(
([features] for features in sorted(unique_features_classes)),
columns=["features"]
)
@pd_diskcache("paths_with_classes.pkl")
def add_class_id_col(paths, classes):
    """Adds class_id column to paths table."""
# create pandas table for quick getting index by value of features list
classes_reverse_lookup = classes.reset_index().set_index("features", verify_integrity=True)
def get_class_id_by_path(path):
return df_loc_by_sequence(classes_reverse_lookup, get_features_of_path(topology, path))["index"]
return paths.assign(class_=paths["shortest_path"].apply(get_class_id_by_path))
classes = list_path_classes(topology, paths)
paths_with_classes = add_class_id_col(paths, classes)
Explanation: Do the work
End of explanation
<END_TASK> |
15,679 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Mapping TFBS to alignments
Dictionary postions with an alignment
Me and Bronski's conversation on how to do this
Me
Step1: My Attempt with ES2
Breakdown of what I have to do
Step2: Buuuuuuuut, we don't really need the alignment as an alignment per se. But it is important for viewing and testing later. We need to have each seperate sequence, So I am going to use SeqIO.parse.
Step3: Notes on loop
enumerate()
Step4: We need two sequences. One that is not the alignment.
Putting the Remap together with TFBS
The last part is to create the vector that should span the entire alignment printing 1 if the position has a bicoid site or 0 if not.
Step5: Now I need to make a new vector that says if the bicoid site is present or absent on the position.
Can the negative positions be used in a query of the dictionary? Likely not. | <ASSISTANT_TASK:>
Python Code:
aligned_seq = 'AGC---TTCATCA'
remap_dict = {}
nuc_list = ['A', 'a', 'G', 'g', 'C', 'c', 'T', 't', 'N', 'n']
counter = 0
for xInd, x in enumerate(aligned_seq):
if x in nuc_list:
remap_dict[counter] = xInd
counter += 1
print(remap_dict)
Explanation: Mapping TFBS to alignments
Dictionary postions with an alignment
Me and Bronski's conversation on how to do this
Me: Hey! I want to map nucleotide sequence position after an alignment. I know you have done this before. So I would rather not reinvent the wheel. You did a dictionary in python, but how? Can I see your script? If this feature is embedded in a larger program it might be easier to just explain your strategy.
Bronski: So the strategy is to loop through an aligned sequence and create a dictionary where the keys are the original indices and the values are the indices in the alignment.
Here’s a simple example:
End of explanation
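Wrapped as a reusable function (the function name is mine), the mapping loop above can be sanity-checked against the toy alignment from the conversation:

```python
def build_remap(aligned_seq, nucleotides="AaGgCcTtNn"):
    # map original (ungapped) index -> index in the gapped alignment
    remap = {}
    counter = 0
    for aln_index, char in enumerate(aligned_seq):
        if char in nucleotides:
            remap[counter] = aln_index
            counter += 1
    return remap
```

On 'AGC---TTCATCA' the fourth base (ungapped index 3) sits just past the gap, at alignment index 6.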
from Bio import AlignIO
alignment = AlignIO.read("../data/fasta/output_ludwig_eve-striped-2.fa", "fasta")
print(alignment)
for record in alignment:
print(record.id)
Explanation: My Attempt with ES2
Breakdown of what I have to do:
Read in alignment file.
seperate each sequence into it's own sequence
make dictionary for each sequence
print out sequence?
run TFBS finder for each sequence
Make vector of each sequence that says presence or absence at each position.
Figure out a way to visualize this.
Read in Alignment File
Use Bio.AlignIO.read()
- The first argument is a handle to read the data from, typically an open file (see Section 24.1), or a filename.
- The second argument is a lower case string specifying the alignment format. As in Bio.SeqIO we don’t try and guess the file format for you! See http://biopython.org/wiki/AlignIO for a full listing of supported formats.
End of explanation
from Bio import SeqIO
# read in alignment as a list of sequences
records = list(SeqIO.parse("../data/fasta/output_ludwig_eve-striped-2.fa", "fasta"))
# Testing with the first sequence
seqTest = records[0]
#print(seqTest.seq)
print(type(seqTest))
# Turn just the sequence into a string instead of fasta sequence
aligned_seq = str(seqTest.seq)
print(type(aligned_seq)) # check
Explanation: Buuuuuuuut, we don't really need the alignment as an alignment per se. But it is important for viewing and testing later. We need to have each seperate sequence, So I am going to use SeqIO.parse.
End of explanation
remap_dict = {}
nuc_list = ['A', 'a', 'G', 'g', 'C', 'c', 'T', 't', 'N', 'n']
counter = 0
for xInd, x in enumerate(aligned_seq):
if x in nuc_list:
remap_dict[counter] = xInd
counter += 1
#checking dictionary created
print(len(remap_dict)) # should be length of alignment
print(remap_dict[40]) #should print the value of the number key
print(type(remap_dict[40])) #Check data type
Explanation: Notes on loop
enumerate(): yields (index, item) pairs as it walks the sequence
xInd is the enumerated index, i.e. the position in the alignment.
remap_dict[counter] = xInd then records the mapping from ungapped position to alignment position.
x is the nucleotide
End of explanation
## Attempt at vector
bcdSites = [0] * len(aligned_seq)
#TFBS positions from the TFBS-locating notebook (negative values index from the end of the sequence)
TFBS = [10, 102, 137, -741, -680, -595, 309, -497, -485, 429, 453, 459, 465, -376, -347, -339, -308, 593, 600, -289, 613, 623, -240, 679, -128, -77, 825, 826, 886]
#Taking absolute values is not what we need here: the negatives are end-relative offsets.
#TFBS_pos = [abs(k) for k in TFBS]
print(TFBS)
m = 7
# This is the range of the motif
for pos in TFBS:
print(aligned_seq[pos:pos+m])
Explanation: We need two sequences. One that is not the alignment.
Putting the Remap together with TFBS
The last part is to create the vector that should span the entire alignment printing 1 if the position has a bicoid site or 0 if not.
End of explanation
print(type(TFBS))
print(type(remap_dict))
print(TFBS)
# Okay, the problem is the negative numbers
another_key = [82, 85, 98]
print(len(remap_dict))
# So I need to convert the negative number first.
print([remap_dict[x] for x in another_key])
# Convert TFBS negative numbers; 905 appears to be the ungapped sequence length (cf. len(remap_dict) printed above).
TFBS_2 = []
for x in TFBS:
if x < 0:
TFBS_2.append(905 + x)
else:
TFBS_2.append(x)
print(TFBS_2)
Explanation: Now I need to make a new vector that says if the bicoid site is present or absent on the position.
Can the negative positions be used in a query of the dictionary? Likely not.
End of explanation
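The same conversion as a small helper, with the sequence length passed in instead of hard-coding 905 (the helper name is mine):

```python
def to_absolute(positions, seq_len):
    # end-relative (negative) hit positions -> absolute 0-based positions
    return [seq_len + p if p < 0 else p for p in positions]
```

With the ungapped length 905 used above, a hit at -741 lands at absolute position 164.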
<END_TASK> |
15,680 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PyGSLIB
Histplot
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: Testing histplot
This is not plotting results but is handy to get declustered bins for plots | <ASSISTANT_TASK:>
Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
import numpy as np
#make the plots inline
%matplotlib inline
Explanation: PyGSLIB
Histplot
End of explanation
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code
# so, we are adding constant elevation = 0 and a dummy BHID = 1
mydata['Zlocation']=0
mydata['bhid']=1
# printing to verify results
print ' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5)
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
print pygslib.gslib.__plot.histplt.__doc__
mydata['Declustering Weight'].sum()
parameters_histplot = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 0, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
    'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), weight variable (obtained with declust?)
parameters_histplotd = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 0, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
    'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), weight variable (obtained with declust?)
binval,nincls,cl, clwidth,xpt025,xlqt,xmed,xuqt,xpt975, \
xmin,xmax,xcvr,xmen,xvar,xfrmx,dcl,error = pygslib.gslib.__plot.histplt(**parameters_histplot)
binvald,ninclsd,cld, clwidthd, xpt025d,xlqtd,xmedd,xuqtd, \
xpt975d,xmind,xmaxd,xcvrd,xmend,xvard,xfrmxd,dcld,errord = pygslib.gslib.__plot.histplt(**parameters_histplotd)
print dcl
print cl.round(1)
print nincls
print binval.round(2)
print clwidth
mydata.Primary[mydata.Primary>20.1]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability')
plt.bar (cl, binval, width=-clwidth, label = 'Non-declustered')
plt.bar (cld, binvald, width=-clwidth, alpha=0.5, color='r', label = 'Declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin count (Warning: this will not show the effect of weight)')
plt.bar (cl, nincls, width=-clwidth,label = 'Non-Declustered')
plt.bar (cld, ninclsd, width=-clwidth, alpha=0.5, color='r',label = 'Declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show()
parameters_histplot = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 0, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 1, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
    'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), weight variable (obtained with declust?)
parameters_histplotd = {
'hmin' : 0.06, #in/output rank-0 array(float,'d')
'hmax' : 20.06, #in/output rank-0 array(float,'d')
'ncl' : 40, #int, number of bins
'iwt' : 1, #int, 1 use declustering weight
'ilog' : 1, #int, 1 use logscale
'icum' : 1, #int, 1 use cumulative
'va' : mydata['Primary'], # array('d') with bounds (nd)
    'wt' : mydata['Declustering Weight']} # array('d') with bounds (nd), weight variable (obtained with declust?)
binval,nincls,cl, clwidth,xpt025,xlqt,xmed,xuqt,xpt975,xmin, \
xmax,xcvr,xmen,xvar,xfrmx,dcl,error = pygslib.gslib.__plot.histplt(**parameters_histplot)
binvald,ninclsd,cld, clwidthd,xpt025d,xlqtd,xmedd,xuqtd,xpt975d, \
xmind,xmaxd,xcvrd,xmend,xvard,xfrmxd,dcld,errord = pygslib.gslib.__plot.histplt(**parameters_histplotd)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability, bin style')
plt.bar (cld, binvald, width=-clwidth, color='r', label = 'Declustered')
plt.bar (cl, binval, width=-clwidth, label = 'Non-declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show()
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_title('Bin probability, step style')
plt.step (cld, binvald, where='post', color='r', label = 'Declustered')
plt.step (cl, binval, where='post', label = 'Non-declustered')
if parameters_histplot['ilog']>0:
ax.set_xscale('log')
plt.grid(True)
plt.legend(loc=2)
fig.show()
print 'data min, max: ', xmin, xmax
print 'data quantile 2.5%, 25%, 50%, 75%, 97.75%: ' , xpt025,xlqt,xmed,xuqt,xpt975
print 'data cv, mean, variance : ', xcvr,xmen,xvar
print 'histogram max frequency (useful to rescale some plots):', xfrmx
print 'no error (error == 0)?', error==0
Explanation: Testing histplot
This is not plotting results but is handy to get declustered bins for plots
End of explanation
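The effect of the declustering weights (iwt=1) is just that of a weighted histogram: clustered samples are down-weighted so they no longer dominate a bin. A toy numpy illustration with made-up values:

```python
import numpy as np

values  = np.array([1.0, 1.1, 1.2, 5.0])   # three clustered samples + one lone sample
weights = np.array([0.2, 0.2, 0.2, 1.0])   # declustering down-weights the cluster

# raw counts vs. weighted sums over the same two bins [0,3) and [3,6]
counts, edges = np.histogram(values, bins=2, range=(0.0, 6.0))
wsum, _       = np.histogram(values, bins=2, range=(0.0, 6.0), weights=weights)
```

The raw histogram puts 3 samples in the first bin, but after weighting the lone sample's bin carries more mass, which is exactly why the declustered bars above differ from the raw ones.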
<END_TASK> |
15,681 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic usage
Create an instance of PyFBU
Step1: Supply the input distribution to be unfolded as a 1-dimensional list for N bins, with each entry corresponding to the bin content.
Step2: Supply the response matrix where each row corresponds to a truth level bin.
The normalization of each row must be the acceptance efficiency of the corresponding bin (e.g. the normalization is 1 for resolution only unfolding).
N.B. For now, only square response matrices are allowed.
Step3: Define the boundaries of the hyperbox to be sampled for each bin.
Step4: Run the MCMC sampling (this step might take up to several minutes for a large number of bins).
Step5: Retrieve the N-dimensional posterior distribution in the form of a list of N arrays.
Step6: Each array corresponds to the projection of the posterior distribution for a given bin.
Step7: Background
One or more backgrounds, with the corresponding normalization uncertainties (gaussian prior), can be taken into account in the unfolding procedure.
Step8: The background normalization is sampled from a gaussian with the given uncertainty. To fix the background normalization the uncertainty should be set to 0.
Systematics
Systematic uncertainties affecting signal and background can be taken into account as well with their per-bin relative magnitudes. The prior is gaussian. Each systematics needs to be provided for each background listed at the previous step.
Step9: Each systematics is treated as fully correlated across signal and the various backgrounds.
Nuisance parameters
The posterior probability for the nuisance parameters is stored in a dictionary of arrays. The correlation among nuisance parameters and with the estimates for the unfolded distribution is preserved in the array ordering. | <ASSISTANT_TASK:>
Python Code:
import fbu
myfbu = fbu.PyFBU()
Explanation: Basic usage
Create an instance of PyFBU
End of explanation
myfbu.data = [100,150]
Explanation: Supply the input distribution to be unfolded as a 1-dimensional list for N bins, with each entry corresponding to the bin content.
End of explanation
myfbu.response = [[0.08,0.02], #first truth bin
[0.02,0.08]] #second truth bin
Explanation: Supply the response matrix where each row corresponds to a truth level bin.
The normalization of each row must be the acceptance efficiency of the corresponding bin (e.g. the normalization is 1 for resolution only unfolding).
N.B. For now, only square response matrices are allowed.
End of explanation
myfbu.lower = [0,0]
myfbu.upper = [3000,3000]
Explanation: Define the boundaries of the hyperbox to be sampled for each bin.
End of explanation
myfbu.run()
Explanation: Run the MCMC sampling (this step might take up to several minutes for a large number of bins).
End of explanation
trace = myfbu.trace
print( trace )
Explanation: Retrieve the N-dimensional posterior distribution in the form of a list of N arrays.
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
plt.hist(trace[1],
bins=20,alpha=0.85,
normed=True)
plt.ylabel('probability')
Explanation: Each array corresponds to the projection of the posterior distribution for a given bin.
End of explanation
myfbu.background = {'bckg1':[20,30],'bckg2':[10,10]}
myfbu.backgroundsyst = {'bckg1':0.5,'bckg2':0.04} #50% normalization uncertainty for bckg1 and 4% normalization uncertainty for bckg2
Explanation: Background
One or more backgrounds, with the corresponding normalization uncertainties (gaussian prior), can be taken into account in the unfolding procedure.
End of explanation
myfbu.objsyst = {
'signal':{'syst1':[0.,0.03],'syst2':[0.,0.01]},
'background':{
'syst1':{'bckg1':[0.,0.],'bckg2':[0.1,0.1]},
'syst2':{'bckg1':[0.,0.01],'bckg2':[0.,0.]}
}
}
Explanation: The background normalization is sampled from a gaussian with the given uncertainty. To fix the background normalization the uncertainty should be set to 0.
Systematics
Systematic uncertainties affecting signal and background can be taken into account as well with their per-bin relative magnitudes. The prior is gaussian. Each systematics needs to be provided for each background listed at the previous step.
End of explanation
myfbu.run() #rerun sampling with backgrounds and systematics
unfolded_bin1 = myfbu.trace[1]
bckg1 = myfbu.nuisancestrace['bckg1']
plt.hexbin(bckg1,unfolded_bin1,cmap=plt.cm.YlOrRd)
Explanation: Each systematics is treated as fully correlated across signal and the various backgrounds.
Nuisance parameters
The posterior probability for the nuisance parameters is stored in a dictionary of arrays. The correlation among nuisance parameters and with the estimates for the unfolded distribution is preserved in the array ordering.
End of explanation
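Reading a correlation off two paired traces works because the array ordering is preserved; a toy illustration on fabricated traces (the real ones come from the sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
bckg_trace = rng.normal(loc=25.0, scale=5.0, size=2000)
# an anti-correlated signal estimate: more background -> less unfolded signal
unfolded_trace = 120.0 - 0.8 * bckg_trace + rng.normal(scale=1.0, size=2000)

corr = np.corrcoef(bckg_trace, unfolded_trace)[0, 1]
```

A strongly negative `corr` here is the numeric counterpart of the tilted hexbin cloud plotted above.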
<END_TASK> |
15,682 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Prepare table of 2001 area units and rental area units
Step1: Process area units and rental areas into GeoJSON
Step2: Create geodata for rental areas
Step3: Choose representative points for rental areas using approximate centroids of property titles
Step4: Prepare regional slices of data | <ASSISTANT_TASK:>
Python Code:
# 2001 census area units
path = hp.DATA_DIR/'collected'/'Geographical Table.csv'
f = pd.read_csv(path, dtype={'SAU': str})
f = f.rename(columns={
'SAU': 'au2001',
'SAU.Desc': 'au_name',
'TA': 'territory',
'Region': 'region',
})
del f['Water']
f.head()
# rental area units
path = hp.DATA_DIR/'collected'/'Market Rent Areas.csv'
g = pd.read_csv(path, dtype={'SAU': str})
g = g.rename(columns={
'SAU': 'au2001',
'MARKET RENT DESCRIPTION': 'rental_area',
'TA': 'territory',
'AU NAME': 'au_name',
})
# Clean rental areas
def clean(x):
y = x.split(' - ')
y = y[1] if 'District' not in y[1] else y[0]
return y
g['rental_area'] = g['rental_area'].map(clean)
f = f.merge(g[['au2001', 'rental_area']])
path = hp.get_path('au2001_csv')
f.to_csv(path, index=False)
f.head()
Explanation: Prepare table of 2001 area units and rental area units
End of explanation
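The merge above joins the rental-area labels onto the area-unit table through the shared 'au2001' key. A toy check of that pattern (codes and names here are invented):

```python
import pandas as pd

au = pd.DataFrame({'au2001': ['500100', '500200'],
                   'au_name': ['Unit A', 'Unit B']})
ra = pd.DataFrame({'au2001': ['500100', '500200'],
                   'rental_area': ['Area X', 'Area Y']})

# pd.merge defaults to an inner join on the common column 'au2001'
merged = au.merge(ra[['au2001', 'rental_area']])
```

Selecting only `['au2001', 'rental_area']` from the right table keeps the join from dragging in duplicate columns.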
# Read Shapefile
path = hp.DATA_DIR/'collected'/'NZ_AU01_region_simplified'/'NZ_AU01_region.shp'
au = gpd.read_file(str(path))
au.crs = hp.CRS_NZGD49
au = au.to_crs(hp.CRS_WGS84)
au = au.rename(columns={'AU01': 'au2001', 'AU_DESC': 'au_name'})
print(au.shape)
print(au.head())
au.head().plot()
# Remove water area units
pattern = r'ocean|strait|inlet|harbour'
cond = au['au_name'].str.contains(pattern, case=False)
au = au[~cond].copy()
print(au.shape)
au.head().plot()
# Merge geodata and metadata, drop null regions, and write to file
f = hp.get_data('au2001_csv')
g = au.merge(f[['au2001', 'territory', 'region', 'rental_area']])
g = g[g['region'].notnull()].copy()
path = hp.get_path('au2001')
with path.open('w') as tgt:
tgt.write(g.to_json())
g.head()
Explanation: Process area units and rental areas into GeoJSON
End of explanation
# Dissolve area units by area unit group
au = get_data('au2001')
ra = au[['rental_area', 'region', 'territory', 'geometry']].dissolve(by='rental_area').reset_index()
path = hp.get_path('rental_areas')
with path.open('w') as tgt:
tgt.write(ra.to_json())
ra.head()
Explanation: Create geodata for rental areas
End of explanation
ra = hp.get_data('rental_areas')
t = hp.get_data('property_titles')
t.head()
# Spatial-join titles to rental areas
%time f = gpd.sjoin(t[['geometry', 'fid']], ra, op='intersects')
f.head()
# Choose representative points for rental areas
def pt(group):
d = {}
d['geometry'] = so.unary_union(group['geometry']).representative_point()
d['territory'] = group['territory'].iat[0]
d['region'] = group['region'].iat[0]
return pd.Series(d)
g = gpd.GeoDataFrame(f.groupby('rental_area').apply(pt).reset_index())
path = hp.get_path('rental_points')
with path.open('w') as tgt:
tgt.write(g.to_json())
g.head()
Explanation: Choose representative points for rental areas using approximate centroids of property titles
End of explanation
ra = hp.get_data('rental_areas')
rap = hp.get_data('rental_points')
for region in hp.REGIONS:
region_root = hp.DATA_DIR/region
if not region_root.exists():
region_root.mkdir()
region_c = region.capitalize()
# Rental areas slice
f = ra[ra['region'] == region_c].copy()
path = hp.get_path('rental_areas', region)
with path.open('w') as tgt:
tgt.write(f.to_json())
# Rental area points slice
f = rap[rap['region'] == region_c].copy()
path = hp.get_path('rental_points', region)
with path.open('w') as tgt:
tgt.write(f.to_json())
Explanation: Prepare regional slices of data
End of explanation
<END_TASK> |
15,683 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Accessing ncSOS with OWSLib
We have an ncSOS server with a get observation example that works
Step1: Now try setting time range via eventTime. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from owslib.sos import SensorObservationService
import pdb
from owslib.etree import etree
import pandas as pd
import datetime as dt
import numpy as np
url = 'http://sdf.ndbc.noaa.gov/sos/server.php?request=GetCapabilities&service=SOS&version=1.0.0'
ndbc = SensorObservationService(url)
# usgs woods hole
# buoy data (single current meter)
url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-AA.cdf'
usgs = SensorObservationService(url)
contents = usgs.contents
usgs.contents
off = usgs.offerings[1]
off.name
off.response_formats
off.observed_properties
off.procedures
# the get observation request below works. How can we recreate this using OWSLib?
# http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-A1H.cdf?service=SOS&version=1.0.0&request=GetObservation&responseFormat=text%2Fxml%3Bsubtype%3D%22om%2F1.0.0%22&offering=1211-A1H&observedProperty=u_1205&procedure=urn:ioos:station:gov.usgs:1211-A1H
#pdb.set_trace()
response = usgs.get_observation(offerings=['1211-AA'],
responseFormat='text/xml;subtype="om/1.0.0"',
observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'],
procedure='urn:ioos:station:gov.usgs:1211-AA')
print(response[0:4000])
# usgs woods hole ADCP data
# url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/9111aqd-a.nc'
# adcp = SensorObservationService(url)
root = etree.fromstring(response)
print(root)
# root.findall(".//{%(om)s}Observation" % root.nsmap )
values = root.find(".//{%(swe)s}values" % root.nsmap )
date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v))
for d,v in [l.split(',') for l in values.text.split()]] )
ts = pd.Series(date_value[:,1],index=date_value[:,0])
ts.plot(figsize=(12,4), grid='on');
Explanation: Accessing ncSOS with OWSLib
We have an ncSOS server with a get observation example that works:
http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-AA.cdf?service=SOS&version=1.0.0&request=GetObservation&responseFormat=text%2Fxml%3Bsubtype%3D%22om%2F1.0.0%22&offering=1211-AA&observedProperty=http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity&procedure=urn:ioos:station:gov.usgs.cmgp:1211-AA
But can we formulate, request and process this same query (and others like it) using OWSLib?
End of explanation
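The trickiest part of handling the response is unpacking the comma-separated &lt;swe:values&gt; payload into (timestamp, value) pairs with a nested comprehension. On a fabricated two-row payload the same parsing looks like this:

```python
import datetime as dt

# fabricated stand-in for the text of a <swe:values> element
raw = ("1977-01-03T00:00:00Z,1.5\n"
       "1977-01-03T01:00:00Z,-0.2")

pairs = [(dt.datetime.strptime(d, "%Y-%m-%dT%H:%M:%SZ"), float(v))
         for d, v in (line.split(',') for line in raw.split())]
```

`raw.split()` breaks the payload into whitespace-separated rows, and each row is then split once on the comma into its date and value halves.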
start = '1977-01-03T00:00:00Z'
stop = '1977-01-07T00:00:00Z'
response = usgs.get_observation(offerings=['1211-AA'],
responseFormat='text/xml;subtype="om/1.0.0"',
observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'],
procedure='urn:ioos:station:gov.usgs:1211-AA',
eventTime='{}/{}'.format(start,stop))
root = etree.fromstring(response)
# re-extract the <swe:values> block for this new response; otherwise the
# stale `values` from the previous query would be plotted
values = root.find(".//{%(swe)s}values" % root.nsmap )
date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v))
for d,v in [l.split(',') for l in values.text.split()]] )
ts = pd.Series(date_value[:,1],index=date_value[:,0])
ts.plot(figsize=(12,4), grid='on');
Explanation: Now try setting time range via eventTime.
End of explanation
<END_TASK> |
15,684 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
House Prices Estimator
Note
Step1: First problem
The training and test datasets have almost the same size so it's going to be difficult to get good predictions. Worst if we want to take a part of the training set as the validation dataset. It would be desirable to have more training samples.
Selecting only numeric columns (by now)
Probably the non numeric features have a lot of useful information, but for our study purposes we are only going to get the numeric features and see what happens and if we are able to get a good model predictor.
Step2: Find if there's null values
Step3: NaN values will be filled with the mean of the feature they belong. It's a good way to replace NaN values in order to avoid modifying the distribution of the feature but it could cheat the predicted results. We'll assume that.
Step4: Normalizing
This process is useful when we are dealing with neural networs. At this moment we don't know what kind of model we are going to implement, so the data will be normalized.
Also the normalization bring us the possibility of managing smaller numbers which improves the speed in the calculations.
Step5: The 'SalePrice' has a skewed graph. We can stabilize it applying a logarithmic operation because we know that all the values are positive.
Step6: Selecting good features...
There are a lot of features in this dataset so we are going to select only the most correlated features with the 'SalePrice'.
With the following query we can see that the ten first features have a good correlation.
Step7: Showing relationships
Step8: Splitting dataset in train and test
Now we have to recover the datasets for training and test from our current preprocessed dataset. We'll use the 'train_samples' variable that we saved in advance to split it into these two.
The KFold object is also defined and instantiated.
KFold
Step9: Anomaly Detection
Besides, it's important to take into account the possible anomalies we can find in the training dataset. For this reason, a small number of them will be identified and removed.
Step10: Polynomial Features
It could be interesting to extract more information from the interactions between the current features. We are only working with a small subset of the original features, so we can afford to generate polynomial features from them. Obviously that will increase the final number of features.
Step11: Models
*__Note__
Step12: Stacked Model
In order to improve the previous predictions we're going to create a simple stacked model with the following architecture
Step13: Averaged Model
Same as before, the averaged model tries to improve the prediction results. In this case the architecture is much simpler: the final prediction is the average of the outcomes of the LinearRegressor, GradientBoostingRegressor and AdaBoostRegressor.
Step14: Evaluation
Step15: Get Predictions | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
#load the files
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
data = pd.concat([train, test])
#size of training dataset
train_samples = train.shape[0]
#print some of them
data.head()
# remove the Id feature, because is not useful for price predictions
data.drop(['Id'],1, inplace=True);
data.info()
print("Size training: {}".format(train.shape[0]))
print("Size testing: {}".format(test.shape[0]))
Explanation: House Prices Estimator
Note: It's a competition from Kaggle.com and the input data was retrieved from there.
Data Analysis
First off we're going to load the training and test dataset. A 'data' variable is created from both datasets and the number of training samples are saved it for future uses.
End of explanation
datanum = data.select_dtypes([np.number])
Explanation: First problem
The training and test datasets have almost the same size, so it's going to be difficult to get good predictions. It is even worse if we want to set aside part of the training set as a validation dataset. It would be desirable to have more training samples.
Selecting only numeric columns (for now)
The non-numeric features probably carry a lot of useful information, but for our study purposes we are only going to keep the numeric features and see what happens and whether we can still get a good predictive model.
End of explanation
datanum.columns[datanum.isnull().any()].tolist()
#number of row without NaN
print(datanum.shape[0] - datanum.dropna().shape[0])
#list of columns with NaN
datanum.columns[datanum.isnull().any()].tolist()
Explanation: Find whether there are null values
End of explanation
#Filling with the mean
datanum_no_nan = datanum.fillna(datanum.dropna().mean())
#check
datanum_no_nan.columns[datanum_no_nan.isnull().any()].tolist()
Explanation: NaN values will be filled with the mean of the feature they belong to. It's a good way to replace NaN values in order to avoid distorting the distribution of the feature, but it could bias the predicted results. We'll accept that.
End of explanation
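Since filling with the mean can be distorted by outliers, a common alternative worth knowing is filling with the median. The following is a small self-contained sketch; the column name 'LotFrontage' and the values are illustrative assumptions, not taken from the actual dataset:

```python
import numpy as np
import pandas as pd

# Illustrative column with one NaN and one outlier (assumed values)
demo = pd.DataFrame({'LotFrontage': [60., 80., np.nan, 70., 300.]})
# Mean fill: the outlier 300 pulls the replacement value up
mean_filled = demo['LotFrontage'].fillna(demo['LotFrontage'].mean()).tolist()
# Median fill: robust to the outlier
med_filled = demo['LotFrontage'].fillna(demo['LotFrontage'].median()).tolist()
print(mean_filled)  # NaN becomes 127.5
print(med_filled)   # NaN becomes 75.0
```

Either strategy keeps the sample count intact; which one distorts the feature less depends on how heavy-tailed the feature is.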
import matplotlib.pyplot as plt
%matplotlib inline
# All numeric features except the 'SalePrice'
datanum_no_nan.drop(['SalePrice'], axis=1).head(30).plot(legend=False);
# SalePrice
datanum_no_nan['SalePrice'].head(30).plot(legend=False);
# Showing SalePrice distribution
data.SalePrice.hist(bins=50)
Explanation: Normalizing
This process is useful when we are dealing with neural networks. At this moment we don't know what kind of model we are going to implement, so the data will be normalized.
Also, normalization lets us work with smaller numbers, which speeds up the calculations.
End of explanation
# Transforming to non-skewed SalePrice
data.SalePrice = data.SalePrice.apply(np.log)
data.SalePrice.hist(bins=50)
#Squeeze the data using standard scaler: z = (x - mean)/ std
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
columns = datanum_no_nan.columns.drop('SalePrice')
print("Features: {}".format(columns))
#make a copy
data_norm = datanum_no_nan
data_norm[columns] = scaler.fit_transform(datanum_no_nan[columns])
print("Train shape: {}".format(data_norm.shape))
data_norm.drop(['SalePrice'], axis=1).head(30).plot(legend=False);
Explanation: The 'SalePrice' distribution is skewed. We can stabilize it by applying a logarithmic transformation, since we know that all the values are positive.
End of explanation
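A tiny self-contained illustration of why the log transform helps here (the prices below are synthetic, not the real SalePrice values): a single extreme price dominates the raw scale far more than the log scale.

```python
import numpy as np

# Synthetic right-skewed sample (assumed values for illustration)
prices = np.array([100000., 120000., 135000., 150000., 180000., 750000.])
logged = np.log(prices)
raw_ratio = prices.max() / prices.mean()   # ~3.1: the outlier dominates
log_ratio = logged.max() / logged.mean()   # ~1.1: much more balanced
print(round(raw_ratio, 2), round(log_ratio, 2))
```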
# Correlation features
data_norm.corr()['SalePrice'].sort_values(ascending=False).head(10)
high_corr_feat_names = data_norm.corr()['SalePrice'].sort_values(ascending=False).head(10).axes[0].tolist()
high_corr_feat_names.remove('SalePrice')
data_norm_high_corr = data_norm[high_corr_feat_names]
Explanation: Selecting good features...
There are a lot of features in this dataset, so we are going to select only the features most correlated with 'SalePrice'.
With the following query we can see that the first ten features have a good correlation.
End of explanation
#heatmap between the most correlated features
import seaborn as sns
fig = plt.figure(figsize=(7, 5))
sns.heatmap(data_norm_high_corr.corr());
#plotting distributions of numeric features
data_norm_high_corr.hist(bins=50, figsize=(22,16));
#Relationships between correlated features
for feature in high_corr_feat_names:
data.plot.scatter(feature, 'SalePrice');
Explanation: Showing relationships
End of explanation
from sklearn.model_selection import KFold
y = np.array(data['SalePrice'])
X = np.array(data_norm_high_corr)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
print("Shape X train: {}".format(X_train.shape))
print("Shape y train: {}".format(y_train.shape))
print("Shape X test: {}".format(X_test.shape))
print("Shape y test: {}".format(y_test.shape))
kf = KFold(n_splits=3, random_state=9, shuffle=True)
print(kf)
Explanation: Splitting dataset in train and test
Now we have to recover the datasets for training and test from our current preprocessed dataset. We'll use the 'train_samples' variable that we saved in advance to split it into these two.
The KFold object is also defined and instantiated.
KFold
End of explanation
#plotting PCA
from sklearn.decomposition import PCA
def getX_PCA(X):
pca = PCA(n_components=1)
return pca.fit(X).transform(X)
def plotPCA(X, y):
pca = PCA(n_components=1)
X_r = pca.fit(X).transform(X)
plt.plot(X_r, y, 'x')
from sklearn.covariance import EllipticEnvelope
# fit the model
ee = EllipticEnvelope(contamination=0.1,
assume_centered=True,
random_state=9)
ee.fit(X_train)
pred = ee.predict(X_train)
X_train_orig = X_train
y_train_orig = y_train
X_bad = X_train[pred != 1]
y_bad = y_train[pred != 1]
X_train = X_train[pred == 1]
y_train = y_train[pred == 1]
print("Number samples: {}".format(X_train.shape[0]))
#after removing anomalies
plt.scatter(getX_PCA(X_train), y_train)
plt.scatter(getX_PCA(X_bad), y_bad)
Explanation: Anomaly Detection
Besides, it's important to take into account the possible anomalies we can find in the training dataset. For this reason, a small number of them will be identified and removed.
End of explanation
# Get polynomial features
from sklearn.preprocessing import PolynomialFeatures
poly = preprocessing.PolynomialFeatures(degree=2)
X_train_orig = poly.fit_transform(X_train_orig)
X_train = poly.fit_transform(X_train)
X_test = poly.fit_transform(X_test)
Explanation: Polynomial Features
It could be interesting to extract more information from the interactions between the current features. We are only working with a small subset of the original features, so we can afford to generate polynomial features from them. Obviously that will increase the final number of features.
End of explanation
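To get a feel for that growth (a standalone sketch, not tied to the notebook's data): with the 9 selected features, degree-2 PolynomialFeatures emits C(9+2, 2) = 55 columns, i.e. 1 bias term, 9 linear terms and 45 quadratic terms (squares plus pairwise products).

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X_demo = np.random.RandomState(0).rand(5, 9)           # 5 samples, 9 features
X_demo_poly = PolynomialFeatures(degree=2).fit_transform(X_demo)
print(X_demo.shape, '->', X_demo_poly.shape)           # (5, 9) -> (5, 55)
```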
# Linear regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
lr = LinearRegression()
#
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
lr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, lr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, lr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, lr.score(X_v, y_v)))
batch += 1
#Scores
print("Training score: {:.4f}".format(lr.score(X_train_orig, y_train_orig)))
#RMSLE
rmsle = mean_squared_error(y_train_orig, lr.predict(X_train_orig))**0.5
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(lr.predict(X_train_orig), y_train_orig)
# Gradient boosting
from sklearn import ensemble
params = {'n_estimators': 100, 'max_depth': X_train.shape[1], 'min_samples_split': 5,
'learning_rate': 0.1, 'loss': 'ls', 'random_state':9, 'warm_start':True}
gbr = ensemble.GradientBoostingRegressor(**params)
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
gbr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, gbr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, gbr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, gbr.score(X_v, y_v)))
batch += 1
#Scores
print("Training score: {:.4f}".format(gbr.score(X_train_orig, y_train_orig)))
#RMSLE
rmsle = mean_squared_error(y_train_orig, gbr.predict(X_train_orig))**0.5
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(gbr.predict(X_train_orig), y_train_orig)
# AdaBoost
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
abr = AdaBoostRegressor(DecisionTreeRegressor(max_depth=X_train.shape[1]),
n_estimators=100, random_state=9)
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
abr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, abr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, abr.predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, abr.score(X_v, y_v)))
batch += 1
#Scores
print("Training score: {:.4f}".format(abr.score(X_train_orig, y_train_orig)))
#RMSLE
rmsle = mean_squared_error(y_train_orig, abr.predict(X_train_orig))**0.5
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(abr.predict(X_train_orig), y_train_orig)
Explanation: Models
*__Note__: a logarithmic function has been applied to 'SalePrice', so taking the square root of the mean squared error directly gives us the RMSLE.
End of explanation
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
sclr = preprocessing.StandardScaler()
def features_level1(X):
X0 = lr.predict(X)
X1 = gbr.predict(X)
X2 = abr.predict(X)
Xt = np.array([X0, X1, X2]).T
return sclr.fit_transform(Xt)
def stack_training(X, y):
slr.fit(features_level1(X), y)
def stack_predict(X):
return slr.predict(features_level1(X))
def stack_score(X, y):
return slr.score(features_level1(X), y)
#
batch = 0
kf = KFold(n_splits=3, random_state=9)
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
stack_training(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, stack_predict(X_t))**0.5
v_error = mean_squared_error(y_v, stack_predict(X_v))**0.5
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, stack_score(X_v, y_v)))
batch += 1
rmsle = mean_squared_error(y_train_orig, stack_predict(X_train_orig))**0.5
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(stack_predict(X_train_orig), y_train_orig)
Explanation: Stacked Model
In order to improve the previous predictions we're going to create a simple stacked model with the following architecture:
Level 1 is composed of the previously trained LinearRegressor, GradientBoostingRegressor and AdaBoostRegressor
The outcome of this level1 layer is passed to a LinearRegressor.
End of explanation
def avg_predict(X):
return (lr.predict(X) + gbr.predict(X) + abr.predict(X))/3
predictions = avg_predict(X_train_orig)
RMSLE = mean_squared_error(y_train_orig, predictions)**0.5
print("RMSLE: {:.3f}".format(RMSLE))
# Plotting the results
plt.scatter(avg_predict(X_train_orig), y_train_orig)
Explanation: Averaged Model
Same as before, the averaged model tries to improve the prediction results. In this case the architecture is much simpler: the final prediction is the average of the outcomes of the LinearRegressor, GradientBoostingRegressor and AdaBoostRegressor.
End of explanation
from sklearn.metrics import mean_squared_error
import random
RMSLE_lr = mean_squared_error(y_train, lr.predict(X_train))**0.5
RMSLE_gbr = mean_squared_error(y_train, gbr.predict(X_train))**0.5
RMSLE_abr = mean_squared_error(y_train, abr.predict(X_train))**0.5
RMSLE_avg = mean_squared_error(y_train, stack_predict(X_train))**0.5
RMSLE_stack = mean_squared_error(y_train, avg_predict(X_train))**0.5
print("RMSLE lr: {:.3f}".format(RMSLE_lr))
print("RMSLE gbr: {:.3f}".format(RMSLE_gbr))
print("RMSLE abr: {:.3f}".format(RMSLE_abr))
print("RMSLE average: {:.3f}".format(RMSLE_avg))
print("RMSLE stacked: {:.3f}".format(RMSLE_stack))
Explanation: Evaluation
End of explanation
import os
predict = avg_predict(X_test)
#predict = stack_predict(X_test)
#predict = lr.predict(X_test)
#predictions are logs, return to the value
predict = np.exp(predict)
file = "Id,SalePrice" + os.linesep
startId = 1461
for i in range(len(X_test)):
file += "{},{}".format(startId, (int)(predict[i])) + os.linesep
startId += 1
#print(file)
# Save to file
with open('attempt.txt', 'w') as f:
f.write(file)
Explanation: Get Predictions
End of explanation
<END_TASK> |
15,685 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to pandas 1
About the data
Step1: Creating the data
A list names holding the five most common names among children born in 1880, and
data births holding the number of children registered at birth under each name, are given as follows.
Step2: We use the zip function to combine the two lists, pairing each name with its count.
The return value of the zip function is an object of the zip class.
A zip object is similar to a list of pairs.
However, unlike a list, you cannot select items directly using an index.
A zip object is an iterable.
That is, it can be used with a for loop, for example, to work through each item in order.
Step3: Converting a zip object to a list lets you use it as a list of pairs, and that is
the feature we use here.
Step4: Note
Step5: Checking the data frame stored in df gives the following.
You can see that Names and Births, passed via columns, were set as the labels of the columns.
The first row is therefore called the header.
The first column, on the other hand, holds automatically generated row numbers and is called the index.
Step6: Let's save this data frame as a csv file named births1880.csv.
The to_csv method of a data frame object converts the data frame to a csv file and saves it.
It is saved in the same directory as the data frame object.
It is recommended to first check the information about the to_csv method with the command below.
help(df.to_csv)
Besides the name of the file to be saved, the to_csv method takes two more keyword arguments, index and header.
They determine whether the index and the header row are written along with the data.
Step7: Loading the data
To load the csv file, we use the read_csv function of the pandas module.
Let's look at the read_csv function.
help(read_csv)
The read_csv function accepts many arguments, but here we only use the location of the csv file.
For the remaining arguments, the defaults specified as keyword arguments are used.
Step8: Checking df, it looks similar to the original data.
Step9: There is one problem.
The read_csv function used the first line of the csv file as the header.
To fix this, let's set the header parameter of read_csv to None.
In Python, None means a null value.
Step10: Because no names were specified for the column labels, the default indices 0, 1, 2, and so on were used.
If you want to use specific names for the columns, use the names parameter.
In that case, the header parameter can be omitted.
Step11: The row numbers 0, 1, 2, 3, 4 serve as the index that a data frame object provides by default.
Note
Step12: Data cleaning
The data consists of the names and birth counts of children born in 1880,
has five records, that is, five rows, and no missing values.
In other words, all of the data is complete.
Sometimes, however, a column contains heterogeneous data, or data that should be there is missing, in other words there are missing values.
When that happens, data analysis does not proceed properly.
So we must first check whether data of different types share the same column,
or whether there are missing values, and work out how to deal with them.
To check whether a column consists of a single data type,
look at the dtypes attribute of the data frame object.
Step13: The result above tells us the following.
The dtype of the first column, labeled Names, is object.
object is the top-level class provided by Python.
That is, values of any type may be used in the first column.
The dtype of the second column, labeled Births, is int64.
int64 denotes a 64-bit integer type.
That is, only integers may be used in the second column.
For example, using a non-integer type such as a floating-point number (float) or a string raises an error.
If you want to know the type of just the Births column rather than all columns, write code like the following.
Step14: Data analysis
For example, to find the most popular name, that is, the name with the highest birth count,
we can use one of the following two approaches.
Method 1
Step15: Now we just check the first row.
Step16: Method 2
Step17: Data visualization
The data we have handled so far is only five rows long.
So we can tell at a glance that the most popular name in 1880 was Mel.
But as soon as the data gets even a little bigger, such information can no longer be picked out by eye.
To let anyone obtain the desired information easily, it is therefore
very important to communicate data through visualization.
A data frame object provides a visualization method called plot().
df['Names']
Step18: So to get the Births column, issue the command below.
Step19: Now, using df['Births'].max(), which we used earlier to find the maximum birth count,
we can find every row in which the maximum birth count occurs.
Just do the following.
Step20: According to the result above, the maximum birth count of 973 found earlier occurs only in the row with index 4, that is, the fifth row.
In other words, the name Mel alone was the name most often given to newborns.
The command below gives the same result.
It sorts in descending order and then shows only the top 1 row.
Step21: We can also add richer information to the visualization.
The program below is an example showing how to put various pieces of text on the graph.
Python Code:
# Import the DataFrame and read_csv functions from the pandas module
from pandas import DataFrame, read_csv
# Import the matplotlib.pyplot and pandas modules under the aliases plt and pd
import matplotlib.pyplot as plt
import pandas as pd
# Code used to display graphs directly in a Jupyter notebook
# Not used in a standalone Python editor
%matplotlib inline
Explanation: Introduction to pandas 1
About the data:
This notebook covers the content of 01-Lesson from Lessons for new pandas users
in the pandas library tutorials.
End of explanation
# Lists of baby names and birth-registration counts
names = ['Bob', 'Jessica', 'Mary', 'John', 'Mel']
births = [968, 155, 77, 578, 973]
Explanation: Creating the data
A list names holding the five most common names among children born in 1880, and
data births holding the number of children registered at birth under each name, are given as follows.
End of explanation
for item in zip(names, births):
name, num = item
    print("The name", name, "was registered for", num, "children.")
Explanation: We use the zip function to combine the two lists, pairing each name with its count.
The return value of the zip function is an object of the zip class.
A zip object is similar to a list of pairs.
However, unlike a list, you cannot select items directly using an index.
A zip object is an iterable.
That is, it can be used with a for loop, for example, to work through each item in order.
End of explanation
BabyDataSet = list(zip(names, births))
print(BabyDataSet)
Explanation: Converting a zip object to a list lets you use it as a list of pairs, and that is
the feature we use here.
End of explanation
df = pd.DataFrame(data = BabyDataSet, columns = ['Names', 'Births'])
Explanation: Note: information about the zip function can be checked with the command below.
help(zip)
We used zip to pair up the names and the counts. For data analysis, however,
it is more useful to work with the DataFrame object from the pandas module.
Transforming BabyDataSet into a data frame object gives a table like the
spreadsheet used in an Excel file.
End of explanation
df
Explanation: Checking the data frame stored in df gives the following.
You can see that Names and Births, passed via columns, were set as the labels of the columns.
The first row is therefore called the header.
The first column, on the other hand, holds automatically generated row numbers and is called the index.
End of explanation
df.to_csv('births1880.csv', index = False, header = False)
Explanation: Let's save this data frame as a csv file named births1880.csv.
The to_csv method of a data frame object converts the data frame to a csv file and saves it.
It is saved in the same directory as the data frame object.
It is recommended to first check the information about the to_csv method with the command below.
help(df.to_csv)
Besides the name of the file to be saved, the to_csv method takes two more keyword arguments, index and header.
They determine whether the index and the header row are written along with the data.
End of explanation
Location = 'births1880.csv'
df = pd.read_csv(Location)
Explanation: Loading the data
To load the csv file, we use the read_csv function of the pandas module.
Let's look at the read_csv function.
help(read_csv)
The read_csv function accepts many arguments, but here we only use the location of the csv file.
For the remaining arguments, the defaults specified as keyword arguments are used.
End of explanation
df
Explanation: Checking df, it looks similar to the original data.
End of explanation
df = pd.read_csv(Location, header=None)
df
Explanation: There is one problem.
The read_csv function used the first line of the csv file as the header.
To fix this, let's set the header parameter of read_csv to None.
In Python, None means a null value.
End of explanation
df = pd.read_csv(Location, names=['Names','Births'])
df
Explanation: Because no names were specified for the column labels, the default indices 0, 1, 2, and so on were used.
If you want to use specific names for the columns, use the names parameter.
In that case, the header parameter can be omitted.
End of explanation
import os
os.remove('births1880.csv')
Explanation: The row numbers 0, 1, 2, 3, 4 serve as the index that a data frame object provides by default.
Note: the same index value can appear multiple times.
Finally, let's delete the csv file we have used so far. It is no longer needed.
End of explanation
df.dtypes
Explanation: Data cleaning
The data consists of the names and birth counts of children born in 1880,
has five records, that is, five rows, and no missing values.
In other words, all of the data is complete.
Sometimes, however, a column contains heterogeneous data, or data that should be there is missing, in other words there are missing values.
When that happens, data analysis does not proceed properly.
So we must first check whether data of different types share the same column,
or whether there are missing values, and work out how to deal with them.
To check whether a column consists of a single data type,
look at the dtypes attribute of the data frame object.
End of explanation
df.Births.dtype
Explanation: The result above tells us the following.
The dtype of the first column, labeled Names, is object.
object is the top-level class provided by Python.
That is, values of any type may be used in the first column.
The dtype of the second column, labeled Births, is int64.
int64 denotes a 64-bit integer type.
That is, only integers may be used in the second column.
For example, using a non-integer type such as a floating-point number (float) or a string raises an error.
If you want to know the type of just the Births column rather than all columns, write code like the following.
End of explanation
Sorted = df.sort_values(['Births'], ascending=False)
Explanation: Data analysis
For example, to find the most popular name, that is, the name with the highest birth count,
we can use one of the following two approaches.
Method 1: sort in descending order by the second column, then select the first row
Method 2: apply the max() function to the second column
Method 1: sorting in descending order by a specific column
End of explanation
Sorted.head(1)
Explanation: Now we just check the first row.
End of explanation
df['Births'].max()
Explanation: Method 2: applying the max() method to a specific column
End of explanation
df['Births'].plot()
plt.xlabel("Time")
plt.show()
Explanation: Data visualization
The data we have handled so far is only five rows long.
So we can tell at a glance that the most popular name in 1880 was Mel.
But as soon as the data gets even a little bigger, such information can no longer be picked out by eye.
To let anyone obtain the desired information easily, it is therefore
very important to communicate data through visualization.
A data frame object provides a visualization method called plot().
df['Names']: refers to the Names column of the data frame object stored in df.
End of explanation
df['Births']
Explanation: So to get the Births column, issue the command below.
End of explanation
df['Names'][df['Births'] == df['Births'].max()]
Explanation: Now, using df['Births'].max(), which we used earlier to find the maximum birth count,
we can find every row in which the maximum birth count occurs.
Just do the following.
End of explanation
Sorted['Names'].head(1)
Explanation: According to the result above, the maximum birth count of 973 found earlier occurs only in the row with index 4, that is, the fifth row.
In other words, the name Mel alone was the name most often given to newborns.
The command below gives the same result.
It sorts in descending order and then shows only the top 1 row.
End of explanation
# Create the graph
df['Births'].plot()
# Maximum value in the dataset
MaxValue = df['Births'].max()
# Find the name with the maximum value
MaxName = df['Names'][df['Births'] == df['Births'].max()].values
# Text to show on the graph
Text = str(MaxValue) + " - " + MaxName
# Add the text to the graph
plt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0),
xycoords=('axes fraction', 'data'), textcoords='offset points')
plt.show()
Explanation: We can also add richer information to the visualization.
The program below is an example showing how to put various pieces of text on the graph.
End of explanation
<END_TASK> |
15,686 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Gradient Descent Visualization
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Specify the function to minimize as a simple python function.<br>
We have implemented some test functions that can be selected using the function selector, however, you are free to implement your own functions.<br>
Right now, we have implemented the following functions
Step2: Plot the function and its derivative
Step3: Simple gradient descent strategy using only sign of the derivative
Carry out the simple gradient descent strategy by using only the sign of the gradient
\begin{equation}
x_i = x_{i-1} - \epsilon\cdot \mathrm{sign}(f^\prime(x_{i-1}))
\end{equation}
Step4: Gradient descent
Carry out the final gradient descent strategy, which is given by
\begin{equation}
x_i = x_{i-1} - \epsilon\cdot f^\prime(x_{i-1})
\end{equation}
Step5: Here, we provide an interactive tool so you can play around with the parameters of gradient descent yourself.
Python Code:
import importlib
autograd_available = True
# if automatic differentiation is available, use it
try:
import autograd
except ImportError:
autograd_available = False
pass
if autograd_available:
import autograd.numpy as np
from autograd import elementwise_grad as egrad
else:
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
if autograd_available:
print('Using autograd to compute gradients')
else:
print('Using hand-calculated gradient')
Explanation: Gradient Descent Visualization
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* Gradient descent with fixed step size
* Interactive visualization of influence of step size
End of explanation
function_select = 3
def myfun(x):
functions = {
1: 0.5*x**2,
2: 0.5*x**3,
3: x**2+x**3
}
return functions.get(function_select)
if autograd_available:
gradient = egrad(myfun)
else:
def gradient(x):
functions = {
1: x,
2: 1.5*x**2,
3: 2*x+3*x**2
}
return functions.get(function_select)
Explanation: Specify the function to minimize as a simple python function.<br>
We have implemented some test functions that can be selected using the function selector, however, you are free to implement your own functions.<br>
Right now, we have implemented the following functions:
1. $\frac{1}{2}x^2$, which is convex and has a global minimum at $x=0$
2. $\frac{1}{2}x^3$, which has no global minimum, but an inflection point at $x=0$
3. $x^2+x^3$, which has a minimum at $x=0$ and a maximum at $x=-\frac{2}{3}$
The derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun
End of explanation
x = np.linspace(-3,3,100)
fy = myfun(x)
gy = gradient(x)
plt.figure(1,figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.plot(x,fy,x,gy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y")
plt.legend(["$f(x)$","$f^\prime(x)$"])
plt.show()
Explanation: Plot the function and its derivative
End of explanation
epsilon = 0.5
start = 3.75
points = []
while abs(gradient(start)) > 1e-8 and len(points) < 50:
points.append( (start,myfun(start)) )
start = start - epsilon*np.sign(gradient(start))
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 14})
plt.subplot(1,2,1)
plt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')
plt.plot(x,fy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y=f(x)")
plt.subplot(1,2,2)
plt.plot(range(0,len(points)),list(zip(*points))[0])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("x_i")
plt.show()
Explanation: Simple gradient descent strategy using only sign of the derivative
Carry out the simple gradient descent strategy by using only the sign of the gradient
\begin{equation}
x_i = x_{i-1} - \epsilon\cdot \mathrm{sign}(f^\prime(x_{i-1}))
\end{equation}
End of explanation
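To see why this rule cannot converge, here is a minimal plain-Python sketch using the same epsilon = 0.5 and start = 3.75 as above, but on function 1, f(x) = 0.5*x**2 (so f'(x) = x): the iterate marches toward the minimum in fixed steps of epsilon and then bounces around it forever.

```python
eps, x = 0.5, 3.75
xs = [x]
for _ in range(10):
    step = 1.0 if x > 0 else -1.0   # sign(f'(x)) with f'(x) = x
    x = x - eps * step
    xs.append(x)
print(xs[-4:])   # [0.25, -0.25, 0.25, -0.25] -- oscillation, not convergence
```

This is exactly the behavior visible in the scatter plot produced by the cell above: the step size never shrinks, so the final accuracy is limited by epsilon.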
epsilon = 0.01
start = 3.75
points = []
while abs(gradient(start)) > 1e-8 and len(points) < 500:
points.append( (start,myfun(start)) )
start = start - epsilon*gradient(start)
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 14})
plt.subplot(1,2,1)
plt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')
plt.plot(x,fy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y=f(x)")
plt.subplot(1,2,2)
plt.plot(range(0,len(points)),list(zip(*points))[0])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("x_i")
plt.show()
Explanation: Gradient descent
Carry out the final gradient descent strategy, which is given by
\begin{equation}
x_i = x_{i-1} - \epsilon\cdot f^\prime(x_{i-1})
\end{equation}
End of explanation
def interactive_gradient_descent(start,epsilon, maximum_steps, xmin, xmax):
points = []
# assume 1e-10 is about zero
while abs(gradient(start)) > 1e-10 and len(points) < maximum_steps:
points.append( (start,myfun(start)) )
start = start - epsilon*gradient(start)
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 14})
plt.subplot(1,2,1)
plt.scatter(list(zip(*points))[0],list(zip(*points))[1],c=range(len(points),0,-1),cmap='gray',s=40,edgecolors='k')
px = np.linspace(xmin,xmax,1000)
pfy = myfun(px)
plt.plot(px,pfy)
plt.autoscale(enable=True,tight=True)
plt.xlim(xmin,xmax)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y=f(x)")
plt.subplot(1,2,2)
plt.plot(range(0,len(points)),list(zip(*points))[0])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("x_i")
plt.show()
epsilon_values = np.arange(0.0,0.1,0.0001)
style = {'description_width': 'initial'}
interactive_update = interactive(interactive_gradient_descent, \
epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.01, continuous_update=False,description='epsilon',layout=widgets.Layout(width='50%'),style=style), \
start = widgets.FloatSlider(min=-5.0,max=5.0,step=0.0001,value=3.7, continuous_update=False, description='Start x', layout=widgets.Layout(width='75%'), style=style), \
maximum_steps = widgets.IntSlider(min=20, max=500, value= 200, continuous_update=False, description='Number steps',layout=widgets.Layout(width='50%'),style=style), \
xmin = widgets.FloatSlider(min=-10, max=0, step=0.1, value=-5, continuous_update=False, description='Plot negative x limit',layout=widgets.Layout(width='50%'), style=style), \
xmax = widgets.FloatSlider(min=0, max=10, step=0.1, value=5, continuous_update=False, description='Plot positive x limit',layout=widgets.Layout(width='50%'),style=style))
output = interactive_update.children[-1]
output.layout.height = '400px'
interactive_update
Explanation: Here, we provide an interactive tool so you can play around with the parameters of gradient descent yourself.
End of explanation
<END_TASK> |
15,687 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h2>Introducing miscellaneous utilities in AWOT.</h2>
<h4>This notebook will grow over time as utilities are added and I have time to update.</h4>
Step1: <b> First we'll need some data to interact with</b>
Step2: <b>Read in the radar data</b>
Step3: <h3>Data instances</h3>
<b>Read a variable from another file and add it to the AWOT dictionary. A mask of invalid data is automatically applied. Additional masking can be accomplished by setting the <i>mask_value</i> keyword.</b>
Step4: <b>Just as in the plotting routines, time can be subset using a date string. But maybe you want to just return a subsetted dictionary for use. The <i>time_subset_awot_dict</i> function does this.</b>
Step5: <b>AWOT uses Py-ART to read many radar files. Therefore we can read them through AWOT.</b>
Step6: <b> By changing the <i>map_to_awot</i> key we can convert the Py-ART radar instance to an AWOT radar instance. Note this is the DEFAULT behavior to make working with the AWOT package a bit easier.</b>
Step7: <b>An experimental KMZ file creation is available for flight data.</b>
Step8: <b>Now we can create a KMZ file of the track. This saves a KMZ file to the current working directory if no path is specified.</b>
Python Code:
# Load the needed packages
import os
import matplotlib.pyplot as plt
import numpy as np
from netCDF4 import Dataset
import awot
%matplotlib inline
Explanation: <h2>Introducing miscellaneous utilities in AWOT.</h2>
<h4>This notebook will grow over time as utilities are added and I have time to update.</h4>
End of explanation
# Released data file
wcrf1 = os.path.join("/Users/guy/data/king_air/owles2013/wcr", "WCR.OWLES13.20131215.225944_234806.up-down.nc")
# Supplementary file with corrected velocity data
wcrf2 = os.path.join("/Users/guy/data/king_air/owles2013/wcr/", "W-CORRECTED.WCR.OWLES13.20131215.225944_234806.up-down.nc")
Explanation: <b> First we'll need some data to interact with</b>
End of explanation
wcr = awot.io.read_wcr2(fname=wcrf1)
Explanation: <b>Read in the radar data</b>
End of explanation
nc = Dataset(wcrf2)
velcor = nc.variables['Velocity_cor_2']
awot.util.add_dict_to_awot_fields(wcr, 'velocity_corrected', data=velcor[:],
units=velcor.units, longname=velcor.long_name, stdname="Corrected velocity")
print(wcr['fields']['velocity']['data'].shape, wcr['fields']['velocity_corrected']['data'].shape)
print(np.ma.min(wcr['fields']['velocity']['data']), np.ma.max(wcr['fields']['velocity']['data']))
print(np.ma.min(wcr['fields']['velocity_corrected']['data']), np.ma.max(wcr['fields']['velocity_corrected']['data']))
Explanation: <h3>Data instances</h3>
<b>Read a variable from another file and add it to the AWOT dictionary. A mask of invalid data is automatically applied. Additional masking can be accomplished by setting the <i>mask_value</i> keyword.</b>
End of explanation
start_time = "2013-12-15 23:05:00"
end_time = "2013-12-15 23:15:00"
# Create subsets of arrays
refsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['fields']['reflectivity'],
start_time, end_time)
velsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['fields']['velocity'],
start_time, end_time)
altsub = awot.util.time_subset_awot_dict(wcr['time'], wcr['altitude'],
start_time, end_time)
print(wcr['fields']['reflectivity']['data'].shape, refsub['data'].shape)
print(wcr['fields']['velocity']['data'].shape, velsub['data'].shape)
print(wcr['altitude']['data'].shape, altsub['data'].shape)
Explanation: <b>Just as in the plotting routines, time can be subset using a date string. But maybe you want to just return a subsetted dicationary for use. The <i>time_subset_awot_dict</i> function does this.</b>
End of explanation
nexf = os.path.join("/Users/guy/data/nexrad/KILN/nex2/20140429", "KILN20140430_004708_V06")
rnex = awot.io.read_ground_radar(nexf, map_to_awot=False)
rnex.fields.keys()
Explanation: <b>AWOT uses Py-ART to read many radar files. Therefore we can read through AWOT.</b>
End of explanation
rnex = awot.io.read_ground_radar(nexf, map_to_awot=True)
rnex.keys()
Explanation: <b> By changing the <i>map_to_awot</i> key we can convert the Py-ART radar instance to an AWOT radar instance. Note this is the DEFAULT behavior to make working with the AWOT package a bit easier.</b>
End of explanation
flname = os.path.join("/Users/guy/data/king_air/pecan2015", "20150716.c1.nc")
fl1 = awot.io.read_netcdf(fname=flname, platform='uwka')
Explanation: <b>An experimental KMZ file creation is available for flight data.</b>
End of explanation
awot.util.write_track_kmz(fl1, 'altitude', show_legend=False, end_time="2016-01-01T00:00:00")
Explanation: <b>Now we can create a KMZ file of the track. This saves a KMZ file to current working directory if not specified.</b>
End of explanation
<END_TASK> |
15,688 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PDE
The acoustic wave equation for the square slowness m and a source q is given in 3D by
Step1: Time and space discretization as a Taylor expansion.
The time discretization is define as a second order ( $ O (dt^2)) $) centered finite difference to get an explicit Euler scheme easy to solve by steping in time.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{u(x,t+dt) - 2 u(x,t) + u(x,t-dt)}{dt^2} + O(dt^2) $
And we define the space discretization also as a Taylor serie, with oder chosen by the user. This can either be a direct expansion of the second derivative bulding the laplacian, or a combination of first oder space derivative. The second option can be a better choice in case you would want to extand the method to more complex wave equations involving first order derivatives in chain only.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{1}{dx^2} \sum_k \alpha_k (u(x+k dx,t)+u(x-k dx,t)) + O(dx^k) $
Step2: Solve forward in time
The wave equation with absorbing boundary conditions writes
$ \eta \frac{d u(x,t)}{dt} + m \frac{d^2 u(x,t)}{dt^2} - \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) =q $
and the adjont wave equation
$ -\eta \frac{d u(x,t)}{dt} + m \frac{d^2 u(x,t)}{dt^2} - \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) =q $
where $ \eta$ is a damping factor equal to zero inside the physical domain and decreasing inside the absorbing layer from the pysical domain to the border
And in order to simplify (kinda, I just like it that way) we will rewrite
$ \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) = \nabla^2 u(x,t) + \rho \text{grad}(\frac{1}{\rho}) . \text{grad}(u(x,t)) $
Step3: Define the discrete model
Step4: Create functions for the PDE
The Gradient/Born are here so that everything is at the correct place, it is described later
Step5: A Forward propagation example
Step6: Adjoint test
In ordr to guaranty we have the gradient we need to make sure that the solution of the adjoint wave equation is indeed the true adjoint. Tod os so one should check that
$ <Ax,y> - <x,A^Ty> = 0$
where $A$ is the wave_equation, $A^T$ is wave_equationA and $x,y$ are any random vectors in the range of each operator. This can however be expensive as this two vector would be of size $N * n_t$. To test our operator we will the relax this test by
$ <P_r A P_s^T x,y> - <x,P_SA^TP_r^Ty> = 0$
where $P_r , P_s^T$ are the source and recevier projection operator mapping the source and receiver locations and times onto the full domain. This allow to have only a random source of size $n_t$ at a random postion.
Step7: Least square objective Gradient
We will consider here the least square objective, as this is the one in need of an adjoint. The test that will follow are however necessary for any objective and associated gradient in a optimization framework. The objective function can be written
$ min_m \Phi(m)
Step8: Adjoint test for the gradient
The adjoint of the FWI Gradient is the Born modelling operator, implementing a double propagation forward in time with a wavefield scaled by the model perturbation for the second propagation
$ J dm = - A^{-1}(\frac{d A^{-1}q}{dt^2}) dm $
Step9: Jacobian test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ U(m + hdm) = U(m) + \mathcal{O} (h) \
U(m + h dm) = U(m) + h J[m]dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $U(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $U(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
Step10: Gradient test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ \Phi(m + hdm) = \Phi(m) + \mathcal{O} (h) \
\Phi(m + h dm) = \Phi(m) + h (J[m]^T\delta |d)dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $\Phi(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $\Phi(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above | <ASSISTANT_TASK:>
Python Code:
p=Function('p')
b=Function('b')
m,s,h,r = symbols('m s h r')
m=M(x,y)
q=Q(x,y,t)
d=D(x,y,t)
e=E(x,y)
r=rho(x,y)
Explanation: PDE
The acoustic wave equation for the square slowness m and a source q is given in 3D by :
\begin{cases}
&m \frac{d^2 u(x,t)}{dt^2} - \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) =q \
&u(.,0) = 0 \
&\frac{d u(x,t)}{dt}|_{t=0} = 0
\end{cases}
with the zero initial conditons to guaranty unicity of the solution
End of explanation
dtt=as_finite_diff(p(x,y,t).diff(t,t), [t-s,t, t+s])
dt=as_finite_diff(p(x,y,t).diff(t), [t-s, t+s])
# Spacial finite differences can easily be extended to higher order by increasing the list of sampling point in the next expression.
# Be sure to keep this stencil symmetric and everything else in the notebook will follow.
dxx=as_finite_diff(p(x,y,t).diff(x,x), [x-h,x, x+h])
dyy=as_finite_diff(p(x,y,t).diff(y,y), [y-h,y, y+h])
dy=as_finite_diff(p(x,y,t).diff(y), [y-h, y+h])
dx=as_finite_diff(p(x,y,t).diff(x), [x-h, x+h])
dyr=as_finite_diff(b(x,y).diff(y), [y-h, y+h])
dxr=as_finite_diff(b(x,y).diff(x), [x-h, x+h])
dtt,dxx,dyy,dt,dx,dy
# In practice, compute rho. grad(rho) upfront (this is a constant) and just compute the following expression, it avoids
# the recomputation of the gradient at every time step. int the following X and Y are the component of rho.grad(rho) in X and Y respectively
gradgrad=X*dx+Y*dy
expand(gradgrad)
lap=(dxx+dyy)
expand(lap + gradgrad)
Explanation: Time and space discretization as a Taylor expansion.
The time discretization is define as a second order ( $ O (dt^2)) $) centered finite difference to get an explicit Euler scheme easy to solve by steping in time.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{u(x,t+dt) - 2 u(x,t) + u(x,t-dt)}{dt^2} + O(dt^2) $
And we define the space discretization also as a Taylor serie, with oder chosen by the user. This can either be a direct expansion of the second derivative bulding the laplacian, or a combination of first oder space derivative. The second option can be a better choice in case you would want to extand the method to more complex wave equations involving first order derivatives in chain only.
$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{1}{dx^2} \sum_k \alpha_k (u(x+k dx,t)+u(x-k dx,t)) + O(dx^k) $
End of explanation
# Forward wave equation
wave_equation = m*dtt- (dxx+dyy)- r*(dxr*dx+dyr*dy) - q + e*dt
stencil = solve(wave_equation,p(x,y,t+s))[0]
ts=lambdify((p(x,y,t-s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t),b(x-h,y), b(x,y), b(x+h,y),b(x,y-h), b(x,y+h), q , m,r, s, h,e),stencil,"numpy")
stencil
# Adjoint wave equation
wave_equationA = m*dtt- (dxx+dyy) - r*(dxr*dx+dyr*dy) - d - e*dt
stencilA = solve(wave_equationA,p(x,y,t-s))[0]
tsA=lambdify((p(x,y,t+s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t),b(x-h,y), b(x,y), b(x+h,y),b(x,y-h), b(x,y+h), d, m,r, s, h,e),stencilA,"numpy")
stencilA
Explanation: Solve forward in time
The wave equation with absorbing boundary conditions writes
$ \eta \frac{d u(x,t)}{dt} + m \frac{d^2 u(x,t)}{dt^2} - \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) =q $
and the adjont wave equation
$ -\eta \frac{d u(x,t)}{dt} + m \frac{d^2 u(x,t)}{dt^2} - \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) =q $
where $ \eta$ is a damping factor equal to zero inside the physical domain and decreasing inside the absorbing layer from the pysical domain to the border
And in order to simplify (kinda, I just like it that way) we will rewrite
$ \rho \nabla(\frac{1}{\rho} \text{grad}(u(x,t)) = \nabla^2 u(x,t) + \rho \text{grad}(\frac{1}{\rho}) . \text{grad}(u(x,t)) $
End of explanation
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=600 #simulate until
xmin=-500.0 - 10*hstep #left bound
xmax=500.0 + 10*hstep #right bound...assume packet never reaches boundary
ymin=-500.0 - 10*hstep #left bound
ymax=500.0 + 10*hstep #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on x grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=-400
ysrc=0.0
xrec = nbpml+4
#set source as Ricker wavelet for f0
def source(x,y,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
# Velocity and density models
def smooth10(vel,nx,ny):
out=np.ones((nx,ny))
out[:,:]=vel[:,:]
for a in range(5,nx-6):
out[a,:]=np.sum(vel[a-5:a+5,:], axis=0) /10
return out
# True velocity
vel=np.ones((nx,ny)) + 2.0
rho=np.ones((nx,ny))
vel[floor(nx/2):nx,:]=4.5
rho[floor(nx/2):nx,:]=2.0
rho=rho**-1
mt=vel**-2
# Smooth velocity
v0=smooth10(vel,nx,ny)
m0=v0**-2
dm=m0-mt
Explanation: Define the discrete model
End of explanation
def Forward(nt,nx,ny,m,rho):
u=np.zeros((nt,nx,ny))
rec=np.zeros((nt,ny-2))
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
damp=dampx(a)+dampy(b)
if ti==0:
u[ti,a,b]=ts(0,0,0,0,0,0,rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
elif ti==1:
u[ti,a,b]=ts(0,u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
else:
u[ti,a,b]=ts(u[ti-2,a,b],u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=u[ti,a,b]
return rec,u
def Adjoint(nt,nx,ny,m,rho,rec):
v=np.zeros((nt,nx,ny))
srca=np.zeros((nt))
for ti in range(nt-1, -1, -1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
if ti==nt-1:
v[ti,a,b]=tsA(0,0,0,0,0,0,rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
elif ti==nt-2:
v[ti,a,b]=tsA(0,v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
else:
v[ti,a,b]=tsA(v[ti+2,a,b],v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if abs(xmin+a*hstep-xsrc)<hstep/2 and abs(ymin+b*hstep-ysrc)<hstep/2:
srca[ti]=v[ti,a,b]
return srca,v
def Gradient(nt,nx,ny,m,rho,rec,u):
v1=np.zeros((nx,ny))
v2=np.zeros((nx,ny))
v3=np.zeros((nx,ny))
grad=np.zeros((nx,ny))
for ti in range(nt-1,-1,-1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
v3[a,b]=tsA(v1[a,b],v2[a-1,b],v2[a,b],v2[a+1,b],v2[a,b-1],v2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
resid,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
grad[a,b]=grad[a,b]-(v3[a,b]-2*v2[a,b]+v1[a,b])*(u[ti,a,b])
v1,v2,v3=v2,v3,v1
# No update inside the pml, only in the physical domain
# grad[0:nbpml-1,:]=0
# grad[nx-nbpml-1:nx-1,:]=0
# grad[:,0:nbpml-1]=0
# grad[:,ny-nbpml-1:ny-1]=0
return tstep**-2*grad
def Born(nt,nx,ny,m,rho,dm):
u1=np.zeros((nx,ny))
U1=np.zeros((nx,ny))
u2=np.zeros((nx,ny))
U2=np.zeros((nx,ny))
u3=np.zeros((nx,ny))
U3=np.zeros((nx,ny))
rec=np.zeros((nt,ny-2))
src2=0
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
damp=dampx(a)+dampy(b)
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
u3[a,b]=ts(u1[a,b],u2[a-1,b],u2[a,b],u2[a+1,b],u2[a,b-1],u2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
src2 = -tstep**-2*(u3[a,b]-2*u2[a,b]+u1[a,b])*dm[a,b]
U3[a,b]=ts(U1[a,b],U2[a-1,b],U2[a,b],U2[a+1,b],U2[a,b-1],U2[a,b+1],rho[a-1,b],rho[a,b],rho[a+1,b],rho[a,b-1],rho[a,b+1],
src2,m[a,b],rho[a,b]**-1,tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=U3[a,b]
u1,u2,u3=u2,u3,u1
U1,U2,U3=U2,U3,U1
return rec
Explanation: Create functions for the PDE
The Gradient/Born are here so that everything is at the correct place, it is described later
End of explanation
(rect,ut)=Forward(nt,nx,ny,mt,rho)
fig = plt.figure()
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(nt):
r = plt.imshow(ut[i,:,:]) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
#plt.show()
fig2 = plt.figure()
plt.hold("off")
shotrec = plt.imshow(rect,vmin=-10,vmax=10) # this is how you'd plot a single line...
#plt.show()
Explanation: A Forward propagation example
End of explanation
(rec0,u0)=Forward(nt,nx,ny,m0,rho)
(srca,v)=Adjoint(nt,nx,ny,m0,rho,rec0)
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(0,nt):
r = plt.imshow(v[i,:,:]) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
#plt.show()
shotrec = plt.plot(srca) # this is how you'd plot a single line...
plt.show()
# Actual adjoint test
term1=0
for ti in range(0,nt):
term1=term1+srca[ti]*source(xsrc,ysrc,(ti)*tstep)
term2=LA.norm(rec0)**2
term1,term2,term1-term2,term1/term2
#if abs(term11/term21-1)<1e-9
# print('Adjoint test passed')
Explanation: Adjoint test
In ordr to guaranty we have the gradient we need to make sure that the solution of the adjoint wave equation is indeed the true adjoint. Tod os so one should check that
$ <Ax,y> - <x,A^Ty> = 0$
where $A$ is the wave_equation, $A^T$ is wave_equationA and $x,y$ are any random vectors in the range of each operator. This can however be expensive as this two vector would be of size $N * n_t$. To test our operator we will the relax this test by
$ <P_r A P_s^T x,y> - <x,P_SA^TP_r^Ty> = 0$
where $P_r , P_s^T$ are the source and recevier projection operator mapping the source and receiver locations and times onto the full domain. This allow to have only a random source of size $n_t$ at a random postion.
End of explanation
# Misfit
F0=.5*LA.norm(rec0-rect)**2
F0
Im1=Gradient(nt,nx,ny,m0,rho,rec0-rect,u0)
shotrec = plt.imshow(rec0-rect,vmin=-100,vmax=100) # this is how you'd plot a single line...
#plt.show()
shotrec = plt.imshow(Im1,vmin=-100000,vmax=100000) # this is how you'd plot a single line...
#plt.show()
Explanation: Least square objective Gradient
We will consider here the least square objective, as this is the one in need of an adjoint. The test that will follow are however necessary for any objective and associated gradient in a optimization framework. The objective function can be written
$ min_m \Phi(m) := \frac{1}{2} \| P_r A^{-1}(m) q - d\|_2^2$
And it's gradient becomes
$ \nabla_m \Phi(m) = - (\frac{dA(m)u}{dm})^T v $
where v is the soltuion if the adjoint wave equation. For the simple acoustic case the gradient can be rewritten as
$ \nabla_m \Phi(m) = - \sum_{t=1}^{nt} \frac{d^2u(t)}{dt^2} v(t) $
End of explanation
Im2=Gradient(nt,nx,ny,m0,rho,rec0-rect,u0)
du1=Born(nt,nx,ny,m0,rho,dm)
term11=np.dot((rec0-rect).reshape(-1),du1.reshape(-1))
term21=np.dot(Im2.reshape(-1),dm.reshape(-1))
term11,term21,term11-term21,term11/term21
#if abs(term11/term21-1)<1e-9
# print('Adjoint test passed')
shotrec = plt.imshow(Im2,vmin=-10000,vmax=10000) # this is how you'd plot a single line...
#plt.show()
Explanation: Adjoint test for the gradient
The adjoint of the FWI Gradient is the Born modelling operator, implementing a double propagation forward in time with a wavefield scaled by the model perturbation for the second propagation
$ J dm = - A^{-1}(\frac{d A^{-1}q}{dt^2}) dm $
End of explanation
H=[1,0.1,0.01,.001,0.0001,0.00001,0.000001]
(D1,u0)=Forward(nt,nx,ny,m0,rho)
dub=Born(nt,nx,ny,m0,rho,dm)
error1=np.zeros((7))
error2=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(d,u)=Forward(nt,nx,ny,mloc,rho)
error1[i] = LA.norm(d - D1,ord=1)
error2[i] = LA.norm(d - D1 - H[i]*dub,ord=1)
hh=np.zeros((7))
for i in range(0,7):
hh[i]=H[i]*H[i]
shotrec = plt.loglog(H,error1,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,error2,H,hh) # this is howyou'd plot a single line...
plt.show()
Explanation: Jacobian test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ U(m + hdm) = U(m) + \mathcal{O} (h) \
U(m + h dm) = U(m) + h J[m]dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $U(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $U(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
End of explanation
(DT,uT)=Forward(nt,nx,ny,mt,rho)
(D1,u0)=Forward(nt,nx,ny,m0,rho)
F0=.5*LA.norm(D1-DT)**2
g=Gradient(nt,nx,ny,m0,rho,D1-DT,u0)
G=np.dot(g.reshape(-1),dm.reshape(-1));
error21=np.zeros((7))
error22=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(D,u)=Forward(nt,nx,ny,mloc,rho)
error21[i] = .5*LA.norm(D-DT)**2 -F0
error22[i] = .5*LA.norm(D-DT)**2 -F0 - H[i]*G
shotrec = plt.loglog(H,error21,H,H) # this is how you'd plot a single line...
plt.show()
shotrec = plt.loglog(H,abs(error22),H,hh) # this is how you'd plot a single line...
plt.show()
error22
Explanation: Gradient test
The last part is to check that the operators are consistent with the problem. There is then two properties to be satisfied
$ \Phi(m + hdm) = \Phi(m) + \mathcal{O} (h) \
\Phi(m + h dm) = \Phi(m) + h (J[m]^T\delta |d)dm + \mathcal{O} (h^2) $
which are the linearization conditions for the objective. This is a bit slow to run here but here is the way to test it.
1 - Genrate data for the true model m
2 - Define a smooth initial model $m_0$ and comput the data $d_0$ for this model
3 - You now have $\Phi(m_0)$
4 - Define $ dm = m-m_0$ and $ h = {1,.1,.01,.001,...}$
5 - For each $h$ compute $\Phi(m_0 + h dm)$ by generating data for $m_0 + h dm$ and compute $(J[m_0 + h dm]^T\delta |d) $
6 - Plot in Loglog the two lines of equation above
End of explanation
<END_TASK> |
15,689 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<CENTER>
<H1>
University of Maryland GRADMAP <BR>
Winter Workshop Python Boot Camp <BR>
</H1>
</CENTER>
More Data Structures, Control Statements, <BR> Functions, and Modules
Sets
Step1: Sets have unique elements. They can be compared, differenced, unionized, etc.
Step2: Like lists, we can use them as (unordered) buckets
.pop() gives us an arbitrary element
Step3:
Dictionaries
denoted with curly braces and colons
Step4: these are key: value pairs
Step5: dictionaries are UNORDERED<sup>*</sup>.
You cannot assume that one key comes before or after another
<sup>*</sup> you can use a special type of ordered dict if you really need it
Step6:
Dictionaries
Step7: remember: the backslash (\) lets you break statements across lines
Step8:
.keys() and .values()
Step9: we cannot ensure ordering here of the groups
Step10: .iteritems() is a handy method,
returning key,value pairs with each iteration
Step11: Some examples of getting values
Step12:
setting values
you can edit the values of keys, and also use .pop() & del to remove certain keys
Step13:
.update() method is very handy, like .append() for lists
Step14:
Loops and branches in python
Python has the usual control flow statements
Step15: IPython Notebook automatically converts tabs into spaces, but some programs do not. Be careful not to mix these up! Be consistent in your programming.
If you're working within the Python interpreter (not the IPython Notebook), you'll see this
Step16: What is a Function?
<UL>
<LI> A block of organized, reusable code that is used to perform a single, related action.
<LI> Provides better modularity for your application and a high degree of code reusing.
<LI> You can name a function anything you want as long as it
Step17: Scope of a Function
Step18: Pass by reference vs value
Step20: Function Arguments
You can call a function by using the following types of formal arguments
Step21: Unspecified args and keywords
Step22: What is a Module?
<UL>
<LI> A Python object with arbitrarily named attributes that you can bind and reference.
<LI> A file consisting of Python code.
<LI> Allows you to logically organize your Python code.
<LI> Makes the code easier to understand and use.
<LI> Can define functions, classes and variables.
<LI> Can also include runnable code.
</UL>
<B> Any file ending in .py is treated as a module. </B> | <ASSISTANT_TASK:>
Python Code:
{1,2,3,"bingo"}
type({1,2,3,"bingo"})
type({})
type(set())
set("spamIam")
Explanation: <CENTER>
<H1>
University of Maryland GRADMAP <BR>
Winter Workshop Python Boot Camp <BR>
</H1>
</CENTER>
More Data Structures, Control Statements, <BR> Functions, and Modules
Sets
End of explanation
a = set("sp"); b = set("am"); print a ; print b
c = set(["a","m"])
c == b
"p" in a
"ps" in a
q = set("spamIam")
a.issubset(q)
a | b
q - (a | b)
q & (a | b)
Explanation: Sets have unique elements. They can be compared, differenced, unionized, etc.
End of explanation
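Beyond `|`, `-`, and `&` shown above, sets also support symmetric difference (`^`) and method spellings of the same operations; a quick sketch:

```python
a = set("sp")
b = set("am")
print(a ^ b)             # symmetric difference: elements in exactly one of the two sets
print(a.union(b))        # method form of a | b
print(a.intersection(b)) # method form of a & b (empty here, since a and b share nothing)
```

The operator forms require both sides to be sets, while the method forms accept any iterable (e.g. `a.union(["x"])`).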
# this is pretty volatile... won't be the same
# order on all machines
for i in q & (a | b):
print i,
q.remove("a")
q.pop()
print q.pop()
print q.pop()
print q.pop()
# q.pop()
Explanation: Like lists, we can use them as (unordered) buckets
.pop() gives us an arbitrary element
End of explanation
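One related detail: `.remove()` raises a KeyError when the element is missing, while `.discard()` silently does nothing; a quick sketch:

```python
s = set("spam")
s.discard("z")  # "z" is not in s, but no error is raised
s.remove("s")   # removes "s"; this would raise KeyError if "s" were missing
print(s)
```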
d = {"favorite cat": None, "favorite spam": "all"}
Explanation:
Dictionaries
denoted with curly braces and colons
End of explanation
print d["favorite cat"]
d[0] ## this is not a list, and there is no key 0 (this raises a KeyError)
e = {"favorite cat": None, "favorite spam": "all", \
1: 'loneliest number'}
e[1] == 'loneliest number'
Explanation: these are key: value, key: value, ...
End of explanation
# number 1...you've seen this
d = {"favorite cat": None, "favorite spam": "all"}
# number 2
d = dict(one=1, two=2, cat='dog'); print d
# number 3 ... just start filling in items/keys
d = {} # empty dictionary
d['cat'] = 'dog'
d['one'] = 1
d['two'] = 2
d
# number 4... start with a list of tuples
mylist = [("cat","dog"), ("one",1),("two",2)]
print dict(mylist)
dict(mylist) == d
Explanation: dictionaries are UNORDERED<sup>*</sup>.
You cannot assume that one key comes before or after another
<sup>*</sup> you can use a special type of ordered dict if you really need it:
http://docs.python.org/whatsnew/2.7.html#pep-372-adding-an-ordered-dictionary-to-collections
4 ways to make a Dictionary
End of explanation
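The footnote above points to the ordered dictionary; here is a minimal sketch using `collections.OrderedDict` (standard library, Python 2.7+), which remembers the order in which keys were inserted:

```python
from collections import OrderedDict

od = OrderedDict()
od['cat'] = 'dog'
od['one'] = 1
od['two'] = 2
print(list(od.keys()))  # always ['cat', 'one', 'two'], in insertion order
```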
d = {"favorite cat": None, "favorite spam": "all"}
d = {'favorites': {'cat': None, 'spam': 'all'}, \
'least favorite': {'cat': 'all', 'spam': None}}
print d['least favorite']['cat']
Explanation:
Dictionaries: they can be complicated (in a good way)
End of explanation
phone_numbers = {'family': [('mom','642-2322'),('dad','534-2311')],\
'friends': [('Sylvia','652-2212')]}
for group_type in ['friends','family']:
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
# this will return a list, but you don't know in what order!
phone_numbers.keys()
phone_numbers.values()
Explanation: remember: the backslash (\) allows you to break statements across lines. It's not technically needed when defining a dictionary or list
End of explanation
for group_type in phone_numbers.keys():
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
Explanation:
.keys() and .values(): are called methods on dictionaries
End of explanation
groups = phone_numbers.keys()
groups.sort()
for group_type in groups:
print "Group " + group_type + ":"
for info in phone_numbers[group_type]:
print " ",info[0], info[1]
Explanation: we cannot ensure ordering here of the groups
End of explanation
for group_type, vals in phone_numbers.iteritems():
print "Group " + group_type + ":"
for info in vals:
print " ",info[0], info[1]
Explanation: .iteritems() is a handy method,
returning key,value pairs with each iteration
End of explanation
phone_numbers['co-workers']  ## no such key, so this raises a KeyError
phone_numbers.has_key('co-workers')
print phone_numbers.get('co-workers')
phone_numbers.get('friends') == phone_numbers['friends']
print phone_numbers.get('co-workers',"all alone")
Explanation: Some examples of getting values:
End of explanation
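A close cousin of `.get()` is `.setdefault()`, which looks the key up and, when it is missing, inserts the given default and returns it; a quick sketch (the names here are made up for illustration):

```python
book = {'friends': [('Sylvia', '652-2212')]}
# key is missing: an empty list is inserted under 'co-workers' and returned
coworkers = book.setdefault('co-workers', [])
coworkers.append(('Pat', '555-0100'))
print(book['co-workers'])
# key already exists: the stored list is returned and the default is ignored
book.setdefault('friends', []).append(('Jeremy', '232-1121'))
print(book['friends'])
```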
# add to the friends list
phone_numbers['friends'].append(("Jeremy","232-1121"))
print phone_numbers
## Sylvia's number changed
phone_numbers['friends'][0][1] = "532-1521"  ## TypeError! tuples are immutable
phone_numbers['friends'][0] = ("Sylvia","532-1521")  ## replace the whole tuple instead
print phone_numbers['friends']
## I lost all my friends preparing for this Python class
phone_numbers['friends'] = [] # sets this to an empty list
## remove the friends key altogether
print phone_numbers.pop('friends')
print phone_numbers
del phone_numbers['family']
print phone_numbers
Explanation:
setting values
you can edit the values of keys, and also use .pop() & del to remove certain keys
End of explanation
phone_numbers.update({"friends": [("Sylvia's friend, Dave", "532-1521")]})
print phone_numbers
Explanation:
.update() method is very handy, like .append() for lists
End of explanation
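`.update()` also overwrites keys that already exist, which makes it a simple way to merge one dictionary into another; a quick sketch:

```python
defaults = {'color': 'red', 'size': 'M'}
overrides = {'size': 'L'}
defaults.update(overrides)  # 'size' is overwritten, 'color' is left alone
print(defaults['size'])
```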
x = 1
print x
Explanation:
Loops and branches in python
Python has the usual control flow statements:
- if, else, elif
- for loops
- while loops
- break, continue, pass
Indentation in Python defines where blocks begin and end.
End of explanation
# You can mix indentations between different blocks ...
# but this is ugly and people will judge you
x = 1
if x > 0:
print "yo"
else:
print "dude"
# You can put everything on one line
print "yo" if x > 0 else "dude"
# Multiple cases
x = 1
if x < -10:
print "yo"
elif x > 10: # 'elif' is short for 'else if'
print "dude"
else:
print "sup"
for x in range(5):
print x**2
for x in ("all","we","wanna","do","is","eat","your","brains"):
print x
x = 0
while x < 5:
print pow(2,x)
x += 1 # don't forget to increment x!
# Multiple levels
for x in range(1,10):
if x % 2 == 0:
print str(x) + " is even."
else:
print str(x) + " is odd."
# Blocks cannot be empty
x = "fried goldfish"
if x == "spam for dinner":
print "I will destroy the universe"
else:
    AA = 1  # placeholder statement; a comment by itself doesn't count as a block
# Use a 'pass' statement, which indicates 'do nothing'
x = "fried goldfish"
if x == "spam for dinner":
print "I will destroy the universe"
else:
pass
# Use a 'break' statement to escape a loop
x = 0
while True:
print x**2
if x**2 >= 100:
break
x +=1
Explanation: IPython Notebook automatically converts tabs into spaces, but some programs do not. Be careful not to mix these up! Be consistent in your programming.
If you're working within the Python interpreter (not the IPython Notebook), you'll see this:
>>> x = 1
>>> if x > 0:
... print "yo"
... else:
... print "dude"
... print "ok"
...
yo
ok
End of explanation
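Of the control-flow statements listed earlier, `continue` never appears in the examples; it skips the rest of the current loop body and jumps to the next iteration. A small sketch:

```python
# print only the odd numbers, skipping the even ones with 'continue'
odds = []
for x in range(10):
    if x % 2 == 0:
        continue  # jump straight to the next value of x
    odds.append(x)
    print(x)
```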
def addnums(x,y):
return x + y
addnums(2,3)
print addnums(0x1f,3.3)
print addnums("a","b")
print addnums("cat",23232)  # mixing str and int raises a TypeError
Explanation: What is a Function?
<UL>
<LI> A block of organized, reusable code that is used to perform a single, related action.
<LI> Provides better modularity for your application and a high degree of code reusing.
<LI> You can name a function anything you want as long as it:
<OL>
<LI> Contains only numbers, letters, underscore
<LI> Does not start with a number
<LI> Is not the same name as a built-in function (like print).
</OL>
</UL>
Basic Synthax of a Function
An Example
End of explanation
def numop(x,y):
x *= 3.14
return x + y
x = 2
print numop(x, 8)
print x
def numop(x,y):
x *= 3.14
global a
a += 1
return x + y, a
a = 2
numop(1,1)
numop(1,1)
Explanation: Scope of a Function
End of explanation
def changeme_1( mylist ):
    mylist = [1,2,3,4]; # This would assign a new reference to the local name mylist
print "Values inside the function changeme_1: ", mylist
return
def changeme_2( mylist ):
mylist.append([1,2,3,4]);
print "Values inside the function changeme_2: ", mylist
return
mylist1 = [10,20,30];
changeme_1( mylist1 );
print "Values outside the function: ", mylist1
print
mylist2 = [10,20,30];
changeme_2( mylist2 );
print "Values outside the function: ", mylist2
Explanation: Pass by reference vs value
End of explanation
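The same point in a compact sketch (Python 3; invented helper names): Python passes object references by value, so rebinding a parameter is invisible to the caller, while mutating the referenced object is not.

```python
def rebind(lst):
    lst = [1, 2, 3]     # rebinds the local name; the caller's list is unchanged

def mutate(lst):
    lst.append(99)      # mutates the shared object; the caller sees this

a = [10, 20]
rebind(a)
assert a == [10, 20]
mutate(a)
assert a == [10, 20, 99]
```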
def numop1(x,y,multiplier=1.0,greetings="Thank you for your inquiry."):
    """numop1 -- this does a simple operation on two numbers.
    We expect x, y to be numbers and return x + y times the multiplier.
    multiplier is also a number (a float is preferred) and is optional;
    it defaults to 1.0.
    You can also specify a small greeting as a string."""
if greetings is not None:
print greetings
return (x + y)*multiplier
help(numop1)
numop1(1,1)
numop1(1,1,multiplier=-0.5,greetings=None)
Explanation: Function Arguments
You can call a function by using the following types of formal arguments:
<UL>
<LI> Required arguments (arguments passed to a function in correct positional order. Here, the number of arguments in the function call should match exactly with the function definition)
<LI> Keyword arguments (identified by parameter names)
<LI> Default arguments (assume default values if values are not provided in the function call for those arguments)
<LI> Variable-length arguments (are not explicitly named in the function definition)
</UL>
Keyword Arguments
End of explanation
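All four argument styles from the list above can appear in one signature; here is a small Python 3 sketch (the function name and values are made up):

```python
def describe(kind, multiplier=1.0, *extras, **options):
    # kind: required; multiplier: default; *extras / **options: variable-length
    total = multiplier * len(extras)
    return kind, total, sorted(options)

assert describe("spam") == ("spam", 0.0, [])
assert describe("spam", 2.0, "a", "b") == ("spam", 4.0, [])
assert describe("spam", 2.0, "a", color="red") == ("spam", 2.0, ["color"])
```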
def cheeseshop(kind, *arguments, **keywords):
print "-- Do you have any", kind, "?"
print "-- I'm sorry, we're all out of", kind
for arg in arguments:
print arg
print "-" * 40
keys = keywords.keys()
keys.sort()
for kw in keys:
print kw, ":", keywords[kw]
cheeseshop("Limburger",
"It's very runny, sir.",
"It's really very, VERY runny, sir.",
shopkeeper='Michael Palin',
client="John Cleese",
sketch="Cheese Shop Sketch")
Explanation: Unspecified args and keywords
End of explanation
import math
math.cos(0)
math.cos(math.pi)
math.sqrt(4)
from datetime import datetime
now = datetime.now()
print now.year, now.month, now.day
from math import acos as arccos
arccos(1)
Explanation: What is a Module?
<UL>
<LI> A Python object with arbitrarily named attributes that you can bind and reference.
<LI> A file consisting of Python code.
<LI> Allows you to logically organize your Python code.
<LI> Makes the code easier to understand and use.
<LI> Can define functions, classes and variables.
<LI> Can also include runnable code.
</UL>
<B> Any file ending in .py is treated as a module. </B>
End of explanation
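The ".py file = module" rule can be demonstrated directly; the sketch below (the module name `mymath` is invented) writes a throwaway module to a temporary directory and imports it:

```python
import importlib
import os
import sys
import tempfile

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "mymath.py"), "w") as f:
    f.write("def triple(x):\n    return 3 * x\n\nANSWER = 42\n")

sys.path.insert(0, tmpdir)          # make the directory importable
mymath = importlib.import_module("mymath")
assert mymath.triple(7) == 21
assert mymath.ANSWER == 42
```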
<END_TASK> |
15,690 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Finite Markov Chains
Step1: Example 1
Step2: Create a MarkovChain instance
Step3: Classification of states
This Markov chain is reducible
Step4: Determine the communication classes
Step5: Classify the states of this Markov chain
Step6: Obtain a list of the recurrent states
Step7: Obtain a list of the transient states
Step8: A Markov chain is reducible (i.e., its directed graph is not strongly connected)
if and only if, by symmetric permutations of rows and columns,
its transition probability matrix is written in the form ("canonical form")
$$
\begin{pmatrix}
U & 0 \
W & V
\end{pmatrix},
$$
where $U$ and $W$ are square matrices.
Such a form for mc1 is obtained by the following
Step9: This Markov chain is aperiodic
(i.e., the least common multiple of the periods of the recurrent sub-chains is one)
Step10: Indeed, each of the sub-chains corresponding to the recurrent classes has period $1$,
i.e., every recurrent state is aperiodic
Step11: Stationary distributions
For each recurrent class $C$, there is a unique stationary distribution $\psi^C$
such that $\psi^C_i > 0$ for all $i \in C$ and $\psi^C_i = 0$ otherwise.
MarkovChain.stationary_distributions returns
these unique stationary distributions for the recurrent classes.
Any stationary distribution is written as a convex combination of these distributions.
Step12: These are indeed stationary distributions
Step14: Plot these distributions.
Step15: Simulation
Let us simulate our Markov chain mc1.
The simualte method generates a sample path
of length given by the first argument, ts_length,
with an initial state as specified by an optional argument init;
if not specified, the initial state is randomly drawn.
A sample path from state 0
Step16: As is clear from the transition matrix P,
if it starts at state 0, the chain stays there forever,
i.e., 0 is an absorbing state, a state that constitutes a singleton recurrent class.
Start with state 1
Step17: You can observe that the chain stays in the recurrent class ${1,4}$
and visits states 1 and 4 with certain frequencies.
If init is not specified, the initial state is randomly chosen
Step18: Note on reproducibility
Step20: Time series averages
Now, let us compute the frequency distribution along a sample path, given by
$$
\frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\}
\quad (s \in S).
$$
Step21: Here is a frequency distribution along a sample path, of length 100,
from initial state 1, which is a recurrent state
Step22: Length 10,000
Step23: The distribution becomes close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the frequency distributions for a couple of different time lengths
Step24: Start with state 2,
which is a transient state
Step25: Run the above cell several times;
you will observe that the limit distribution differs across sample paths.
Sometimes the state is absorbed into the recurrent class ${0}$,
while other times it is absorbed into the recurrent class ${1,4}$.
Step26: In fact,
for almost every sample path of a finite Markov chain ${X_t}$,
for some recurrent class $C$ we have
$$
\frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\} \to \psi^C[s]
\quad \text{as $t \to \infty$}
$$
for all states $s$,
where $\psi^C$ is the stationary distribution associated with the recurrent class $C$.
If the initial state $s_0$ is a recurrent state,
then the recurrent class $C$ above is the one that contains $s_0$,
while if it is a transient state,
then the recurrent class to which the convergence occurs depends on the sample path.
Let us simulate with the remaining states, 3, 4, and 5.
Step28: Cross sectional averages
Next, let us repeat the simulation many times (say, 10,000 times)
and obtain the distribution of visits to each state at a given time period T.
That is, we want to simulate the marginal distribution at time T.
Step29: Start with state 1
Step30: The distribution is close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the simulated marginal distribution at T for some values of T.
Step31: Starting with a transient state 2
Step32: Observe that the distribution is close to a convex combination of
the stationary distributions (1, 0, 0, 0, 0, 0) and (0, 1/3, 0, 0, 2/3, 0),
which is a stationary distribution itself.
How the simulated marginal distribution evolves
Step33: Since our Markov chain is aperiodic (i.e., every recurrent class is aperiodic),
the marginal distribution at time $T$ converges as $T \to \infty$
to some stationary distribution,
and the limit distribution depends on the initial state,
according to the probabilities that the state is absorbed into the recurrent classes.
For initial states 3, 4, and 5
Step34: Powers of $P$
The marginal distributions at time $T$ are obtained by $P^T$.
Step35: In the canonical form
Step36: Observe that the first three rows, which correspond to the recurrent states,
are close to the stationary distributions associated with the corresponding recurrent classes.
Example 2
Step37: This Markov chain is irreducible
Step38: This Markov chain is periodic
Step39: Its period, which we denote by $d$
Step40: Identify the cyclic classes
Step41: Cyclic normal form
If a Markov chain is periodic with period $d \geq 2$,
then its transition probability matrix is written in the form ("cyclic normal form")
$$
\begin{pmatrix}
0 & P_0 & 0 & 0 & \cdots & 0 \
0 & 0 & P_1 & 0 & \cdots & 0 \
0 & 0 & 0 & P_2 & \cdots & 0 \
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \
0 & 0 & 0 & 0 & \cdots & P_{d-2} \
P_{d-1} & 0 & 0 & 0 & \cdots & 0
\end{pmatrix}.
$$
Represent our Markov chain in cyclic normal form
Step42: Re-define the Markov chain with the above matrix Q
Step43: Obtain the block components $P_0, \cdots, P_{d-1}$
Step44: $P^d$ is block diagonal
Step45: The $i$th diagonal block of $P^d$ equals $P_i P_{i+1} \cdots P_{d-1} P_0 \cdots P_{i-1}$
Step46: Stationary distributions
The Markov chain mc2 has a unique stationary distribution,
which we denote by $\pi$
Step47: Obtain the stationary distributions $\pi^0, \ldots, \pi^{d-1}$
each associated with the diagonal blocks of $P^d$
Step48: Verify that $\pi^{i+1} = \pi^i P_i$
Step49: Verify that $\pi = (\pi^0 + \cdots + \pi^{d-1})/d$
Step50: Powers of $P$
Since the Markov chain under consideration is periodic,
the marginal distribution does not converge, but changes periodically.
Let us compute the powers of the transition probability matrix (in cyclic normal form)
Step51: Print $P^1, P^2, \ldots, P^d$
Step52: Print $P^{2d}$, $P^{4d}$, and $P^{6d}$
Step53: $P^{kd}$ converges as $k \to \infty$ to a matrix that contains $\pi^0, \ldots, \pi^{d-1}$.
Print $P^{kd+1}, \ldots, P^{kd+d}$ with $k = 10$ for example
Step54: But $P^i$ itself does not converge.
Simulation
Plot the frequency distribution of visits to the states
along a sample path starting at state 0
Step55: Observe that the distribution is close to the (unique) stationary distribution $\pi$.
Step56: Next, plot the simulated marginal distributions
at $T = 10d+1, \ldots, 11d, 11d+1, \ldots, 12d$ with initial state 0
Step57: Compare these with the rows of $P^{10d+1}, \ldots, P^{10d+d}$.
Example 3
Step58: If $\varepsilon = 0$,
then the Markov chain is reducible into two recurrent classes, [0, 1] and [2]
Step59: If $\varepsilon > 0$ but small, the chain is irreducible,
but transition within each of the subsets [0, 1] and [2] is much more likely
than that between these sets.
Step60: Analytically, the unique stationary distribution of the chain with $\varepsilon > 0$
is (1/3, 1/3, 1/3), independent of the value of $\varepsilon$.
However,
for such matrices with small values of $\varepsilon > 0$,
general purpose eigenvalue solvers are numerically unstable.
For example, if we use numpy.linalg.eig
to compute the eigenvector that corresponds
to the dominant (i.e., largest in magnitude) eigenvalue
Step61: The output becomes farther from the actual stationary distribution (1/3, 1/3, 1/3)
as $\varepsilon$ becomes smaller.
The same applies to scipy.linalg.eig
Step62: MarkovChain in quantecon employs
the algorithm called the "GTH algorithm",
which is a numerically stable variant of Gaussian elimination,
specialized for Markov chains. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt
from quantecon.markov import MarkovChain
Explanation: Finite Markov Chains: Examples
Daisuke Oyama
Faculty of Economics, University of Tokyo
This notebook demonstrates how to analyze finite-state Markov chains
with the MarkovChain class.
For basic concepts and properties on Markov chains, see
the lecture on finite Markov chains
in Quantitative Economics, and
the documentation for MarkovChain.
For algorithmic issues in detecting reducibility and periodicity of a Markov chain,
see, for example,
J. P. Jarvis and D. R. Shier,
"Graph-Theoretic Analysis of Finite Markov Chains,"
from which we draw some examples below.
End of explanation
P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2
print(P)
Explanation: Example 1: Reducible chain
Consider the Markov chain given by the following stochastic matrix,
taken from Exercise 3 in Jarvis and Shier
(where the actual values of non-zero probabilities are not important):
End of explanation
mc1 = MarkovChain(P)
Explanation: Create a MarkovChain instance:
End of explanation
mc1.is_irreducible
mc1.num_communication_classes
Explanation: Classification of states
This Markov chain is reducible:
End of explanation
mc1.communication_classes
Explanation: Determine the communication classes:
End of explanation
mc1.recurrent_classes
Explanation: Classify the states of this Markov chain:
End of explanation
recurrent_states = np.concatenate(mc1.recurrent_classes)
print(recurrent_states)
Explanation: Obtain a list of the recurrent states:
End of explanation
transient_states = np.setdiff1d(np.arange(mc1.n), recurrent_states)
print(transient_states)
Explanation: Obtain a list of the transient states:
End of explanation
permutation = np.concatenate([recurrent_states, transient_states])
print(mc1.P[permutation, :][:, permutation])
Explanation: A Markov chain is reducible (i.e., its directed graph is not strongly connected)
if and only if, by symmetric permutations of rows and columns,
its transition probability matrix is written in the form ("canonical form")
$$
\begin{pmatrix}
U & 0 \
W & V
\end{pmatrix},
$$
where $U$ and $W$ are square matrices.
Such a form for mc1 is obtained by the following:
End of explanation
mc1.is_aperiodic
Explanation: This Markov chain is aperiodic
(i.e., the least common multiple of the periods of the recurrent sub-chains is one):
End of explanation
for recurrent_class in mc1.recurrent_classes:
sub_matrix = P[recurrent_class, :][:, recurrent_class]
d = MarkovChain(sub_matrix).period
print('Period of the sub-chain\n{0}\n = {1}'.format(sub_matrix, d))
Explanation: Indeed, each of the sub-chains corresponding to the recurrent classes has period $1$,
i.e., every recurrent state is aperiodic:
End of explanation
print(mc1.stationary_distributions)
Explanation: Stationary distributions
For each recurrent class $C$, there is a unique stationary distribution $\psi^C$
such that $\psi^C_i > 0$ for all $i \in C$ and $\psi^C_i = 0$ otherwise.
MarkovChain.stationary_distributions returns
these unique stationary distributions for the recurrent classes.
Any stationary distribution is written as a convex combination of these distributions.
End of explanation
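The convex-combination claim can be verified directly with numpy, using the two stationary distributions found above:

```python
import numpy as np

P = np.zeros((6, 6))
P[0, 0] = 1
P[1, 4] = 1
P[2, [2, 3, 4]] = 1/3
P[3, [0, 5]] = 1/2
P[4, [1, 4]] = 1/2
P[5, [0, 3]] = 1/2

psi0 = np.array([1, 0, 0, 0, 0, 0])        # recurrent class {0}
psi1 = np.array([0, 1/3, 0, 0, 2/3, 0])    # recurrent class {1, 4}
for lam in (0.0, 0.3, 1.0):
    psi = lam * psi0 + (1 - lam) * psi1
    assert np.allclose(psi @ P, psi)       # psi P = psi, so psi is stationary
```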
print(mc1.stationary_distributions.dot(mc1.P))
Explanation: These are indeed stationary distributions:
End of explanation
def draw_histogram(distribution, ax=None, figsize=None,
                   title=None, xlabel=None, ylabel=None, ylim=(0, 1)):
    """Plot the given distribution."""
    own_fig = ax is None
    if own_fig:
        fig, ax = plt.subplots(figsize=figsize)
    n = len(distribution)
    ax.bar(np.arange(n), distribution, align='center')
    ax.set_xlim(-0.5, (n-1)+0.5)
    ax.set_ylim(*ylim)
    if title:
        ax.set_title(title)
    if xlabel:
        ax.set_xlabel(xlabel)
    if ylabel:
        ax.set_ylabel(ylabel)
    if own_fig:  # only call show() when the figure was created here
        plt.show()
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
titles = ['Stationary distribution for the recurrent class {0}'.format(recurrent_class)
for recurrent_class in mc1.recurrent_classes]
for ax, title, dist in zip(axes, titles, mc1.stationary_distributions):
draw_histogram(dist, ax=ax, title=title, xlabel='States')
fig.suptitle('Stationary distributions', y=-0.05, fontsize=12)
plt.show()
Explanation: Plot these distributions.
End of explanation
mc1.simulate(50, init=0)
Explanation: Simulation
Let us simulate our Markov chain mc1.
The simualte method generates a sample path
of length given by the first argument, ts_length,
with an initial state as specified by an optional argument init;
if not specified, the initial state is randomly drawn.
A sample path from state 0:
End of explanation
mc1.simulate(50, init=1)
Explanation: As is clear from the transition matrix P,
if it starts at state 0, the chain stays there forever,
i.e., 0 is an absorbing state, a state that constitutes a singleton recurrent class.
Start with state 1:
End of explanation
mc1.simulate(50)
Explanation: You can observe that the chain stays in the recurrent class ${1,4}$
and visits states 1 and 4 with certain frequencies.
If init is not specified, the initial state is randomly chosen:
End of explanation
mc1.simulate(50, random_state=12345)
Explanation: Note on reproducibility:
The simulate method offers an option random_state to set a random seed
to initialize the pseudo-random number generator.
If you provide the same random seed value,
simulate returns the same outcome.
For example, the following will always give the same sequence:
End of explanation
def time_series_dist(mc, t, init=None, random_state=None):
    """Return the distribution of visits by a sample path of length t
    of mc with an initial state init."""
t_max = np.max(t)
dim = 1
try:
ts_size = len(t) # t is an array
ts_array = t
dim = 2
except: # t is an int
ts_size = 1
ts_array = [t]
X = mc.simulate(ts_length=t_max, init=init, random_state=random_state)
dists = np.empty((ts_size, mc.n))
bins = np.arange(mc.n+1)
for i, length in enumerate(ts_array):
hist, bin_edges = np.histogram(X[:length], bins=bins)
dists[i, :] = hist / length
if dim == 1:
return dists[0]
else:
return dists
Explanation: Time series averages
Now, let us compute the frequency distribution along a sample path, given by
$$
\frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\}
\quad (s \in S).
$$
End of explanation
time_series_dist(mc1, t=100, init=1)
Explanation: Here is a frequency distribution along a sample path, of length 100,
from initial state 1, which is a recurrent state:
End of explanation
time_series_dist(mc1, t=10**4, init=1)
Explanation: Length 10,000:
End of explanation
def plot_time_series_dists(mc, init, ts, seed=None, figsize=(12,4)):
dists = time_series_dist(mc, t=ts, init=init, random_state=seed)
fig, axes = plt.subplots(1, len(ts), figsize=figsize)
titles = ['t={0}'.format(t) for t in ts]
for ax, title, dist in zip(axes, titles, dists):
draw_histogram(dist, ax=ax, title=title, xlabel='States')
fig.suptitle('Time series distributions with init={0}'.format(init),
y=-0.05, fontsize=12)
plt.show()
init = 1
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, init, ts)
Explanation: The distribution becomes close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the frequency distributions for a couple of different time lengths:
End of explanation
init = 2
ts = [5, 10, 50, 100]
plot_time_series_dists(mc1, init, ts)
Explanation: Start with state 2,
which is a transient state:
End of explanation
init = 2
ts = [5, 10, 50, 100]
seeds = [222, 2222]
descriptions = ['{0} sample path with init={1}'.format(adjective, init) for adjective
in ['Some'] + ['Another'] + ['Yet another']*(len(seeds)-1)]
for seed, description in zip(seeds, descriptions):
print(description)
plot_time_series_dists(mc1, init, ts, seed=seed)
Explanation: Run the above cell several times;
you will observe that the limit distribution differs across sample paths.
Sometimes the state is absorbed into the recurrent class ${0}$,
while other times it is absorbed into the recurrent class ${1,4}$.
End of explanation
inits = [3, 4, 5]
t = 100
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for init, ax in zip(inits, axes):
draw_histogram(time_series_dist(mc1, t=t, init=init), ax=ax,
title='Initial state = {0}'.format(init),
xlabel='States')
fig.suptitle('Time series distributions for t={0}'.format(t),
y=-0.05, fontsize=12)
plt.show()
Explanation: In fact,
for almost every sample path of a finite Markov chain ${X_t}$,
for some recurrent class $C$ we have
$$
\frac{1}{t} \sum_{\tau=0}^{t-1} \mathbf{1}\{X_{\tau} = s\} \to \psi^C[s]
\quad \text{as $t \to \infty$}
$$
for all states $s$,
where $\psi^C$ is the stationary distribution associated with the recurrent class $C$.
If the initial state $s_0$ is a recurrent state,
then the recurrent class $C$ above is the one that contains $s_0$,
while if it is a transient state,
then the recurrent class to which the convergence occurs depends on the sample path.
Let us simulate with the remaining states, 3, 4, and 5.
End of explanation
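This ergodic property is easy to see for a toy chain without any of the machinery above; the sketch below simulates a two-state chain with plain numpy (the transition matrix is made up, with stationary distribution (1/3, 2/3)):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.5],
              [0.25, 0.75]])   # stationary distribution: (1/3, 2/3)
T = 100_000
u = rng.random(T)              # pre-drawn uniforms for speed
visits = np.zeros(2)
s = 0
for t in range(T):
    visits[s] += 1
    s = 0 if u[t] < P[s, 0] else 1

freq = visits / T
assert np.allclose(freq, [1/3, 2/3], atol=0.02)
```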
def cross_sectional_dist(mc, T, init=None, num_reps=10**4, random_state=None):
    """Return the distribution of visits at time T from num_reps simulations
    of mc with an initial state init."""
T_max = np.max(T)
dim = 1
try:
Ts_size = len(T) # T is an array
Ts_array = T
dim = 2
except: # T is an int
Ts_size = 1
Ts_array = [T]
x = mc.simulate(ts_length=T_max+1, init=init, num_reps=num_reps,
random_state=random_state)[:, Ts_array]
dists = np.empty((x.shape[-1], mc.n))
bins = np.arange(mc.n+1)
for i in range(x.shape[-1]):
hist, bin_edges = np.histogram(x[:, i], bins=bins)
dists[i, :] = hist / num_reps
if dim == 1:
return dists[0]
else:
return dists
Explanation: Cross sectional averages
Next, let us repeat the simulation for many times (say, 10,000 times)
and obtain the distribution of visits to each state at a given time period T.
That is, we want to simulate the marginal distribution at time T.
End of explanation
init = 1
T = 10
cross_sectional_dist(mc1, init=init, T=T)
T = 100
cross_sectional_dist(mc1, init=init, T=T)
Explanation: Start with state 1:
End of explanation
def plot_cross_sectional_dists(mc, init, Ts, num_reps=10**4, seed=None,
figsize=(12,4)):
dists = cross_sectional_dist(mc, T=Ts, init=init, num_reps=num_reps,
random_state=seed)
fig, axes = plt.subplots(1, len(Ts), figsize=figsize)
titles = ['T={0}'.format(T) for T in Ts]
for ax, title, dist in zip(axes, titles, dists):
draw_histogram(dist, ax=ax, title=title, xlabel='States')
fig.suptitle('Cross sectional distributions with init={0}'.format(init),
y=-0.05, fontsize=12)
plt.show()
init = 1
Ts = [2, 3, 5, 10]
plot_cross_sectional_dists(mc1, init, Ts)
Explanation: The distribution is close to the stationary distribution (0, 1/3, 0, 0, 2/3, 0).
Plot the simulated marginal distribution at T for some values of T.
End of explanation
init = 2
T = 10
cross_sectional_dist(mc1, init=init, T=T)
T = 100
dist = cross_sectional_dist(mc1, init=init, T=T)
dist
draw_histogram(dist,
title='Cross sectional distribution at T={T} with init={init}'
.format(T=T, init=init),
xlabel='States')
Explanation: Starting with a transient state 2:
End of explanation
init = 2
Ts = [2, 3, 5, 10]
plot_cross_sectional_dists(mc1, init, Ts)
Explanation: Observe that the distribution is close to a convex combination of
the stationary distributions (1, 0, 0, 0, 0, 0) and (0, 1/3, 0, 0, 2/3, 0),
which is a stationary distribution itself.
How the simulated marginal distribution evolves:
End of explanation
inits = [3, 4, 5]
T = 10
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for init, ax in zip(inits, axes):
draw_histogram(cross_sectional_dist(mc1, T=T, init=init), ax=ax,
title='Initial state = {0}'.format(init),
xlabel='States')
fig.suptitle('Cross sectional distribution at T={0}'.format(T),
y=-0.05, fontsize=12)
plt.show()
Explanation: Since our Markov chain is aperiodic (i.e., every recurrent class is aperiodic),
the marginal distribution at time $T$ converges as $T \to \infty$
to some stationary distribution,
and the limit distribution depends on the initial state,
according to the probabilities that the state is absorbed into the recurrent classes.
For initial states 3, 4, and 5:
End of explanation
np.set_printoptions(suppress=True) # Suppress printing with floating point notation
Ts = [10, 20, 30]
for T in Ts:
print('P^{T} =\n{P_T}'.format(T=T, P_T=np.linalg.matrix_power(mc1.P, T)))
Explanation: Powers of $P$
The marginal distributions at time $T$ are obtained by $P^T$.
End of explanation
Q = mc1.P[permutation, :][:, permutation]
print('Q =\n{Q}'.format(Q=Q))
for T in Ts:
print('Q^{T} =\n{Q_T}'.format(T=T, Q_T=np.linalg.matrix_power(Q, T)))
Explanation: In the canonical form:
End of explanation
P = np.zeros((10, 10))
P[0, 3] = 1
P[1, [0, 4]] = 1/2
P[2, 6] = 1
P[3, [1, 2, 7]] = 1/3
P[4, 3] = 1
P[5, 4] = 1
P[6, 3] = 1
P[7, [6, 8]] = 1/2
P[8, 9] = 1
P[9, 5] = 1
np.set_printoptions(precision=3) # Reduce the number of digits printed
print(P)
mc2 = MarkovChain(P)
Explanation: Observe that the first three rows, which correspond to the recurrent states,
are close to the stationary distributions associated with the corresponding recurrent classes.
Example 2: Periodic chain
Consider the Markov chain given by the following stochastic matrix,
taken from Exercise 9 (see also Exercise 11) in Jarvis and Shier
(where the actual values of non-zero probabilities are not important):
End of explanation
mc2.is_irreducible
Explanation: This Markov chain is irreducible:
End of explanation
mc2.is_aperiodic
Explanation: This Markov chain is periodic:
End of explanation
d = mc2.period
print(d)
Explanation: Its period, which we denote by $d$:
End of explanation
mc2.cyclic_classes
Explanation: Identify the cyclic classes:
End of explanation
permutation = np.concatenate(mc2.cyclic_classes)
Q = mc2.P[permutation, :][:, permutation]
print(Q)
Explanation: Cyclic normal form
If a Markov chain is periodic with period $d \geq 2$,
then its transition probability matrix is written in the form ("cyclic normal form")
$$
\begin{pmatrix}
0 & P_0 & 0 & 0 & \cdots & 0 \
0 & 0 & P_1 & 0 & \cdots & 0 \
0 & 0 & 0 & P_2 & \cdots & 0 \
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \
0 & 0 & 0 & 0 & \cdots & P_{d-2} \
P_{d-1} & 0 & 0 & 0 & \cdots & 0
\end{pmatrix}.
$$
Represent our Markov chain in cyclic normal form:
End of explanation
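A minimal illustration of the block structure, using a three-state cyclic permutation chain for which everything is exact:

```python
import numpy as np

# A period-3 chain already in cyclic normal form: each block P_i is the 1x1 matrix [1].
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

assert np.array_equal(np.linalg.matrix_power(P, 3), np.eye(3))  # P^d is block diagonal
assert np.array_equal(np.linalg.matrix_power(P, 4), P)          # powers cycle, no convergence
```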
mc2 = MarkovChain(Q)
Explanation: Re-define the Markov chain with the above matrix Q:
End of explanation
P_blocks = []
for i in range(d):
P_blocks.append(mc2.P[mc2.cyclic_classes[i%d], :][:, mc2.cyclic_classes[(i+1)%d]])
print('P_{i} =\n{P_block}'.format(i=i, P_block=P_blocks[i]))
Explanation: Obtain the block components $P_0, \cdots, P_{d-1}$:
End of explanation
P_power_d = np.linalg.matrix_power(mc2.P, d)
print(P_power_d)
P_power_d_blocks = []
ordinals = ['0th', '1st', '2nd']
for i in range(d):
P_power_d_blocks.append(P_power_d[mc2.cyclic_classes[i], :][:, mc2.cyclic_classes[i]])
print('{ordinal} diagonal block of P^d =\n{P_power_d_block}'
.format(ordinal=ordinals[i], P_power_d_block=P_power_d_blocks[i]))
Explanation: $P^d$ is block diagonal:
End of explanation
products = []
for i in range(d):
R = np.eye(P_blocks[i].shape[0])
string = ''
for j in range(d):
R = R.dot(P_blocks[(i+j)%d])
string += 'P_{0} '.format((i+j)%d)
products.append(R)
print(string + '=\n{R}'.format(R=R))
for matrix0, matrix1 in zip(P_power_d_blocks, products):
print(np.array_equal(matrix0, matrix1))
Explanation: The $i$th diagonal block of $P^d$ equals $P_i P_{i+1} \cdots P_{d-1} P_0 \cdots P_{i-1}$:
End of explanation
len(mc2.stationary_distributions)
pi = mc2.stationary_distributions[0]
print(pi)
draw_histogram(pi,
title='Stationary distribution', xlabel='States', ylim=(0, 0.35))
Explanation: Stationary distributions
The Markov chain mc2 has a unique stationary distribution,
which we denote by $\pi$:
End of explanation
pi_s = []
for i in range(d):
pi_s.append(MarkovChain(P_power_d_blocks[i]).stationary_distributions[0])
print('pi^{i} =\n{pi_i}'.format(i=i, pi_i=pi_s[i]))
fig, axes = plt.subplots(1, d, figsize=(12, 3))
for i, ax in enumerate(axes):
pi_i_full_dim = np.zeros(mc2.n)
pi_i_full_dim[mc2.cyclic_classes[i]] = pi_s[i]
draw_histogram(pi_i_full_dim,
ax=ax, title='$\pi^{i}$'.format(i=i), xlabel='States')
fig.suptitle('Stationary distributions for the diagonal blocks',
y=-0.05, fontsize=12)
plt.show()
Explanation: Obtain the stationary distributions $\pi^0, \ldots, \pi^{d-1}$
each associated with the diagonal blocks of $P^d$:
End of explanation
for i in range(d):
print('pi^{i} P_{i} =\n{dot}'.format(i=i, dot=np.dot(pi_s[i], P_blocks[i])))
Explanation: Verify that $\pi^{i+1} = \pi^i P_i$:
End of explanation
# Right hand side of the above identity
rhs = np.zeros(mc2.n)
for i in range(d):
rhs[mc2.cyclic_classes[i]] = pi_s[i]
rhs /= d
print(rhs)
np.allclose(pi, rhs)
Explanation: Verify that $\pi = (\pi^0 + \cdots + \pi^{d-1})/d$:
End of explanation
np.set_printoptions(suppress=True) # Suppress printing with floating point notation
Explanation: Powers of $P$
Since the Markov chain under consideration is periodic,
the marginal distribution does not converge, but changes periodically.
Let us compute the powers of the transition probability matrix (in cyclic normal form):
End of explanation
for i in range(1, d+1):
print('P^{i} =\n{P_i}'.format(i=i, P_i=np.linalg.matrix_power(mc2.P, i)))
Explanation: Print $P^1, P^2, \ldots, P^d$:
End of explanation
for i in [k*d for k in [2, 4, 6]]:
print('P^{i} =\n{P_i}'.format(i=i, P_i=np.linalg.matrix_power(mc2.P, i)))
Explanation: Print $P^{2d}$, $P^{4d}$, and $P^{6d}$:
End of explanation
for i in range(10*d+1, 10*d+1+d):
print('P^{i} =\n{P_i}'.format(i=i, P_i=np.linalg.matrix_power(mc2.P, i)))
Explanation: $P^{kd}$ converges as $k \to \infty$ to a matrix that contains $\pi^0, \ldots, \pi^{d-1}$.
Print $P^{kd+1}, \ldots, P^{kd+d}$ with $k = 10$ for example:
End of explanation
init = 0
dist = time_series_dist(mc2, init=init, t=10**4)
print(dist)
draw_histogram(dist,
title='Time series distribution with init={0}'.format(init),
xlabel='States', ylim=(0, 0.35))
plt.show()
Explanation: But $P^i$ itself does not converge.
Simulation
Plot the frequency distribution of visits to the states
along a sample path starting at state 0:
End of explanation
print(pi)
Explanation: Observe that the distribution is close to the (unique) stationary distribution $\pi$.
End of explanation
init = 0
k = 10
Ts = [k*d + 1 + i for i in range(2*d)]
num_reps = 10**2
dists = cross_sectional_dist(mc2, T=Ts, init=init, num_reps=num_reps)
fig, axes = plt.subplots(2, d, figsize=(12, 6))
for dist, T, ax in zip(dists, Ts, axes.flatten()):
draw_histogram(dist, ax=ax,
title='T = {T}'.format(T=T))
fig.suptitle('Cross sectional distributions with init={init}'.format(init=init),
y=0.05, fontsize=12)
plt.show()
Explanation: Next, plot the simulated marginal distributions
at $T = 10d+1, \ldots, 11d, 11d+1, \ldots, 12d$ with initial state 0:
End of explanation
def P_epsilon(eps, p=0.5):
P = np.array([[1-(p+eps), p, eps],
[p, 1-(p+eps), eps],
[eps, eps, 1-2*eps]])
return P
Explanation: Compare these with the rows of $P^{10d+1}, \ldots, P^{10d+d}$.
Example 3: Nearly completely decomposable chain
Consider the Markov chain given by the following stochastic matrix $P^{\varepsilon}$,
parameterized by $\varepsilon$:
End of explanation
P_epsilon(0)
MarkovChain(P_epsilon(0)).recurrent_classes
Explanation: If $\varepsilon = 0$,
then the Markov chain is reducible into two recurrent classes, [0, 1] and [2]:
End of explanation
P_epsilon(0.001)
MarkovChain(P_epsilon(0.001)).recurrent_classes
Explanation: If $\varepsilon > 0$ but small, the chain is irreducible,
but transition within each of the subsets [0, 1] and [2] is much more likely
than that between these sets.
End of explanation
np.set_printoptions(precision=15) # Increase the number of digits printed
epsilons = [float('1e-{i}'.format(i=i)) for i in range(12, 18)]
for eps in epsilons:
print('epsilon = {eps}'.format(eps=eps))
w, v = np.linalg.eig(P_epsilon(eps).T)
i = w.argmax()
print(v[:, i]/v[:, i].sum())
Explanation: Analytically, the unique stationary distribution of the chain with $\varepsilon > 0$
is (1/3, 1/3, 1/3), independent of the value of $\varepsilon$.
However,
for such matrices with small values of $\varepsilon > 0$,
general purpose eigenvalue solvers are numerically unstable.
For example, if we use numpy.linalg.eig
to compute the eigenvector that corresponds
to the dominant (i.e., largest in magnitude) eigenvalue:
End of explanation
import scipy.linalg
for eps in epsilons:
print('epsilon = {eps}'.format(eps=eps))
w, v = scipy.linalg.eig(P_epsilon(eps), left=True, right=False)
i = w.argmax()
print(v[:, i]/v[:, i].sum())
Explanation: The output becomes farther from the actual stationary distribution (1/3, 1/3, 1/3)
as $\varepsilon$ becomes smaller.
The same applies to scipy.linalg.eig:
End of explanation
for eps in epsilons + [1e-100]:
print('epsilon = {eps}'.format(eps=eps))
print(MarkovChain(P_epsilon(eps)).stationary_distributions[0])
Explanation: MarkovChain in quantecon employs
the algorithm called the "GTH algorithm",
which is a numerically stable variant of Gaussian elimination,
specialized for Markov chains.
End of explanation
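For reference, here is a compact sketch of the GTH reduction itself (essentially the recursion that quantecon's gth_solve implements); on the ill-conditioned matrix above it recovers (1/3, 1/3, 1/3) where the general-purpose eigensolvers failed:

```python
import numpy as np

def gth_solve(P):
    """Stationary distribution of an irreducible stochastic matrix
    via the Grassmann-Taksar-Heyman (GTH) algorithm."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    # Reduction: the pivot is the (positive) sum of off-diagonal row entries,
    # so no subtraction of nearly-equal numbers ever occurs.
    for k in range(n - 1):
        scale = A[k, k+1:].sum()
        A[k+1:, k] /= scale
        A[k+1:, k+1:] += np.outer(A[k+1:, k], A[k, k+1:])
    # Backward substitution, then normalization
    x = np.zeros(n)
    x[-1] = 1.0
    for k in range(n - 2, -1, -1):
        x[k] = x[k+1:] @ A[k+1:, k]
    return x / x.sum()

eps = 1e-15
P_eps = np.array([[1 - (0.5 + eps), 0.5, eps],
                  [0.5, 1 - (0.5 + eps), eps],
                  [eps, eps, 1 - 2*eps]])
assert np.allclose(gth_solve(P_eps), 1/3, atol=1e-12)
```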
<END_TASK> |
15,691 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using Layout Templates
As we showed in the containers exercises, multiple widgets can be arranged together using HBox and VBox. It is also possible to use the flexible GridBox specification. However, use of the specification requires some understanding of CSS properties and may impose a steep learning curve. Here, we will describe layout templates built on top of GridBox that simplify the creation of common widget layouts.
Step1: 2x2 Grid
You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the TwoByTwoLayout widget
Step2: If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
Step3: You can pass merge=False to the TwoByTwoLayout constructor if you don't want this behavior
Step4: You can add a missing widget even after the layout initialization
Step5: You can also use the linking feature of widgets to update some property of a widget based on another widget
Step6: You can easily create more complex layouts with custom widgets. For example, you can use a bqplot Figure widget to add plots
Step7: AppLayout
AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane
Step8: However with the automatic merging feature, it's possible to achieve many other layouts
Step9: You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (denoting one portion of the free space available) or '100px' (absolute size).
Step10: Grid layout
GridspecLayout is an N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec.
You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout
Step11: To make a widget span several columns and/or rows, you can use slice notation
Step12: You can still change properties of the widgets stored in the grid, using the same indexing notation.
Step13: Note
Step14: Note
Step15: Creating scatter plots using GridspecLayout
In this example, we will demonstrate how to use GridspecLayout and the bqplot Figure widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package.
For example, you can use the following snippet to obtain a scatter plot across multiple dimensions
Step16: Style attributes
You can specify extra style properties to modify the layout. For example, you can change the size of the whole layout using the height and width arguments.
Step17: The gap between the panes can be increased or decreased with the grid_gap argument
Step18: Additionally, you can control the alignment of widgets within the layout using justify_content and align_items attributes
Step19: For other alignment options it's possible to use common names (top and bottom) or their CSS equivalents (flex-start and flex-end) | <ASSISTANT_TASK:>
Python Code:
# Utils widgets
from ipywidgets import Button, Layout, jslink, IntText, IntSlider
def create_expanded_button(description, button_style):
return Button(description=description, button_style=button_style,
layout=Layout(height='auto', width='auto'))
top_left_button = create_expanded_button("Top left", 'info')
top_right_button = create_expanded_button("Top right", 'success')
bottom_left_button = create_expanded_button("Bottom left", 'danger')
bottom_right_button = create_expanded_button("Bottom right", 'warning')
top_left_text = IntText(description='Top left', layout=Layout(width='auto', height='auto'))
top_right_text = IntText(description='Top right', layout=Layout(width='auto', height='auto'))
bottom_left_slider = IntSlider(description='Bottom left', layout=Layout(width='auto', height='auto'))
bottom_right_slider = IntSlider(description='Bottom right', layout=Layout(width='auto', height='auto'))
Explanation: Using Layout Templates
As we showed in the container exercises, multiple widgets can be arranged together using HBox and VBox. It is also possible to use the flexible GridBox specification. However, using the specification requires some understanding of CSS properties and may impose a steep learning curve. Here, we will describe layout templates built on top of GridBox that simplify the creation of common widget layouts.
End of explanation
from ipywidgets import TwoByTwoLayout
layout = dict(height='300px')
TwoByTwoLayout(top_left=top_left_button,
top_right=top_right_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
layout=layout)
Explanation: 2x2 Grid
You can easily create a layout with 4 widgets arranged in a 2x2 matrix using the TwoByTwoLayout widget:
End of explanation
TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
layout=layout)
Explanation: If you don't define a widget for some of the slots, the layout will automatically re-configure itself by merging neighbouring cells
End of explanation
TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
merge=False,
layout=layout)
Explanation: You can pass merge=False to the TwoByTwoLayout constructor if you don't want this behavior
End of explanation
layout_2x2 = TwoByTwoLayout(top_left=top_left_button,
bottom_left=bottom_left_button,
bottom_right=bottom_right_button,
layout=layout)
layout_2x2
layout_2x2.top_right = top_right_button
Explanation: You can add a missing widget even after the layout initialization:
End of explanation
app = TwoByTwoLayout(top_left=top_left_text, top_right=top_right_text,
bottom_left=bottom_left_slider, bottom_right=bottom_right_slider)
link_left = jslink((app.top_left, 'value'), (app.bottom_left, 'value'))
link_right = jslink((app.top_right, 'value'), (app.bottom_right, 'value'))
app.bottom_right.value = 30
app.top_left.value = 25
app
Explanation: You can also use the linking feature of widgets to update some property of a widget based on another widget:
End of explanation
import bqplot as bq
import numpy as np
size = 100
np.random.seed(0)
x_data = range(size)
y_data = np.random.randn(size)
y_data_2 = np.random.randn(size)
y_data_3 = np.cumsum(np.random.randn(size) * 100.)
x_ord = bq.OrdinalScale()
y_sc = bq.LinearScale()
bar = bq.Bars(x=np.arange(10), y=np.random.rand(10), scales={'x': x_ord, 'y': y_sc})
ax_x = bq.Axis(scale=x_ord)
ax_y = bq.Axis(scale=y_sc, tick_format='0.2f', orientation='vertical')
fig = bq.Figure(marks=[bar], axes=[ax_x, ax_y], padding_x=0.025, padding_y=0.025,
layout=Layout(width='auto', height='90%'))
from ipywidgets import FloatSlider
max_slider = FloatSlider(min=0, max=10, default_value=2, description="Max: ",
layout=Layout(width='auto', height='auto'))
min_slider = FloatSlider(min=-1, max=10, description="Min: ",
layout=Layout(width='auto', height='auto'))
app = TwoByTwoLayout(top_left=min_slider,
bottom_left=max_slider,
bottom_right=fig,
align_items="center",
height='700px')
jslink((y_sc, 'max'), (max_slider, 'value'))
jslink((y_sc, 'min'), (min_slider, 'value'))
jslink((min_slider, 'max'), (max_slider, 'value'))
jslink((max_slider, 'min'), (min_slider, 'value'))
max_slider.value = 1.5
app
Explanation: You can easily create more complex layouts with custom widgets. For example, you can use a bqplot Figure widget to add plots:
End of explanation
from ipywidgets import AppLayout, Button, Layout
header_button = create_expanded_button('Header', 'success')
left_button = create_expanded_button('Left', 'info')
center_button = create_expanded_button('Center', 'warning')
right_button = create_expanded_button('Right', 'info')
footer_button = create_expanded_button('Footer', 'success')
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=footer_button,
layout=layout)
Explanation: AppLayout
AppLayout is a widget layout template that allows you to create application-like widget arrangements. It consists of a header, a footer, two sidebars and a central pane:
End of explanation
AppLayout(header=None,
left_sidebar=None,
center=center_button,
right_sidebar=None,
footer=None,
layout=layout)
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
layout=layout)
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
layout=layout)
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=None,
footer=footer_button,
layout=layout)
AppLayout(header=header_button,
left_sidebar=None,
center=center_button,
right_sidebar=right_button,
footer=footer_button,
layout=layout)
AppLayout(header=header_button,
left_sidebar=None,
center=center_button,
right_sidebar=None,
footer=footer_button,
layout=layout)
AppLayout(header=header_button,
left_sidebar=left_button,
center=None,
right_sidebar=right_button,
footer=footer_button,
layout=layout)
Explanation: However with the automatic merging feature, it's possible to achieve many other layouts:
End of explanation
AppLayout(header=header_button,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=footer_button,
pane_widths=[3, 3, 1],
pane_heights=[1, 5, '60px'],
layout=layout)
Explanation: You can also modify the relative and absolute widths and heights of the panes using pane_widths and pane_heights arguments. Both accept a sequence of three elements, each of which is either an integer (equivalent to the weight given to the row/column) or a string in the format '1fr' (denoting one portion of the free space available) or '100px' (absolute size).
End of explanation
from ipywidgets import GridspecLayout
grid = GridspecLayout(4, 3, layout=layout)
for i in range(4):
for j in range(3):
grid[i, j] = create_expanded_button('Button {} - {}'.format(i, j), 'warning')
grid
Explanation: Grid layout
GridspecLayout is an N-by-M grid layout allowing for flexible layout definitions using an API similar to matplotlib's GridSpec.
You can use GridspecLayout to define a simple regularly-spaced grid. For example, to create a 4x3 layout:
End of explanation
grid = GridspecLayout(4, 3, layout=layout)
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid
Explanation: To make a widget span several columns and/or rows, you can use slice notation:
End of explanation
grid = GridspecLayout(4, 3, layout=layout)
grid[:3, 1:] = create_expanded_button('One', 'success')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'warning')
grid[3, 2] = create_expanded_button('Four', 'danger')
grid
grid[0, 0].description = "I am the blue one"
Explanation: You can still change properties of the widgets stored in the grid, using the same indexing notation.
End of explanation
grid = GridspecLayout(4, 3, layout=layout)
grid[:3, 1:] = create_expanded_button('One', 'info')
grid[:, 0] = create_expanded_button('Two', 'info')
grid[3, 1] = create_expanded_button('Three', 'info')
grid[3, 2] = create_expanded_button('Four', 'info')
grid
grid[3, 1] = create_expanded_button('New button!!', 'danger')
Explanation: Note: It's enough to pass an index of one of the grid cells occupied by the widget of interest. Slices are not supported in this context.
If there is already a widget that conflicts with the position of the widget being added, it will be removed from the grid:
End of explanation
grid[:3, 1:] = create_expanded_button('I am new too!!!!!', 'warning')
Explanation: Note: Slices are supported in this context.
End of explanation
import bqplot as bq
import numpy as np
from ipywidgets import GridspecLayout, Button, Layout
n_features = 5
data = np.random.randn(100, n_features)
data[:50, 2] += 4 * data[:50, 0] **2
data[50:, :] += 4
A = np.random.randn(n_features, n_features)/5
data = np.dot(data,A)
scales_x = [bq.LinearScale() for i in range(n_features)]
scales_y = [bq.LinearScale() for i in range(n_features)]
gs = GridspecLayout(n_features, n_features)
for i in range(n_features):
for j in range(n_features):
if i != j:
sc_x = scales_x[j]
sc_y = scales_y[i]
scatt = bq.Scatter(x=data[:, j], y=data[:, i], scales={'x': sc_x, 'y': sc_y}, default_size=1)
gs[i, j] = bq.Figure(marks=[scatt], layout=Layout(width='auto', height='auto'),
fig_margin=dict(top=0, bottom=0, left=0, right=0))
else:
sc_x = scales_x[j]
sc_y = bq.LinearScale()
hist = bq.Hist(sample=data[:,i], scales={'sample': sc_x, 'count': sc_y})
gs[i, j] = bq.Figure(marks=[hist], layout=Layout(width='auto', height='auto'),
fig_margin=dict(top=0, bottom=0, left=0, right=0))
gs
Explanation: Creating scatter plots using GridspecLayout
In this example, we will demonstrate how to use GridspecLayout and the bqplot Figure widget to create a multipanel scatter plot. To run this example you will need to install the bqplot package.
For example, you can use the following snippet to obtain a scatter plot across multiple dimensions:
End of explanation
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
height="200px", width="50%")
Explanation: Style attributes
You can specify extra style properties to modify the layout. For example, you can change the size of the whole layout using the height and width arguments.
End of explanation
AppLayout(header=None,
left_sidebar=left_button,
center=center_button,
right_sidebar=right_button,
footer=None,
height="200px", width="50%",
grid_gap="10px")
Explanation: The gap between the panes can be increased or decreased with the grid_gap argument:
End of explanation
from ipywidgets import Text, HTML
TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button,
bottom_right=bottom_right_button,
justify_items='center',
width="50%",
align_items='center')
Explanation: Additionally, you can control the alignment of widgets within the layout using justify_content and align_items attributes:
End of explanation
TwoByTwoLayout(top_left=top_left_button, top_right=top_right_button,
bottom_right=bottom_right_button,
justify_items='center',
width="50%",
align_items='top')
Explanation: For other alignment options it's possible to use common names (top and bottom) or their CSS equivalents (flex-start and flex-end):
End of explanation
<END_TASK> |
15,692 |
<SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
Remove Stop Words from Text in DataFrame Column
| <ASSISTANT_TASK:>
Python Code::
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
df['tweet'] = df['tweet'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop_words)]))
<END_TASK>
|
15,693 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this notebook, we try to apply an unsupervised learning algorithm to the voting profile of each person in order to detect clusters, and observe whether they match the political parties. To do so, we first create a network with people as nodes, and connect each node to its k (e.g. 3) nearest neighbours based on the distance matrix computed previously. The ML algorithm is spectral clustering, which uses the adjacency matrix of this network.
Step1: Collect the data
Step2: Build adjacency matrix
We compute for each row the k entries with lowest distance, and put 1 for each of them, and 0 elsewhere.
Step3: Spectral Clustering
We now apply the clustering algorithm to the adjacency matrix. This matrix is likely not to be symmetric, but the algorithm will symmetrize it, which does make sense in this case.
Step4: Analysis of the clustering
We would like to observe whether the obtained clustering separates the different political parties well. To do so, we compute for each cluster the percentage of people in each party.
Python Code:
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import sklearn
import sklearn.ensemble
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
import csv
Explanation: In this notebook, we try to apply an unsupervised learning algorithm to the voting profile of each person in order to detect clusters, and observe whether they match the political parties. To do so, we first create a network with people as nodes, and connect each node to its k (e.g. 3) nearest neighbours based on the distance matrix computed previously. The ML algorithm is spectral clustering, which uses the adjacency matrix of this network.
End of explanation
path = '../../datas/nlp_results/'
voting_df = pd.read_csv(path+'voting_with_topics.csv')
print('Entries in the DataFrame',voting_df.shape)
#Dropping the useless column
voting_df = voting_df.drop('Unnamed: 0',1)
#Putting numerical values into the columns that should have numerical values
#print(voting_df.columns.values)
num_cols = ['Decision', ' armée', ' asile / immigration', ' assurances', ' budget', ' dunno', ' entreprise/ finance',
' environnement', ' famille / enfants', ' imposition', ' politique internationale', ' retraite ']
voting_df[num_cols] = voting_df[num_cols].apply(pd.to_numeric)
#Inserting the full name at the second position
voting_df.insert(2,'Name', voting_df['FirstName'] + ' ' + voting_df['LastName'])
voting_df = voting_df.drop_duplicates(['Name'], keep = 'last')
voting_df = voting_df.set_index(['Name'])
voting_df.head(3)
profileMatrixFile = 'profileMatrix.csv'
profileMatrix = pd.read_csv(profileMatrixFile, index_col = 0)
profileArray = profileMatrix.values
print(profileArray.shape)
profileMatrix.head()
distanceMatrixFile = 'distanceMatrix.csv'
distances = pd.read_csv(distanceMatrixFile, index_col = 0)
distances = distances.replace(-0.001, 0)
distancesArray = distances.values
print(distancesArray.shape)
distances.head()
Explanation: Collect the data
End of explanation
k = 4 # number of nearest neighbours that we take into account in the adjacency matrix
for i in distances:
    d = np.sort(distances.loc[i].values)  # keep the sorted copy (np.sort is not in-place)
    threshold = d[k-1]  # distance to the k-th nearest neighbour (self included)
    for j in distances:
        if distances.loc[i, j] > threshold:
            distances.loc[i, j] = 0   # .loc[i, j] avoids chained assignment
        else:
            distances.loc[i, j] = 1
distances.head()
Explanation: Build adjacency matrix
We compute for each row the k entries with lowest distance, and put 1 for each of them, and 0 elsewhere.
End of explanation
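For reference, the same k-nearest-neighbour adjacency construction can be sketched in vectorized NumPy (a generic helper that is not part of the original notebook; like the loop above, each node's zero self-distance counts as one of its k neighbours):

```python
import numpy as np

def knn_adjacency(D, k):
    """Binary adjacency matrix: A[i, j] = 1 iff j is among row i's k smallest distances."""
    D = np.asarray(D, dtype=float)
    nearest = np.argsort(D, axis=1)[:, :k]     # column indices of the k nearest per row
    A = np.zeros_like(D)
    np.put_along_axis(A, nearest, 1.0, axis=1)
    return A

D = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])
print(knn_adjacency(D, k=2))
```

Applied to the DataFrame, `knn_adjacency(distances.values, k)` produces the same 0/1 matrix in one pass instead of cell-by-cell assignment.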
nbClust = 4
clusterDist = sklearn.cluster.spectral_clustering(affinity = distances.values, n_clusters = nbClust)
clusterDist
Explanation: Spectral Clustering
We now apply the clustering algorithm to the adjacency matrix. This matrix is likely not to be symmetric, but the algorithm will symmetrize it, which does make sense in this case.
End of explanation
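Because each node chooses its own k neighbours, the resulting graph is directed and the 0/1 matrix is generally asymmetric. A few common explicit symmetrizations are sketched below on a toy directed graph (a generic illustration, independent of what the library does internally):

```python
import numpy as np

A = np.array([[0, 1, 0],         # toy directed 1-NN graph: 0 -> 1 -> 2 -> 0
              [0, 0, 1],
              [1, 0, 0]])

A_or  = np.maximum(A, A.T)       # keep an edge if either direction exists
A_and = np.minimum(A, A.T)       # keep an edge only if it is mutual
A_avg = (A + A.T) / 2.0          # average the two directions (weighted edges)
print(A_or)
```

The "or" and "average" variants preserve connectivity; the "mutual" variant can disconnect the graph (here it removes every edge of the cycle).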
ratio_df = pd.DataFrame(index = voting_df.ParlGroupName.unique())
ratio_df['ratio'] = 0
np.array(ratio_df.index)
def ratioPartite(cluster, clusterDist):
    # Compute the party distribution for all people within this cluster
people = distances.index[clusterDist == cluster]
size = len(people)
ratio_df = pd.DataFrame(index = voting_df.ParlGroupName.unique())
ratio_df['ratio'] = 1.0
for group in np.array(ratio_df.index):
print(group)
peopleGroup = [p for p in people[voting_df.loc[people].ParlGroupName == group]]
print(len(peopleGroup) / float(size))
ratio_df.set_value(group, 'ratio', len(peopleGroup) / float(size))
return ratio_df
ratio_df = pd.DataFrame(index = voting_df.ParlGroupName.unique(), columns = range(nbClust))
ratio_df[0] = range(8)
ratio_df
ratio_df = pd.DataFrame(index = voting_df.ParlGroupName.unique(), columns = range(nbClust))
for cluster in range(nbClust):
ratio = ratioPartite(cluster, clusterDist)
ratio_df[cluster] = ratio.values
ratio_df
Explanation: Analysis of the clustering
We would like to observe whether the obtained clustering separates the different political parties well. To do so, we compute for each cluster the percentage of people in each party.
End of explanation
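The per-cluster party breakdown computed above can also be obtained in a single call with pandas.crosstab. A sketch on toy labels (the cluster ids and party names here are purely illustrative):

```python
import pandas as pd

clusters = pd.Series([0, 0, 1, 1, 1], name='cluster')       # toy cluster labels
parties  = pd.Series(['UDC', 'PS', 'UDC', 'UDC', 'PS'], name='party')

ratios = pd.crosstab(clusters, parties, normalize='index')  # each row sums to 1
print(ratios)
```

Adapting it to the notebook's own variables (clusterDist, distances.index, voting_df.ParlGroupName) is a one-liner, modulo index alignment.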
<END_TASK> |
15,694 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Input parameter for the DTDs.
Check different input for the SNIa DTD.
$\odot$ Power law & Maoz
$\odot$ Gaussian
$\odot$ Exponential
Step1: Power law & Maoz
default beta_pow = -1 # t^beta_pow
Step2: Maoz and power law with -1 is the same as visible below.
Step3: Gaussian
gauss_dtd=[3.3e9,6.6e8] (as used in Wiersma09)
Step4: gauss_dtd=[4e9,3.2e9] (as mentioned in Wiersma09)
Step5: Difference in rate
Step6: Exponential
exp_dtd (as used in Wiersma09) 2e9
Step7: exp_dtd (as used in Wiersma09) 10e9 | <ASSISTANT_TASK:>
Python Code:
%pylab nbagg
import sygma as s
reload(s)
s.__file__
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
import numpy as np
Explanation: Input parameter for the DTDs.
Check different input for the SNIa DTD.
$\odot$ Power law & Maoz
$\odot$ Gaussian
$\odot$ Exponential
End of explanation
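For orientation, the three DTD shapes compared in this notebook can be written down and normalized directly. This is a standalone sketch; the window [4e7, 1.3e10] yr and the parameter values are illustrative, and SYGMA's internal normalization (which also folds in the IMF and progenitor lifetimes) differs in detail:

```python
import numpy as np
from scipy.integrate import quad

def dtd_power(t, beta=-1.0):
    return t**beta                                  # power law: rate ~ t**beta

def dtd_gauss(t, tau=3.3e9, sigma=6.6e8):
    return (np.exp(-(t - tau)**2 / (2.0 * sigma**2))
            / np.sqrt(2.0 * np.pi * sigma**2))      # Gaussian DTD

def dtd_exp(t, tau=2e9):
    return np.exp(-t / tau) / tau                   # e-folding DTD

t0, t1 = 4e7, 1.3e10                                # illustrative delay-time window
norms = {f.__name__: quad(f, t0, t1)[0]             # normalization constants
         for f in (dtd_power, dtd_gauss, dtd_exp)}
print(norms)
```

Dividing each shape by its norm makes it integrate to one over the window, so the absolute SN Ia count is then set separately by the number of events per unit stellar mass.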
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-1,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='power_law',beta_pow=-2,
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
s3_maoz=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='maoz',
imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',
sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Explanation: Power law & Maoz
default beta_pow = -1 # t^beta_pow
End of explanation
s1.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t{^-1}$',marker1='o')
s2.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-2}$',marker1='x',color1='b')
s3_maoz.plot_sn_distr(fig=5,rate=True,rate_only='sn1a',label1='$t^{-1}$, maoz',marker1='x',color1='b',shape1='--')
Explanation: Maoz and power law with -1 is the same as visible below.
End of explanation
gauss_dtd=[1e9,6.6e8]
reload(s)
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt1=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=6,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
Explanation: Gaussian
gauss_dtd=[3.3e9,6.6e8] (as used in Wiersma09)
End of explanation
gauss_dtd=[4e9,2e9]
s2=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_rate='gauss',gauss_dtd=gauss_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt',sn1a_on=True, sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s2.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s2.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
g_dt2=s2
from scipy.integrate import dblquad
def spline1(x):
#x=t
return max(3.,10**spline(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline(np.log10(t))
#print 'mlim',mlim
if mlim>8.:
#print t
#print mlim
return 0
else:
#mmin=max(3.,massfunc(t))
#mmax=8.
#imf=self.__imf(mmin,mmax,1)
#Delay time distribution function (DTD)
tau= gauss_dtd[0] #1e9 #3.3e9 #characteristic delay time
sigma=gauss_dtd[1] #0.66e9#0.25*tau
#sigma=0.2#narrow distribution
#sigma=0.5*tau #wide distribution
mmin=0
mmax=0
inte=0
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#imf normalized to 1Msun
return norm*m**-2.35* 1./np.sqrt(2*np.pi*sigma**2) * np.exp(-(t-tau)**2/(2*sigma**2))
#a= 0.0069 #normalization parameter
#if spline(np.log10(t))
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
Yield_tot=n1a*1e11*0.1 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s2.plot_mass(fig=7,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
plt.show()
Explanation: gauss_dtd=[4e9,3.2e9] (as mentioned in Wiersma09)
End of explanation
g_dt1.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 1',marker1='o',shape1='--')
g_dt2.plot_sn_distr(fig=66,rate=True,rate_only='sn1a',label1='gauss, 2',marker1='x',markevery=1)
print g_dt1.gauss_dtd
print g_dt2.gauss_dtd
Explanation: Difference in rate
End of explanation
exp_dtd=2e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt1=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= 2e9
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle since normalization is set: nb_1a_per_m the above calculation is not necessary anymore
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be 1: ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=8,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=4)
Explanation: Exponential
exp_dtd (as used in Wiersma09) 2e9
End of explanation
exp_dtd=10e9
#import read_yields as ry
import sygma as s
reload(s)
#interpolate_lifetimes_grid=s22.__interpolate_lifetimes_grid
#ytables=ry.read_nugrid_yields('yield_tables/isotope_yield_table_h1.txt')
#zm_lifetime_grid=interpolate_lifetimes_grid(ytables,iolevel=0) 1e7
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,sn1a_on=True,sn1a_rate='exp',exp_dtd=exp_dtd,imf_type='salpeter',imf_bdys=[1,30],hardsetZ=0.0001,table='yield_tables/isotope_yield_table_h1.txt', sn1a_table='yield_tables/sn1a_h1.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_h1.ppn')
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
plt.plot(grid_masses,grid_lifetimes,label='spline fit grid points (SYGMA)')
plt.xlabel('Mini/Msun')
plt.ylabel('log lifetime')
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6]
plt.plot(np.array(m),np.log10(np.array(ages)),marker='+',markersize=20,label='input yield grid',linestyle='None')
plt.plot(10**spline_lifetime(np.log10(ages)),np.log10(ages),linestyle='--',label='spline fit SNIa')
plt.legend()
#plt.yscale('log')
e_dt2=s1
#following inside function wiersma09_efolding
#if timemin ==0:
# timemin=1
from scipy.integrate import dblquad, quad
def spline1(x):
#x=t
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
#if self.imf_bdys[0]>3:
# minm_prog1a=self.imf_bdys[0]
return max(minm_prog1a,10**spline_lifetime(np.log10(x)))
def f_wd_dtd(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
#if maximum progenitor mass is smaller than 8Msun due to IMF range:
#if 8>self.imf_bdys[1]:
# maxm_prog1a=self.imf_bdys[1]
if mlim>maxm_prog1a:
return 0
else:
#Delay time distribution function (DTD)
tau= exp_dtd
mmin=0
mmax=0
inte=0
#follwing is done in __imf()
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
#print 'IMF test',norm*m**-2.35
#imf normalized to 1Msun
return norm*m**-2.35* np.exp(-t/tau)/tau
a= 0.01 #normalization parameter
#if spline(np.log10(t))
#a=1e-3/()
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
n1a= a* dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0]
# in principle, since the normalization nb_1a_per_m is already set, the above calculation is no longer necessary
Yield_tot=n1a*1e11*0.1 *1 #7 #special factor
print Yield_tot_sim
print Yield_tot
print 'Should be : ', Yield_tot_sim/Yield_tot
s1.plot_mass(fig=9,specie='H',source='sn1a',label='H',color='k',shape='-',marker='o',markevery=800)
yields1=[]
ages1=[]
a= 0.01 #normalization parameter
a=1e-3/(dblquad(f_wd_dtd,0,1.3e10,lambda x: spline1(x), lambda x: 8)[0] )
for m1 in m:
t=ages[m.index(m1)]
yields= a* dblquad(f_wd_dtd,0,t,lambda x: spline1(x), lambda x: 8)[0] *1e11*0.1 #special factor
yields1.append(yields)
ages1.append(t)
plt.plot(ages1,yields1,marker='+',linestyle='',markersize=20,label='semi')
plt.legend(loc=2)
e_dt1.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 1',marker1='o')
e_dt2.plot_sn_distr(fig=77,rate=True,rate_only='sn1a',label1='exp, 2',marker1='x',markevery=1)
print e_dt1.exp_dtd,
print e_dt2.exp_dtd
Explanation: exp_dtd (as used in Wiersma09) 10e9
End of explanation
<END_TASK> |
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The Pasta Production Problem
This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>
Step1: Step 2
Step2: Step 3
Step3: Define the decision variables
Step4: Express the business constraints
Each product can be produced either inside the company or outside, at a higher cost.
The inside production is constrained by the company's resources, while outside production is considered unlimited.
Step5: Express the objective
Minimizing the production cost for a number of products while satisfying customer demand
Step6: Solve with Decision Optimization
Now we have everything we need to solve the model, using Model.solve(). The following cell solves using your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable).
Step7: Step 5 | <ASSISTANT_TASK:>
Python Code:
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
Explanation: The Pasta Production Problem
This tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Import the library
Step 2: Model the data
Step 3: Prepare the data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization
Step 5: Investigate the solution and run an example analysis
Summary
Describe the business problem
This notebook describes how to use CPLEX Modeling for Python to manage the production of pasta to meet demand with your resources.
The model aims at minimizing the production cost for a number of products while satisfying customer demand.
Each product can be produced either inside the company or outside, at a higher cost.
The inside production is constrained by the company's resources, while outside production is considered unlimited.
The model first declares the products and the resources.
The data consists of the description of the products (the demand, the inside and outside costs,
and the resource consumption) and the capacity of the various resources.
The variables for this problem are the inside and outside production for each product.
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Import the library
Run the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp).
End of explanation
products = [("kluski", 100, 0.6, 0.8),
("capellini", 200, 0.8, 0.9),
("fettucine", 300, 0.3, 0.4)]
# resources are a list of simple tuples (name, capacity)
resources = [("flour", 20),
("eggs", 40)]
consumptions = {("kluski", "flour"): 0.5,
("kluski", "eggs"): 0.2,
("capellini", "flour"): 0.4,
("capellini", "eggs"): 0.4,
("fettucine", "flour"): 0.3,
("fettucine", "eggs"): 0.6}
Explanation: Step 2: Model the data
The data consists of the description of the products (the demand, the inside and outside costs,
and the resource consumption) and the capacity of the various resources.
End of explanation
from docplex.mp.model import Model
mdl = Model(name="pasta")
Explanation: Step 3: Prepare the data
The data is very simple and is ready to use without any cleaning, massaging, or refactoring.
Step 4: Set up the prescriptive model
Create the DOcplex model
The model contains all the business constraints and defines the objective.
We now use CPLEX Modeling for Python to build a Mathematical Programming model for this problem.
End of explanation
inside_vars = mdl.continuous_var_dict(products, name='inside')
outside_vars = mdl.continuous_var_dict(products, name='outside')
Explanation: Define the decision variables
End of explanation
# --- constraints ---
# demand satisfaction
mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1], 'ct_demand_%s' % prod[0]) for prod in products)
# --- resource capacity ---
mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1], 'ct_res_%s' % res[0]) for res in resources)
mdl.print_information()
Explanation: Express the business constraints
Each product can be produced either inside the company or outside, at a higher cost.
The inside production is constrained by the company's resources, while outside production is considered unlimited.
End of explanation
total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products)
total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products)
mdl.minimize(total_inside_cost + total_outside_cost)
Explanation: Express the objective
Minimizing the production cost for a number of products while satisfying customer demand
End of explanation
mdl.solve()
Explanation: Solve with Decision Optimization
Now we have everything we need to solve the model, using Model.solve(). The following cell solves using your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable).
End of explanation
obj = mdl.objective_value
print("* Production model solved with objective: {:g}".format(obj))
print("* Total inside cost=%g" % total_inside_cost.solution_value)
for p in products:
print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value))
print("* Total outside cost=%g" % total_outside_cost.solution_value)
for p in products:
print("Outside production of {product}: {out_var}".format(product=p[0], out_var=outside_vars[p].solution_value))
Explanation: Step 5: Investigate the solution and then run an example analysis
End of explanation
<END_TASK> |
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example 9.4.1, Beaufait 1977
From, Basic Concepts of Structural Analysis, Beaufait, Fred W., Prentice-Hall, 1977.
Step1: Compare Solution Here with that in the Book | <ASSISTANT_TASK:>
Python Code:
from Frame2D import Frame2D
from Frame2D.Members import Member
# because units are kips, inches
Member.E = 30000. #ksi
Member.G = 11500.
from IPython import display
display.Image('data/Beaufait-9-4-1.d/fig1.jpg')
frame = Frame2D('Beaufait-9-4-1') # Example 9.4.1, p. 460
frame.input_all()
rs = frame.solve()
frame.print_input()
frame.print_results(rs,mult=[1.,1.,1./12.])
Explanation: Example 9.4.1, Beaufait 1977
From: Basic Concepts of Structural Analysis, Beaufait, Fred W., Prentice-Hall, 1977.
End of explanation
import pandas as pd
efs = [('M1',11.77,2.72,33.06,-11.77,-2.72,7.75), # end forces from soln, Beaufait, p 473
('M2',9.40,8.85,-7.83,0.60,15.15,-74.11),
('M3',14.18,5.27,74.10,-14.18,-5.27,57.81)]
BOOK_MEFS = pd.DataFrame(efs,columns='ID,FXJ,FYJ,MZJ,FXK,FYK,MZK'.split(',')).set_index('ID')
BOOK_MEFS[['MZJ','MZK']] *= 12. # convert ft-kips to in-kips
BOOK_MEFS
HERE_MEFS = pd.DataFrame(frame.list_member_end_forces(rs),
columns='ID,FXJ,FYJ,MZJ,FXK,FYK,MZK'.split(',')).set_index('ID')
HERE_MEFS
pdiff = (100*(HERE_MEFS-BOOK_MEFS)/BOOK_MEFS)
pdiff.round(2)
Explanation: Compare Solution Here with that in the Book
End of explanation
<END_TASK> |
15,697 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Goal
Follow-up to
Step1: BD min/max
Step2: Nestly
assuming fragments already simulated
Step3: Nestly params
Step4: Copying input files
Step5: Multi-window HR-SIP
Step6: Making confusion matrices
Step7: Aggregating the confusion matrix data
Step8: --End of simulation--
Plotting results
Step9: Checking that specificity is not always 1 (perfect) | <ASSISTANT_TASK:>
Python Code:
import os
import glob
import itertools
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
Explanation: Goal
Follow-up to: atomIncorp_taxaIncorp
Determining the effect of the 'heavy' BD window (number of windows & window sizes) on HR-SIP accuracy
Apply a sparsity cutoff after selecting the 'heavy' fraction samples
In other words, taxa must be present in most of the 'heavy' fraction samples
Variable parameters:
'heavy' BD window sizes
Init
End of explanation
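As a rough sketch of the sparsity (occurrence) cutoff described above (the real filtering is done by the SIPSimR --occur_heavy flag used below; the function name and layout here are just illustrative), a taxon is kept only if it is detected in at least a given fraction of the 'heavy' window samples:

```python
import numpy as np

def occurrence_filter(counts, min_occur):
    # counts: taxa x 'heavy'-fraction-samples abundance matrix
    # keep a taxon only if it is detected (count > 0) in >= min_occur of the samples
    occur = (counts > 0).mean(axis=1)
    return occur >= min_occur

counts = np.array([[5, 0, 3, 2],    # detected in 3/4 of the 'heavy' samples
                   [0, 0, 1, 0]])   # detected in only 1/4
print(occurrence_filter(counts, 0.5))  # -> [ True False]
```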
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_BD)
print 'Max BD: {}'.format(max_BD)
Explanation: BD min/max
End of explanation
# paths
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/'
buildDir = os.path.join(workDir, 'atomIncorp_taxaIncorp_MW-HR-SIP_postSpar')
dataDir = os.path.join(workDir, 'atomIncorp_taxaIncorp')
if not os.path.isdir(buildDir):
os.makedirs(buildDir)
%cd $buildDir
# making an experimental design file for qSIP
x = range(1,7)
y = ['control', 'treatment']
expDesignFile = os.path.join(buildDir, 'qSIP_exp_design.txt')
with open(expDesignFile, 'wb') as outFH:
for i,z in itertools.izip(x,itertools.cycle(y)):
line = '\t'.join([str(i),z])
outFH.write(line + '\n')
!head $expDesignFile
Explanation: Nestly
assuming fragments already simulated
End of explanation
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('percIncorp', [0, 15, 25, 50, 100])
nest.add('percTaxa', [1, 5, 10, 25, 50])
nest.add('rep', range(1,11))
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('np', [10], create_dir=False)
nest.add('Monte_rep', [100000], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
nest.add('min_BD', [min_BD], create_dir=False)
nest.add('max_BD', [max_BD], create_dir=False)
nest.add('DBL_scaling', [0.5], create_dir=False)
nest.add('bandwidth', [0.8], create_dir=False)
nest.add('heavy_BD_min', [1.71], create_dir=False)
nest.add('heavy_BD_max', [1.75], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('occurs', ['0.0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5'], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('exp_design', [expDesignFile], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
Explanation: Nestly params
End of explanation
files = !find . -name "*.json"
dirs = [os.path.split(x)[0] for x in files]
srcFiles = ['OTU_abs1e9_PCR_sub_w.txt', 'OTU_abs1e9_PCR_sub_meta.txt', 'BD-shift_stats.txt']
for d in dirs:
for f in srcFiles:
f1 = os.path.join(dataDir, d, f)
f2 = os.path.join(buildDir, d, f)
cmd = 'cp -f {} {}'.format(f1, f2)
!$cmd
Explanation: Copying input files
End of explanation
bashFileTmp = os.path.splitext(bashFile)[0] + '_HRSIP_multi.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# phyloseq
## making phyloseq object from OTU table
SIPSimR phyloseq_make \
OTU_abs{abs}_PCR_sub_w.txt \
-s OTU_abs{abs}_PCR_sub_meta.txt \
> OTU_abs{abs}_PCR_sub.physeq
## HR SIP pipeline
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.71-1.75 \
--all OTU_abs1e9_PCR_sub_MW1_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW1_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.71-1.78 \
--all OTU_abs1e9_PCR_sub_MW2_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW2_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.69-1.74,1.73-1.78 \
--all OTU_abs1e9_PCR_sub_MW3_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW3_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.70-1.73,1.72-1.75,1.74-1.77 \
--all OTU_abs1e9_PCR_sub_MW4_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW4_DESeq2
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_heavy {occurs} \
-w 1.69-1.73,1.72-1.76,1.75-1.79 \
--all OTU_abs1e9_PCR_sub_MW5_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW5_DESeq2
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file HR-SIP_multi.log -j 10
%pushnote postSpar MW-HR-SIP complete
Explanation: Multi-window HR-SIP
End of explanation
bashFileTmp = os.path.splitext(bashFile)[0] + '_cMtx.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# HR-SIP multiple 'heavy' BD windows
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW1-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW1_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW2-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW2_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW3-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW3_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW4-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW4_DESeq2
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW5-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW5_DESeq2
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file cMtx.log -j 10
Explanation: Making confusion matrices
End of explanation
def agg_cMtx(prefix):
# all data
#!nestagg delim \
# -d $buildDir \
# -k percIncorp,percTaxa,rep \
# -o $prefix-cMtx_data.txt \
# --tab \
# $prefix-cMtx_data.txt
# overall
x = prefix + '-cMtx_overall.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
# by class
x = prefix + '-cMtx_byClass.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
agg_cMtx('DESeq2_MW1')
agg_cMtx('DESeq2_MW2')
agg_cMtx('DESeq2_MW3')
agg_cMtx('DESeq2_MW4')
agg_cMtx('DESeq2_MW5')
%pushnote postSpar MW-HR-SIP run complete!
Explanation: Aggregating the confusion matrix data
End of explanation
F = os.path.join(buildDir, '*-cMtx_byClass.txt')
files = glob.glob(F)
files
%%R -i files
df_byClass = list()
for (f in files){
ff = strsplit(f, '/') %>% unlist
fff = ff[length(ff)]
df_byClass[[fff]] = read.delim(f, sep='\t')
}
df_byClass = do.call(rbind, df_byClass)
df_byClass$file = gsub('\\.[0-9]+$', '', rownames(df_byClass))
df_byClass$method = gsub('-cMtx.+', '', df_byClass$file)
rownames(df_byClass) = 1:nrow(df_byClass)
df_byClass %>% head(n=3)
%%R
# renaming method
rename = data.frame(method = c('DESeq2_MW1', 'DESeq2_MW2', 'DESeq2_MW3', 'DESeq2_MW4', 'DESeq2_MW5'),
method_new = c('1.71-1.75',
'1.71-1.78',
'1.69-1.74,1.73-1.78',
'1.70-1.73,1.72-1.75,1.74-1.77',
'1.69-1.73,1.72-1.76,1.75-1.79'))
df_byClass = inner_join(df_byClass, rename, c('method'='method')) %>%
select(-method) %>%
rename('method' = method_new)
df_byClass$method = factor(df_byClass$method, levels=rename$method_new %>% as.vector)
df_byClass %>% head(n=3)
%%R -w 800 -h 550
# summarize by SIPSim rep & library rep
df_byClass.s = df_byClass %>%
group_by(method, percIncorp, percTaxa, variables) %>%
summarize(mean_value = mean(values),
sd_value = sd(values))
# plotting
ggplot(df_byClass.s, aes(variables, mean_value, color=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
geom_pointrange(alpha=0.8, size=0.2) +
labs(y='Value') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 850 -h 600
# summarize by SIPSim rep & library rep
vars = c('Balanced Accuracy', 'Sensitivity', 'Specificity')
df_byClass.s.f = df_byClass.s %>%
filter(variables %in% vars)
# plotting
ggplot(df_byClass.s.f, aes(variables, mean_value, fill=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
#geom_pointrange(alpha=0.8, size=0.2) +
geom_bar(stat='identity', position='dodge', width=0.8) +
geom_errorbar(stat='identity', position='dodge', width=0.8) +
scale_y_continuous(breaks=seq(0, 1, 0.2)) +
scale_fill_discrete('"Heavy" BD window(s)') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.y = element_blank()
)
Explanation: --End of simulation--
Plotting results
End of explanation
%%R -h 250 -w 650
df_byClass.sf = df_byClass %>%
filter(variables == 'Specificity')
max_val = max(df_byClass.sf$values, na.rm=TRUE)
ggplot(df_byClass.sf, aes(values)) +
geom_histogram() +
scale_y_log10() +
labs(x='Specificity') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Checking that specificity is not always 1 (perfect)
End of explanation
<END_TASK> |
15,698 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Certamen 2B, TI 2, 2017-1
Leo Ferres & Rodrigo Trigo
UDD
Pregunta 1
Cree la función horaValida(fecha) que devuelva True si el argumento es una hora real, o False si no. Ejemplo, "15
Step1: Pregunta 2
Otra vez el bendito triángulo. Lo pueden hacer o no? Cree una función que tome como argumento el alto de un triángulo equilátero y lo dibuje usando estrellitas. Por ejemplo, para $h=3$, entonces
```
*
```
Pregunta 3
Dado el string de su RUT sin guión ni dígito verificador encuentre la $\sum_{i=1}^{n}d_i+i$, donde $n$ es el largo del string, $d$ es cada dígito, y $d_1$ es el último número del RUT.
Step2: Pregunta 4
Cree dos funciones | <ASSISTANT_TASK:>
Python Code:
##escriba la función aqui##
horaValida('13:00:00')
Explanation: Certamen 2B, TI 2, 2017-1
Leo Ferres & Rodrigo Trigo
UDD
Pregunta 1
Cree la función horaValida(fecha) que devuelva True si el argumento es una hora real, o False si no. Ejemplo, "15:61:01" no es válida. La hora se dará en el siguiente formato: hh:mm:ss. Sugerencia: puede usar la función split() de str. Compruebe que ejecute usando una hora incorrecta.
End of explanation
rut = input("ingrese su rut: ")
##su código va aqui##
Explanation: Pregunta 2
Otra vez el bendito triángulo. Lo pueden hacer o no? Cree una función que tome como argumento el alto de un triángulo equilátero y lo dibuje usando estrellitas. Por ejemplo, para $h=3$, entonces
```
*
```
Pregunta 3
Dado el string de su RUT sin guión ni dígito verificador encuentre la $\sum_{i=1}^{n}d_i+i$, donde $n$ es el largo del string, $d$ es cada dígito, y $d_1$ es el último número del RUT.
End of explanation
import random
random.seed(int(rut))
##su código va aqui##
Explanation: Pregunta 4
Cree dos funciones: 1) tirarMoneda() que devuelva cara (True) o sello (False), 2) la función contar() que retorna cuántas monedas tiró para llegar a 10000 caras.
End of explanation
<END_TASK> |
15,699 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Properties of the Stars
Analyzing Appearance and Properties
<span style="color
Step1: PART 1
Step2: Run the Code below to create a chart containing all of the Stars with Exoplanets within 100 light years!
Step3: <span style="color
Step4: Run the code below to display a chart of Star color and temperatures.
Step5: <span style="color
Step6: Choose a Constellation from the list above and insert the 3 letter code below in the " ", for example "Vir".
Step7: <span style="color | <ASSISTANT_TASK:>
Python Code:
# Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Read in data that will be used for the calculations.
# Using pandas read_csv method, we can create a data frame
data = pd.read_csv("https://github.com/adamlamee/CODINGinK12-data/raw/master/stars.csv")
# We wish to look at the first 3 rows of our data set
data.head(3)
Explanation: Properties of the Stars
Analyzing Appearance and Properties
<span style="color:magenta">Pre-Questions</span>
1. How do absolute magnitude (absmag) and apparent magnitude (mag) differ? How is luminosity related?
2. A star’s appearance gives clues about its properties. Since the nearest star would take thousands of years to reach to measure directly, how are these clues helpful to astronomers?
3. Astronomers have made observations of millions of stars; how might knowing the properties of these stars help them understand things like the composition of different parts of the universe or how solar systems form?
Importing the functions and getting the data...
End of explanation
fig = plt.figure(figsize=(15, 6))
plt.scatter(data.ra,data.dec, s=0.01)
plt.xlim(24, 0)
plt.title("All the Stars in the Catalogue")
plt.xlabel('Right Ascension (Hours)')
plt.ylabel('Declination (Degrees)')
Explanation: PART 1: All the Stars in Our Catalogue
<b>Declination</b> is the distance a star is North or South of the Celestial Equator, similar to <u><i>latitude</i></u> on Earth. <b>Right Ascension</b> is how far east or west a star is, similar to <u><i>longitude</i></u> on Earth.
End of explanation
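Since the catalogue gives right ascension in hours, a quick sketch of the hours-to-degrees conversion can help relate the plot's x-axis to sky coordinates (24 hours of RA span the full 360 degrees, so 1 hour = 15 degrees):

```python
def ra_hours_to_degrees(ra_hours):
    # 24 hours of right ascension cover the full 360 degrees, so 1 h = 15 deg
    return ra_hours * 15.0

print(ra_hours_to_degrees(6))  # 90.0: a star at RA 6h is a quarter of the way around the sky
```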
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'http://www.hpcf.upr.edu/~abel/phl/nearby_stars_with_exoplanets.png')
Explanation: Run the Code below to create a chart containing all of the Stars with Exoplanets within 100 light years!
End of explanation
# format the points on the graph
transparency = 1
size = 1
# draws a scatter plot
fig = plt.figure(figsize=(20, 4.5))
plt.scatter(data.temp, data.lum, s=size, edgecolors='none', alpha=transparency)
plt.xlim(2000,15000)
plt.ylim(0,1000)
plt.title("Does hotter mean brighter?")
plt.ylabel("Luminosity")
plt.xlabel("Temperature (K)")
Explanation: <span style="color:magenta">Part 1 Questions</span>
4. The graph you produced shows right ascension on the x-axis and declination on the y-axis. What does the resulting graph actually show?
5. Your code also produced a chart; how are the chart and graph similar to each other? How are they different?
6. Are there any stars you can identify on both of them? What are their names? What aspect of the graph allowed you to do that? Is there
PART 2: Relationships Between Two Properties
End of explanation
Image(url= 'http://hmxearthscience.com/Galaxies%20and%20Stars/HR%20Lab%202.jpg')
Explanation: Run the code below to display a chart of Star color and temperatures.
End of explanation
# These are the abbreviations for all the constellations
data.sort_values('con').con.unique()
Explanation: <span style="color:magenta">Part 2 Questions</span>
7. Each point on the scatter plot shows a star’s temperature and luminosity. What trends do you see in the data?
8. Around what temperature is the largest range of luminosities? Using the chart below the graph you produced, determine the most likely color. If a star is blue, what temperature can you infer it is?
PART 3: Constellations and Star Properties
End of explanation
# This shows just one constellation
data_con = data.query('con == "Vir"')
# This plots where the brightest 10 stars are in the sky
data_con = data_con.sort_values('mag').head(10)
plt.scatter(data_con.ra,data_con.dec)
plt.gca().invert_xaxis()
plt.title("A constellation in the sky")
plt.xlabel('Right Ascension (Hours)')
plt.ylabel('Declination (Degrees)')
Explanation: Choose a Constellation from the list above and insert the 3 letter code below in the " ", for example "Vir".
End of explanation
# format the points on the graph
transparency = 0.2
size = 1
# draws a scatter plot
fig = plt.figure(figsize=(6, 4.5))
plt.scatter(data.temp, data.absmag, s=size, edgecolors='none', alpha=transparency)
plt.scatter(data_con.temp, data_con.absmag, color='red', edgecolors='none')
plt.xlim(17000,2000)
plt.ylim(18,-18)
plt.title("Types of stars in a constellation")
plt.ylabel("Absolute Magnitude")
plt.xlabel("Temperature (K)")
Explanation: <span style="color:magenta">Part 3 Question 9</span>
9. Which constellation did you choose? How many stars are in it? Is it visible above Miami right now? Is it visible over Sydney?
Use this link to help find your constellation! Can we see your constellation now?
Which types of stars make up your constellation?
End of explanation
<END_TASK> |