Given a list of tuples of new data we get for each sid on each critical date (when information changes), create a DataFrame that fills that data through a date range ending at `end_date`.
def create_expected_df_for_factor_compute(start_date, sids, tuples, end_date): """ Given a list of tuples of new data we get for each sid on each critical date (when information changes), create a DataFrame that fills that data through a date range ending at `end_date`. """ df = pd.DataFrame(tuples, columns=[SID_FIELD_NAME, 'estimate', 'knowledge_date']) df = df.pivot_table(columns=SID_FIELD_NAME, values='estimate', index='knowledge_date') df = df.reindex( pd.date_range(start_date, end_date) ) # Index name is lost during reindex. df.index = df.index.rename('knowledge_date') df['at_date'] = end_date.tz_localize('utc') df = df.set_index(['at_date', df.index.tz_localize('utc')]).ffill() new_sids = set(sids) - set(df.columns) df = df.reindex(columns=df.columns.union(new_sids)) return df
Return an iterator of SomeFactor instances that should all be the same object.
def gen_equivalent_factors(): """ Return an iterator of SomeFactor instances that should all be the same object. """ yield SomeFactor() yield SomeFactor(inputs=NotSpecified) yield SomeFactor(SomeFactor.inputs) yield SomeFactor(inputs=SomeFactor.inputs) yield SomeFactor([SomeDataSet.foo, SomeDataSet.bar]) yield SomeFactor(window_length=SomeFactor.window_length) yield SomeFactor(window_length=NotSpecified) yield SomeFactor( [SomeDataSet.foo, SomeDataSet.bar], window_length=NotSpecified, ) yield SomeFactor( [SomeDataSet.foo, SomeDataSet.bar], window_length=SomeFactor.window_length, ) yield SomeFactorAlias()
Convert a list to a dict with keys drawn from '0', '1', '2', ... Examples -------- >>> to_dict([2, 3, 4]) # doctest: +SKIP {'0': 2, '1': 3, '2': 4}
def to_dict(l):
    """
    Convert a list to a dict with keys drawn from '0', '1', '2', ...

    Examples
    --------
    >>> to_dict([2, 3, 4])  # doctest: +SKIP
    {'0': 2, '1': 3, '2': 4}
    """
    return dict(zip(map(str, range(len(l))), l))
Helper function to improve readability.
def T(s):
    """
    Helper function to improve readability.
    """
    return Timestamp(s, tz='UTC')
Strict version of lazy_mapall.
def mapall(*args):
    "Strict version of lazy_mapall."
    return list(lazy_mapall(*args))
Return an iterator over all the values in d except those stored under key k.
def everything_but(k, d):
    """
    Return an iterator over all the values in d except those stored under
    key k.
    """
    assert k in d
    return concat(itervalues(keyfilter(ne(k), d)))
Assembles a set of custom command line arguments, given in key=value or key.namespace=value form, into a chain of Namespace objects, where each successive level is an attribute of the Namespace object at the level above it. Parameters ---------- args : list A list of strings representing arguments in key=value form root : Namespace The top-level element of the argument tree
def create_args(args, root): """ Encapsulates a set of custom command line arguments in key=value or key.namespace=value form into a chain of Namespace objects, where each next level is an attribute of the Namespace object on the current level Parameters ---------- args : list A list of strings representing arguments in key=value form root : Namespace The top-level element of the argument tree """ extension_args = {} for arg in args: parse_extension_arg(arg, extension_args) for name in sorted(extension_args, key=len): path = name.split('.') update_namespace(root, path, extension_args[name])
Converts argument strings in key=value or key.namespace=value form to dictionary entries Parameters ---------- arg : str The argument string to parse, which must be in key=value or key.namespace=value form. arg_dict : dict The dictionary into which the key/value pair will be added
def parse_extension_arg(arg, arg_dict): """ Converts argument strings in key=value or key.namespace=value form to dictionary entries Parameters ---------- arg : str The argument string to parse, which must be in key=value or key.namespace=value form. arg_dict : dict The dictionary into which the key/value pair will be added """ match = re.match(r'^(([^\d\W]\w*)(\.[^\d\W]\w*)*)=(.*)$', arg) if match is None: raise ValueError( "invalid extension argument '%s', must be in key=value form" % arg ) name = match.group(1) value = match.group(4) arg_dict[name] = value
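For reference, here is a small self-contained sketch of how that regex behaves on dotted keys. The helper name _parse_sketch and the sample arguments are made up for illustration; only the pattern is taken from the function above.
import re

_EXTENSION_ARG_PATTERN = r'^(([^\d\W]\w*)(\.[^\d\W]\w*)*)=(.*)$'  # same pattern as above

def _parse_sketch(arg, arg_dict):
    # Group 1 is the full dotted name, group 4 is everything after the first
    # '=' that terminates the name (values may themselves contain '=').
    match = re.match(_EXTENSION_ARG_PATTERN, arg)
    if match is None:
        raise ValueError("invalid extension argument %r" % arg)
    arg_dict[match.group(1)] = match.group(4)

parsed = {}
for raw in ['first_key=value1', 'second.namespace1.key2=value=with=equals']:
    _parse_sketch(raw, parsed)
print(parsed)
# {'first_key': 'value1', 'second.namespace1.key2': 'value=with=equals'}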
A recursive function that takes a root element, list of namespaces, and the value being stored, and assigns namespaces to the root object via a chain of Namespace objects, connected through attributes Parameters ---------- namespace : Namespace The object onto which an attribute will be added path : list A list of strings representing namespaces name : str The value to be stored at the bottom level
def update_namespace(namespace, path, name): """ A recursive function that takes a root element, list of namespaces, and the value being stored, and assigns namespaces to the root object via a chain of Namespace objects, connected through attributes Parameters ---------- namespace : Namespace The object onto which an attribute will be added path : list A list of strings representing namespaces name : str The value to be stored at the bottom level """ if len(path) == 1: setattr(namespace, path[0], name) else: if hasattr(namespace, path[0]): if isinstance(getattr(namespace, path[0]), six.string_types): raise ValueError("Conflicting assignments at namespace" " level '%s'" % path[0]) else: a = Namespace() setattr(namespace, path[0], a) update_namespace(getattr(namespace, path[0]), path[1:], name)
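The recursion is easier to follow with a toy run. The sketch below is a standalone re-implementation of the same attribute-chaining idea, using argparse.Namespace as a stand-in for the Namespace class referenced above and a plain Python 3 str check in place of six.string_types; both substitutions are assumptions made so the snippet runs on its own.
from argparse import Namespace

def _update_namespace_sketch(namespace, path, value):
    # Walk the dotted path, creating intermediate Namespace objects as needed,
    # and store the value on the final attribute.
    if len(path) == 1:
        setattr(namespace, path[0], value)
    else:
        if not hasattr(namespace, path[0]):
            setattr(namespace, path[0], Namespace())
        elif isinstance(getattr(namespace, path[0]), str):
            raise ValueError(
                "Conflicting assignments at namespace level '%s'" % path[0]
            )
        _update_namespace_sketch(getattr(namespace, path[0]), path[1:], value)

root = Namespace()
_update_namespace_sketch(root, ['first_key'], 'value1')
_update_namespace_sketch(root, ['second', 'namespace1', 'key2'], 'value2')
print(root.first_key)                # value1
print(root.second.namespace1.key2)   # value2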
Getter method for retrieving the registry instance for a given extendable type Parameters ---------- interface : type extendable type (base class) Returns ------- manager : Registry The corresponding registry
def get_registry(interface): """ Getter method for retrieving the registry instance for a given extendable type Parameters ---------- interface : type extendable type (base class) Returns ------- manager : Registry The corresponding registry """ try: return custom_types[interface] except KeyError: raise ValueError("class specified is not an extendable type")
Retrieves a custom class whose name is given. Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the class to be retrieved. Returns ------- obj : object An instance of the desired class.
def load(interface, name): """ Retrieves a custom class whose name is given. Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the class to be retrieved. Returns ------- obj : object An instance of the desired class. """ return get_registry(interface).load(name)
Registers a class for retrieval by the load method Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the subclass custom_class : type The class to register, which must be a subclass of the abstract base class in self.dtype
def register(interface, name, custom_class): """ Registers a class for retrieval by the load method Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the subclass custom_class : type The class to register, which must be a subclass of the abstract base class in self.dtype """ return get_registry(interface).register(name, custom_class)
If a class is registered with the given name, it is unregistered. Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the class to be unregistered.
def unregister(interface, name): """ If a class is registered with the given name, it is unregistered. Parameters ---------- interface : type The base class for which to perform this operation name : str The name of the class to be unregistered. """ get_registry(interface).unregister(name)
Unregisters all currently registered classes. Parameters ---------- interface : type The base class for which to perform this operation
def clear(interface):
    """
    Unregisters all currently registered classes.

    Parameters
    ----------
    interface : type
        The base class for which to perform this operation
    """
    get_registry(interface).clear()
Create a new registry for an extensible interface. Parameters ---------- interface : type The abstract data type for which to create a registry, which will manage registration of factories for this type. Returns ------- interface : type The data type specified/decorated, unaltered.
def create_registry(interface): """ Create a new registry for an extensible interface. Parameters ---------- interface : type The abstract data type for which to create a registry, which will manage registration of factories for this type. Returns ------- interface : type The data type specified/decorated, unaltered. """ if interface in custom_types: raise ValueError('there is already a Registry instance ' 'for the specified type') custom_types[interface] = Registry(interface) return interface
Create a deprecated ``__getitem__`` method that tells users to use getattr instead. Parameters ---------- name : str The name of the object in the warning message. attrs : iterable[str] The set of allowed attributes. Returns ------- __getitem__ : callable[any, str] The ``__getitem__`` method to put in the class dict.
def _deprecated_getitem_method(name, attrs): """Create a deprecated ``__getitem__`` method that tells users to use getattr instead. Parameters ---------- name : str The name of the object in the warning message. attrs : iterable[str] The set of allowed attributes. Returns ------- __getitem__ : callable[any, str] The ``__getitem__`` method to put in the class dict. """ attrs = frozenset(attrs) msg = ( "'{name}[{attr!r}]' is deprecated, please use" " '{name}.{attr}' instead" ) def __getitem__(self, key): """``__getitem__`` is deprecated, please use attribute access instead. """ warn(msg.format(name=name, attr=key), DeprecationWarning, stacklevel=2) if key in attrs: return getattr(self, key) raise KeyError(key) return __getitem__
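A short, hypothetical usage sketch, assuming the factory above is in scope as _deprecated_getitem_method. The Order class and its attributes are invented for illustration; the point is that the generated method goes into a class dict so bracket access keeps working while warning.
import warnings

class Order(object):
    amount = 100
    sid = 8554
    # Bracket access is kept for backwards compatibility but deprecated.
    __getitem__ = _deprecated_getitem_method('order', {'amount', 'sid'})

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    assert Order()['amount'] == 100
    assert issubclass(caught[-1].category, DeprecationWarning)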
Lives in zipline.__init__ for doctests.
def setup(self, np=np, numpy_version=numpy_version, StrictVersion=StrictVersion, new_pandas=new_pandas): """Lives in zipline.__init__ for doctests.""" if numpy_version >= StrictVersion('1.14'): self.old_opts = np.get_printoptions() np.set_printoptions(legacy='1.13') else: self.old_opts = None if new_pandas: self.old_err = np.geterr() # old pandas has numpy compat that sets this np.seterr(all='ignore') else: self.old_err = None
Lives in zipline.__init__ for doctests.
def teardown(self, np=np): """Lives in zipline.__init__ for doctests.""" if self.old_err is not None: np.seterr(**self.old_err) if self.old_opts is not None: np.set_printoptions(**self.old_opts)
Top level zipline entry point.
def main(ctx, extension, strict_extensions, default_extension, x): """Top level zipline entry point. """ # install a logbook handler before performing any other operations logbook.StderrHandler().push_application() create_args(x, zipline.extension_args) load_extensions( default_extension, extension, strict_extensions, os.environ, )
Convert a click.option call into a click.Option object. Parameters ---------- option : decorator A click.option decorator. Returns ------- option_object : click.Option The option object that this decorator will create.
def extract_option_object(option): """Convert a click.option call into a click.Option object. Parameters ---------- option : decorator A click.option decorator. Returns ------- option_object : click.Option The option object that this decorator will create. """ @option def opt(): pass return opt.__click_params__[0]
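For example, assuming click is installed and the helper above is in scope (the option itself is hypothetical):
import click

# Build a decorator without applying it to a real command, then inspect the
# click.Option it would create.
opt = extract_option_object(click.option('--capital-base', type=float, default=1e6))
print(opt.name)     # 'capital_base'
print(opt.default)  # 1000000.0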
Mark that an option should only be exposed in IPython. Parameters ---------- option : decorator A click.option decorator. Returns ------- ipython_only_dec : decorator A decorator that correctly applies the argument even when not using IPython mode.
def ipython_only(option): """Mark that an option should only be exposed in IPython. Parameters ---------- option : decorator A click.option decorator. Returns ------- ipython_only_dec : decorator A decorator that correctly applies the argument even when not using IPython mode. """ if __IPYTHON__: return option argname = extract_option_object(option).name def d(f): @wraps(f) def _(*args, **kwargs): kwargs[argname] = None return f(*args, **kwargs) return _ return d
Run a backtest for the given algorithm.
def run(ctx, algofile, algotext, define, data_frequency, capital_base, bundle, bundle_timestamp, benchmark_file, benchmark_symbol, benchmark_sid, no_benchmark, start, end, output, trading_calendar, print_algo, metrics_set, local_namespace, blotter): """Run a backtest for the given algorithm. """ # check that the start and end dates are passed correctly if start is None and end is None: # check both at the same time to avoid the case where a user # does not pass either of these and then passes the first only # to be told they need to pass the second argument also ctx.fail( "must specify dates with '-s' / '--start' and '-e' / '--end'", ) if start is None: ctx.fail("must specify a start date with '-s' / '--start'") if end is None: ctx.fail("must specify an end date with '-e' / '--end'") if (algotext is not None) == (algofile is not None): ctx.fail( "must specify exactly one of '-f' / '--algofile' or" " '-t' / '--algotext'", ) trading_calendar = get_calendar(trading_calendar) benchmark_spec = BenchmarkSpec.from_cli_params( no_benchmark=no_benchmark, benchmark_sid=benchmark_sid, benchmark_symbol=benchmark_symbol, benchmark_file=benchmark_file, ) return _run( initialize=None, handle_data=None, before_trading_start=None, analyze=None, algofile=algofile, algotext=algotext, defines=define, data_frequency=data_frequency, capital_base=capital_base, bundle=bundle, bundle_timestamp=bundle_timestamp, start=start, end=end, output=output, trading_calendar=trading_calendar, print_algo=print_algo, metrics_set=metrics_set, local_namespace=local_namespace, environ=os.environ, blotter=blotter, benchmark_spec=benchmark_spec, )
The zipline IPython cell magic.
def zipline_magic(line, cell=None): """The zipline IPython cell magic. """ load_extensions( default=True, extensions=[], strict=True, environ=os.environ, ) try: return run.main( # put our overrides at the start of the parameter list so that # users may pass values with higher precedence [ '--algotext', cell, '--output', os.devnull, # don't write the results by default ] + ([ # these options are set when running in line magic mode # set a non None algo text to use the ipython user_ns '--algotext', '', '--local-namespace', ] if cell is None else []) + line.split(), '%s%%zipline' % ((cell or '') and '%'), # don't use system exit and propogate errors to the caller standalone_mode=False, ) except SystemExit as e: # https://github.com/mitsuhiko/click/pull/533 # even in standalone_mode=False `--help` really wants to kill us ;_; if e.code: raise ValueError('main returned non-zero status code: %d' % e.code)
Ingest the data for the given bundle.
def ingest(bundle, assets_version, show_progress): """Ingest the data for the given bundle. """ bundles_module.ingest( bundle, os.environ, pd.Timestamp.utcnow(), assets_version, show_progress, )
Clean up data downloaded with the ingest command.
def clean(bundle, before, after, keep_last): """Clean up data downloaded with the ingest command. """ bundles_module.clean( bundle, before, after, keep_last, )
List all of the available data bundles.
def bundles(): """List all of the available data bundles. """ for bundle in sorted(bundles_module.bundles.keys()): if bundle.startswith('.'): # hide the test data continue try: ingestions = list( map(text_type, bundles_module.ingestions_for_bundle(bundle)) ) except OSError as e: if e.errno != errno.ENOENT: raise ingestions = [] # If we got no ingestions, either because the directory didn't exist or # because there were no entries, print a single message indicating that # no ingestions have yet been made. for timestamp in ingestions or ["<no ingestions>"]: click.echo("%s %s" % (bundle, timestamp))
Given a dict of mappings where the values are lists of OwnershipPeriod objects, returns a dict with the same structure with new OwnershipPeriod objects adjusted so that the periods have no gaps. Orders the periods chronologically, and pushes forward the end date of each period to match the start date of the following period. The end date of the last period is pushed forward to the max Timestamp.
def merge_ownership_periods(mappings): """ Given a dict of mappings where the values are lists of OwnershipPeriod objects, returns a dict with the same structure with new OwnershipPeriod objects adjusted so that the periods have no gaps. Orders the periods chronologically, and pushes forward the end date of each period to match the start date of the following period. The end date of the last period pushed forward to the max Timestamp. """ return valmap( lambda v: tuple( OwnershipPeriod( a.start, b.start, a.sid, a.value, ) for a, b in sliding_window( 2, concatv( sorted(v), # concat with a fake ownership object to make the last # end date be max timestamp [OwnershipPeriod( pd.Timestamp.max.tz_localize('utc'), None, None, None, )], ), ) ), mappings, )
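An illustrative, standalone sketch of the gap-filling idea using the same toolz primitives. A locally defined namedtuple stands in for zipline's OwnershipPeriod record, and the sample sid/value data is made up.
from collections import namedtuple

import pandas as pd
from toolz import concatv, sliding_window

# Stand-in for zipline's OwnershipPeriod record.
OwnershipPeriod = namedtuple('OwnershipPeriod', 'start end sid value')

periods = [
    OwnershipPeriod(pd.Timestamp('2015-01-01', tz='utc'),
                    pd.Timestamp('2015-01-05', tz='utc'), 1, 'AAPL'),
    OwnershipPeriod(pd.Timestamp('2015-02-01', tz='utc'),
                    pd.Timestamp('2015-02-03', tz='utc'), 1, 'AAPL1'),
]

# Pair each period with its successor (plus a sentinel whose start is
# Timestamp.max) and stretch each end date to the next start date.
sentinel = OwnershipPeriod(pd.Timestamp.max.tz_localize('utc'), None, None, None)
merged = [
    OwnershipPeriod(a.start, b.start, a.sid, a.value)
    for a, b in sliding_window(2, concatv(sorted(periods), [sentinel]))
]
for p in merged:
    print(p.start.date(), '->', p.end.date(), p.value)
# 2015-01-01 -> 2015-02-01 AAPL
# 2015-02-01 -> 2262-04-11 AAPL1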
Builds a dict mapping to lists of OwnershipPeriods, from a db table.
def build_ownership_map(table, key_from_row, value_from_row):
    """
    Builds a dict mapping to lists of OwnershipPeriods, from a db table.
    """
    return _build_ownership_map_from_rows(
        sa.select(table.c).execute().fetchall(),
        key_from_row,
        value_from_row,
    )
Builds a dict mapping group keys to maps of keys to lists of OwnershipPeriods, from a db table.
def build_grouped_ownership_map(table, key_from_row, value_from_row, group_key): """ Builds a dict mapping group keys to maps of keys to lists of OwnershipPeriods, from a db table. """ grouped_rows = groupby( group_key, sa.select(table.c).execute().fetchall(), ) return { key: _build_ownership_map_from_rows( rows, key_from_row, value_from_row, ) for key, rows in grouped_rows.items() }
Filter out kwargs from a dictionary. Parameters ---------- names : set[str] The names to select from ``dict_``. dict_ : dict[str, any] The dictionary to select from. Returns ------- kwargs : dict[str, any] ``dict_`` where the keys intersect with ``names`` and the values are not None.
def _filter_kwargs(names, dict_): """Filter out kwargs from a dictionary. Parameters ---------- names : set[str] The names to select from ``dict_``. dict_ : dict[str, any] The dictionary to select from. Returns ------- kwargs : dict[str, any] ``dict_`` where the keys intersect with ``names`` and the values are not None. """ return {k: v for k, v in dict_.items() if k in names and v is not None}
Takes in a dict of Asset init args and converts dates to pd.Timestamps
def _convert_asset_timestamp_fields(dict_):
    """
    Takes in a dict of Asset init args and converts dates to pd.Timestamps
    """
    for key in _asset_timestamp_fields & viewkeys(dict_):
        value = pd.Timestamp(dict_[key], tz='UTC')
        dict_[key] = None if isnull(value) else value
    return dict_
Whether or not `asset` was active at the time corresponding to `reference_date_value`. Parameters ---------- reference_date_value : int Date, represented as nanoseconds since EPOCH, for which we want to know if `asset` was alive. This is generally the result of accessing the `value` attribute of a pandas Timestamp. asset : Asset The asset object to check. Returns ------- was_active : bool Whether or not the `asset` existed at the specified time.
def was_active(reference_date_value, asset): """ Whether or not `asset` was active at the time corresponding to `reference_date_value`. Parameters ---------- reference_date_value : int Date, represented as nanoseconds since EPOCH, for which we want to know if `asset` was alive. This is generally the result of accessing the `value` attribute of a pandas Timestamp. asset : Asset The asset object to check. Returns ------- was_active : bool Whether or not the `asset` existed at the specified time. """ return ( asset.start_date.value <= reference_date_value <= asset.end_date.value )
Filter an iterable of Asset objects down to just assets that were alive at the time corresponding to `reference_date_value`. Parameters ---------- reference_date_value : int Date, represented as nanoseconds since EPOCH, for which we want to know if `asset` was alive. This is generally the result of accessing the `value` attribute of a pandas Timestamp. assets : iterable[Asset] The assets to filter. Returns ------- active_assets : list List of the active assets from `assets` on the requested date.
def only_active_assets(reference_date_value, assets): """ Filter an iterable of Asset objects down to just assets that were alive at the time corresponding to `reference_date_value`. Parameters ---------- reference_date_value : int Date, represented as nanoseconds since EPOCH, for which we want to know if `asset` was alive. This is generally the result of accessing the `value` attribute of a pandas Timestamp. assets : iterable[Asset] The assets to filter. Returns ------- active_assets : list List of the active assets from `assets` on the requested date. """ return [a for a in assets if was_active(reference_date_value, a)]
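A minimal sketch, assuming was_active and only_active_assets are in scope. FakeAsset is a hypothetical stand-in for Asset; anything exposing start_date/end_date as pandas Timestamps works, since the check only uses their .value nanoseconds.
from collections import namedtuple

import pandas as pd

FakeAsset = namedtuple('FakeAsset', 'sid start_date end_date')

assets = [
    FakeAsset(1, pd.Timestamp('2014-01-02'), pd.Timestamp('2014-12-31')),
    FakeAsset(2, pd.Timestamp('2015-06-01'), pd.Timestamp('2016-06-01')),
]
reference_date_value = pd.Timestamp('2014-06-30').value  # nanoseconds since epoch

print([a.sid for a in only_active_assets(reference_date_value, assets)])  # [1]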
Alter columns from a table. Parameters ---------- name : str The name of the table. *columns The new columns to have. selection_string : str, optional The string to use in the selection. If not provided, it will select all of the new columns from the old table. Notes ----- The columns are passed explicitly because this should only be used in a downgrade where ``zipline.assets.asset_db_schema`` could change.
def alter_columns(op, name, *columns, **kwargs): """Alter columns from a table. Parameters ---------- name : str The name of the table. *columns The new columns to have. selection_string : str, optional The string to use in the selection. If not provided, it will select all of the new columns from the old table. Notes ----- The columns are passed explicitly because this should only be used in a downgrade where ``zipline.assets.asset_db_schema`` could change. """ selection_string = kwargs.pop('selection_string', None) if kwargs: raise TypeError( 'alter_columns received extra arguments: %r' % sorted(kwargs), ) if selection_string is None: selection_string = ', '.join(column.name for column in columns) tmp_name = '_alter_columns_' + name op.rename_table(name, tmp_name) for column in columns: # Clear any indices that already exist on this table, otherwise we will # fail to create the table because the indices will already be present. # When we create the table below, the indices that we want to preserve # will just get recreated. for table in name, tmp_name: try: op.drop_index('ix_%s_%s' % (table, column.name)) except sa.exc.OperationalError: pass op.create_table(name, *columns) op.execute( 'insert into %s select %s from %s' % ( name, selection_string, tmp_name, ), ) op.drop_table(tmp_name)
Downgrades the assets db at the given engine to the desired version. Parameters ---------- engine : Engine An SQLAlchemy engine to the assets database. desired_version : int The desired resulting version for the assets database.
def downgrade(engine, desired_version): """Downgrades the assets db at the given engine to the desired version. Parameters ---------- engine : Engine An SQLAlchemy engine to the assets database. desired_version : int The desired resulting version for the assets database. """ # Check the version of the db at the engine with engine.begin() as conn: metadata = sa.MetaData(conn) metadata.reflect() version_info_table = metadata.tables['version_info'] starting_version = sa.select((version_info_table.c.version,)).scalar() # Check for accidental upgrade if starting_version < desired_version: raise AssetDBImpossibleDowngrade(db_version=starting_version, desired_version=desired_version) # Check if the desired version is already the db version if starting_version == desired_version: # No downgrade needed return # Create alembic context ctx = MigrationContext.configure(conn) op = Operations(ctx) # Integer keys of downgrades to run # E.g.: [5, 4, 3, 2] would downgrade v6 to v2 downgrade_keys = range(desired_version, starting_version)[::-1] # Disable foreign keys until all downgrades are complete _pragma_foreign_keys(conn, False) # Execute the downgrades in order for downgrade_key in downgrade_keys: _downgrade_methods[downgrade_key](op, conn, version_info_table) # Re-enable foreign keys _pragma_foreign_keys(conn, True)
Sets the PRAGMA foreign_keys state of the SQLite database. Disabling the pragma allows for batch modification of tables with foreign keys. Parameters ---------- connection : Connection A SQLAlchemy connection to the db on : bool If true, PRAGMA foreign_keys will be set to ON. Otherwise, the PRAGMA foreign_keys will be set to OFF.
def _pragma_foreign_keys(connection, on): """Sets the PRAGMA foreign_keys state of the SQLite database. Disabling the pragma allows for batch modification of tables with foreign keys. Parameters ---------- connection : Connection A SQLAlchemy connection to the db on : bool If true, PRAGMA foreign_keys will be set to ON. Otherwise, the PRAGMA foreign_keys will be set to OFF. """ connection.execute("PRAGMA foreign_keys=%s" % ("ON" if on else "OFF"))
Decorator for marking that a method is a downgrade to a version to the previous version. Parameters ---------- src : int The version this downgrades from. Returns ------- decorator : callable[(callable) -> callable] The decorator to apply.
def downgrades(src): """Decorator for marking that a method is a downgrade to a version to the previous version. Parameters ---------- src : int The version this downgrades from. Returns ------- decorator : callable[(callable) -> callable] The decorator to apply. """ def _(f): destination = src - 1 @do(operator.setitem(_downgrade_methods, destination)) @wraps(f) def wrapper(op, conn, version_info_table): conn.execute(version_info_table.delete()) # clear the version f(op) write_version_info(conn, version_info_table, destination) return wrapper return _
Downgrade assets db by removing the 'tick_size' column and renaming the 'multiplier' column.
def _downgrade_v1(op): """ Downgrade assets db by removing the 'tick_size' column and renaming the 'multiplier' column. """ # Drop indices before batch # This is to prevent index collision when creating the temp table op.drop_index('ix_futures_contracts_root_symbol') op.drop_index('ix_futures_contracts_symbol') # Execute batch op to allow column modification in SQLite with op.batch_alter_table('futures_contracts') as batch_op: # Rename 'multiplier' batch_op.alter_column(column_name='multiplier', new_column_name='contract_multiplier') # Delete 'tick_size' batch_op.drop_column('tick_size') # Recreate indices after batch op.create_index('ix_futures_contracts_root_symbol', table_name='futures_contracts', columns=['root_symbol']) op.create_index('ix_futures_contracts_symbol', table_name='futures_contracts', columns=['symbol'], unique=True)
Downgrade assets db by removing the 'auto_close_date' column.
def _downgrade_v2(op): """ Downgrade assets db by removing the 'auto_close_date' column. """ # Drop indices before batch # This is to prevent index collision when creating the temp table op.drop_index('ix_equities_fuzzy_symbol') op.drop_index('ix_equities_company_symbol') # Execute batch op to allow column modification in SQLite with op.batch_alter_table('equities') as batch_op: batch_op.drop_column('auto_close_date') # Recreate indices after batch op.create_index('ix_equities_fuzzy_symbol', table_name='equities', columns=['fuzzy_symbol']) op.create_index('ix_equities_company_symbol', table_name='equities', columns=['company_symbol'])
Downgrade assets db by adding a not null constraint on ``equities.first_traded``
def _downgrade_v3(op): """ Downgrade assets db by adding a not null constraint on ``equities.first_traded`` """ op.create_table( '_new_equities', sa.Column( 'sid', sa.Integer, unique=True, nullable=False, primary_key=True, ), sa.Column('symbol', sa.Text), sa.Column('company_symbol', sa.Text), sa.Column('share_class_symbol', sa.Text), sa.Column('fuzzy_symbol', sa.Text), sa.Column('asset_name', sa.Text), sa.Column('start_date', sa.Integer, default=0, nullable=False), sa.Column('end_date', sa.Integer, nullable=False), sa.Column('first_traded', sa.Integer, nullable=False), sa.Column('auto_close_date', sa.Integer), sa.Column('exchange', sa.Text), ) op.execute( """ insert into _new_equities select * from equities where equities.first_traded is not null """, ) op.drop_table('equities') op.rename_table('_new_equities', 'equities') # we need to make sure the indices have the proper names after the rename op.create_index( 'ix_equities_company_symbol', 'equities', ['company_symbol'], ) op.create_index( 'ix_equities_fuzzy_symbol', 'equities', ['fuzzy_symbol'], )
Downgrades assets db by copying the `exchange_full` column to `exchange`, then dropping the `exchange_full` column.
def _downgrade_v4(op): """ Downgrades assets db by copying the `exchange_full` column to `exchange`, then dropping the `exchange_full` column. """ op.drop_index('ix_equities_fuzzy_symbol') op.drop_index('ix_equities_company_symbol') op.execute("UPDATE equities SET exchange = exchange_full") with op.batch_alter_table('equities') as batch_op: batch_op.drop_column('exchange_full') op.create_index('ix_equities_fuzzy_symbol', table_name='equities', columns=['fuzzy_symbol']) op.create_index('ix_equities_company_symbol', table_name='equities', columns=['company_symbol'])
Update dataframes in place to set identifier columns as indices. For each input frame, if the frame has a column with the same name as its associated index column, set that column as the index. Otherwise, assume the index already contains identifiers. If frames are passed as None, they're ignored.
def _normalize_index_columns_in_place(equities,
                                      equity_supplementary_mappings,
                                      futures,
                                      exchanges,
                                      root_symbols):
    """
    Update dataframes in place to set identifier columns as indices.

    For each input frame, if the frame has a column with the same name as its
    associated index column, set that column as the index. Otherwise, assume
    the index already contains identifiers.

    If frames are passed as None, they're ignored.
    """
    for frame, column_name in ((equities, 'sid'),
                               (equity_supplementary_mappings, 'sid'),
                               (futures, 'sid'),
                               (exchanges, 'exchange'),
                               (root_symbols, 'root_symbol')):
        if frame is not None and column_name in frame:
            frame.set_index(column_name, inplace=True)
Takes in a symbol that may be delimited and splits it into a company symbol and a share class symbol. Parameters ---------- symbol : str The possibly-delimited symbol to be split Returns ------- company_symbol : str The company part of the symbol. share_class_symbol : str The share class part of a symbol.
def split_delimited_symbol(symbol): """ Takes in a symbol that may be delimited and splits it in to a company symbol and share class symbol. Also returns the fuzzy symbol, which is the symbol without any fuzzy characters at all. Parameters ---------- symbol : str The possibly-delimited symbol to be split Returns ------- company_symbol : str The company part of the symbol. share_class_symbol : str The share class part of a symbol. """ # return blank strings for any bad fuzzy symbols, like NaN or None if symbol in _delimited_symbol_default_triggers: return '', '' symbol = symbol.upper() split_list = re.split( pattern=_delimited_symbol_delimiters_regex, string=symbol, maxsplit=1, ) # Break the list up in to its two components, the company symbol and the # share class symbol company_symbol = split_list[0] if len(split_list) > 1: share_class_symbol = split_list[1] else: share_class_symbol = '' return company_symbol, share_class_symbol
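A standalone sketch of the split behaviour. The real delimiter regex and default triggers live in module-level constants (_delimited_symbol_delimiters_regex, _delimited_symbol_default_triggers); here a delimiter set of '.', '/', '-', '_' is assumed for illustration.
import re

_DELIMITERS_SKETCH = re.compile(r'[./\-_]')  # assumed delimiter set

def _split_delimited_symbol_sketch(symbol):
    # Treat None, NaN, and empty strings as "no symbol".
    if not isinstance(symbol, str) or not symbol:
        return '', ''
    parts = _DELIMITERS_SKETCH.split(symbol.upper(), maxsplit=1)
    company_symbol = parts[0]
    share_class_symbol = parts[1] if len(parts) > 1 else ''
    return company_symbol, share_class_symbol

print(_split_delimited_symbol_sketch('BRK.A'))   # ('BRK', 'A')
print(_split_delimited_symbol_sketch('brk_a'))   # ('BRK', 'A')
print(_split_delimited_symbol_sketch('AAPL'))    # ('AAPL', '')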
Generates an output dataframe from the given subset of user-provided data, the given column names, and the given default values. Parameters ---------- data_subset : DataFrame A DataFrame, usually from an AssetData object, that contains the user's input metadata for the asset type being processed defaults : dict A dict where the keys are the names of the columns of the desired output DataFrame and the values are a function from dataframe and column name to the default values to insert in the DataFrame if no user data is provided Returns ------- DataFrame A DataFrame containing all user-provided metadata, and default values wherever user-provided metadata was missing
def _generate_output_dataframe(data_subset, defaults): """ Generates an output dataframe from the given subset of user-provided data, the given column names, and the given default values. Parameters ---------- data_subset : DataFrame A DataFrame, usually from an AssetData object, that contains the user's input metadata for the asset type being processed defaults : dict A dict where the keys are the names of the columns of the desired output DataFrame and the values are a function from dataframe and column name to the default values to insert in the DataFrame if no user data is provided Returns ------- DataFrame A DataFrame containing all user-provided metadata, and default values wherever user-provided metadata was missing """ # The columns provided. cols = set(data_subset.columns) desired_cols = set(defaults) # Drop columns with unrecognised headers. data_subset.drop(cols - desired_cols, axis=1, inplace=True) # Get those columns which we need but # for which no data has been supplied. for col in desired_cols - cols: # write the default value for any missing columns data_subset[col] = defaults[col](data_subset, col) return data_subset
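A toy sketch of the defaults behaviour, assuming the function above is in scope. The defaults dict below is hypothetical; each entry is a callable of (dataframe, column_name), per the docstring. Note that the function drops unknown columns from its input in place.
import pandas as pd

defaults_sketch = {
    'symbol': lambda df, col: None,
    'exchange': lambda df, col: 'TEST',
    'start_date': lambda df, col: pd.Timestamp(0),
}

user_data = pd.DataFrame({'symbol': ['A', 'B'], 'unknown_col': [1, 2]})
result = _generate_output_dataframe(user_data, defaults_sketch)
print(sorted(result.columns))        # ['exchange', 'start_date', 'symbol']
print(result['exchange'].tolist())   # ['TEST', 'TEST'] -- filled from the default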
Check that there are no cases where multiple symbols resolve to the same asset at the same time in the same country. Parameters ---------- df : pd.DataFrame The equity symbol mappings table. exchanges : pd.DataFrame The exchanges table. asset_exchange : pd.Series A series that maps sids to the exchange the asset is in. Raises ------ ValueError Raised when there are ambiguous symbol mappings.
def _check_symbol_mappings(df, exchanges, asset_exchange): """Check that there are no cases where multiple symbols resolve to the same asset at the same time in the same country. Parameters ---------- df : pd.DataFrame The equity symbol mappings table. exchanges : pd.DataFrame The exchanges table. asset_exchange : pd.Series A series that maps sids to the exchange the asset is in. Raises ------ ValueError Raised when there are ambiguous symbol mappings. """ mappings = df.set_index('sid')[list(mapping_columns)].copy() mappings['country_code'] = exchanges['country_code'][ asset_exchange.loc[df['sid']] ].values ambigious = {} def check_intersections(persymbol): intersections = list(intersecting_ranges(map( from_tuple, zip(persymbol.start_date, persymbol.end_date), ))) if intersections: data = persymbol[ ['start_date', 'end_date'] ].astype('datetime64[ns]') # indent the dataframe string, also compute this early because # ``persymbol`` is a view and ``astype`` doesn't copy the index # correctly in pandas 0.22 msg_component = '\n '.join(str(data).splitlines()) ambigious[persymbol.name] = intersections, msg_component mappings.groupby(['symbol', 'country_code']).apply(check_intersections) if ambigious: raise ValueError( 'Ambiguous ownership for %d symbol%s, multiple assets held the' ' following symbols:\n%s' % ( len(ambigious), '' if len(ambigious) == 1 else 's', '\n'.join( '%s (%s):\n intersections: %s\n %s' % ( symbol, country_code, tuple(map(_format_range, intersections)), cs, ) for (symbol, country_code), (intersections, cs) in sorted( ambigious.items(), key=first, ) ), ) )
Split out the symbol: sid mappings from the raw data. Parameters ---------- df : pd.DataFrame The dataframe with multiple rows for each symbol: sid pair. exchanges : pd.DataFrame The exchanges table. Returns ------- asset_info : pd.DataFrame The asset info with one row per asset. symbol_mappings : pd.DataFrame The dataframe of just symbol: sid mappings. The index will be the sid, then there will be three columns: symbol, start_date, and end_date.
def _split_symbol_mappings(df, exchanges): """Split out the symbol: sid mappings from the raw data. Parameters ---------- df : pd.DataFrame The dataframe with multiple rows for each symbol: sid pair. exchanges : pd.DataFrame The exchanges table. Returns ------- asset_info : pd.DataFrame The asset info with one row per asset. symbol_mappings : pd.DataFrame The dataframe of just symbol: sid mappings. The index will be the sid, then there will be three columns: symbol, start_date, and end_date. """ mappings = df[list(mapping_columns)] with pd.option_context('mode.chained_assignment', None): mappings['sid'] = mappings.index mappings.reset_index(drop=True, inplace=True) # take the most recent sid->exchange mapping based on end date asset_exchange = df[ ['exchange', 'end_date'] ].sort_values('end_date').groupby(level=0)['exchange'].nth(-1) _check_symbol_mappings(mappings, exchanges, asset_exchange) return ( df.groupby(level=0).apply(_check_asset_group), mappings, )
Convert a timeseries into an Int64Index of nanoseconds since the epoch. Parameters ---------- dt_series : pd.Series The timeseries to convert. Returns ------- idx : pd.Int64Index The index converted to nanoseconds since the epoch.
def _dt_to_epoch_ns(dt_series): """Convert a timeseries into an Int64Index of nanoseconds since the epoch. Parameters ---------- dt_series : pd.Series The timeseries to convert. Returns ------- idx : pd.Int64Index The index converted to nanoseconds since the epoch. """ index = pd.to_datetime(dt_series.values) if index.tzinfo is None: index = index.tz_localize('UTC') else: index = index.tz_convert('UTC') return index.view(np.int64)
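A quick pandas/numpy sketch that mirrors the conversion with no zipline imports, showing the UTC normalization and the int64 nanosecond view:
import numpy as np
import pandas as pd

dt_series = pd.Series(['2014-01-01', '2014-01-02 12:00'])
index = pd.to_datetime(dt_series.values)
if index.tzinfo is None:
    index = index.tz_localize('UTC')
else:
    index = index.tz_convert('UTC')
print(index.view(np.int64))
# [1388534400000000000 1388664000000000000]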
Checks for a version value in the version table. Parameters ---------- conn : sa.Connection The connection to use to perform the check. version_table : sa.Table The version table of the asset database expected_version : int The expected version of the asset database Raises ------ AssetDBVersionError If the version is in the table and not equal to expected_version.
def check_version_info(conn, version_table, expected_version): """ Checks for a version value in the version table. Parameters ---------- conn : sa.Connection The connection to use to perform the check. version_table : sa.Table The version table of the asset database expected_version : int The expected version of the asset database Raises ------ AssetDBVersionError If the version is in the table and not equal to ASSET_DB_VERSION. """ # Read the version out of the table version_from_table = conn.execute( sa.select((version_table.c.version,)), ).scalar() # A db without a version is considered v0 if version_from_table is None: version_from_table = 0 # Raise an error if the versions do not match if (version_from_table != expected_version): raise AssetDBVersionError(db_version=version_from_table, expected_version=expected_version)
Inserts the version value into the version table. Parameters ---------- conn : sa.Connection The connection to use to execute the insert. version_table : sa.Table The version table of the asset database version_value : int The version to write into the database
def write_version_info(conn, version_table, version_value):
    """
    Inserts the version value into the version table.

    Parameters
    ----------
    conn : sa.Connection
        The connection to use to execute the insert.
    version_table : sa.Table
        The version table of the asset database
    version_value : int
        The version to write into the database
    """
    conn.execute(sa.insert(version_table, values={'version': version_value}))
Create a DataFrame representing lifetimes of assets that are constantly rotating in and out of existence. Parameters ---------- num_assets : int How many assets to create. first_start : pd.Timestamp The start date for the first asset. frequency : str or pd.tseries.offsets.Offset (e.g. trading_day) Frequency used to interpret next two arguments. periods_between_starts : int Create a new asset every `frequency` * `periods_between_starts` asset_lifetime : int Each asset exists for `frequency` * `asset_lifetime` days. exchange : str, optional The exchange name. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets.
def make_rotating_equity_info(num_assets, first_start, frequency, periods_between_starts, asset_lifetime, exchange='TEST'): """ Create a DataFrame representing lifetimes of assets that are constantly rotating in and out of existence. Parameters ---------- num_assets : int How many assets to create. first_start : pd.Timestamp The start date for the first asset. frequency : str or pd.tseries.offsets.Offset (e.g. trading_day) Frequency used to interpret next two arguments. periods_between_starts : int Create a new asset every `frequency` * `periods_between_new` asset_lifetime : int Each asset exists for `frequency` * `asset_lifetime` days. exchange : str, optional The exchange name. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets. """ return pd.DataFrame( { 'symbol': [chr(ord('A') + i) for i in range(num_assets)], # Start a new asset every `periods_between_starts` days. 'start_date': pd.date_range( first_start, freq=(periods_between_starts * frequency), periods=num_assets, ), # Each asset lasts for `asset_lifetime` days. 'end_date': pd.date_range( first_start + (asset_lifetime * frequency), freq=(periods_between_starts * frequency), periods=num_assets, ), 'exchange': exchange, }, index=range(num_assets), )
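A hypothetical usage sketch, assuming the helper above is in scope. The parameter values are invented: five assets, a new one starting every other business day, each alive for four business days.
import pandas as pd

info = make_rotating_equity_info(
    num_assets=5,
    first_start=pd.Timestamp('2015-01-05'),   # a Monday
    frequency=pd.offsets.BDay(),
    periods_between_starts=2,
    asset_lifetime=4,
)
print(info[['symbol', 'start_date', 'end_date']])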
Create a DataFrame representing assets that exist for the full duration between `start_date` and `end_date`. Parameters ---------- sids : array-like of int start_date : pd.Timestamp end_date : pd.Timestamp symbols : list, optional Symbols to use for the assets. If not provided, symbols are generated from the sequence 'A', 'B', ... names : list, optional Names to use for the assets. If not provided, names are generated by adding " INC." to each of the symbols (which might also be auto-generated). exchange : str, optional The exchange name. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets.
def make_simple_equity_info(sids, start_date, end_date, symbols=None, names=None, exchange='TEST'): """ Create a DataFrame representing assets that exist for the full duration between `start_date` and `end_date`. Parameters ---------- sids : array-like of int start_date : pd.Timestamp, optional end_date : pd.Timestamp, optional symbols : list, optional Symbols to use for the assets. If not provided, symbols are generated from the sequence 'A', 'B', ... names : list, optional Names to use for the assets. If not provided, names are generated by adding " INC." to each of the symbols (which might also be auto-generated). exchange : str, optional The exchange name. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets. """ num_assets = len(sids) if symbols is None: symbols = list(ascii_uppercase[:num_assets]) else: symbols = list(symbols) if names is None: names = [str(s) + " INC." for s in symbols] return pd.DataFrame( { 'symbol': symbols, 'start_date': pd.to_datetime([start_date] * num_assets), 'end_date': pd.to_datetime([end_date] * num_assets), 'asset_name': list(names), 'exchange': exchange, }, index=sids, columns=( 'start_date', 'end_date', 'symbol', 'exchange', 'asset_name', ), )
Create a DataFrame representing assets that exist for the full duration between `start_date` and `end_date`, from multiple countries.
def make_simple_multi_country_equity_info(countries_to_sids, countries_to_exchanges, start_date, end_date): """Create a DataFrame representing assets that exist for the full duration between `start_date` and `end_date`, from multiple countries. """ sids = [] symbols = [] exchanges = [] for country, country_sids in countries_to_sids.items(): exchange = countries_to_exchanges[country] for i, sid in enumerate(country_sids): sids.append(sid) symbols.append('-'.join([country, str(i)])) exchanges.append(exchange) return pd.DataFrame( { 'symbol': symbols, 'start_date': start_date, 'end_date': end_date, 'asset_name': symbols, 'exchange': exchanges, }, index=sids, columns=( 'start_date', 'end_date', 'symbol', 'exchange', 'asset_name', ), )
Create a DataFrame representing assets that all begin at the same start date, but have cascading end dates. Parameters ---------- num_assets : int How many assets to create. start_date : pd.Timestamp The start date for all the assets. first_end : pd.Timestamp The date at which the first equity will end. frequency : str or pd.tseries.offsets.Offset (e.g. trading_day) Frequency used to interpret the next argument. periods_between_ends : int Starting after the first end date, end each asset every `frequency` * `periods_between_ends`. auto_close_delta : pd.Timedelta or None Offset added to each end date to produce the auto_close_date column; pass None to omit that column. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets.
def make_jagged_equity_info(num_assets, start_date, first_end, frequency, periods_between_ends, auto_close_delta): """ Create a DataFrame representing assets that all begin at the same start date, but have cascading end dates. Parameters ---------- num_assets : int How many assets to create. start_date : pd.Timestamp The start date for all the assets. first_end : pd.Timestamp The date at which the first equity will end. frequency : str or pd.tseries.offsets.Offset (e.g. trading_day) Frequency used to interpret the next argument. periods_between_ends : int Starting after the first end date, end each asset every `frequency` * `periods_between_ends`. Returns ------- info : pd.DataFrame DataFrame representing newly-created assets. """ frame = pd.DataFrame( { 'symbol': [chr(ord('A') + i) for i in range(num_assets)], 'start_date': start_date, 'end_date': pd.date_range( first_end, freq=(periods_between_ends * frequency), periods=num_assets, ), 'exchange': 'TEST', }, index=range(num_assets), ) # Explicitly pass None to disable setting the auto_close_date column. if auto_close_delta is not None: frame['auto_close_date'] = frame['end_date'] + auto_close_delta return frame
Create a DataFrame representing futures for `root_symbols` during `years`. Generates a contract per triple of (symbol, year, month) supplied to `root_symbols`, `years`, and `month_codes`. Parameters ---------- first_sid : int The first sid to use for assigning sids to the created contracts. root_symbols : list[str] A list of root symbols for which to create futures. years : list[int or str] Years (e.g. 2014), for which to produce individual contracts. notice_date_func : (Timestamp) -> Timestamp Function to generate notice dates from first of the month associated with asset month code. Return NaT to simulate futures with no notice date. expiration_date_func : (Timestamp) -> Timestamp Function to generate expiration dates from first of the month associated with asset month code. start_date_func : (Timestamp) -> Timestamp, optional Function to generate start dates from first of the month associated with each asset month code. Defaults to a start_date one year prior to the month_code date. month_codes : dict[str -> [1..12]], optional Dictionary of month codes for which to create contracts. Entries should be strings mapped to values from 1 (January) to 12 (December). Default is zipline.futures.CMES_CODE_TO_MONTH multiplier : int The contract multiplier. Returns ------- futures_info : pd.DataFrame DataFrame of futures data suitable for passing to an AssetDBWriter.
def make_future_info(first_sid, root_symbols, years, notice_date_func, expiration_date_func, start_date_func, month_codes=None, multiplier=500): """ Create a DataFrame representing futures for `root_symbols` during `year`. Generates a contract per triple of (symbol, year, month) supplied to `root_symbols`, `years`, and `month_codes`. Parameters ---------- first_sid : int The first sid to use for assigning sids to the created contracts. root_symbols : list[str] A list of root symbols for which to create futures. years : list[int or str] Years (e.g. 2014), for which to produce individual contracts. notice_date_func : (Timestamp) -> Timestamp Function to generate notice dates from first of the month associated with asset month code. Return NaT to simulate futures with no notice date. expiration_date_func : (Timestamp) -> Timestamp Function to generate expiration dates from first of the month associated with asset month code. start_date_func : (Timestamp) -> Timestamp, optional Function to generate start dates from first of the month associated with each asset month code. Defaults to a start_date one year prior to the month_code date. month_codes : dict[str -> [1..12]], optional Dictionary of month codes for which to create contracts. Entries should be strings mapped to values from 1 (January) to 12 (December). Default is zipline.futures.CMES_CODE_TO_MONTH multiplier : int The contract multiplier. Returns ------- futures_info : pd.DataFrame DataFrame of futures data suitable for passing to an AssetDBWriter. """ if month_codes is None: month_codes = CMES_CODE_TO_MONTH year_strs = list(map(str, years)) years = [pd.Timestamp(s, tz='UTC') for s in year_strs] # Pairs of string/date like ('K06', 2006-05-01) sorted by year/month # `MonthBegin(month_num - 1)` since the year already starts at month 1. contract_suffix_to_beginning_of_month = tuple( (month_code + year_str[-2:], year + MonthBegin(month_num - 1)) for ((year, year_str), (month_code, month_num)) in product( zip(years, year_strs), sorted(list(month_codes.items()), key=lambda item: item[1]), ) ) contracts = [] parts = product(root_symbols, contract_suffix_to_beginning_of_month) for sid, (root_sym, (suffix, month_begin)) in enumerate(parts, first_sid): contracts.append({ 'sid': sid, 'root_symbol': root_sym, 'symbol': root_sym + suffix, 'start_date': start_date_func(month_begin), 'notice_date': notice_date_func(month_begin), 'expiration_date': expiration_date_func(month_begin), 'multiplier': multiplier, 'exchange': "TEST", }) return pd.DataFrame.from_records(contracts, index='sid')
Make futures testing data that simulates the notice/expiration date behavior of physical commodities like oil. Parameters ---------- first_sid : int The first sid to use for assigning sids to the created contracts. root_symbols : list[str] A list of root symbols for which to create futures. years : list[int or str] Years (e.g. 2014), for which to produce individual contracts. month_codes : dict[str -> [1..12]], optional Dictionary of month codes for which to create contracts. Entries should be strings mapped to values from 1 (January) to 12 (December). Default is zipline.futures.CMES_CODE_TO_MONTH multiplier : int The contract multiplier. Expiration dates are on the 20th of the month prior to the month code. Notice dates are on the 20th two months prior to the month code. Start dates are one year before the contract month. See Also -------- make_future_info
def make_commodity_future_info(first_sid, root_symbols, years, month_codes=None, multiplier=500): """ Make futures testing data that simulates the notice/expiration date behavior of physical commodities like oil. Parameters ---------- first_sid : int The first sid to use for assigning sids to the created contracts. root_symbols : list[str] A list of root symbols for which to create futures. years : list[int or str] Years (e.g. 2014), for which to produce individual contracts. month_codes : dict[str -> [1..12]], optional Dictionary of month codes for which to create contracts. Entries should be strings mapped to values from 1 (January) to 12 (December). Default is zipline.futures.CMES_CODE_TO_MONTH multiplier : int The contract multiplier. Expiration dates are on the 20th of the month prior to the month code. Notice dates are are on the 20th two months prior to the month code. Start dates are one year before the contract month. See Also -------- make_future_info """ nineteen_days = pd.Timedelta(days=19) one_year = pd.Timedelta(days=365) return make_future_info( first_sid=first_sid, root_symbols=root_symbols, years=years, notice_date_func=lambda dt: dt - MonthBegin(2) + nineteen_days, expiration_date_func=lambda dt: dt - MonthBegin(1) + nineteen_days, start_date_func=lambda dt: dt - one_year, month_codes=month_codes, multiplier=multiplier, )
Zeroes out any values that would not fit into a uint32. Parameters ---------- df : pd.DataFrame The dataframe to winsorise. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is outside the bounds of a uint32. column, *columns : iterable[str] The names of the columns to check. Returns ------- truncated : pd.DataFrame ``df`` with values that do not fit into a uint32 zeroed out.
def winsorise_uint32(df, invalid_data_behavior, column, *columns): """Drops any record where a value would not fit into a uint32. Parameters ---------- df : pd.DataFrame The dataframe to winsorise. invalid_data_behavior : {'warn', 'raise', 'ignore'} What to do when data is outside the bounds of a uint32. *columns : iterable[str] The names of the columns to check. Returns ------- truncated : pd.DataFrame ``df`` with values that do not fit into a uint32 zeroed out. """ columns = list((column,) + columns) mask = df[columns] > UINT32_MAX if invalid_data_behavior != 'ignore': mask |= df[columns].isnull() else: # we are not going to generate a warning or error for this so just use # nan_to_num df[columns] = np.nan_to_num(df[columns]) mv = mask.values if mv.any(): if invalid_data_behavior == 'raise': raise ValueError( '%d values out of bounds for uint32: %r' % ( mv.sum(), df[mask.any(axis=1)], ), ) if invalid_data_behavior == 'warn': warnings.warn( 'Ignoring %d values because they are out of bounds for' ' uint32: %r' % ( mv.sum(), df[mask.any(axis=1)], ), stacklevel=3, # one extra frame for `expect_element` ) df[mask] = 0 return df
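A minimal usage sketch, assuming winsorise_uint32 and the module-level UINT32_MAX constant are in scope; the toy DataFrame is made up. Note the function mutates the frame it is given, so a copy is passed here.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'close': [10.0, 11.0, 12.0],
    'volume': [1000, 2 ** 33, np.nan],  # 2**33 overflows uint32; NaN is invalid too
})
cleaned = winsorise_uint32(df.copy(), 'warn', 'close', 'volume')
print(cleaned['volume'].tolist())  # [1000.0, 0.0, 0.0] -- offending values zeroed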
Get a Series of benchmark returns from a file Parameters ---------- filelike : str or file-like object Path to the benchmark file. expected csv file format: date,return 2020-01-02 00:00:00+00:00,0.01 2020-01-03 00:00:00+00:00,-0.02
def get_benchmark_returns_from_file(filelike): """ Get a Series of benchmark returns from a file Parameters ---------- filelike : str or file-like object Path to the benchmark file. expected csv file format: date,return 2020-01-02 00:00:00+00:00,0.01 2020-01-03 00:00:00+00:00,-0.02 """ log.info("Reading benchmark returns from {}", filelike) df = pd.read_csv( filelike, index_col=['date'], parse_dates=['date'], ).tz_localize('utc') if 'return' not in df.columns: raise ValueError("The column 'return' not found in the " "benchmark file \n" "Expected benchmark file format :\n" "date, return\n" "2020-01-02 00:00:00+00:00,0.01\n" "2020-01-03 00:00:00+00:00,-0.02\n") return df['return'].sort_index()
Returns a copy of the array as uint32, applying a scaling factor to maintain precision if supplied.
def coerce_to_uint32(a, scaling_factor):
    """
    Returns a copy of the array as uint32, applying a scaling factor to
    maintain precision if supplied.
    """
    return (a * scaling_factor).round().astype('uint32')
Returns the date index and sid columns shared by a list of dataframes, ensuring they all match. Parameters ---------- frames : list[pd.DataFrame] A list of dataframes indexed by day, with a column per sid. Returns ------- days : np.array[datetime64[ns]] The days in these dataframes. sids : np.array[int64] The sids in these dataframes. Raises ------ ValueError If the dataframes passed are not all indexed by the same days and sids.
def days_and_sids_for_frames(frames): """ Returns the date index and sid columns shared by a list of dataframes, ensuring they all match. Parameters ---------- frames : list[pd.DataFrame] A list of dataframes indexed by day, with a column per sid. Returns ------- days : np.array[datetime64[ns]] The days in these dataframes. sids : np.array[int64] The sids in these dataframes. Raises ------ ValueError If the dataframes passed are not all indexed by the same days and sids. """ if not frames: days = np.array([], dtype='datetime64[ns]') sids = np.array([], dtype='int64') return days, sids # Ensure the indices and columns all match. check_indexes_all_same( [frame.index for frame in frames], message='Frames have mismatched days.', ) check_indexes_all_same( [frame.columns for frame in frames], message='Frames have mismatched sids.', ) return frames[0].index.values, frames[0].columns.values
Parameters ---------- frames : dict[str, pd.DataFrame] A dict mapping each OHLCV field to a dataframe with a row for each date and a column for each sid, as passed to write(). Returns ------- start_date_ixs : np.array[int64] The index of the first date with non-nan values, for each sid. end_date_ixs : np.array[int64] The index of the last date with non-nan values, for each sid.
def compute_asset_lifetimes(frames):
    """
    Parameters
    ----------
    frames : dict[str, pd.DataFrame]
        A dict mapping each OHLCV field to a dataframe with a row for
        each date and a column for each sid, as passed to write().

    Returns
    -------
    start_date_ixs : np.array[int64]
        The index of the first date with non-nan values, for each sid.
    end_date_ixs : np.array[int64]
        The index of the last date with non-nan values, for each sid.
    """
    # Build a 2D array (dates x sids), where an entry is True if all
    # fields are nan for the given day and sid.
    is_null_matrix = np.logical_and.reduce(
        [frames[field].isnull().values for field in FIELDS],
    )
    if not is_null_matrix.size:
        empty = np.array([], dtype='int64')
        return empty, empty.copy()

    # Offset of the first non-null row from the start of the input.
    start_date_ixs = is_null_matrix.argmin(axis=0)
    # Offset of the last non-null row from the **end** of the input.
    end_offsets = is_null_matrix[::-1].argmin(axis=0)
    # Offset of the last non-null row from the start of the input.
    end_date_ixs = is_null_matrix.shape[0] - end_offsets - 1
    return start_date_ixs, end_date_ixs
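The argmin trick is easiest to see on a toy all-null matrix (pure numpy, made-up data): argmin over booleans returns the first False per column, and running it on the reversed matrix finds the last False.
import numpy as np

# Rows are days, columns are sids; True means *all* OHLCV fields were NaN.
is_null_matrix = np.array([
    [True,  True ],   # day 0
    [False, True ],   # day 1: sid 0 starts trading
    [False, False],   # day 2: sid 1 starts trading
    [True,  False],   # day 3: sid 0 has gone away
])
start_date_ixs = is_null_matrix.argmin(axis=0)             # first False per column
end_offsets = is_null_matrix[::-1].argmin(axis=0)
end_date_ixs = is_null_matrix.shape[0] - end_offsets - 1   # last False per column
print(start_date_ixs, end_date_ixs)  # [1 2] [2 3]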
Check that two 1d arrays of sids are equal
def check_sids_arrays_match(left, right, message): """Check that two 1d arrays of sids are equal """ if len(left) != len(right): raise ValueError( "{}:\nlen(left) ({}) != len(right) ({})".format( message, len(left), len(right) ) ) diff = (left != right) if diff.any(): (bad_locs,) = np.where(diff) raise ValueError( "{}:\n Indices with differences: {}".format(message, bad_locs) )
Verify that DataFrames in ``frames`` have the same indexing scheme and are aligned to ``calendar``. Parameters ---------- frames : list[pd.DataFrame] calendar : trading_calendars.TradingCalendar Raises ------ ValueError If frames have different indexes/columns, or if frame indexes do not match a contiguous region of ``calendar``.
def verify_frames_aligned(frames, calendar): """ Verify that DataFrames in ``frames`` have the same indexing scheme and are aligned to ``calendar``. Parameters ---------- frames : list[pd.DataFrame] calendar : trading_calendars.TradingCalendar Raises ------ ValueError If frames have different indexes/columns, or if frame indexes do not match a contiguous region of ``calendar``. """ indexes = [f.index for f in frames] check_indexes_all_same(indexes, message="DataFrame indexes don't match:") columns = [f.columns for f in frames] check_indexes_all_same(columns, message="DataFrame columns don't match:") start, end = indexes[0][[0, -1]] cal_sessions = calendar.sessions_in_range(start, end) check_indexes_all_same( [indexes[0], cal_sessions], "DataFrame index doesn't match {} calendar:".format(calendar.name), )
Format subdir path to limit the number of directories in any given
subdirectory to 100.

The number in each directory is designed to support at least 100000
equities.

Parameters
----------
sid : int
    Asset identifier.

Returns
-------
out : string
    A path for the bcolz rootdir, including subdirectory prefixes based on
    the padded string representation of the given sid.
    e.g. 1 is formatted as 00/00/000001.bcolz
def _sid_subdir_path(sid):
    """
    Format subdir path to limit the number of directories in any given
    subdirectory to 100.

    The number in each directory is designed to support at least 100000
    equities.

    Parameters
    ----------
    sid : int
        Asset identifier.

    Returns
    -------
    out : string
        A path for the bcolz rootdir, including subdirectory prefixes based
        on the padded string representation of the given sid.
        e.g. 1 is formatted as 00/00/000001.bcolz
    """
    padded_sid = format(sid, '06')
    return os.path.join(
        # subdir 1 00/XX
        padded_sid[0:2],
        # subdir 2 XX/00
        padded_sid[2:4],
        "{0}.bcolz".format(str(padded_sid))
    )
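For example, the layout this produces (shown with '/' as the separator; os.path.join uses the platform separator):

sid = 123456
padded = format(sid, '06')                               # '123456'
'/'.join([padded[0:2], padded[2:4], padded + '.bcolz'])  # '12/34/123456.bcolz'
# sid 1 maps to '00/00/000001.bcolz', so each level holds at most 100 entries.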
Adapt OHLCV columns into uint32 columns. Parameters ---------- cols : dict A dict mapping each column name (open, high, low, close, volume) to a float column to convert to uint32. scale_factor : int Factor to use to scale float values before converting to uint32. sid : int Sid of the relevant asset, for logging. invalid_data_behavior : str Specifies behavior when data cannot be converted to uint32. If 'raise', raises an exception. If 'warn', logs a warning and filters out incompatible values. If 'ignore', silently filters out incompatible values.
def convert_cols(cols, scale_factor, sid, invalid_data_behavior):
    """Adapt OHLCV columns into uint32 columns.

    Parameters
    ----------
    cols : dict
        A dict mapping each column name (open, high, low, close, volume)
        to a float column to convert to uint32.
    scale_factor : int
        Factor to use to scale float values before converting to uint32.
    sid : int
        Sid of the relevant asset, for logging.
    invalid_data_behavior : str
        Specifies behavior when data cannot be converted to uint32.
        If 'raise', raises an exception.
        If 'warn', logs a warning and filters out incompatible values.
        If 'ignore', silently filters out incompatible values.
    """
    scaled_opens = (np.nan_to_num(cols['open']) * scale_factor).round()
    scaled_highs = (np.nan_to_num(cols['high']) * scale_factor).round()
    scaled_lows = (np.nan_to_num(cols['low']) * scale_factor).round()
    scaled_closes = (np.nan_to_num(cols['close']) * scale_factor).round()

    exclude_mask = np.zeros_like(scaled_opens, dtype=bool)

    for col_name, scaled_col in [
        ('open', scaled_opens),
        ('high', scaled_highs),
        ('low', scaled_lows),
        ('close', scaled_closes),
    ]:
        max_val = scaled_col.max()

        try:
            check_uint32_safe(max_val, col_name)
        except ValueError:
            if invalid_data_behavior == 'raise':
                raise

            if invalid_data_behavior == 'warn':
                logger.warn(
                    'Values for sid={}, col={} contain some too large for '
                    'uint32 (max={}), filtering them out',
                    sid, col_name, max_val,
                )

            # We want to exclude all rows that have an unsafe value in
            # this column; accumulate across columns so a row is dropped
            # if any field is unsafe.
            exclude_mask |= (scaled_col >= np.iinfo(np.uint32).max)

    # Convert all cols to uint32.
    opens = scaled_opens.astype(np.uint32)
    highs = scaled_highs.astype(np.uint32)
    lows = scaled_lows.astype(np.uint32)
    closes = scaled_closes.astype(np.uint32)
    volumes = cols['volume'].astype(np.uint32)

    # Exclude rows with unsafe values by setting to zero.
    opens[exclude_mask] = 0
    highs[exclude_mask] = 0
    lows[exclude_mask] = 0
    closes[exclude_mask] = 0
    volumes[exclude_mask] = 0

    return opens, highs, lows, closes, volumes
Resample a DataFrame with minute data into the frame expected by a
BcolzDailyBarWriter.

Parameters
----------
minute_frame : pd.DataFrame
    A DataFrame with the columns `open`, `high`, `low`, `close`, `volume`,
    indexed by minute timestamps (`dt`).
calendar : trading_calendars.trading_calendar.TradingCalendar
    The calendar whose session labels are used to group the minutes into
    sessions.

Returns
-------
session_frame : pd.DataFrame
    A DataFrame with the columns `open`, `high`, `low`, `close`, `volume`,
    and `day` (datetime-like).
def minute_frame_to_session_frame(minute_frame, calendar):
    """
    Resample a DataFrame with minute data into the frame expected by a
    BcolzDailyBarWriter.

    Parameters
    ----------
    minute_frame : pd.DataFrame
        A DataFrame with the columns `open`, `high`, `low`, `close`,
        `volume`, indexed by minute timestamps (`dt`).
    calendar : trading_calendars.trading_calendar.TradingCalendar
        The calendar whose session labels are used to group the minutes
        into sessions.

    Returns
    -------
    session_frame : pd.DataFrame
        A DataFrame with the columns `open`, `high`, `low`, `close`,
        `volume`, and `day` (datetime-like).
    """
    how = OrderedDict((c, _MINUTE_TO_SESSION_OHCLV_HOW[c])
                      for c in minute_frame.columns)
    labels = calendar.minute_index_to_session_labels(minute_frame.index)
    return minute_frame.groupby(labels).agg(how)
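The `_MINUTE_TO_SESSION_OHCLV_HOW` mapping is not shown here; assuming it encodes the usual OHLCV rule (first open, max high, min low, last close, summed volume), the resampling amounts to a grouped aggregation like the sketch below, which substitutes naive date normalization for the calendar's session labels.

import pandas as pd

# Two hypothetical minutes of one session.
minutes = pd.DataFrame(
    {'open': [10.0, 10.5], 'high': [10.6, 10.8], 'low': [9.9, 10.4],
     'close': [10.5, 10.7], 'volume': [100, 200]},
    index=pd.to_datetime(['2020-01-02 14:31', '2020-01-02 14:32'], utc=True),
)
how = {'open': 'first', 'high': 'max', 'low': 'min',
       'close': 'last', 'volume': 'sum'}
session = minutes.groupby(minutes.index.normalize()).agg(how)
# One row for 2020-01-02: open=10.0, high=10.8, low=9.9, close=10.7, volume=300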
Resample an array with minute data into an array with session data. This function assumes that the minute data is the exact length of all minutes in the sessions in the output. Parameters ---------- column : str The `open`, `high`, `low`, `close`, or `volume` column. close_locs : array[intp] The locations in `data` which are the market close minutes. data : array[float64|uint32] The minute data to be sampled into session data. The first value should align with the market open of the first session, containing values for all minutes for all sessions. With the last value being the market close of the last session. out : array[float64|uint32] The output array into which to write the sampled sessions.
def minute_to_session(column, close_locs, data, out): """ Resample an array with minute data into an array with session data. This function assumes that the minute data is the exact length of all minutes in the sessions in the output. Parameters ---------- column : str The `open`, `high`, `low`, `close`, or `volume` column. close_locs : array[intp] The locations in `data` which are the market close minutes. data : array[float64|uint32] The minute data to be sampled into session data. The first value should align with the market open of the first session, containing values for all minutes for all sessions. With the last value being the market close of the last session. out : array[float64|uint32] The output array into which to write the sampled sessions. """ if column == 'open': _minute_to_session_open(close_locs, data, out) elif column == 'high': _minute_to_session_high(close_locs, data, out) elif column == 'low': _minute_to_session_low(close_locs, data, out) elif column == 'close': _minute_to_session_close(close_locs, data, out) elif column == 'volume': _minute_to_session_volume(close_locs, data, out) return out
Convert a pandas Timestamp into the name of the directory for the ingestion. Parameters ---------- ts : pandas.Timestamp The time of the ingestions Returns ------- name : str The name of the directory for this ingestion.
def to_bundle_ingest_dirname(ts): """Convert a pandas Timestamp into the name of the directory for the ingestion. Parameters ---------- ts : pandas.Timestamp The time of the ingestions Returns ------- name : str The name of the directory for this ingestion. """ return ts.isoformat().replace(':', ';')
Read a bundle ingestion directory name into a pandas Timestamp. Parameters ---------- cs : str The name of the directory. Returns ------- ts : pandas.Timestamp The time when this ingestion happened.
def from_bundle_ingest_dirname(cs): """Read a bundle ingestion directory name into a pandas Timestamp. Parameters ---------- cs : str The name of the directory. Returns ------- ts : pandas.Timestamp The time when this ingestion happened. """ return pd.Timestamp(cs.replace(';', ':'))
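The two helpers are inverses of each other; the ';' substitution appears to exist so the directory name stays legal on filesystems that reject ':'. A round trip with a made-up timestamp:

import pandas as pd

ts = pd.Timestamp('2020-01-02 13:30:05')
dirname = ts.isoformat().replace(':', ';')         # '2020-01-02T13;30;05'
assert pd.Timestamp(dirname.replace(';', ':')) == ts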
Create a family of data bundle functions that read from the same bundle mapping. Returns ------- bundles : mappingproxy The mapping of bundles to bundle payloads. register : callable The function which registers new bundles in the ``bundles`` mapping. unregister : callable The function which deregisters bundles from the ``bundles`` mapping. ingest : callable The function which downloads and write data for a given data bundle. load : callable The function which loads the ingested bundles back into memory. clean : callable The function which cleans up data written with ``ingest``.
def _make_bundle_core(): """Create a family of data bundle functions that read from the same bundle mapping. Returns ------- bundles : mappingproxy The mapping of bundles to bundle payloads. register : callable The function which registers new bundles in the ``bundles`` mapping. unregister : callable The function which deregisters bundles from the ``bundles`` mapping. ingest : callable The function which downloads and write data for a given data bundle. load : callable The function which loads the ingested bundles back into memory. clean : callable The function which cleans up data written with ``ingest``. """ _bundles = {} # the registered bundles # Expose _bundles through a proxy so that users cannot mutate this # accidentally. Users may go through `register` to update this which will # warn when trampling another bundle. bundles = mappingproxy(_bundles) @curry def register(name, f, calendar_name='NYSE', start_session=None, end_session=None, minutes_per_day=390, create_writers=True): """Register a data bundle ingest function. Parameters ---------- name : str The name of the bundle. f : callable The ingest function. This function will be passed: environ : mapping The environment this is being run with. asset_db_writer : AssetDBWriter The asset db writer to write into. minute_bar_writer : BcolzMinuteBarWriter The minute bar writer to write into. daily_bar_writer : BcolzDailyBarWriter The daily bar writer to write into. adjustment_writer : SQLiteAdjustmentWriter The adjustment db writer to write into. calendar : trading_calendars.TradingCalendar The trading calendar to ingest for. start_session : pd.Timestamp The first session of data to ingest. end_session : pd.Timestamp The last session of data to ingest. cache : DataFrameCache A mapping object to temporarily store dataframes. This should be used to cache intermediates in case the load fails. This will be automatically cleaned up after a successful load. show_progress : bool Show the progress for the current load where possible. calendar_name : str, optional The name of a calendar used to align bundle data. Default is 'NYSE'. start_session : pd.Timestamp, optional The first session for which we want data. If not provided, or if the date lies outside the range supported by the calendar, the first_session of the calendar is used. end_session : pd.Timestamp, optional The last session for which we want data. If not provided, or if the date lies outside the range supported by the calendar, the last_session of the calendar is used. minutes_per_day : int, optional The number of minutes in each normal trading day. create_writers : bool, optional Should the ingest machinery create the writers for the ingest function. This can be disabled as an optimization for cases where they are not needed, like the ``quantopian-quandl`` bundle. Notes ----- This function my be used as a decorator, for example: .. code-block:: python @register('quandl') def quandl_ingest_function(...): ... See Also -------- zipline.data.bundles.bundles """ if name in bundles: warnings.warn( 'Overwriting bundle with name %r' % name, stacklevel=3, ) # NOTE: We don't eagerly compute calendar values here because # `register` is called at module scope in zipline, and creating a # calendar currently takes between 0.5 and 1 seconds, which causes a # noticeable delay on the zipline CLI. 
_bundles[name] = RegisteredBundle( calendar_name=calendar_name, start_session=start_session, end_session=end_session, minutes_per_day=minutes_per_day, ingest=f, create_writers=create_writers, ) return f def unregister(name): """Unregister a bundle. Parameters ---------- name : str The name of the bundle to unregister. Raises ------ UnknownBundle Raised when no bundle has been registered with the given name. See Also -------- zipline.data.bundles.bundles """ try: del _bundles[name] except KeyError: raise UnknownBundle(name) def ingest(name, environ=os.environ, timestamp=None, assets_versions=(), show_progress=False): """Ingest data for a given bundle. Parameters ---------- name : str The name of the bundle. environ : mapping, optional The environment variables. By default this is os.environ. timestamp : datetime, optional The timestamp to use for the load. By default this is the current time. assets_versions : Iterable[int], optional Versions of the assets db to which to downgrade. show_progress : bool, optional Tell the ingest function to display the progress where possible. """ try: bundle = bundles[name] except KeyError: raise UnknownBundle(name) calendar = get_calendar(bundle.calendar_name) start_session = bundle.start_session end_session = bundle.end_session if start_session is None or start_session < calendar.first_session: start_session = calendar.first_session if end_session is None or end_session > calendar.last_session: end_session = calendar.last_session if timestamp is None: timestamp = pd.Timestamp.utcnow() timestamp = timestamp.tz_convert('utc').tz_localize(None) timestr = to_bundle_ingest_dirname(timestamp) cachepath = cache_path(name, environ=environ) pth.ensure_directory(pth.data_path([name, timestr], environ=environ)) pth.ensure_directory(cachepath) with dataframe_cache(cachepath, clean_on_failure=False) as cache, \ ExitStack() as stack: # we use `cleanup_on_failure=False` so that we don't purge the # cache directory if the load fails in the middle if bundle.create_writers: wd = stack.enter_context(working_dir( pth.data_path([], environ=environ)) ) daily_bars_path = wd.ensure_dir( *daily_equity_relative(name, timestr) ) daily_bar_writer = BcolzDailyBarWriter( daily_bars_path, calendar, start_session, end_session, ) # Do an empty write to ensure that the daily ctables exist # when we create the SQLiteAdjustmentWriter below. The # SQLiteAdjustmentWriter needs to open the daily ctables so # that it can compute the adjustment ratios for the dividends. 
daily_bar_writer.write(()) minute_bar_writer = BcolzMinuteBarWriter( wd.ensure_dir(*minute_equity_relative(name, timestr)), calendar, start_session, end_session, minutes_per_day=bundle.minutes_per_day, ) assets_db_path = wd.getpath(*asset_db_relative(name, timestr)) asset_db_writer = AssetDBWriter(assets_db_path) adjustment_db_writer = stack.enter_context( SQLiteAdjustmentWriter( wd.getpath(*adjustment_db_relative(name, timestr)), BcolzDailyBarReader(daily_bars_path), overwrite=True, ) ) else: daily_bar_writer = None minute_bar_writer = None asset_db_writer = None adjustment_db_writer = None if assets_versions: raise ValueError('Need to ingest a bundle that creates ' 'writers in order to downgrade the assets' ' db.') log.info("Ingesting {}.", name) bundle.ingest( environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_db_writer, calendar, start_session, end_session, cache, show_progress, pth.data_path([name, timestr], environ=environ), ) for version in sorted(set(assets_versions), reverse=True): version_path = wd.getpath(*asset_db_relative( name, timestr, db_version=version, )) with working_file(version_path) as wf: shutil.copy2(assets_db_path, wf.path) downgrade(wf.path, version) def most_recent_data(bundle_name, timestamp, environ=None): """Get the path to the most recent data after ``date``for the given bundle. Parameters ---------- bundle_name : str The name of the bundle to lookup. timestamp : datetime The timestamp to begin searching on or before. environ : dict, optional An environment dict to forward to zipline_root. """ if bundle_name not in bundles: raise UnknownBundle(bundle_name) try: candidates = os.listdir( pth.data_path([bundle_name], environ=environ), ) return pth.data_path( [bundle_name, max( filter(complement(pth.hidden), candidates), key=from_bundle_ingest_dirname, )], environ=environ, ) except (ValueError, OSError) as e: if getattr(e, 'errno', errno.ENOENT) != errno.ENOENT: raise raise ValueError( 'no data for bundle {bundle!r} on or before {timestamp}\n' 'maybe you need to run: $ zipline ingest -b {bundle}'.format( bundle=bundle_name, timestamp=timestamp, ), ) def load(name, environ=os.environ, timestamp=None): """Loads a previously ingested bundle. Parameters ---------- name : str The name of the bundle. environ : mapping, optional The environment variables. Defaults of os.environ. timestamp : datetime, optional The timestamp of the data to lookup. Defaults to the current time. Returns ------- bundle_data : BundleData The raw data readers for this bundle. """ if timestamp is None: timestamp = pd.Timestamp.utcnow() timestr = most_recent_data(name, timestamp, environ=environ) return BundleData( asset_finder=AssetFinder( asset_db_path(name, timestr, environ=environ), ), equity_minute_bar_reader=BcolzMinuteBarReader( minute_equity_path(name, timestr, environ=environ), ), equity_daily_bar_reader=BcolzDailyBarReader( daily_equity_path(name, timestr, environ=environ), ), adjustment_reader=SQLiteAdjustmentReader( adjustment_db_path(name, timestr, environ=environ), ), ) @preprocess( before=optionally(ensure_timestamp), after=optionally(ensure_timestamp), ) def clean(name, before=None, after=None, keep_last=None, environ=os.environ): """Clean up data that was created with ``ingest`` or ``$ python -m zipline ingest`` Parameters ---------- name : str The name of the bundle to remove data for. before : datetime, optional Remove data ingested before this date. 
This argument is mutually exclusive with: keep_last after : datetime, optional Remove data ingested after this date. This argument is mutually exclusive with: keep_last keep_last : int, optional Remove all but the last ``keep_last`` ingestions. This argument is mutually exclusive with: before after environ : mapping, optional The environment variables. Defaults of os.environ. Returns ------- cleaned : set[str] The names of the runs that were removed. Raises ------ BadClean Raised when ``before`` and or ``after`` are passed with ``keep_last``. This is a subclass of ``ValueError``. """ try: all_runs = sorted( filter( complement(pth.hidden), os.listdir(pth.data_path([name], environ=environ)), ), key=from_bundle_ingest_dirname, ) except OSError as e: if e.errno != errno.ENOENT: raise raise UnknownBundle(name) if before is after is keep_last is None: raise BadClean(before, after, keep_last) if ((before is not None or after is not None) and keep_last is not None): raise BadClean(before, after, keep_last) if keep_last is None: def should_clean(name): dt = from_bundle_ingest_dirname(name) return ( (before is not None and dt < before) or (after is not None and dt > after) ) elif keep_last >= 0: last_n_dts = set(take(keep_last, reversed(all_runs))) def should_clean(name): return name not in last_n_dts else: raise BadClean(before, after, keep_last) cleaned = set() for run in all_runs: if should_clean(run): log.info("Cleaning {}.", run) path = pth.data_path([name, run], environ=environ) shutil.rmtree(path) cleaned.add(path) return cleaned return BundleCore(bundles, register, unregister, ingest, load, clean)
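A hedged sketch of how the pieces fit together from user code. The bundle name and ingest body are hypothetical; the signature follows the arguments listed in the ``register`` docstring above, and ``from zipline.data.bundles import register`` is the import shown in the csvdir example later in this document.

from zipline.data.bundles import register

@register('my-custom-bundle', calendar_name='NYSE')   # hypothetical bundle name
def my_ingest(environ, asset_db_writer, minute_bar_writer, daily_bar_writer,
              adjustment_writer, calendar, start_session, end_session,
              cache, show_progress, output_dir):
    # Write asset metadata and pricing data with the provided writers, e.g.
    # asset_db_writer.write(equities=...) and daily_bar_writer.write(...).
    raise NotImplementedError("fill in with your own data source")

After registering (typically in ~/.zipline/extension.py), ``$ zipline ingest -b my-custom-bundle`` runs the function above, and ``load('my-custom-bundle')`` returns the readers for the most recent ingestion.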
Generate an ingest function for a custom data bundle.

This function can be used in ~/.zipline/extension.py to register a bundle
with custom parameters, e.g. with a custom trading calendar.

Parameters
----------
tframes: tuple, optional
    The data time frames, supported timeframes: 'daily' and 'minute'
csvdir : string, optional, default: CSVDIR environment variable
    The path to the directory of this structure:
    <directory>/<timeframe1>/<symbol1>.csv
    <directory>/<timeframe1>/<symbol2>.csv
    <directory>/<timeframe1>/<symbol3>.csv
    <directory>/<timeframe2>/<symbol1>.csv
    <directory>/<timeframe2>/<symbol2>.csv
    <directory>/<timeframe2>/<symbol3>.csv

Returns
-------
ingest : callable
    The bundle ingest function

Examples
--------
This code should be added to ~/.zipline/extension.py

.. code-block:: python

   from zipline.data.bundles import csvdir_equities, register
   register('custom-csvdir-bundle',
            csvdir_equities(["daily", "minute"],
                            '/full/path/to/the/csvdir/directory'))
def csvdir_equities(tframes=None, csvdir=None):
    """
    Generate an ingest function for a custom data bundle.

    This function can be used in ~/.zipline/extension.py to register a
    bundle with custom parameters, e.g. with a custom trading calendar.

    Parameters
    ----------
    tframes: tuple, optional
        The data time frames, supported timeframes: 'daily' and 'minute'
    csvdir : string, optional, default: CSVDIR environment variable
        The path to the directory of this structure:
        <directory>/<timeframe1>/<symbol1>.csv
        <directory>/<timeframe1>/<symbol2>.csv
        <directory>/<timeframe1>/<symbol3>.csv
        <directory>/<timeframe2>/<symbol1>.csv
        <directory>/<timeframe2>/<symbol2>.csv
        <directory>/<timeframe2>/<symbol3>.csv

    Returns
    -------
    ingest : callable
        The bundle ingest function

    Examples
    --------
    This code should be added to ~/.zipline/extension.py

    .. code-block:: python

       from zipline.data.bundles import csvdir_equities, register
       register('custom-csvdir-bundle',
                csvdir_equities(["daily", "minute"],
                                '/full/path/to/the/csvdir/directory'))
    """
    return CSVDIRBundle(tframes, csvdir).ingest
Build a zipline data bundle from the directory with csv files.
def csvdir_bundle(environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_writer, calendar, start_session, end_session, cache, show_progress, output_dir, tframes=None, csvdir=None): """ Build a zipline data bundle from the directory with csv files. """ if not csvdir: csvdir = environ.get('CSVDIR') if not csvdir: raise ValueError("CSVDIR environment variable is not set") if not os.path.isdir(csvdir): raise ValueError("%s is not a directory" % csvdir) if not tframes: tframes = set(["daily", "minute"]).intersection(os.listdir(csvdir)) if not tframes: raise ValueError("'daily' and 'minute' directories " "not found in '%s'" % csvdir) divs_splits = {'divs': DataFrame(columns=['sid', 'amount', 'ex_date', 'record_date', 'declared_date', 'pay_date']), 'splits': DataFrame(columns=['sid', 'ratio', 'effective_date'])} for tframe in tframes: ddir = os.path.join(csvdir, tframe) symbols = sorted(item.split('.csv')[0] for item in os.listdir(ddir) if '.csv' in item) if not symbols: raise ValueError("no <symbol>.csv* files found in %s" % ddir) dtype = [('start_date', 'datetime64[ns]'), ('end_date', 'datetime64[ns]'), ('auto_close_date', 'datetime64[ns]'), ('symbol', 'object')] metadata = DataFrame(empty(len(symbols), dtype=dtype)) if tframe == 'minute': writer = minute_bar_writer else: writer = daily_bar_writer writer.write(_pricing_iter(ddir, symbols, metadata, divs_splits, show_progress), show_progress=show_progress) # Hardcode the exchange to "CSVDIR" for all assets and (elsewhere) # register "CSVDIR" to resolve to the NYSE calendar, because these # are all equities and thus can use the NYSE calendar. metadata['exchange'] = "CSVDIR" asset_db_writer.write(equities=metadata) divs_splits['divs']['sid'] = divs_splits['divs']['sid'].astype(int) divs_splits['splits']['sid'] = divs_splits['splits']['sid'].astype(int) adjustment_writer.write(splits=divs_splits['splits'], dividends=divs_splits['divs'])
Build the query URL for Quandl WIKI Prices metadata.
def format_metadata_url(api_key): """ Build the query URL for Quandl WIKI Prices metadata. """ query_params = [('api_key', api_key), ('qopts.export', 'true')] return ( QUANDL_DATA_URL + urlencode(query_params) )
Load data table from zip file provided by Quandl.
def load_data_table(file, index_col, show_progress=False): """ Load data table from zip file provided by Quandl. """ with ZipFile(file) as zip_file: file_names = zip_file.namelist() assert len(file_names) == 1, "Expected a single file from Quandl." wiki_prices = file_names.pop() with zip_file.open(wiki_prices) as table_file: if show_progress: log.info('Parsing raw data.') data_table = pd.read_csv( table_file, parse_dates=['date'], index_col=index_col, usecols=[ 'ticker', 'date', 'open', 'high', 'low', 'close', 'volume', 'ex-dividend', 'split_ratio', ], ) data_table.rename( columns={ 'ticker': 'symbol', 'ex-dividend': 'ex_dividend', }, inplace=True, copy=False, ) return data_table
Fetch WIKI Prices data table from Quandl
def fetch_data_table(api_key, show_progress, retries): """ Fetch WIKI Prices data table from Quandl """ for _ in range(retries): try: if show_progress: log.info('Downloading WIKI metadata.') metadata = pd.read_csv( format_metadata_url(api_key) ) # Extract link from metadata and download zip file. table_url = metadata.loc[0, 'file.link'] if show_progress: raw_file = download_with_progress( table_url, chunk_size=ONE_MEGABYTE, label="Downloading WIKI Prices table from Quandl" ) else: raw_file = download_without_progress(table_url) return load_data_table( file=raw_file, index_col=None, show_progress=show_progress, ) except Exception: log.exception("Exception raised reading Quandl data. Retrying.") else: raise ValueError( "Failed to download Quandl data after %d attempts." % (retries) )
quandl_bundle builds a daily dataset using Quandl's WIKI Prices dataset. For more information on Quandl's API and how to obtain an API key, please visit https://docs.quandl.com/docs#section-authentication
def quandl_bundle(environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_writer, calendar, start_session, end_session, cache, show_progress, output_dir): """ quandl_bundle builds a daily dataset using Quandl's WIKI Prices dataset. For more information on Quandl's API and how to obtain an API key, please visit https://docs.quandl.com/docs#section-authentication """ api_key = environ.get('QUANDL_API_KEY') if api_key is None: raise ValueError( "Please set your QUANDL_API_KEY environment variable and retry." ) raw_data = fetch_data_table( api_key, show_progress, environ.get('QUANDL_DOWNLOAD_ATTEMPTS', 5) ) asset_metadata = gen_asset_metadata( raw_data[['symbol', 'date']], show_progress ) asset_db_writer.write(asset_metadata) symbol_map = asset_metadata.symbol sessions = calendar.sessions_in_range(start_session, end_session) raw_data.set_index(['date', 'symbol'], inplace=True) daily_bar_writer.write( parse_pricing_and_vol( raw_data, sessions, symbol_map ), show_progress=show_progress ) raw_data.reset_index(inplace=True) raw_data['symbol'] = raw_data['symbol'].astype('category') raw_data['sid'] = raw_data.symbol.cat.codes adjustment_writer.write( splits=parse_splits( raw_data[[ 'sid', 'date', 'split_ratio', ]].loc[raw_data.split_ratio != 1], show_progress=show_progress ), dividends=parse_dividends( raw_data[[ 'sid', 'date', 'ex_dividend', ]].loc[raw_data.ex_dividend != 0], show_progress=show_progress ) )
Download streaming data from a URL, printing progress information to the terminal. Parameters ---------- url : str A URL that can be understood by ``requests.get``. chunk_size : int Number of bytes to read at a time from requests. **progress_kwargs Forwarded to click.progressbar. Returns ------- data : BytesIO A BytesIO containing the downloaded data.
def download_with_progress(url, chunk_size, **progress_kwargs): """ Download streaming data from a URL, printing progress information to the terminal. Parameters ---------- url : str A URL that can be understood by ``requests.get``. chunk_size : int Number of bytes to read at a time from requests. **progress_kwargs Forwarded to click.progressbar. Returns ------- data : BytesIO A BytesIO containing the downloaded data. """ resp = requests.get(url, stream=True) resp.raise_for_status() total_size = int(resp.headers['content-length']) data = BytesIO() with progressbar(length=total_size, **progress_kwargs) as pbar: for chunk in resp.iter_content(chunk_size=chunk_size): data.write(chunk) pbar.update(len(chunk)) data.seek(0) return data
Download data from a URL, returning a BytesIO containing the loaded data. Parameters ---------- url : str A URL that can be understood by ``requests.get``. Returns ------- data : BytesIO A BytesIO containing the downloaded data.
def download_without_progress(url): """ Download data from a URL, returning a BytesIO containing the loaded data. Parameters ---------- url : str A URL that can be understood by ``requests.get``. Returns ------- data : BytesIO A BytesIO containing the downloaded data. """ resp = requests.get(url) resp.raise_for_status() return BytesIO(resp.content)
Validate that ``requested_dts`` are valid for querying from an FX reader.
def check_dts(requested_dts): """Validate that ``requested_dts`` are valid for querying from an FX reader. """ if not is_sorted_ascending(requested_dts): raise ValueError("Requested fx rates with non-ascending dts.")
Extra arguments to use when zipline's automated tests run this example.
def _test_args(): """Extra arguments to use when zipline's automated tests run this example. """ import pandas as pd return { 'start': pd.Timestamp('2014-01-01', tz='utc'), 'end': pd.Timestamp('2014-11-01', tz='utc'), }
Extra arguments to use when zipline's automated tests run this example.
def _test_args(): """Extra arguments to use when zipline's automated tests run this example. """ import pandas as pd return { 'start': pd.Timestamp('2008', tz='utc'), 'end': pd.Timestamp('2013', tz='utc'), }
Extra arguments to use when zipline's automated tests run this example.
def _test_args(): """Extra arguments to use when zipline's automated tests run this example. """ import pandas as pd return { 'start': pd.Timestamp('2014-01-01', tz='utc'), 'end': pd.Timestamp('2014-11-01', tz='utc'), }
Extra arguments to use when zipline's automated tests run this example.
def _test_args(): """Extra arguments to use when zipline's automated tests run this example. """ import pandas as pd return { 'start': pd.Timestamp('2011', tz='utc'), 'end': pd.Timestamp('2013', tz='utc'), }
Extra arguments to use when zipline's automated tests run this example. Notes for testers: Gross leverage should be roughly 2.0 on every day except the first. Net leverage should be roughly 2.0 on every day except the first. Longs Count should always be 3 after the first day. Shorts Count should be 3 after the first day, except on 2013-10-30, when it dips to 2 for a day because DELL is delisted.
def _test_args(): """ Extra arguments to use when zipline's automated tests run this example. Notes for testers: Gross leverage should be roughly 2.0 on every day except the first. Net leverage should be roughly 2.0 on every day except the first. Longs Count should always be 3 after the first day. Shorts Count should be 3 after the first day, except on 2013-10-30, when it dips to 2 for a day because DELL is delisted. """ import pandas as pd return { # We run through october of 2013 because DELL is in the test data and # it went private on 2013-10-29. 'start': pd.Timestamp('2013-10-07', tz='utc'), 'end': pd.Timestamp('2013-11-30', tz='utc'), 'capital_base': 100000, }
Projection vectors to the simplex domain

Implemented according to the paper: Efficient projections onto the l1-ball
for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by Bin@libin AT pmail.ntu.edu.sg
Optimization Problem: \min_{w} \| w - v \|_{2}^{2}
s.t. \sum_{i=1}^{m} w_{i} = b,  w_{i} \geq 0
Input: A vector v \in R^{m}, and a scalar b > 0 (default=1)
Output: Projection vector w

:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> proj  # doctest: +NORMALIZE_WHITESPACE
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print(proj.sum())
1.0

Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2013 by Thomas Wiecki ([email protected]).
def simplex_projection(v, b=1):
    r"""Projection vectors to the simplex domain

    Implemented according to the paper: Efficient projections onto the
    l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
    Implementation Time: 2011 June 17 by Bin@libin AT pmail.ntu.edu.sg
    Optimization Problem: \min_{w} \| w - v \|_{2}^{2}
    s.t. \sum_{i=1}^{m} w_{i} = b,  w_{i} \geq 0
    Input: A vector v \in R^{m}, and a scalar b > 0 (default=1)
    Output: Projection vector w

    :Example:
    >>> proj = simplex_projection([.4 ,.3, -.4, .5])
    >>> proj  # doctest: +NORMALIZE_WHITESPACE
    array([ 0.33333333, 0.23333333, 0. , 0.43333333])
    >>> print(proj.sum())
    1.0

    Original matlab implementation: John Duchi ([email protected])
    Python-port: Copyright 2013 by Thomas Wiecki ([email protected]).
    """
    v = np.asarray(v)
    p = len(v)

    # Sort v into u in descending order
    v = (v > 0) * v
    u = np.sort(v)[::-1]
    sv = np.cumsum(u)

    rho = np.where(u > (sv - b) / np.arange(1, p + 1))[0][-1]
    theta = np.max([0, (sv[rho] - b) / (rho + 1)])
    w = (v - theta)
    w[w < 0] = 0
    return w
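A quick sanity check of the projection's defining properties, reusing the doctest input; the uniform allocation is just an illustrative alternative point on the simplex.

import numpy as np

v = np.array([.4, .3, -.4, .5])
w = simplex_projection(v)
assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)

# w should be at least as close to v (in L2) as other points on the simplex,
# e.g. a uniform allocation.
uniform = np.full(4, 0.25)
assert np.linalg.norm(w - v) <= np.linalg.norm(uniform - v)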
Extra arguments to use when zipline's automated tests run this example.
def _test_args(): """Extra arguments to use when zipline's automated tests run this example. """ import pandas as pd return { 'start': pd.Timestamp('2004', tz='utc'), 'end': pd.Timestamp('2008', tz='utc'), }
Run an example module from zipline.examples.
def run_example(example_modules, example_name, environ, benchmark_returns=None): """ Run an example module from zipline.examples. """ mod = example_modules[example_name] register_calendar("YAHOO", get_calendar("NYSE"), force=True) return run_algorithm( initialize=getattr(mod, 'initialize', None), handle_data=getattr(mod, 'handle_data', None), before_trading_start=getattr(mod, 'before_trading_start', None), analyze=getattr(mod, 'analyze', None), bundle='test', environ=environ, benchmark_returns=benchmark_returns, # Provide a default capital base, but allow the test to override. **merge({'capital_base': 1e7}, mod._test_args()) )
If there is a minimum commission: If the order hasn't had a commission paid yet, pay the minimum commission. If the order has paid a commission, start paying additional commission once the minimum commission has been reached. If there is no minimum commission: Pay commission based on number of units in the transaction.
def calculate_per_unit_commission(order, transaction, cost_per_unit, initial_commission, min_trade_cost): """ If there is a minimum commission: If the order hasn't had a commission paid yet, pay the minimum commission. If the order has paid a commission, start paying additional commission once the minimum commission has been reached. If there is no minimum commission: Pay commission based on number of units in the transaction. """ additional_commission = abs(transaction.amount * cost_per_unit) if order.commission == 0: # no commission paid yet, pay at least the minimum plus a one-time # exchange fee. return max(min_trade_cost, additional_commission + initial_commission) else: # we've already paid some commission, so figure out how much we # would be paying if we only counted per unit. per_unit_total = \ abs(order.filled * cost_per_unit) + \ additional_commission + \ initial_commission if per_unit_total < min_trade_cost: # if we haven't hit the minimum threshold yet, don't pay # additional commission return 0 else: # we've exceeded the threshold, so pay more commission. return per_unit_total - order.commission
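A worked numeric trace of the two branches, under a hypothetical fee schedule of $0.005 per share, a $1.00 per-order minimum, and no one-time exchange fee:

cost_per_unit, min_trade_cost, initial_commission = 0.005, 1.00, 0.0

# First fill: 100 shares, nothing paid on the order yet.
first = max(min_trade_cost, 100 * cost_per_unit + initial_commission)   # 1.00

# Second fill: 300 more shares, with 100 shares and $1.00 already booked.
per_unit_total = (100 + 300) * cost_per_unit + initial_commission       # 2.00
second = per_unit_total - first if per_unit_total >= min_trade_cost else 0.0
# second == 1.00, so the order has now paid 400 * $0.005 == $2.00 in total.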
Asymmetric rounding function for adjusting prices to the specified number of places in a way that "improves" the price. For limit prices, this means preferring to round down on buys and preferring to round up on sells. For stop prices, it means the reverse. If prefer_round_down == True: When .05 below to .95 above a specified decimal place, use it. If prefer_round_down == False: When .95 below to .05 above a specified decimal place, use it. In math-speak: If prefer_round_down: [<X-1>.0095, X.0195) -> round to X.01. If not prefer_round_down: (<X-1>.0005, X.0105] -> round to X.01.
def asymmetric_round_price(price, prefer_round_down, tick_size, diff=0.95): """ Asymmetric rounding function for adjusting prices to the specified number of places in a way that "improves" the price. For limit prices, this means preferring to round down on buys and preferring to round up on sells. For stop prices, it means the reverse. If prefer_round_down == True: When .05 below to .95 above a specified decimal place, use it. If prefer_round_down == False: When .95 below to .05 above a specified decimal place, use it. In math-speak: If prefer_round_down: [<X-1>.0095, X.0195) -> round to X.01. If not prefer_round_down: (<X-1>.0005, X.0105] -> round to X.01. """ precision = zp_math.number_of_decimal_places(tick_size) multiplier = int(tick_size * (10 ** precision)) diff -= 0.5 # shift the difference down diff *= (10 ** -precision) # adjust diff to precision of tick size diff *= multiplier # adjust diff to value of tick_size # Subtracting an epsilon from diff to enforce the open-ness of the upper # bound on buys and the lower bound on sells. Using the actual system # epsilon doesn't quite get there, so use a slightly less epsilon-ey value. epsilon = float_info.epsilon * 10 diff = diff - epsilon # relies on rounding half away from zero, unlike numpy's bankers' rounding rounded = tick_size * consistent_round( (price - (diff if prefer_round_down else -diff)) / tick_size ) if zp_math.tolerant_equals(rounded, 0.0): return 0.0 return rounded
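With the default ``diff=0.95`` and a one-cent tick, the asymmetry looks like this (values traced by hand; results right at the .05/.95 boundaries can shift by one tick due to floating point):

# Buys prefer to round down, sells prefer to round up.
asymmetric_round_price(13.139, prefer_round_down=True, tick_size=0.01)    # 13.13
asymmetric_round_price(13.139, prefer_round_down=False, tick_size=0.01)   # 13.14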
Check to make sure the stop/limit prices are reasonable and raise a BadOrderParameters exception if not.
def check_stoplimit_prices(price, label): """ Check to make sure the stop/limit prices are reasonable and raise a BadOrderParameters exception if not. """ try: if not isfinite(price): raise BadOrderParameters( msg="Attempted to place an order with a {} price " "of {}.".format(label, price) ) # This catches arbitrary objects except TypeError: raise BadOrderParameters( msg="Attempted to place an order with a {} price " "of {}.".format(label, type(price)) ) if price < 0: raise BadOrderParameters( msg="Can't place a {} order with a negative price.".format(label) )
Checks whether the fill price is worse than the order's limit price. Parameters ---------- fill_price: float The price to check. order: zipline.finance.order.Order The order whose limit price to check. Returns ------- bool: Whether the fill price is above the limit price (for a buy) or below the limit price (for a sell).
def fill_price_worse_than_limit_price(fill_price, order): """ Checks whether the fill price is worse than the order's limit price. Parameters ---------- fill_price: float The price to check. order: zipline.finance.order.Order The order whose limit price to check. Returns ------- bool: Whether the fill price is above the limit price (for a buy) or below the limit price (for a sell). """ if order.limit: # this is tricky! if an order with a limit price has reached # the limit price, we will try to fill the order. do not fill # these shares if the impacted price is worse than the limit # price. return early to avoid creating the transaction. # buy order is worse if the impacted price is greater than # the limit price. sell order is worse if the impacted price # is less than the limit price if (order.direction > 0 and fill_price > order.limit) or \ (order.direction < 0 and fill_price < order.limit): return True return False
Create a family of metrics sets functions that read from the same metrics set mapping. Returns ------- metrics_sets : mappingproxy The mapping of metrics sets to load functions. register : callable The function which registers new metrics sets in the ``metrics_sets`` mapping. unregister : callable The function which deregisters metrics sets from the ``metrics_sets`` mapping. load : callable The function which loads the ingested metrics sets back into memory.
def _make_metrics_set_core(): """Create a family of metrics sets functions that read from the same metrics set mapping. Returns ------- metrics_sets : mappingproxy The mapping of metrics sets to load functions. register : callable The function which registers new metrics sets in the ``metrics_sets`` mapping. unregister : callable The function which deregisters metrics sets from the ``metrics_sets`` mapping. load : callable The function which loads the ingested metrics sets back into memory. """ _metrics_sets = {} # Expose _metrics_sets through a proxy so that users cannot mutate this # accidentally. Users may go through `register` to update this which will # warn when trampling another metrics set. metrics_sets = mappingproxy(_metrics_sets) def register(name, function=None): """Register a new metrics set. Parameters ---------- name : str The name of the metrics set function : callable The callable which produces the metrics set. Notes ----- This may be used as a decorator if only ``name`` is passed. See Also -------- zipline.finance.metrics.get_metrics_set zipline.finance.metrics.unregister_metrics_set """ if function is None: # allow as decorator with just name. return partial(register, name) if name in _metrics_sets: raise ValueError('metrics set %r is already registered' % name) _metrics_sets[name] = function return function def unregister(name): """Unregister an existing metrics set. Parameters ---------- name : str The name of the metrics set See Also -------- zipline.finance.metrics.register_metrics_set """ try: del _metrics_sets[name] except KeyError: raise ValueError( 'metrics set %r was not already registered' % name, ) def load(name): """Return an instance of the metrics set registered with the given name. Returns ------- metrics : set[Metric] A new instance of the metrics set. Raises ------ ValueError Raised when no metrics set is registered to ``name`` """ try: function = _metrics_sets[name] except KeyError: raise ValueError( 'no metrics set registered as %r, options are: %r' % ( name, sorted(_metrics_sets), ), ) return function() return metrics_sets, register, unregister, load
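A minimal usage sketch built directly on the factory above; the metrics-set name is made up and the empty set stands in for a real collection of Metric objects.

metrics_sets, register, unregister, load = _make_metrics_set_core()

@register('no-op')               # decorator form: only the name is passed
def no_op():
    return set()                 # a metrics set is just a set of Metric objects

assert load('no-op') == set()
unregister('no-op')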
Takes an iterable of sources, generating namestrings and piping their output into date_sort.
def date_sorted_sources(*sources): """ Takes an iterable of sources, generating namestrings and piping their output into date_sort. """ sorted_stream = heapq.merge(*(_decorate_source(s) for s in sources)) # Strip out key decoration for _, message in sorted_stream: yield message
Define a unique string for any set of representable args.
def hash_args(*args, **kwargs): """Define a unique string for any set of representable args.""" arg_string = '_'.join([str(arg) for arg in args]) kwarg_string = '_'.join([str(key) + '=' + str(value) for key, value in iteritems(kwargs)]) combined = ':'.join([arg_string, kwarg_string]) hasher = md5() hasher.update(b(combined)) return hasher.hexdigest()
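Equal, representable arguments always produce the same digest within a process, which is what makes this usable as a memoization key for data sources (the ticker and dates below are arbitrary):

key_a = hash_args('AAPL', start='2014-01-02', end='2014-11-01')
key_b = hash_args('AAPL', start='2014-01-02', end='2014-11-01')
assert key_a == key_b
assert key_a != hash_args('MSFT', start='2014-01-02', end='2014-11-01')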
Assert that an event meets the protocol for datasource outputs.
def assert_datasource_protocol(event): """Assert that an event meets the protocol for datasource outputs.""" assert event.type in DATASOURCE_TYPE # Done packets have no dt. if not event.type == DATASOURCE_TYPE.DONE: assert isinstance(event.dt, datetime) assert event.dt.tzinfo == pytz.utc
Assert that an event meets the protocol for datasource TRADE outputs.
def assert_trade_protocol(event): """Assert that an event meets the protocol for datasource TRADE outputs.""" assert_datasource_protocol(event) assert event.type == DATASOURCE_TYPE.TRADE assert isinstance(event.price, numbers.Real) assert isinstance(event.volume, numbers.Integral) assert isinstance(event.dt, datetime)
Assert that an event is valid output of zp.DATASOURCE_UNFRAME.
def assert_datasource_unframe_protocol(event): """Assert that an event is valid output of zp.DATASOURCE_UNFRAME.""" assert event.type in DATASOURCE_TYPE
Can we build an AdjustedArray for a baseline of ``dtype``?
def can_represent_dtype(dtype):
    """
    Can we build an AdjustedArray for a baseline of ``dtype``?
    """
    return dtype in REPRESENTABLE_DTYPES or dtype.kind in STRING_KINDS
Do we represent this dtype with LabelArrays rather than ndarrays?
def is_categorical(dtype): """ Do we represent this dtype with LabelArrays rather than ndarrays? """ return dtype in OBJECT_DTYPES or dtype.kind in STRING_KINDS
Coerce buffer data for an AdjustedArray into a standard scalar
representation, returning the coerced array and a dict of arguments to
pass to ``np.ndarray.view`` when providing a user-facing view of the
underlying data.

- float* data is coerced to float64 with viewtype float64.
- int32, int64, and uint32 are converted to int64 with viewtype int64.
- datetime[*] data is coerced to int64 with a viewtype of datetime64[ns].
- bool_ data is coerced to uint8 with a viewtype of bool_.

Parameters
----------
data : np.ndarray

Returns
-------
coerced, view_kwargs : (np.ndarray, dict)
    The input ``data`` array coerced to the appropriate pipeline type.
    This may return the original array or a view over the same data.
def _normalize_array(data, missing_value):
    """
    Coerce buffer data for an AdjustedArray into a standard scalar
    representation, returning the coerced array and a dict of arguments to
    pass to ``np.ndarray.view`` when providing a user-facing view of the
    underlying data.

    - float* data is coerced to float64 with viewtype float64.
    - int32, int64, and uint32 are converted to int64 with viewtype int64.
    - datetime[*] data is coerced to int64 with a viewtype of
      datetime64[ns].
    - bool_ data is coerced to uint8 with a viewtype of bool_.

    Parameters
    ----------
    data : np.ndarray

    Returns
    -------
    coerced, view_kwargs : (np.ndarray, dict)
        The input ``data`` array coerced to the appropriate pipeline type.
        This may return the original array or a view over the same data.
    """
    if isinstance(data, LabelArray):
        return data, {}

    data_dtype = data.dtype
    if data_dtype in BOOL_DTYPES:
        return data.astype(uint8, copy=False), {'dtype': dtype(bool_)}
    elif data_dtype in FLOAT_DTYPES:
        return data.astype(float64, copy=False), {'dtype': dtype(float64)}
    elif data_dtype in INT_DTYPES:
        return data.astype(int64, copy=False), {'dtype': dtype(int64)}
    elif is_categorical(data_dtype):
        if not isinstance(missing_value, LabelArray.SUPPORTED_SCALAR_TYPES):
            raise TypeError(
                "Invalid missing_value for categorical array.\n"
                "Expected None, bytes or unicode. Got %r." % missing_value,
            )
        return LabelArray(data, missing_value), {}
    elif data_dtype.kind == 'M':
        try:
            outarray = data.astype('datetime64[ns]', copy=False).view('int64')
            return outarray, {'dtype': datetime64ns_dtype}
        except OverflowError:
            raise ValueError(
                "AdjustedArray received a datetime array "
                "not representable as datetime64[ns].\n"
                "Min Date: %s\n"
                "Max Date: %s\n"
                % (data.min(), data.max())
            )
    else:
        raise TypeError(
            "Don't know how to construct AdjustedArray "
            "on data of type %s." % data_dtype
        )
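For instance, datetime input comes back as int64 nanoseconds plus the view kwargs needed to re-expose it as datetime64[ns]; the missing_value argument is unused on this branch.

import numpy as np

dates = np.array(['2020-01-02', '2020-01-03'], dtype='datetime64[D]')
ints, view_kwargs = _normalize_array(dates, missing_value=None)
ints.dtype                   # dtype('int64') -- nanoseconds since the epoch
ints.view(**view_kwargs)     # user-facing view as datetime64[ns]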
Merge lists of new and existing adjustments for a given index by appending or prepending new adjustments to existing adjustments. Notes ----- This method is meant to be used with ``toolz.merge_with`` to merge adjustment mappings. In case of a collision ``adjustment_lists`` contains two lists, existing adjustments at index 0 and new adjustments at index 1. When there are no collisions, ``adjustment_lists`` contains a single list. Parameters ---------- adjustment_lists : list[list[Adjustment]] List(s) of new and/or existing adjustments for a given index. front_idx : int Index of list in ``adjustment_lists`` that should be used as baseline in case of a collision. back_idx : int Index of list in ``adjustment_lists`` that should extend baseline list in case of a collision. Returns ------- adjustments : list[Adjustment] List of merged adjustments for a given index.
def _merge_simple(adjustment_lists, front_idx, back_idx): """ Merge lists of new and existing adjustments for a given index by appending or prepending new adjustments to existing adjustments. Notes ----- This method is meant to be used with ``toolz.merge_with`` to merge adjustment mappings. In case of a collision ``adjustment_lists`` contains two lists, existing adjustments at index 0 and new adjustments at index 1. When there are no collisions, ``adjustment_lists`` contains a single list. Parameters ---------- adjustment_lists : list[list[Adjustment]] List(s) of new and/or existing adjustments for a given index. front_idx : int Index of list in ``adjustment_lists`` that should be used as baseline in case of a collision. back_idx : int Index of list in ``adjustment_lists`` that should extend baseline list in case of a collision. Returns ------- adjustments : list[Adjustment] List of merged adjustments for a given index. """ if len(adjustment_lists) == 1: return list(adjustment_lists[0]) else: return adjustment_lists[front_idx] + adjustment_lists[back_idx]
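As used with ``toolz.merge_with``, a collision gets the front list extended by the back list; plain strings stand in for Adjustment objects here.

from functools import partial
from toolz import merge_with

existing = {0: ['adj_a'], 3: ['adj_b']}
new = {3: ['adj_c'], 5: ['adj_d']}

merged = merge_with(partial(_merge_simple, front_idx=0, back_idx=1),
                    existing, new)
# {0: ['adj_a'], 3: ['adj_b', 'adj_c'], 5: ['adj_d']}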