Plotting
|
The following functions are contained in the pandas.plotting module.
andrews_curves(frame, class_column[, ax, ...])
Generate a matplotlib plot for visualising clusters of multivariate data.
autocorrelation_plot(series[, ax])
Autocorrelation plot for time series.
bootstrap_plot(series[, fig, size, samples])
Bootstrap plot on mean, median and mid-range statistics.
boxplot(data[, column, by, ax, fontsize, ...])
Make a box plot from DataFrame columns.
deregister_matplotlib_converters()
Remove pandas formatters and converters.
lag_plot(series[, lag, ax])
Lag plot for time series.
parallel_coordinates(frame, class_column[, ...])
Parallel coordinates plotting.
plot_params
Stores pandas plotting options.
radviz(frame, class_column[, ax, color, ...])
Plot a multidimensional dataset in 2D.
register_matplotlib_converters()
Register pandas formatters and converters with matplotlib.
scatter_matrix(frame[, alpha, figsize, ax, ...])
Draw a matrix of scatter plots.
table(ax, data[, rowLabels, colLabels])
Helper function to convert DataFrame and Series to matplotlib.table.
|
reference/plotting.html
| null |
pandas.tseries.offsets.QuarterEnd.n
|
pandas.tseries.offsets.QuarterEnd.n
|
QuarterEnd.n#
|
reference/api/pandas.tseries.offsets.QuarterEnd.n.html
|
pandas.DataFrame.rpow
|
`pandas.DataFrame.rpow`
Get exponential power of dataframe and other, element-wise (binary operator rpow).
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.rpow(other, axis='columns', level=None, fill_value=None)[source]#
Get exponential power of dataframe and other, element-wise (binary operator rpow).
Equivalent to other ** dataframe, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, pow.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
other : scalar, sequence, Series, dict or DataFrame
    Any single or multiple element data structure, or list-like object.
axis : {0 or 'index', 1 or 'columns'}
    Whether to compare by the index (0 or 'index') or columns (1 or 'columns'). For Series input, axis to match Series index on.
level : int or label
    Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
    Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing, the result will be missing.
Returns
DataFrame
    Result of the arithmetic operation.
See also
DataFrame.add : Add DataFrames.
DataFrame.sub : Subtract DataFrames.
DataFrame.mul : Multiply DataFrames.
DataFrame.div : Divide DataFrames (float division).
DataFrame.truediv : Divide DataFrames (float division).
DataFrame.floordiv : Divide DataFrames (integer division).
DataFrame.mod : Calculate modulo (remainder after division).
DataFrame.pow : Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with the operator version, which returns the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
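The examples above only exercise the other flexible wrappers; a minimal sketch of rpow itself, which computes other ** dataframe rather than dataframe ** other:

```python
import pandas as pd

df = pd.DataFrame({'angles': [0, 3, 4]},
                  index=['circle', 'triangle', 'rectangle'])

# df.rpow(2) computes 2 ** df element-wise:
# circle -> 1, triangle -> 8, rectangle -> 16
result = df.rpow(2)
print(result)
```

Compare with df.pow(2), which would square each value instead.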
|
reference/api/pandas.DataFrame.rpow.html
|
pandas.core.resample.Resampler.indices
|
`pandas.core.resample.Resampler.indices`
Dict {group name -> group indices}.
|
property Resampler.indices[source]#
Dict {group name -> group indices}.
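A short illustrative sketch of what indices returns: each resampling bin name maps to the positional indices of the rows that fall into that bin.

```python
import pandas as pd

s = pd.Series(range(4),
              index=pd.date_range('2023-01-01', periods=4, freq='D'))

# Each 2-day bin maps to the integer positions of its member rows
idx = s.resample('2D').indices
for name, positions in idx.items():
    print(name, list(positions))
```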
|
reference/api/pandas.core.resample.Resampler.indices.html
|
pandas arrays, scalars, and data types
|
Objects#
For most data types, pandas uses NumPy arrays as the concrete
objects contained within an Index, Series, or
DataFrame.
For some data types, pandas extends NumPy’s type system. String aliases for these types
can be found at dtypes.
Kind of Data | pandas Data Type | Scalar | Array
---|---|---|---
TZ-aware datetime | DatetimeTZDtype | Timestamp | Datetimes
Timedeltas | (none) | Timedelta | Timedeltas
Period (time spans) | PeriodDtype | Period | Periods
Intervals | IntervalDtype | Interval | Intervals
Nullable Integer | Int64Dtype, … | (none) | Nullable integer
Categorical | CategoricalDtype | (none) | Categoricals
Sparse | SparseDtype | (none) | Sparse
Strings | StringDtype | str | Strings
Boolean (with NA) | BooleanDtype | bool | Nullable Boolean
PyArrow | ArrowDtype | Python Scalars or NA | PyArrow
pandas and third-party libraries can extend NumPy’s type system (see Extension types).
The top-level array() method can be used to create a new array, which may be
stored in a Series, Index, or as a column in a DataFrame.
array(data[, dtype, copy])
Create an array.
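For example, array() infers an appropriate extension type from the data; with integers and a missing value it returns a nullable integer array (behavior of recent pandas versions):

```python
import pandas as pd

# pd.array infers the nullable Int64 extension type for
# integer data containing a missing value
arr = pd.array([1, 2, None])
print(arr)
print(arr.dtype)  # Int64
```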
PyArrow#
Warning
This feature is experimental, and the API can change in a future release without warning.
The arrays.ArrowExtensionArray is backed by a pyarrow.ChunkedArray with a
pyarrow.DataType instead of a NumPy array and data type. The .dtype of an arrays.ArrowExtensionArray
is an ArrowDtype.
PyArrow provides array and data type support similar to NumPy's, including first-class
nullability support for all data types, immutability, and more.
Note
For string types (pyarrow.string(), string[pyarrow]), PyArrow support is still facilitated
by arrays.ArrowStringArray and StringDtype("pyarrow"). See the string section
below.
While individual values in an arrays.ArrowExtensionArray are stored as PyArrow objects, scalars are returned
as Python scalars corresponding to the data type, e.g. a PyArrow int64 will be returned as a Python int, or NA for missing
values.
arrays.ArrowExtensionArray(values)
Pandas ExtensionArray backed by a PyArrow ChunkedArray.
ArrowDtype(pyarrow_dtype)
An ExtensionDtype for PyArrow data types.
Datetimes#
NumPy cannot natively represent timezone-aware datetimes. pandas supports this
with the arrays.DatetimeArray extension array, which can hold timezone-naive
or timezone-aware values.
Timestamp, a subclass of datetime.datetime, is pandas’
scalar type for timezone-naive or timezone-aware datetime data.
Timestamp([ts_input, freq, tz, unit, year, ...])
Pandas replacement for python datetime.datetime object.
Properties#
Timestamp.asm8
Return numpy datetime64 format in nanoseconds.
Timestamp.day
Timestamp.dayofweek
Return day of the week.
Timestamp.day_of_week
Return day of the week.
Timestamp.dayofyear
Return the day of the year.
Timestamp.day_of_year
Return the day of the year.
Timestamp.days_in_month
Return the number of days in the month.
Timestamp.daysinmonth
Return the number of days in the month.
Timestamp.fold
Timestamp.hour
Timestamp.is_leap_year
Return True if year is a leap year.
Timestamp.is_month_end
Return True if date is last day of month.
Timestamp.is_month_start
Return True if date is first day of month.
Timestamp.is_quarter_end
Return True if date is last day of the quarter.
Timestamp.is_quarter_start
Return True if date is first day of the quarter.
Timestamp.is_year_end
Return True if date is last day of the year.
Timestamp.is_year_start
Return True if date is first day of the year.
Timestamp.max
Timestamp.microsecond
Timestamp.min
Timestamp.minute
Timestamp.month
Timestamp.nanosecond
Timestamp.quarter
Return the quarter of the year.
Timestamp.resolution
Timestamp.second
Timestamp.tz
Alias for tzinfo.
Timestamp.tzinfo
Timestamp.value
Timestamp.week
Return the week number of the year.
Timestamp.weekofyear
Return the week number of the year.
Timestamp.year
Methods#
Timestamp.astimezone(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.ceil(freq[, ambiguous, nonexistent])
Return a new Timestamp ceiled to this resolution.
Timestamp.combine(date, time)
Combine date, time into datetime with same date and time fields.
Timestamp.ctime
Return ctime() style string.
Timestamp.date
Return date object with same year, month and day.
Timestamp.day_name
Return the day name of the Timestamp with specified locale.
Timestamp.dst
Return self.tzinfo.dst(self).
Timestamp.floor(freq[, ambiguous, nonexistent])
Return a new Timestamp floored to this resolution.
Timestamp.freq
Timestamp.freqstr
Return the frequency string.
Timestamp.fromordinal(ordinal[, freq, tz])
Construct a timestamp from a proleptic Gregorian ordinal.
Timestamp.fromtimestamp(ts)
Transform timestamp[, tz] to tz's local time from POSIX timestamp.
Timestamp.isocalendar
Return a 3-tuple containing ISO year, week number, and weekday.
Timestamp.isoformat
Return the time formatted according to ISO 8601.
Timestamp.isoweekday()
Return the day of the week represented by the date.
Timestamp.month_name
Return the month name of the Timestamp with specified locale.
Timestamp.normalize
Normalize Timestamp to midnight, preserving tz information.
Timestamp.now([tz])
Return new Timestamp object representing current time local to tz.
Timestamp.replace([year, month, day, hour, ...])
Implements datetime.replace, handles nanoseconds.
Timestamp.round(freq[, ambiguous, nonexistent])
Round the Timestamp to the specified resolution.
Timestamp.strftime(format)
Return a formatted string of the Timestamp.
Timestamp.strptime(string, format)
Function is not implemented.
Timestamp.time
Return time object with same time but with tzinfo=None.
Timestamp.timestamp
Return POSIX timestamp as float.
Timestamp.timetuple
Return time tuple, compatible with time.localtime().
Timestamp.timetz
Return time object with same time and tzinfo.
Timestamp.to_datetime64
Return a numpy.datetime64 object with 'ns' precision.
Timestamp.to_numpy
Convert the Timestamp to a NumPy datetime64.
Timestamp.to_julian_date()
Convert TimeStamp to a Julian Date.
Timestamp.to_period
Return a Period of which this timestamp is an observation.
Timestamp.to_pydatetime
Convert a Timestamp object to a native Python datetime object.
Timestamp.today([tz])
Return the current time in the local timezone.
Timestamp.toordinal
Return proleptic Gregorian ordinal.
Timestamp.tz_convert(tz)
Convert timezone-aware Timestamp to another time zone.
Timestamp.tz_localize(tz[, ambiguous, ...])
Localize the Timestamp to a timezone.
Timestamp.tzname
Return self.tzinfo.tzname(self).
Timestamp.utcfromtimestamp(ts)
Construct a naive UTC datetime from a POSIX timestamp.
Timestamp.utcnow()
Return a new Timestamp representing UTC day and time.
Timestamp.utcoffset
Return self.tzinfo.utcoffset(self).
Timestamp.utctimetuple
Return UTC time tuple, compatible with time.localtime().
Timestamp.weekday()
Return the day of the week represented by the date.
A collection of timestamps may be stored in an arrays.DatetimeArray.
For timezone-aware data, the .dtype of an arrays.DatetimeArray is a
DatetimeTZDtype. For timezone-naive data, np.dtype("datetime64[ns]")
is used.
If the data are timezone-aware, then every value in the array must have the same timezone.
arrays.DatetimeArray(values[, dtype, freq, copy])
Pandas ExtensionArray for tz-naive or tz-aware datetime data.
DatetimeTZDtype([unit, tz])
An ExtensionDtype for timezone-aware datetime data.
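A brief sketch of the distinction: constructing datetime data with a timezone yields a DatetimeTZDtype, while naive data stays NumPy-backed:

```python
import pandas as pd

aware = pd.Series(pd.date_range('2023-01-01', periods=2, tz='UTC'))
naive = pd.Series(pd.date_range('2023-01-01', periods=2))

print(aware.dtype)  # datetime64[ns, UTC]  (a DatetimeTZDtype)
print(naive.dtype)  # datetime64[ns]       (a plain NumPy dtype)
```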
Timedeltas#
NumPy can natively represent timedeltas. pandas provides Timedelta
for symmetry with Timestamp.
Timedelta([value, unit])
Represents a duration, the difference between two dates or times.
Properties#
Timedelta.asm8
Return a numpy timedelta64 array scalar view.
Timedelta.components
Return a components namedtuple-like.
Timedelta.days
Timedelta.delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
Timedelta.freq
(DEPRECATED) Freq property.
Timedelta.is_populated
(DEPRECATED) Is_populated property.
Timedelta.max
Timedelta.microseconds
Timedelta.min
Timedelta.nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
Timedelta.resolution
Timedelta.seconds
Timedelta.value
Timedelta.view
Array view compatibility.
Methods#
Timedelta.ceil(freq)
Return a new Timedelta ceiled to this resolution.
Timedelta.floor(freq)
Return a new Timedelta floored to this resolution.
Timedelta.isoformat
Format the Timedelta as ISO 8601 Duration.
Timedelta.round(freq)
Round the Timedelta to the specified resolution.
Timedelta.to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
Timedelta.to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
Timedelta.to_numpy
Convert the Timedelta to a NumPy timedelta64.
Timedelta.total_seconds
Total seconds in the duration.
A collection of Timedelta may be stored in a TimedeltaArray.
arrays.TimedeltaArray(values[, dtype, freq, ...])
Pandas ExtensionArray for timedelta data.
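A minimal illustration of the Timedelta scalar and a couple of the methods listed above:

```python
import pandas as pd

td = pd.Timedelta('1 days 2 hours')

print(td.total_seconds())  # 93600.0
print(td.isoformat())      # P1DT2H0M0S
print(td.round('D'))       # 1 days 00:00:00
```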
Periods#
pandas represents spans of times as Period objects.
Period#
Period([value, freq, ordinal, year, month, ...])
Represents a period of time.
Properties#
Period.day
Get day of the month that a Period falls on.
Period.dayofweek
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.day_of_week
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.dayofyear
Return the day of the year.
Period.day_of_year
Return the day of the year.
Period.days_in_month
Get the total number of days in the month that this period falls on.
Period.daysinmonth
Get the total number of days of the month that this period falls on.
Period.end_time
Get the Timestamp for the end of the period.
Period.freq
Period.freqstr
Return a string representation of the frequency.
Period.hour
Get the hour of the day component of the Period.
Period.is_leap_year
Return True if the period's year is in a leap year.
Period.minute
Get minute of the hour component of the Period.
Period.month
Return the month this Period falls on.
Period.ordinal
Period.quarter
Return the quarter this Period falls on.
Period.qyear
Fiscal year the Period lies in according to its starting-quarter.
Period.second
Get the second component of the Period.
Period.start_time
Get the Timestamp for the start of the period.
Period.week
Get the week of the year on the given Period.
Period.weekday
Day of the week the period lies in, with Monday=0 and Sunday=6.
Period.weekofyear
Get the week of the year on the given Period.
Period.year
Return the year this Period falls on.
Methods#
Period.asfreq
Convert Period to desired frequency, at the start or end of the interval.
Period.now
Return the period of now's date.
Period.strftime
Returns a formatted string representation of the Period.
Period.to_timestamp
Return the Timestamp representation of the Period.
A collection of Period may be stored in an arrays.PeriodArray.
Every period in an arrays.PeriodArray must have the same freq.
arrays.PeriodArray(values[, dtype, freq, copy])
Pandas ExtensionArray for storing Period data.
PeriodDtype([freq])
An ExtensionDtype for Period data.
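A small sketch of the Period scalar and some of the properties and methods listed above:

```python
import pandas as pd

p = pd.Period('2023-03-15', freq='D')

print(p.quarter)      # 1
print(p.asfreq('M'))  # 2023-03  (converted to monthly frequency)
print(p.start_time)   # 2023-03-15 00:00:00
```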
Intervals#
Arbitrary intervals can be represented as Interval objects.
Interval
Immutable object implementing an Interval, a bounded slice-like interval.
Properties#
Interval.closed
String describing the inclusive side of the interval.
Interval.closed_left
Check if the interval is closed on the left side.
Interval.closed_right
Check if the interval is closed on the right side.
Interval.is_empty
Indicates if an interval is empty, meaning it contains no points.
Interval.left
Left bound for the interval.
Interval.length
Return the length of the Interval.
Interval.mid
Return the midpoint of the Interval.
Interval.open_left
Check if the interval is open on the left side.
Interval.open_right
Check if the interval is open on the right side.
Interval.overlaps
Check whether two Interval objects overlap.
Interval.right
Right bound for the interval.
A collection of intervals may be stored in an arrays.IntervalArray.
arrays.IntervalArray(data[, closed, dtype, ...])
Pandas array for interval data that are closed on the same side.
IntervalDtype([subtype, closed])
An ExtensionDtype for Interval data.
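A quick sketch of how the closed sides of an Interval affect membership:

```python
import pandas as pd

iv = pd.Interval(0, 5, closed='left')  # the half-open interval [0, 5)

print(0 in iv)    # True  (closed on the left)
print(5 in iv)    # False (open on the right)
print(iv.length)  # 5
print(iv.mid)     # 2.5
```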
Nullable integer#
numpy.ndarray cannot natively represent integer-data with missing values.
pandas provides this through arrays.IntegerArray.
arrays.IntegerArray(values, mask[, copy])
Array of integer (optional missing) values.
Int8Dtype()
An ExtensionDtype for int8 integer data.
Int16Dtype()
An ExtensionDtype for int16 integer data.
Int32Dtype()
An ExtensionDtype for int32 integer data.
Int64Dtype()
An ExtensionDtype for int64 integer data.
UInt8Dtype()
An ExtensionDtype for uint8 integer data.
UInt16Dtype()
An ExtensionDtype for uint16 integer data.
UInt32Dtype()
An ExtensionDtype for uint32 integer data.
UInt64Dtype()
An ExtensionDtype for uint64 integer data.
Categoricals#
pandas defines a custom data type for representing data that can take only a
limited, fixed set of values. The dtype of a Categorical can be described by
a CategoricalDtype.
CategoricalDtype([categories, ordered])
Type for categorical data with the categories and orderedness.
CategoricalDtype.categories
An Index containing the unique categories allowed.
CategoricalDtype.ordered
Whether the categories have an ordered relationship.
Categorical data can be stored in a pandas.Categorical
Categorical(values[, categories, ordered, ...])
Represent a categorical variable in classic R / S-plus fashion.
The alternative Categorical.from_codes() constructor can be used when you
have the categories and integer codes already:
Categorical.from_codes(codes[, categories, ...])
Make a Categorical type from codes and categories or dtype.
The dtype information is available on the Categorical
Categorical.dtype
The CategoricalDtype for this instance.
Categorical.categories
The categories of this categorical.
Categorical.ordered
Whether the categories have an ordered relationship.
Categorical.codes
The category codes of this categorical.
np.asarray(categorical) works by implementing the array interface. Be aware that this converts
the Categorical back to a NumPy array, so category and order information is not preserved!
Categorical.__array__([dtype])
The numpy array interface.
A Categorical can be stored in a Series or DataFrame.
To create a Series of dtype category, use cat = s.astype(dtype) or
Series(..., dtype=dtype) where dtype is either:
- the string 'category'
- an instance of CategoricalDtype.
If the Series is of dtype CategoricalDtype, Series.cat can be used to change the categorical
data. See Categorical accessor for more.
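A short sketch tying the pieces above together: the categories, the integer codes, and the from_codes constructor:

```python
import pandas as pd

c = pd.Categorical(['a', 'b', 'a'],
                   categories=['a', 'b', 'c'], ordered=True)

print(list(c.categories))   # ['a', 'b', 'c']
print(c.codes.tolist())     # [0, 1, 0]

# Equivalent construction from pre-computed integer codes
c2 = pd.Categorical.from_codes([0, 1, 0],
                               categories=['a', 'b', 'c'], ordered=True)
print((c == c2).all())      # True
```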
Sparse#
Data where a single value is repeated many times (e.g. 0 or NaN) may
be stored efficiently as an arrays.SparseArray.
arrays.SparseArray(data[, sparse_index, ...])
An ExtensionArray for storing sparse data.
SparseDtype([dtype, fill_value])
Dtype for data stored in SparseArray.
The Series.sparse accessor may be used to access sparse-specific attributes
and methods if the Series contains sparse values. See
Sparse accessor and the user guide for more.
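An illustrative sketch: for integer data the fill value is inferred as 0, so only the non-fill entries are physically stored:

```python
import pandas as pd

sa = pd.arrays.SparseArray([0, 0, 1, 0])

print(sa.fill_value)  # 0
print(sa.density)     # 0.25  (one of four values is actually stored)
```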
Strings#
When working with text data, where each valid element is a string or missing,
we recommend using StringDtype (with the alias "string").
arrays.StringArray(values[, copy])
Extension array for string data.
arrays.ArrowStringArray(values)
Extension array for string data in a pyarrow.ChunkedArray.
StringDtype([storage])
Extension dtype for string data.
The Series.str accessor is available for Series backed by an arrays.StringArray.
See String handling for more.
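A minimal sketch of a string-dtype Series: missing entries are pd.NA, and the .str accessor propagates them:

```python
import pandas as pd

s = pd.Series(['cat', 'dog', None], dtype='string')

print(s.dtype)           # string
print(s.str.upper()[0])  # CAT
print(s[2] is pd.NA)     # True
```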
Nullable Boolean#
The boolean dtype (with the alias "boolean") provides support for storing
boolean data (True, False) with missing values, which is not possible
with a bool numpy.ndarray.
arrays.BooleanArray(values, mask[, copy])
Array of boolean (True/False) data with missing values.
BooleanDtype()
Extension dtype for boolean data.
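A brief sketch of the boolean dtype's three-valued (Kleene) logic with missing values:

```python
import pandas as pd

b = pd.array([True, False, None], dtype='boolean')

# Kleene logic: NA | True is True, but NA & True stays NA
print((b | True).tolist())  # [True, True, True]
print((b & True).tolist())  # [True, False, <NA>]
```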
Utilities#
Constructors#
api.types.union_categoricals(to_union[, ...])
Combine list-like of Categorical-like, unioning categories.
api.types.infer_dtype
Return a string label of the type of a scalar or list-like of values.
api.types.pandas_dtype(dtype)
Convert input into a pandas only dtype object or a numpy dtype object.
Data type introspection#
api.types.is_bool_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a boolean dtype.
api.types.is_categorical_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Categorical dtype.
api.types.is_complex_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a complex dtype.
api.types.is_datetime64_any_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64 dtype.
api.types.is_datetime64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the datetime64 dtype.
api.types.is_datetime64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the datetime64[ns] dtype.
api.types.is_datetime64tz_dtype(arr_or_dtype)
Check whether an array-like or dtype is of a DatetimeTZDtype dtype.
api.types.is_extension_type(arr)
(DEPRECATED) Check whether an array-like is of a pandas extension class instance.
api.types.is_extension_array_dtype(arr_or_dtype)
Check if an object is a pandas extension array type.
api.types.is_float_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a float dtype.
api.types.is_int64_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the int64 dtype.
api.types.is_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an integer dtype.
api.types.is_interval_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Interval dtype.
api.types.is_numeric_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a numeric dtype.
api.types.is_object_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the object dtype.
api.types.is_period_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the Period dtype.
api.types.is_signed_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of a signed integer dtype.
api.types.is_string_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the string dtype.
api.types.is_timedelta64_dtype(arr_or_dtype)
Check whether an array-like or dtype is of the timedelta64 dtype.
api.types.is_timedelta64_ns_dtype(arr_or_dtype)
Check whether the provided array or dtype is of the timedelta64[ns] dtype.
api.types.is_unsigned_integer_dtype(arr_or_dtype)
Check whether the provided array or dtype is of an unsigned integer dtype.
api.types.is_sparse(arr)
Check whether an array-like is a 1-D pandas sparse array.
Iterable introspection#
api.types.is_dict_like(obj)
Check if the object is dict-like.
api.types.is_file_like(obj)
Check if the object is a file-like object.
api.types.is_list_like
Check if the object is list-like.
api.types.is_named_tuple(obj)
Check if the object is a named tuple.
api.types.is_iterator
Check if the object is an iterator.
Scalar introspection#
api.types.is_bool
Return True if given object is boolean.
api.types.is_categorical(arr)
(DEPRECATED) Check whether an array-like is a Categorical instance.
api.types.is_complex
Return True if given object is complex.
api.types.is_float
Return True if given object is float.
api.types.is_hashable(obj)
Return True if hash(obj) will succeed, False otherwise.
api.types.is_integer
Return True if given object is integer.
api.types.is_interval
api.types.is_number(obj)
Check if the object is a number.
api.types.is_re(obj)
Check if the object is a regex pattern instance.
api.types.is_re_compilable(obj)
Check if the object can be compiled into a regex pattern instance.
api.types.is_scalar
Return True if given object is scalar.
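A few of these introspection helpers in action; note that strings are deliberately not treated as list-like:

```python
from pandas.api.types import is_list_like, is_scalar, is_dict_like

print(is_list_like([1, 2]))    # True
print(is_list_like('abc'))     # False (strings are excluded)
print(is_scalar(3.0))          # True
print(is_dict_like({'a': 1}))  # True
```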
|
reference/arrays.html
| null |
pandas.Period.day_of_year
|
`pandas.Period.day_of_year`
Return the day of the year.
```
>>> period = pd.Period("2015-10-23", freq='H')
>>> period.day_of_year
296
>>> period = pd.Period("2012-12-31", freq='D')
>>> period.day_of_year
366
>>> period = pd.Period("2013-01-01", freq='D')
>>> period.day_of_year
1
```
|
Period.day_of_year#
Return the day of the year.
This attribute returns the day of the year on which the particular
date occurs. The return value ranges from 1 to 365 for regular
years and from 1 to 366 for leap years.
Returns
int
    The day of the year.
See also
Period.day : Return the day of the month.
Period.day_of_week : Return the day of week.
PeriodIndex.day_of_year : Return the day of year of all indexes.
Examples
>>> period = pd.Period("2015-10-23", freq='H')
>>> period.day_of_year
296
>>> period = pd.Period("2012-12-31", freq='D')
>>> period.day_of_year
366
>>> period = pd.Period("2013-01-01", freq='D')
>>> period.day_of_year
1
|
reference/api/pandas.Period.day_of_year.html
|
pandas.io.formats.style.Styler.highlight_between
|
`pandas.io.formats.style.Styler.highlight_between`
Highlight a defined range with a style.
```
>>> df = pd.DataFrame({
... 'One': [1.2, 1.6, 1.5],
... 'Two': [2.9, 2.1, 2.5],
... 'Three': [3.1, 3.2, 3.8],
... })
>>> df.style.highlight_between(left=2.1, right=2.9)
```
|
Styler.highlight_between(subset=None, color='yellow', axis=0, left=None, right=None, inclusive='both', props=None)[source]#
Highlight a defined range with a style.
New in version 1.3.0.
Parameters
subset : label, array-like, IndexSlice, optional
    A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input
    or single key, to DataFrame.loc[:, <subset>] where the columns are
    prioritised, to limit data before applying the function.
color : str, default 'yellow'
    Background color to use for highlighting.
axis : {0 or 'index', 1 or 'columns', None}, default 0
    If left or right is given as a sequence, the axis along which to apply those
    boundaries. See examples.
left : scalar or datetime-like, or sequence or array-like, default None
    Left bound for defining the range.
right : scalar or datetime-like, or sequence or array-like, default None
    Right bound for defining the range.
inclusive : {'both', 'neither', 'left', 'right'}
    Identify whether bounds are closed or open.
props : str, default None
    CSS properties to use for highlighting. If props is given, color
    is not used.
Returns
self : Styler
See also
Styler.highlight_null : Highlight missing values with a style.
Styler.highlight_max : Highlight the maximum with a style.
Styler.highlight_min : Highlight the minimum with a style.
Styler.highlight_quantile : Highlight values defined by a quantile with a style.
Notes
If left is None only the right bound is applied.
If right is None only the left bound is applied. If both are None
all values are highlighted.
axis is only needed if left or right are provided as a sequence or
an array-like object for aligning the shapes. If left and right are
both scalars then all axis inputs will give the same result.
This function only works with compatible dtypes. For example a datetime-like
region can only use equivalent datetime-like left and right arguments.
Use subset to control regions which have multiple dtypes.
Examples
Basic usage
>>> df = pd.DataFrame({
... 'One': [1.2, 1.6, 1.5],
... 'Two': [2.9, 2.1, 2.5],
... 'Three': [3.1, 3.2, 3.8],
... })
>>> df.style.highlight_between(left=2.1, right=2.9)
Using a range input sequence along an axis, in this case setting a left
and right for each column individually
>>> df.style.highlight_between(left=[1.4, 2.4, 3.4], right=[1.6, 2.6, 3.6],
... axis=1, color="#fffd75")
Using axis=None and providing the left argument as an array that
matches the input DataFrame, with a constant right
>>> df.style.highlight_between(left=[[2,2,3],[2,2,3],[3,3,3]], right=3.5,
... axis=None, color="#fffd75")
Using props instead of default background coloring
>>> df.style.highlight_between(left=1.5, right=3.5,
... props='font-weight:bold;color:#e83e8c')
|
reference/api/pandas.io.formats.style.Styler.highlight_between.html
|
pandas.tseries.offsets.Easter.__call__
|
`pandas.tseries.offsets.Easter.__call__`
Call self as a function.
|
Easter.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.Easter.__call__.html
|
pandas.Series.notna
|
`pandas.Series.notna`
Detect existing (non-missing) values.
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
Series.notna()[source]#
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
Series
    Mask of bool values for each element in Series that
    indicates whether an element is not an NA value.
See also
Series.notnull : Alias of notna.
Series.isna : Boolean inverse of notna.
Series.dropna : Omit axes labels with missing values.
notna : Top-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.notna()
age born name toy
0 True False True False
1 True True True True
2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.notna()
0 True
1 True
2 False
dtype: bool
|
reference/api/pandas.Series.notna.html
|
pandas.Series.plot.line
|
`pandas.Series.plot.line`
Plot Series or DataFrame as lines.
```
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot: >
```
|
Series.plot.line(x=None, y=None, **kwargs)[source]#
Plot Series or DataFrame as lines.
This function is useful to plot lines using DataFrame’s values
as coordinates.
Parameters
x : label or position, optional
Allows plotting of one column versus another. If not specified, the index of the DataFrame is used.
y : label or position, optional
Allows plotting of one column versus another. If not specified, all numerical columns are used.
color : str, array-like, or dict, optional
The color for each of the DataFrame’s columns. Possible values are:
A single color string referred to by name, RGB or RGBA code, for instance ‘red’ or ‘#a98d19’.
A sequence of color strings referred to by name, RGB or RGBA code, which will be used for each column recursively. For instance [‘green’, ‘yellow’] will color each column’s line green or yellow, alternately. If there is only a single column to be plotted, then only the first color from the color list will be used.
A dict of the form {column name: color}, so that each column will be colored accordingly. For example, if your columns are called a and b, then passing {‘a’: ‘green’, ‘b’: ‘red’} will color lines for column a in green and lines for column b in red.
New in version 1.1.0.
**kwargs
Additional keyword arguments are documented in DataFrame.plot().
Returns
matplotlib.axes.Axes or np.ndarray of them
An ndarray is returned with one matplotlib.axes.Axes per column when subplots=True.
See also
matplotlib.pyplot.plot : Plot y versus x as lines and/or markers.
Examples
>>> s = pd.Series([1, 3, 2])
>>> s.plot.line()
<AxesSubplot:>
The following example shows the populations for some animals
over the years.
>>> df = pd.DataFrame({
... 'pig': [20, 18, 489, 675, 1776],
... 'horse': [4, 25, 281, 600, 1900]
... }, index=[1990, 1997, 2003, 2009, 2014])
>>> lines = df.plot.line()
An example with subplots, so an array of axes is returned.
>>> axes = df.plot.line(subplots=True)
>>> type(axes)
<class 'numpy.ndarray'>
Let’s repeat the same example, but specifying colors for
each column (in this case, for each animal).
>>> axes = df.plot.line(
... subplots=True, color={"pig": "pink", "horse": "#742802"}
... )
The following example shows the relationship between both
populations.
>>> lines = df.plot.line(x='pig', y='horse')
|
reference/api/pandas.Series.plot.line.html
|
pandas.api.types.is_categorical_dtype
|
`pandas.api.types.is_categorical_dtype`
Check whether an array-like or dtype is of the Categorical dtype.
The array-like or dtype to check.
```
>>> is_categorical_dtype(object)
False
>>> is_categorical_dtype(CategoricalDtype())
True
>>> is_categorical_dtype([1, 2, 3])
False
>>> is_categorical_dtype(pd.Categorical([1, 2, 3]))
True
>>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
True
```
|
pandas.api.types.is_categorical_dtype(arr_or_dtype)[source]#
Check whether an array-like or dtype is of the Categorical dtype.
Parameters
arr_or_dtype : array-like or dtype
The array-like or dtype to check.
Returns
boolean
Whether or not the array-like or dtype is of the Categorical dtype.
Examples
>>> is_categorical_dtype(object)
False
>>> is_categorical_dtype(CategoricalDtype())
True
>>> is_categorical_dtype([1, 2, 3])
False
>>> is_categorical_dtype(pd.Categorical([1, 2, 3]))
True
>>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
True
|
reference/api/pandas.api.types.is_categorical_dtype.html
|
Date offsets
|
Date offsets
|
DateOffset#
DateOffset
Standard kind of date increment used for a date range.
Properties#
DateOffset.freqstr
Return a string representing the frequency.
DateOffset.kwds
Return a dict of extra parameters for the offset.
DateOffset.name
Return a string representing the base frequency.
DateOffset.nanos
DateOffset.normalize
DateOffset.rule_code
DateOffset.n
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
Methods#
DateOffset.apply
DateOffset.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
DateOffset.copy
Return a copy of the frequency.
DateOffset.isAnchored
DateOffset.onOffset
DateOffset.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
DateOffset.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
DateOffset.__call__(*args, **kwargs)
Call self as a function.
DateOffset.is_month_start
Return boolean whether a timestamp occurs on the month start.
DateOffset.is_month_end
Return boolean whether a timestamp occurs on the month end.
DateOffset.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
DateOffset.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
DateOffset.is_year_start
Return boolean whether a timestamp occurs on the year start.
DateOffset.is_year_end
Return boolean whether a timestamp occurs on the year end.
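As a quick illustration of the properties and methods listed above (a minimal sketch, assuming a standard pandas install), a DateOffset shifts a timestamp by calendar units and can report how the result relates to calendar anchors:

```python
import pandas as pd
from pandas.tseries.offsets import DateOffset

# Shift a month-end timestamp forward by two calendar months.
ts = pd.Timestamp("2020-01-31")
shifted = ts + DateOffset(months=2)   # 2020-03-31

off = DateOffset(months=2)
print(off.kwds)                 # extra parameters carried by the offset
print(off.is_month_end(shifted))  # whether the shifted date is a month end
```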
BusinessDay#
BusinessDay
DateOffset subclass representing possibly n business days.
Alias:
BDay
alias of pandas._libs.tslibs.offsets.BusinessDay
Properties#
BusinessDay.freqstr
Return a string representing the frequency.
BusinessDay.kwds
Return a dict of extra parameters for the offset.
BusinessDay.name
Return a string representing the base frequency.
BusinessDay.nanos
BusinessDay.normalize
BusinessDay.rule_code
BusinessDay.n
BusinessDay.weekmask
BusinessDay.holidays
BusinessDay.calendar
Methods#
BusinessDay.apply
BusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessDay.copy
Return a copy of the frequency.
BusinessDay.isAnchored
BusinessDay.onOffset
BusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessDay.__call__(*args, **kwargs)
Call self as a function.
BusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
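A short sketch of BusinessDay arithmetic via its BDay alias (assuming a standard pandas install); adding one business day to a Friday skips the weekend:

```python
import pandas as pd
from pandas.tseries.offsets import BDay  # BDay is an alias of BusinessDay

fri = pd.Timestamp("2020-01-03")   # a Friday
next_bday = fri + BDay(1)          # weekend skipped -> Monday 2020-01-06
print(BDay(1).freqstr)             # the frequency string for one business day
```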
BusinessHour#
BusinessHour
DateOffset subclass representing possibly n business hours.
Properties#
BusinessHour.freqstr
Return a string representing the frequency.
BusinessHour.kwds
Return a dict of extra parameters for the offset.
BusinessHour.name
Return a string representing the base frequency.
BusinessHour.nanos
BusinessHour.normalize
BusinessHour.rule_code
BusinessHour.n
BusinessHour.start
BusinessHour.end
BusinessHour.weekmask
BusinessHour.holidays
BusinessHour.calendar
Methods#
BusinessHour.apply
BusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessHour.copy
Return a copy of the frequency.
BusinessHour.isAnchored
BusinessHour.onOffset
BusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessHour.__call__(*args, **kwargs)
Call self as a function.
BusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
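A minimal sketch of BusinessHour behavior (assuming the default 09:00-17:00 window): a result landing exactly on the close rolls over to the next opening, which can cross a weekend:

```python
import pandas as pd
from pandas.tseries.offsets import BusinessHour

# Friday, one hour before the default close; adding one business hour
# lands on the close and therefore rolls to Monday's opening.
ts = pd.Timestamp("2020-01-03 16:00")
print(ts + BusinessHour())   # Monday 2020-01-06 09:00
```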
CustomBusinessDay#
CustomBusinessDay
DateOffset subclass representing custom business days excluding holidays.
Alias:
CDay
alias of pandas._libs.tslibs.offsets.CustomBusinessDay
Properties#
CustomBusinessDay.freqstr
Return a string representing the frequency.
CustomBusinessDay.kwds
Return a dict of extra parameters for the offset.
CustomBusinessDay.name
Return a string representing the base frequency.
CustomBusinessDay.nanos
CustomBusinessDay.normalize
CustomBusinessDay.rule_code
CustomBusinessDay.n
CustomBusinessDay.weekmask
CustomBusinessDay.calendar
CustomBusinessDay.holidays
Methods#
CustomBusinessDay.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessDay.apply
CustomBusinessDay.copy
Return a copy of the frequency.
CustomBusinessDay.isAnchored
CustomBusinessDay.onOffset
CustomBusinessDay.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessDay.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessDay.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessDay.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessDay.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessDay.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessDay.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessDay.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessDay.is_year_end
Return boolean whether a timestamp occurs on the year end.
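A brief sketch of CustomBusinessDay via its CDay alias (assuming a standard pandas install), using the holidays parameter listed above to exclude a specific date:

```python
import pandas as pd
from pandas.tseries.offsets import CDay  # CDay is an alias of CustomBusinessDay

# Treat Monday 2020-01-06 as a holiday; the offset skips weekends and holidays.
offset = CDay(holidays=["2020-01-06"])
print(pd.Timestamp("2020-01-03") + offset)   # Tuesday 2020-01-07
```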
CustomBusinessHour#
CustomBusinessHour
DateOffset subclass representing possibly n custom business days.
Properties#
CustomBusinessHour.freqstr
Return a string representing the frequency.
CustomBusinessHour.kwds
Return a dict of extra parameters for the offset.
CustomBusinessHour.name
Return a string representing the base frequency.
CustomBusinessHour.nanos
CustomBusinessHour.normalize
CustomBusinessHour.rule_code
CustomBusinessHour.n
CustomBusinessHour.weekmask
CustomBusinessHour.calendar
CustomBusinessHour.holidays
CustomBusinessHour.start
CustomBusinessHour.end
Methods#
CustomBusinessHour.apply
CustomBusinessHour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessHour.copy
Return a copy of the frequency.
CustomBusinessHour.isAnchored
CustomBusinessHour.onOffset
CustomBusinessHour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessHour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessHour.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessHour.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessHour.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessHour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessHour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessHour.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessHour.is_year_end
Return boolean whether a timestamp occurs on the year end.
MonthEnd#
MonthEnd
DateOffset of one month end.
Properties#
MonthEnd.freqstr
Return a string representing the frequency.
MonthEnd.kwds
Return a dict of extra parameters for the offset.
MonthEnd.name
Return a string representing the base frequency.
MonthEnd.nanos
MonthEnd.normalize
MonthEnd.rule_code
MonthEnd.n
Methods#
MonthEnd.apply
MonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthEnd.copy
Return a copy of the frequency.
MonthEnd.isAnchored
MonthEnd.onOffset
MonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthEnd.__call__(*args, **kwargs)
Call self as a function.
MonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
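A minimal sketch of anchored-offset rolling with MonthEnd (assuming a standard pandas install): mid-month dates roll forward to the month end, while dates already on the anchor advance to the next one:

```python
import pandas as pd
from pandas.tseries.offsets import MonthEnd

print(pd.Timestamp("2020-01-15") + MonthEnd())   # rolls to 2020-01-31
print(pd.Timestamp("2020-01-31") + MonthEnd())   # advances to 2020-02-29
print(MonthEnd().is_on_offset(pd.Timestamp("2020-01-31")))  # on the anchor
```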
MonthBegin#
MonthBegin
DateOffset of one month at beginning.
Properties#
MonthBegin.freqstr
Return a string representing the frequency.
MonthBegin.kwds
Return a dict of extra parameters for the offset.
MonthBegin.name
Return a string representing the base frequency.
MonthBegin.nanos
MonthBegin.normalize
MonthBegin.rule_code
MonthBegin.n
Methods#
MonthBegin.apply
MonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
MonthBegin.copy
Return a copy of the frequency.
MonthBegin.isAnchored
MonthBegin.onOffset
MonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
MonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
MonthBegin.__call__(*args, **kwargs)
Call self as a function.
MonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
MonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
MonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
MonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
MonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
MonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthEnd#
BusinessMonthEnd
DateOffset increments between the last business day of the month.
Alias:
BMonthEnd
alias of pandas._libs.tslibs.offsets.BusinessMonthEnd
Properties#
BusinessMonthEnd.freqstr
Return a string representing the frequency.
BusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
BusinessMonthEnd.name
Return a string representing the base frequency.
BusinessMonthEnd.nanos
BusinessMonthEnd.normalize
BusinessMonthEnd.rule_code
BusinessMonthEnd.n
Methods#
BusinessMonthEnd.apply
BusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthEnd.copy
Return a copy of the frequency.
BusinessMonthEnd.isAnchored
BusinessMonthEnd.onOffset
BusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BusinessMonthBegin#
BusinessMonthBegin
DateOffset of one month at the first business day.
Alias:
BMonthBegin
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
Properties#
BusinessMonthBegin.freqstr
Return a string representing the frequency.
BusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
BusinessMonthBegin.name
Return a string representing the base frequency.
BusinessMonthBegin.nanos
BusinessMonthBegin.normalize
BusinessMonthBegin.rule_code
BusinessMonthBegin.n
Methods#
BusinessMonthBegin.apply
BusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BusinessMonthBegin.copy
Return a copy of the frequency.
BusinessMonthBegin.isAnchored
BusinessMonthBegin.onOffset
BusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BusinessMonthBegin.__call__(*args, **kwargs)
Call self as a function.
BusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthEnd#
CustomBusinessMonthEnd
Attributes
Alias:
CBMonthEnd
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthEnd
Properties#
CustomBusinessMonthEnd.freqstr
Return a string representing the frequency.
CustomBusinessMonthEnd.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthEnd.m_offset
CustomBusinessMonthEnd.name
Return a string representing the base frequency.
CustomBusinessMonthEnd.nanos
CustomBusinessMonthEnd.normalize
CustomBusinessMonthEnd.rule_code
CustomBusinessMonthEnd.n
CustomBusinessMonthEnd.weekmask
CustomBusinessMonthEnd.calendar
CustomBusinessMonthEnd.holidays
Methods#
CustomBusinessMonthEnd.apply
CustomBusinessMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthEnd.copy
Return a copy of the frequency.
CustomBusinessMonthEnd.isAnchored
CustomBusinessMonthEnd.onOffset
CustomBusinessMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthEnd.__call__(*args, **kwargs)
Call self as a function.
CustomBusinessMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
CustomBusinessMonthBegin#
CustomBusinessMonthBegin
Attributes
Alias:
CBMonthBegin
alias of pandas._libs.tslibs.offsets.CustomBusinessMonthBegin
Properties#
CustomBusinessMonthBegin.freqstr
Return a string representing the frequency.
CustomBusinessMonthBegin.kwds
Return a dict of extra parameters for the offset.
CustomBusinessMonthBegin.m_offset
CustomBusinessMonthBegin.name
Return a string representing the base frequency.
CustomBusinessMonthBegin.nanos
CustomBusinessMonthBegin.normalize
CustomBusinessMonthBegin.rule_code
CustomBusinessMonthBegin.n
CustomBusinessMonthBegin.weekmask
CustomBusinessMonthBegin.calendar
CustomBusinessMonthBegin.holidays
Methods#
CustomBusinessMonthBegin.apply
CustomBusinessMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
CustomBusinessMonthBegin.copy
Return a copy of the frequency.
CustomBusinessMonthBegin.isAnchored
CustomBusinessMonthBegin.onOffset
CustomBusinessMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
CustomBusinessMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
CustomBusinessMonthBegin.__call__(*args, ...)
Call self as a function.
CustomBusinessMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
CustomBusinessMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
CustomBusinessMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
CustomBusinessMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
CustomBusinessMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
CustomBusinessMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
SemiMonthEnd#
SemiMonthEnd
Two DateOffset's per month repeating on the last day of the month & day_of_month.
Properties#
SemiMonthEnd.freqstr
Return a string representing the frequency.
SemiMonthEnd.kwds
Return a dict of extra parameters for the offset.
SemiMonthEnd.name
Return a string representing the base frequency.
SemiMonthEnd.nanos
SemiMonthEnd.normalize
SemiMonthEnd.rule_code
SemiMonthEnd.n
SemiMonthEnd.day_of_month
Methods#
SemiMonthEnd.apply
SemiMonthEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthEnd.copy
Return a copy of the frequency.
SemiMonthEnd.isAnchored
SemiMonthEnd.onOffset
SemiMonthEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthEnd.__call__(*args, **kwargs)
Call self as a function.
SemiMonthEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
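A short sketch of SemiMonthEnd (assuming the default day_of_month=15): the offset lands on the 15th and on the last day of each month:

```python
import pandas as pd
from pandas.tseries.offsets import SemiMonthEnd

print(pd.Timestamp("2020-01-02") + SemiMonthEnd())   # rolls to 2020-01-15
print(pd.Timestamp("2020-01-16") + SemiMonthEnd())   # rolls to 2020-01-31
```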
SemiMonthBegin#
SemiMonthBegin
Two DateOffset's per month repeating on the first day of the month & day_of_month.
Properties#
SemiMonthBegin.freqstr
Return a string representing the frequency.
SemiMonthBegin.kwds
Return a dict of extra parameters for the offset.
SemiMonthBegin.name
Return a string representing the base frequency.
SemiMonthBegin.nanos
SemiMonthBegin.normalize
SemiMonthBegin.rule_code
SemiMonthBegin.n
SemiMonthBegin.day_of_month
Methods#
SemiMonthBegin.apply
SemiMonthBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
SemiMonthBegin.copy
Return a copy of the frequency.
SemiMonthBegin.isAnchored
SemiMonthBegin.onOffset
SemiMonthBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
SemiMonthBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
SemiMonthBegin.__call__(*args, **kwargs)
Call self as a function.
SemiMonthBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
SemiMonthBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
SemiMonthBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
SemiMonthBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
SemiMonthBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
SemiMonthBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
Week#
Week
Weekly offset.
Properties#
Week.freqstr
Return a string representing the frequency.
Week.kwds
Return a dict of extra parameters for the offset.
Week.name
Return a string representing the base frequency.
Week.nanos
Week.normalize
Week.rule_code
Week.n
Week.weekday
Methods#
Week.apply
Week.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Week.copy
Return a copy of the frequency.
Week.isAnchored
Week.onOffset
Week.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Week.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Week.__call__(*args, **kwargs)
Call self as a function.
Week.is_month_start
Return boolean whether a timestamp occurs on the month start.
Week.is_month_end
Return boolean whether a timestamp occurs on the month end.
Week.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Week.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Week.is_year_start
Return boolean whether a timestamp occurs on the year start.
Week.is_year_end
Return boolean whether a timestamp occurs on the year end.
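A minimal sketch of the weekly offset (assuming a standard pandas install): with weekday=0 the offset is anchored on Mondays, so a Friday rolls forward to the next Monday:

```python
import pandas as pd
from pandas.tseries.offsets import Week

monday_week = Week(weekday=0)                     # anchored on Mondays
print(pd.Timestamp("2020-01-03") + monday_week)   # Friday -> Monday 2020-01-06
print(monday_week.is_anchored())                  # anchored since weekday is set
```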
WeekOfMonth#
WeekOfMonth
Describes monthly dates like "the Tuesday of the 2nd week of each month".
Properties#
WeekOfMonth.freqstr
Return a string representing the frequency.
WeekOfMonth.kwds
Return a dict of extra parameters for the offset.
WeekOfMonth.name
Return a string representing the base frequency.
WeekOfMonth.nanos
WeekOfMonth.normalize
WeekOfMonth.rule_code
WeekOfMonth.n
WeekOfMonth.week
Methods#
WeekOfMonth.apply
WeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
WeekOfMonth.copy
Return a copy of the frequency.
WeekOfMonth.isAnchored
WeekOfMonth.onOffset
WeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
WeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
WeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
WeekOfMonth.weekday
WeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
WeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
WeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
WeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
WeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
WeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
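A brief sketch of WeekOfMonth (assuming a standard pandas install); both week and weekday are zero-based, so week=1, weekday=1 means "the Tuesday of the 2nd week of each month":

```python
import pandas as pd
from pandas.tseries.offsets import WeekOfMonth

offset = WeekOfMonth(week=1, weekday=1)           # second Tuesday of the month
print(pd.Timestamp("2020-01-01") + offset)        # 2020-01-14
```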
LastWeekOfMonth#
LastWeekOfMonth
Describes monthly dates in last week of month.
Properties#
LastWeekOfMonth.freqstr
Return a string representing the frequency.
LastWeekOfMonth.kwds
Return a dict of extra parameters for the offset.
LastWeekOfMonth.name
Return a string representing the base frequency.
LastWeekOfMonth.nanos
LastWeekOfMonth.normalize
LastWeekOfMonth.rule_code
LastWeekOfMonth.n
LastWeekOfMonth.weekday
LastWeekOfMonth.week
Methods#
LastWeekOfMonth.apply
LastWeekOfMonth.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
LastWeekOfMonth.copy
Return a copy of the frequency.
LastWeekOfMonth.isAnchored
LastWeekOfMonth.onOffset
LastWeekOfMonth.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
LastWeekOfMonth.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
LastWeekOfMonth.__call__(*args, **kwargs)
Call self as a function.
LastWeekOfMonth.is_month_start
Return boolean whether a timestamp occurs on the month start.
LastWeekOfMonth.is_month_end
Return boolean whether a timestamp occurs on the month end.
LastWeekOfMonth.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
LastWeekOfMonth.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
LastWeekOfMonth.is_year_start
Return boolean whether a timestamp occurs on the year start.
LastWeekOfMonth.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterEnd#
BQuarterEnd
DateOffset increments between the last business day of each Quarter.
Properties#
BQuarterEnd.freqstr
Return a string representing the frequency.
BQuarterEnd.kwds
Return a dict of extra parameters for the offset.
BQuarterEnd.name
Return a string representing the base frequency.
BQuarterEnd.nanos
BQuarterEnd.normalize
BQuarterEnd.rule_code
BQuarterEnd.n
BQuarterEnd.startingMonth
Methods#
BQuarterEnd.apply
BQuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterEnd.copy
Return a copy of the frequency.
BQuarterEnd.isAnchored
BQuarterEnd.onOffset
BQuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterEnd.__call__(*args, **kwargs)
Call self as a function.
BQuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
BQuarterBegin#
BQuarterBegin
DateOffset increments between the first business day of each Quarter.
Properties#
BQuarterBegin.freqstr
Return a string representing the frequency.
BQuarterBegin.kwds
Return a dict of extra parameters for the offset.
BQuarterBegin.name
Return a string representing the base frequency.
BQuarterBegin.nanos
BQuarterBegin.normalize
BQuarterBegin.rule_code
BQuarterBegin.n
BQuarterBegin.startingMonth
Methods#
BQuarterBegin.apply
BQuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BQuarterBegin.copy
Return a copy of the frequency.
BQuarterBegin.isAnchored
BQuarterBegin.onOffset
BQuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BQuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BQuarterBegin.__call__(*args, **kwargs)
Call self as a function.
BQuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BQuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BQuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BQuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BQuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BQuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
QuarterEnd#
QuarterEnd
DateOffset increments between Quarter end dates.
Properties#
QuarterEnd.freqstr
Return a string representing the frequency.
QuarterEnd.kwds
Return a dict of extra parameters for the offset.
QuarterEnd.name
Return a string representing the base frequency.
QuarterEnd.nanos
QuarterEnd.normalize
QuarterEnd.rule_code
QuarterEnd.n
QuarterEnd.startingMonth
Methods#
QuarterEnd.apply
QuarterEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterEnd.copy
Return a copy of the frequency.
QuarterEnd.isAnchored
QuarterEnd.onOffset
QuarterEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterEnd.__call__(*args, **kwargs)
Call self as a function.
QuarterEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
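A minimal sketch of QuarterEnd (assuming the default startingMonth=3): quarter ends fall on the March, June, September and December month ends:

```python
import pandas as pd
from pandas.tseries.offsets import QuarterEnd

print(pd.Timestamp("2020-01-15") + QuarterEnd())   # rolls to 2020-03-31
print(QuarterEnd().is_quarter_end(pd.Timestamp("2020-03-31")))  # on a quarter end
```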
QuarterBegin#
QuarterBegin
DateOffset increments between Quarter start dates.
Properties#
QuarterBegin.freqstr
Return a string representing the frequency.
QuarterBegin.kwds
Return a dict of extra parameters for the offset.
QuarterBegin.name
Return a string representing the base frequency.
QuarterBegin.nanos
QuarterBegin.normalize
QuarterBegin.rule_code
QuarterBegin.n
QuarterBegin.startingMonth
Methods#
QuarterBegin.apply
QuarterBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
QuarterBegin.copy
Return a copy of the frequency.
QuarterBegin.isAnchored
QuarterBegin.onOffset
QuarterBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
QuarterBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
QuarterBegin.__call__(*args, **kwargs)
Call self as a function.
QuarterBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
QuarterBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
QuarterBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
QuarterBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
QuarterBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
QuarterBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
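A minimal sketch of the addition behaviour (assuming the default startingMonth=3, so quarters start on the first of March, June, September and December):

```python
import pandas as pd

# QuarterBegin rolls a timestamp forward to the next quarter-start date.
ts = pd.Timestamp(2022, 1, 1)
result = ts + pd.offsets.QuarterBegin()
print(result)  # 2022-03-01 00:00:00
```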
BYearEnd#
BYearEnd
DateOffset increments between the last business day of the year.
Properties#
BYearEnd.freqstr
Return a string representing the frequency.
BYearEnd.kwds
Return a dict of extra parameters for the offset.
BYearEnd.name
Return a string representing the base frequency.
BYearEnd.nanos
BYearEnd.normalize
BYearEnd.rule_code
BYearEnd.n
BYearEnd.month
Methods#
BYearEnd.apply
BYearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearEnd.copy
Return a copy of the frequency.
BYearEnd.isAnchored
BYearEnd.onOffset
BYearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearEnd.__call__(*args, **kwargs)
Call self as a function.
BYearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
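A short sketch of the behaviour: in 2022, December 31 falls on a Saturday, so the last business day of the year is Friday, December 30.

```python
import pandas as pd

# BYearEnd rolls forward to the last business day of the year.
ts = pd.Timestamp(2022, 5, 1)
result = ts + pd.offsets.BYearEnd()
print(result)  # 2022-12-30 00:00:00
```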
BYearBegin#
BYearBegin
DateOffset increments between the first business day of the year.
Properties#
BYearBegin.freqstr
Return a string representing the frequency.
BYearBegin.kwds
Return a dict of extra parameters for the offset.
BYearBegin.name
Return a string representing the base frequency.
BYearBegin.nanos
BYearBegin.normalize
BYearBegin.rule_code
BYearBegin.n
BYearBegin.month
Methods#
BYearBegin.apply
BYearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
BYearBegin.copy
Return a copy of the frequency.
BYearBegin.isAnchored
BYearBegin.onOffset
BYearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
BYearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
BYearBegin.__call__(*args, **kwargs)
Call self as a function.
BYearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
BYearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
BYearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
BYearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
BYearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
BYearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
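A short sketch of the behaviour: January 1, 2023 falls on a Sunday, so the first business day of 2023 is Monday, January 2.

```python
import pandas as pd

# BYearBegin rolls forward to the first business day of the next year.
ts = pd.Timestamp(2022, 5, 1)
result = ts + pd.offsets.BYearBegin()
print(result)  # 2023-01-02 00:00:00
```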
YearEnd#
YearEnd
DateOffset increments between calendar year ends.
Properties#
YearEnd.freqstr
Return a string representing the frequency.
YearEnd.kwds
Return a dict of extra parameters for the offset.
YearEnd.name
Return a string representing the base frequency.
YearEnd.nanos
YearEnd.normalize
YearEnd.rule_code
YearEnd.n
YearEnd.month
Methods#
YearEnd.apply
YearEnd.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearEnd.copy
Return a copy of the frequency.
YearEnd.isAnchored
YearEnd.onOffset
YearEnd.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearEnd.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearEnd.__call__(*args, **kwargs)
Call self as a function.
YearEnd.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearEnd.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearEnd.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearEnd.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearEnd.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearEnd.is_year_end
Return boolean whether a timestamp occurs on the year end.
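A quick sketch of the addition behaviour (with the default anchoring on December):

```python
import pandas as pd

# YearEnd rolls a timestamp forward to the next calendar year end.
ts = pd.Timestamp(2022, 5, 1)
result = ts + pd.offsets.YearEnd()
print(result)  # 2022-12-31 00:00:00
```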
YearBegin#
YearBegin
DateOffset increments between calendar year begin dates.
Properties#
YearBegin.freqstr
Return a string representing the frequency.
YearBegin.kwds
Return a dict of extra parameters for the offset.
YearBegin.name
Return a string representing the base frequency.
YearBegin.nanos
YearBegin.normalize
YearBegin.rule_code
YearBegin.n
YearBegin.month
Methods#
YearBegin.apply
YearBegin.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
YearBegin.copy
Return a copy of the frequency.
YearBegin.isAnchored
YearBegin.onOffset
YearBegin.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
YearBegin.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
YearBegin.__call__(*args, **kwargs)
Call self as a function.
YearBegin.is_month_start
Return boolean whether a timestamp occurs on the month start.
YearBegin.is_month_end
Return boolean whether a timestamp occurs on the month end.
YearBegin.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
YearBegin.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
YearBegin.is_year_start
Return boolean whether a timestamp occurs on the year start.
YearBegin.is_year_end
Return boolean whether a timestamp occurs on the year end.
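A quick sketch of the addition behaviour (with the default anchoring on January 1):

```python
import pandas as pd

# YearBegin rolls a timestamp forward to the next calendar year start.
ts = pd.Timestamp(2022, 6, 1)
result = ts + pd.offsets.YearBegin()
print(result)  # 2023-01-01 00:00:00
```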
FY5253#
FY5253
Describes 52-53 week fiscal year.
Properties#
FY5253.freqstr
Return a string representing the frequency.
FY5253.kwds
Return a dict of extra parameters for the offset.
FY5253.name
Return a string representing the base frequency.
FY5253.nanos
FY5253.normalize
FY5253.rule_code
FY5253.n
FY5253.startingMonth
FY5253.variation
FY5253.weekday
Methods#
FY5253.apply
FY5253.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253.copy
Return a copy of the frequency.
FY5253.get_rule_code_suffix
FY5253.get_year_end
FY5253.isAnchored
FY5253.onOffset
FY5253.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253.__call__(*args, **kwargs)
Call self as a function.
FY5253.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253.is_year_end
Return boolean whether a timestamp occurs on the year end.
FY5253Quarter#
FY5253Quarter
DateOffset increments between business quarter dates for 52-53 week fiscal year.
Properties#
FY5253Quarter.freqstr
Return a string representing the frequency.
FY5253Quarter.kwds
Return a dict of extra parameters for the offset.
FY5253Quarter.name
Return a string representing the base frequency.
FY5253Quarter.nanos
FY5253Quarter.normalize
FY5253Quarter.rule_code
FY5253Quarter.n
FY5253Quarter.qtr_with_extra_week
FY5253Quarter.startingMonth
FY5253Quarter.variation
FY5253Quarter.weekday
Methods#
FY5253Quarter.apply
FY5253Quarter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
FY5253Quarter.copy
Return a copy of the frequency.
FY5253Quarter.get_rule_code_suffix
FY5253Quarter.get_weeks
FY5253Quarter.isAnchored
FY5253Quarter.onOffset
FY5253Quarter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
FY5253Quarter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
FY5253Quarter.year_has_extra_week
FY5253Quarter.__call__(*args, **kwargs)
Call self as a function.
FY5253Quarter.is_month_start
Return boolean whether a timestamp occurs on the month start.
FY5253Quarter.is_month_end
Return boolean whether a timestamp occurs on the month end.
FY5253Quarter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
FY5253Quarter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
FY5253Quarter.is_year_start
Return boolean whether a timestamp occurs on the year start.
FY5253Quarter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Easter#
Easter
DateOffset for the Easter holiday using logic defined in dateutil.
Properties#
Easter.freqstr
Return a string representing the frequency.
Easter.kwds
Return a dict of extra parameters for the offset.
Easter.name
Return a string representing the base frequency.
Easter.nanos
Easter.normalize
Easter.rule_code
Easter.n
Methods#
Easter.apply
Easter.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Easter.copy
Return a copy of the frequency.
Easter.isAnchored
Easter.onOffset
Easter.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Easter.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Easter.__call__(*args, **kwargs)
Call self as a function.
Easter.is_month_start
Return boolean whether a timestamp occurs on the month start.
Easter.is_month_end
Return boolean whether a timestamp occurs on the month end.
Easter.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Easter.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Easter.is_year_start
Return boolean whether a timestamp occurs on the year start.
Easter.is_year_end
Return boolean whether a timestamp occurs on the year end.
Tick#
Tick
Attributes
Properties#
Tick.delta
Tick.freqstr
Return a string representing the frequency.
Tick.kwds
Return a dict of extra parameters for the offset.
Tick.name
Return a string representing the base frequency.
Tick.nanos
Return an integer of the total number of nanoseconds.
Tick.normalize
Tick.rule_code
Tick.n
Methods#
Tick.copy
Return a copy of the frequency.
Tick.isAnchored
Tick.onOffset
Tick.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Tick.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Tick.__call__(*args, **kwargs)
Call self as a function.
Tick.apply
Tick.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Tick.is_month_start
Return boolean whether a timestamp occurs on the month start.
Tick.is_month_end
Return boolean whether a timestamp occurs on the month end.
Tick.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Tick.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Tick.is_year_start
Return boolean whether a timestamp occurs on the year start.
Tick.is_year_end
Return boolean whether a timestamp occurs on the year end.
Day#
Day
Attributes
Properties#
Day.delta
Day.freqstr
Return a string representing the frequency.
Day.kwds
Return a dict of extra parameters for the offset.
Day.name
Return a string representing the base frequency.
Day.nanos
Return an integer of the total number of nanoseconds.
Day.normalize
Day.rule_code
Day.n
Methods#
Day.copy
Return a copy of the frequency.
Day.isAnchored
Day.onOffset
Day.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Day.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Day.__call__(*args, **kwargs)
Call self as a function.
Day.apply
Day.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Day.is_month_start
Return boolean whether a timestamp occurs on the month start.
Day.is_month_end
Return boolean whether a timestamp occurs on the month end.
Day.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Day.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Day.is_year_start
Return boolean whether a timestamp occurs on the year start.
Day.is_year_end
Return boolean whether a timestamp occurs on the year end.
Hour#
Hour
Attributes
Properties#
Hour.delta
Hour.freqstr
Return a string representing the frequency.
Hour.kwds
Return a dict of extra parameters for the offset.
Hour.name
Return a string representing the base frequency.
Hour.nanos
Return an integer of the total number of nanoseconds.
Hour.normalize
Hour.rule_code
Hour.n
Methods#
Hour.copy
Return a copy of the frequency.
Hour.isAnchored
Hour.onOffset
Hour.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Hour.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Hour.__call__(*args, **kwargs)
Call self as a function.
Hour.apply
Hour.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Hour.is_month_start
Return boolean whether a timestamp occurs on the month start.
Hour.is_month_end
Return boolean whether a timestamp occurs on the month end.
Hour.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Hour.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Hour.is_year_start
Return boolean whether a timestamp occurs on the year start.
Hour.is_year_end
Return boolean whether a timestamp occurs on the year end.
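Hour, like the other Tick subclasses, is a fixed-duration offset, so arithmetic is exact and nanos reports the total span in nanoseconds. A minimal sketch:

```python
import pandas as pd

# A fixed 5-hour offset: 5 * 3600 seconds * 1e9 nanoseconds per second.
freq = pd.offsets.Hour(5)
shifted = pd.Timestamp(2022, 1, 1) + freq
total_nanos = freq.nanos
print(shifted)      # 2022-01-01 05:00:00
print(total_nanos)  # 18000000000000
```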
Minute#
Minute
Attributes
Properties#
Minute.delta
Minute.freqstr
Return a string representing the frequency.
Minute.kwds
Return a dict of extra parameters for the offset.
Minute.name
Return a string representing the base frequency.
Minute.nanos
Return an integer of the total number of nanoseconds.
Minute.normalize
Minute.rule_code
Minute.n
Methods#
Minute.copy
Return a copy of the frequency.
Minute.isAnchored
Minute.onOffset
Minute.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Minute.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Minute.__call__(*args, **kwargs)
Call self as a function.
Minute.apply
Minute.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Minute.is_month_start
Return boolean whether a timestamp occurs on the month start.
Minute.is_month_end
Return boolean whether a timestamp occurs on the month end.
Minute.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Minute.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Minute.is_year_start
Return boolean whether a timestamp occurs on the year start.
Minute.is_year_end
Return boolean whether a timestamp occurs on the year end.
Second#
Second
Attributes
Properties#
Second.delta
Second.freqstr
Return a string representing the frequency.
Second.kwds
Return a dict of extra parameters for the offset.
Second.name
Return a string representing the base frequency.
Second.nanos
Return an integer of the total number of nanoseconds.
Second.normalize
Second.rule_code
Second.n
Methods#
Second.copy
Return a copy of the frequency.
Second.isAnchored
Second.onOffset
Second.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Second.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Second.__call__(*args, **kwargs)
Call self as a function.
Second.apply
Second.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Second.is_month_start
Return boolean whether a timestamp occurs on the month start.
Second.is_month_end
Return boolean whether a timestamp occurs on the month end.
Second.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Second.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Second.is_year_start
Return boolean whether a timestamp occurs on the year start.
Second.is_year_end
Return boolean whether a timestamp occurs on the year end.
Milli#
Milli
Attributes
Properties#
Milli.delta
Milli.freqstr
Return a string representing the frequency.
Milli.kwds
Return a dict of extra parameters for the offset.
Milli.name
Return a string representing the base frequency.
Milli.nanos
Return an integer of the total number of nanoseconds.
Milli.normalize
Milli.rule_code
Milli.n
Methods#
Milli.copy
Return a copy of the frequency.
Milli.isAnchored
Milli.onOffset
Milli.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Milli.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Milli.__call__(*args, **kwargs)
Call self as a function.
Milli.apply
Milli.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Milli.is_month_start
Return boolean whether a timestamp occurs on the month start.
Milli.is_month_end
Return boolean whether a timestamp occurs on the month end.
Milli.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Milli.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Milli.is_year_start
Return boolean whether a timestamp occurs on the year start.
Milli.is_year_end
Return boolean whether a timestamp occurs on the year end.
Micro#
Micro
Attributes
Properties#
Micro.delta
Micro.freqstr
Return a string representing the frequency.
Micro.kwds
Return a dict of extra parameters for the offset.
Micro.name
Return a string representing the base frequency.
Micro.nanos
Return an integer of the total number of nanoseconds.
Micro.normalize
Micro.rule_code
Micro.n
Methods#
Micro.copy
Return a copy of the frequency.
Micro.isAnchored
Micro.onOffset
Micro.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Micro.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Micro.__call__(*args, **kwargs)
Call self as a function.
Micro.apply
Micro.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Micro.is_month_start
Return boolean whether a timestamp occurs on the month start.
Micro.is_month_end
Return boolean whether a timestamp occurs on the month end.
Micro.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Micro.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Micro.is_year_start
Return boolean whether a timestamp occurs on the year start.
Micro.is_year_end
Return boolean whether a timestamp occurs on the year end.
Nano#
Nano
Attributes
Properties#
Nano.delta
Nano.freqstr
Return a string representing the frequency.
Nano.kwds
Return a dict of extra parameters for the offset.
Nano.name
Return a string representing the base frequency.
Nano.nanos
Return an integer of the total number of nanoseconds.
Nano.normalize
Nano.rule_code
Nano.n
Methods#
Nano.copy
Return a copy of the frequency.
Nano.isAnchored
Nano.onOffset
Nano.is_anchored
Return boolean whether the frequency is a unit frequency (n=1).
Nano.is_on_offset
Return boolean whether a timestamp intersects with this frequency.
Nano.__call__(*args, **kwargs)
Call self as a function.
Nano.apply
Nano.apply_index
(DEPRECATED) Vectorized apply of DateOffset to DatetimeIndex.
Nano.is_month_start
Return boolean whether a timestamp occurs on the month start.
Nano.is_month_end
Return boolean whether a timestamp occurs on the month end.
Nano.is_quarter_start
Return boolean whether a timestamp occurs on the quarter start.
Nano.is_quarter_end
Return boolean whether a timestamp occurs on the quarter end.
Nano.is_year_start
Return boolean whether a timestamp occurs on the year start.
Nano.is_year_end
Return boolean whether a timestamp occurs on the year end.
Frequencies#
to_offset
Return DateOffset object from string or datetime.timedelta object.
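A minimal sketch of to_offset, which accepts either a frequency alias string or a timedelta-like object:

```python
import pandas as pd
from pandas.tseries.frequencies import to_offset

# Parse a frequency alias into the corresponding DateOffset object.
off = to_offset("5min")

# A Timedelta (or datetime.timedelta) works as well.
off2 = to_offset(pd.Timedelta(hours=2))
```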
|
reference/offset_frequency.html
|
pandas.Index.argsort
|
`pandas.Index.argsort`
Return the integer indices that would sort the index.
```
>>> idx = pd.Index(['b', 'a', 'd', 'c'])
>>> idx
Index(['b', 'a', 'd', 'c'], dtype='object')
```
|
Index.argsort(*args, **kwargs)[source]#
Return the integer indices that would sort the index.
Parameters
*argsPassed to numpy.ndarray.argsort.
**kwargsPassed to numpy.ndarray.argsort.
Returns
np.ndarray[np.intp]Integer indices that would sort the index if used as
an indexer.
See also
numpy.argsortSimilar method for NumPy arrays.
Index.sort_valuesReturn sorted copy of Index.
Examples
>>> idx = pd.Index(['b', 'a', 'd', 'c'])
>>> idx
Index(['b', 'a', 'd', 'c'], dtype='object')
>>> order = idx.argsort()
>>> order
array([1, 0, 3, 2])
>>> idx[order]
Index(['a', 'b', 'c', 'd'], dtype='object')
|
reference/api/pandas.Index.argsort.html
|
pandas.Period.month
|
`pandas.Period.month`
Return the month this Period falls on.
|
Period.month#
Return the month this Period falls on.
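A minimal usage sketch:

```python
import pandas as pd

# The month attribute of a monthly Period.
period = pd.Period("2022-03", "M")
result = period.month
print(result)  # 3
```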
|
reference/api/pandas.Period.month.html
|
pandas.tseries.offsets.BusinessHour.is_quarter_start
|
`pandas.tseries.offsets.BusinessHour.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
```
|
BusinessHour.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
|
reference/api/pandas.tseries.offsets.BusinessHour.is_quarter_start.html
|
pandas.Series.cat.rename_categories
|
`pandas.Series.cat.rename_categories`
Rename categories.
```
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
```
|
Series.cat.rename_categories(*args, **kwargs)[source]#
Rename categories.
Parameters
new_categorieslist-like, dict-like or callableNew categories which will replace old categories.
list-like: all items must be unique and the number of items in
the new categories must match the existing number of categories.
dict-like: specifies a mapping from
old categories to new. Categories not contained in the mapping
are passed through and extra categories in the mapping are
ignored.
callable : a callable that is called on all items in the old
categories and whose return values comprise the new categories.
inplacebool, default FalseWhether or not to rename the categories inplace or return a copy of
this categorical with renamed categories.
Deprecated since version 1.3.0.
Returns
catCategorical or NoneCategorical with renamed categories or None if inplace=True.
Raises
ValueErrorIf new categories are list-like and do not have the same number of
items as the current categories, or do not validate as categories.
See also
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
remove_unused_categoriesRemove categories which are not used.
set_categoriesSet the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and
categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B']
|
reference/api/pandas.Series.cat.rename_categories.html
|
pandas.io.formats.style.Styler.apply_index
|
`pandas.io.formats.style.Styler.apply_index`
Apply a CSS-styling function to the index or column headers, level-wise.
Updates the HTML representation with the result.
```
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(s):
... return np.where(s == "B", "background-color: yellow;", "")
>>> df.style.apply_index(color_b)
```
|
Styler.apply_index(func, axis=0, level=None, **kwargs)[source]#
Apply a CSS-styling function to the index or column headers, level-wise.
Updates the HTML representation with the result.
New in version 1.4.0.
Parameters
funcfunctionfunc should take a Series and return a string array of the same length.
axis{0, 1, “index”, “columns”}The headers over which to apply the function.
levelint, str, list, optionalIf index is MultiIndex the level(s) over which to apply the function.
**kwargsdictPass along to func.
Returns
selfStyler
See also
Styler.applymap_indexApply a CSS-styling function to headers elementwise.
Styler.applyApply a CSS-styling function column-wise, row-wise, or table-wise.
Styler.applymapApply a CSS-styling function elementwise.
Notes
Each input to func will be the index as a Series, if an Index, or a level of a MultiIndex. The output of func should be
an identically sized array of CSS styles as strings, in the format ‘attribute: value; attribute2: value2; …’
or, if nothing is to be applied to that element, an empty string or None.
Examples
Basic usage to conditionally highlight values in the index.
>>> df = pd.DataFrame([[1,2], [3,4]], index=["A", "B"])
>>> def color_b(s):
... return np.where(s == "B", "background-color: yellow;", "")
>>> df.style.apply_index(color_b)
Selectively applying to specific levels of MultiIndex columns.
>>> midx = pd.MultiIndex.from_product([['ix', 'jy'], [0, 1], ['x3', 'z4']])
>>> df = pd.DataFrame([np.arange(8)], columns=midx)
>>> def highlight_x(s):
... return ["background-color: yellow;" if "x" in v else "" for v in s]
>>> df.style.apply_index(highlight_x, axis="columns", level=[0, 2])
...
|
reference/api/pandas.io.formats.style.Styler.apply_index.html
|
pandas.tseries.offsets.Second.is_year_end
|
`pandas.tseries.offsets.Second.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Second.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Second.is_year_end.html
|
pandas.io.formats.style.Styler.render
|
`pandas.io.formats.style.Styler.render`
Render the Styler including all applied styles to HTML.
|
Styler.render(sparse_index=None, sparse_columns=None, **kwargs)[source]#
Render the Styler including all applied styles to HTML.
Deprecated since version 1.4.0.
Parameters
sparse_indexbool, optionalWhether to sparsify the display of a hierarchical index. Setting to False
will display each explicit level element in a hierarchical key for each row.
Defaults to pandas.options.styler.sparse.index value.
sparse_columnsbool, optionalWhether to sparsify the display of hierarchical columns. Setting to False
will display each explicit level element in a hierarchical key for each column.
Defaults to pandas.options.styler.sparse.columns value.
**kwargsAny additional keyword arguments are passed
through to self.template.render.
This is useful when you need to provide
additional variables for a custom template.
Returns
renderedstrThe rendered HTML.
Notes
This method is deprecated in favour of Styler.to_html.
Styler objects have defined the _repr_html_ method
which automatically calls self.to_html() when it’s the
last item in a Notebook cell.
When calling Styler.render() directly, wrap the result in
IPython.display.HTML to view the rendered HTML in the notebook.
Pandas uses the following keys in render. Arguments passed
in **kwargs take precedence, so think carefully if you want
to override them:
head
cellstyle
body
uuid
table_styles
caption
table_attributes
|
reference/api/pandas.io.formats.style.Styler.render.html
|
pandas.tseries.offsets.LastWeekOfMonth.freqstr
|
`pandas.tseries.offsets.LastWeekOfMonth.freqstr`
Return a string representing the frequency.
```
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
```
|
LastWeekOfMonth.freqstr#
Return a string representing the frequency.
Examples
>>> pd.DateOffset(5).freqstr
'<5 * DateOffsets>'
>>> pd.offsets.BusinessHour(2).freqstr
'2BH'
>>> pd.offsets.Nano().freqstr
'N'
>>> pd.offsets.Nano(-3).freqstr
'-3N'
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.freqstr.html
|
pandas.tseries.offsets.SemiMonthEnd.rollback
|
`pandas.tseries.offsets.SemiMonthEnd.rollback`
Roll provided date backward to the previous offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp.
|
SemiMonthEnd.rollback()#
Roll provided date backward to the previous offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
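A short sketch of the rollback behaviour, assuming the default day_of_month=15, so the offset is anchored on the 15th and the last day of each month:

```python
import pandas as pd

offset = pd.offsets.SemiMonthEnd()

# An off-offset date is rolled backward to the previous anchor (the 15th).
rolled = offset.rollback(pd.Timestamp(2022, 1, 20))

# A date already on offset is returned unchanged.
unchanged = offset.rollback(pd.Timestamp(2022, 1, 15))
```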
|
reference/api/pandas.tseries.offsets.SemiMonthEnd.rollback.html
|
pandas.tseries.offsets.WeekOfMonth.is_anchored
|
`pandas.tseries.offsets.WeekOfMonth.is_anchored`
Return boolean whether the frequency is a unit frequency (n=1).
```
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
```
|
WeekOfMonth.is_anchored()#
Return boolean whether the frequency is a unit frequency (n=1).
Examples
>>> pd.DateOffset().is_anchored()
True
>>> pd.DateOffset(2).is_anchored()
False
|
reference/api/pandas.tseries.offsets.WeekOfMonth.is_anchored.html
|
pandas.tseries.offsets.Second.normalize
|
pandas.tseries.offsets.Second.normalize
|
Second.normalize#
|
reference/api/pandas.tseries.offsets.Second.normalize.html
|
pandas.Period.dayofweek
|
`pandas.Period.dayofweek`
Day of the week the period lies in, with Monday=0 and Sunday=6.
If the period frequency is higher than daily (e.g. hourly), and the
period spans over multiple days, the day at the start of the period is
used.
```
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.day_of_week
6
```
|
Period.dayofweek#
Day of the week the period lies in, with Monday=0 and Sunday=6.
If the period frequency is higher than daily (e.g. hourly), and the
period spans over multiple days, the day at the start of the period is
used.
If the frequency is lower than daily (e.g. monthly), the last day
of the period is used.
Returns
intDay of the week.
See also
Period.day_of_weekDay of the week the period lies in.
Period.weekdayAlias of Period.day_of_week.
Period.dayDay of the month.
Period.dayofyearDay of the year.
Examples
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.day_of_week
6
For periods that span over multiple days, the day at the beginning of
the period is returned.
>>> per = pd.Period('2017-12-31 22:00', '4H')
>>> per.day_of_week
6
>>> per.start_time.day_of_week
6
For periods with a frequency higher than days, the last day of the
period is returned.
>>> per = pd.Period('2018-01', 'M')
>>> per.day_of_week
2
>>> per.end_time.day_of_week
2
|
reference/api/pandas.Period.dayofweek.html
|
pandas maintenance
|
pandas maintenance
|
This guide is for pandas’ maintainers. It may also be interesting to contributors
looking to understand the pandas development process and what steps are necessary
to become a maintainer.
The main contributing guide is available at Contributing to pandas.
Roles#
pandas uses two levels of permissions: triage and core team members.
Triage members can label and close issues and pull requests.
Core team members can label and close issues and pull requests, and can merge
pull requests.
GitHub publishes the full list of permissions.
Tasks#
pandas is largely a volunteer project, so these tasks shouldn’t be read as
“expectations” of triage and maintainers. Rather, they’re general descriptions
of what it means to be a maintainer.
Triage newly filed issues (see Issue triage)
Review newly opened pull requests
Respond to updates on existing issues and pull requests
Drive discussion and decisions on stalled issues and pull requests
Provide experience / wisdom on API design questions to ensure consistency and maintainability
Project organization (run / attend developer meetings, represent pandas)
https://matthewrocklin.com/blog/2019/05/18/maintainer may be interesting background
reading.
Issue triage#
Here’s a typical workflow for triaging a newly opened issue.
Thank the reporter for opening an issue
The issue tracker is many people’s first interaction with the pandas project itself,
beyond just using the library. As such, we want it to be a welcoming, pleasant
experience.
Is the necessary information provided?
Ideally reporters would fill out the issue template, but many don’t.
If crucial information (like the version of pandas they used) is missing,
feel free to ask for that and label the issue with “Needs info”. The
report should follow the guidelines in Bug reports and enhancement requests.
You may want to link to that if they didn’t follow the template.
Make sure that the title accurately reflects the issue. Edit it yourself
if it’s not clear.
Is this a duplicate issue?
We have many open issues. If a new issue is clearly a duplicate, label the
new issue as “Duplicate”, assign the milestone “No Action”, and close the issue
with a link to the original issue. Make sure to still thank the reporter, and
encourage them to chime in on the original issue, and perhaps try to fix it.
If the new issue provides relevant information, such as a better or slightly
different example, add it to the original issue as a comment or an edit to
the original post.
Is the issue minimal and reproducible?
For bug reports, we ask that the reporter provide a minimal reproducible
example. See https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
for a good explanation. If the example is not reproducible, or if it’s
clearly not minimal, feel free to ask the reporter if they can provide
an example or simplify the provided one. Do acknowledge that writing
minimal reproducible examples is hard work. If the reporter is struggling,
you can try to write one yourself and we’ll edit the original post to include it.
If a reproducible example can’t be provided, add the “Needs info” label.
If a reproducible example is provided, but you see a simplification,
edit the original post with your simpler reproducible example.
Is this a clearly defined feature request?
Generally, pandas prefers to discuss and design new features in issues, before
a pull request is made. Encourage the submitter to include a proposed API
for the new feature. Having them write a full docstring is a good way to
pin down specifics.
We’ll need a discussion from several pandas maintainers before deciding whether
the proposal is in scope for pandas.
Is this a usage question?
We prefer that usage questions are asked on StackOverflow with the pandas
tag. https://stackoverflow.com/questions/tagged/pandas
If it’s easy to answer, feel free to link to the relevant documentation section,
let them know that in the future this kind of question should be on
StackOverflow, and close the issue.
What labels and milestones should I add?
Apply the relevant labels. This is a bit of an art, and comes with experience.
Look at similar issues to get a feel for how things are labeled.
If the issue is clearly defined and the fix seems relatively straightforward,
label the issue as “Good first issue”.
Typically, new issues will be assigned the “Contributions welcome” milestone,
unless it’s known that this issue should be addressed in a specific release (say
because it’s a large regression).
Closing issues#
Be delicate here: many people interpret closing an issue as us saying that the
conversation is over. It’s typically best to give the reporter some time to
respond or self-close their issue if it’s determined that the behavior is not a bug,
or the feature is out of scope. Sometimes reporters just go away though, and
we’ll close the issue after the conversation has died.
Reviewing pull requests#
Anybody can review a pull request: regular contributors, triagers, or core-team
members. But only core-team members can merge pull requests when they’re ready.
Here are some things to check when reviewing a pull request.
Tests should be in a sensible location: in the same file as closely related tests.
New public APIs should be included somewhere in doc/source/reference/.
New / changed API should use the versionadded or versionchanged directives in the docstring.
User-facing changes should have a whatsnew in the appropriate file.
Regression tests should reference the original GitHub issue number like # GH-1234.
The pull request should be labeled and assigned the appropriate milestone (the next patch release
for regression fixes and small bug fixes, the next minor milestone otherwise)
Changes should comply with our Version policy.
Backporting#
pandas supports point releases (e.g. 1.4.3) that aim to:
Fix bugs in new features introduced in the first minor version release.
e.g. If a new feature was added in 1.4 and contains a bug, a fix can be applied in 1.4.3
Fix bugs that used to work in a few minor releases prior. There should be agreement between core team members that a backport is appropriate.
e.g. If a feature worked in 1.2 and stopped working since 1.3, a fix can be applied in 1.4.3.
Since pandas minor releases are based on Github branches (e.g. point releases of 1.4 are based off the 1.4.x branch),
“backporting” means merging a pull request fix to the main branch and the correct minor branch associated with the next point release.
By default, if a pull request is assigned to the next point release milestone within the Github interface,
the backporting process should happen automatically by the @meeseeksdev bot once the pull request is merged.
A new pull request will be made backporting the pull request to the correct version branch.
Sometimes due to merge conflicts, a manual pull request will need to be made addressing the code conflict.
If the bot does not automatically start the backporting process, you can also write a Github comment in the merged pull request
to trigger the backport:
@meeseeksdev backport version-branch
This will trigger a workflow which will backport a given change to a branch
(e.g. @meeseeksdev backport 1.4.x)
Cleaning up old issues#
Every open issue in pandas has a cost. Open issues make finding duplicates harder,
and can make it harder to know what needs to be done in pandas. That said, closing
issues isn’t a goal on its own. Our goal is to make pandas the best it can be,
and that’s best done by ensuring that the quality of our open issues is high.
Occasionally, bugs are fixed but the issue isn’t linked to in the Pull Request.
In these cases, comment that “This has been fixed, but could use a test.” and
label the issue as “Good First Issue” and “Needs Test”.
If an older issue doesn’t follow our issue template, edit the original post to
include a minimal example, the actual output, and the expected output. Uniformity
in issue reports is valuable.
If an older issue lacks a reproducible example, label it as “Needs Info” and
ask them to provide one (or write one yourself if possible). If one isn’t
provided reasonably soon, close it according to the policies in Closing issues.
Cleaning up old pull requests#
Occasionally, contributors are unable to finish off a pull request.
If some time has passed (two weeks, say) since the last review requesting changes,
gently ask if they’re still interested in working on this. If another two weeks or
so passes with no response, thank them for their work and close the pull request.
Comment on the original issue that “There’s a stalled PR at #1234 that may be
helpful.”, and perhaps label the issue as “Good first issue” if the PR was relatively
close to being accepted.
Additionally, core-team members can push to contributors branches. This can be
helpful for pushing an important PR across the line, or for fixing a small
merge conflict.
Becoming a pandas maintainer#
The full process is outlined in our governance documents. In summary,
we’re happy to give triage permissions to anyone who shows interest by
being helpful on the issue tracker.
The required steps for adding a maintainer are:
Contact the contributor and ask whether they are interested in joining.
Add the contributor to the appropriate Github team if they accept the invitation.
pandas-core is for core team members
pandas-triage is for pandas triage members
Add the contributor to the pandas Google group.
Create a pull request to add the contributor’s Github handle to pandas-dev/pandas/web/pandas/config.yml.
Create a pull request to add the contributor’s name/Github handle to the governance document.
The current list of core-team members is at
https://github.com/pandas-dev/pandas-governance/blob/master/people.md
Merging pull requests#
Only core team members can merge pull requests. We have a few guidelines.
You should typically not self-merge your own pull requests. Exceptions include
things like small changes to fix CI (e.g. pinning a package version).
You should not merge pull requests that have an active discussion, or pull
requests that have any -1 votes from a core maintainer. pandas operates
by consensus.
For larger changes, it’s good to have a +1 from at least two core team members.
In addition to the items listed in Closing issues, you should verify
that the pull request is assigned the correct milestone.
Pull requests merged with a patch-release milestone will typically be backported
by our bot. Verify that the bot noticed the merge (it will leave a comment within
a minute typically). If a manual backport is needed please do that, and remove
the “Needs backport” label once you’ve done it manually. If you forget to assign
a milestone before tagging, you can request the bot to backport it with:
@Meeseeksdev backport <branch>
Benchmark machine#
The team currently owns dedicated hardware for hosting a website for pandas’ ASV performance benchmark. The results
are published to http://pandas.pydata.org/speed/pandas/
Configuration#
The machine can be configured with the Ansible playbook in https://github.com/tomaugspurger/asv-runner.
Publishing#
The results are published to another Github repository, https://github.com/tomaugspurger/asv-collection.
Finally, we have a cron job on our docs server to pull from https://github.com/tomaugspurger/asv-collection, to serve them from /speed.
Ask Tom or Joris for access to the webserver.
Debugging#
The benchmarks are scheduled by Airflow. It has a dashboard for viewing and debugging the results. You’ll need to setup an SSH tunnel to view them
ssh -L 8080:localhost:8080 [email protected]
Release process#
The process for releasing a new version of pandas can be found at https://github.com/pandas-dev/pandas-release
|
development/maintaining.html
|
pandas.api.types.is_datetime64_any_dtype
|
`pandas.api.types.is_datetime64_any_dtype`
Check whether the provided array or dtype is of the datetime64 dtype.
The array or dtype to check.
```
>>> is_datetime64_any_dtype(str)
False
>>> is_datetime64_any_dtype(int)
False
>>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
True
>>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_any_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_any_dtype(np.array([1, 2]))
False
>>> is_datetime64_any_dtype(np.array([], dtype="datetime64[ns]"))
True
>>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
```
|
pandas.api.types.is_datetime64_any_dtype(arr_or_dtype)[source]#
Check whether the provided array or dtype is of the datetime64 dtype.
Parameters
arr_or_dtypearray-like or dtypeThe array or dtype to check.
Returns
boolWhether or not the array or dtype is of the datetime64 dtype.
Examples
>>> is_datetime64_any_dtype(str)
False
>>> is_datetime64_any_dtype(int)
False
>>> is_datetime64_any_dtype(np.datetime64) # can be tz-naive
True
>>> is_datetime64_any_dtype(DatetimeTZDtype("ns", "US/Eastern"))
True
>>> is_datetime64_any_dtype(np.array(['a', 'b']))
False
>>> is_datetime64_any_dtype(np.array([1, 2]))
False
>>> is_datetime64_any_dtype(np.array([], dtype="datetime64[ns]"))
True
>>> is_datetime64_any_dtype(pd.DatetimeIndex([1, 2, 3], dtype="datetime64[ns]"))
True
|
reference/api/pandas.api.types.is_datetime64_any_dtype.html
|
pandas.Index.map
|
`pandas.Index.map`
Map values using an input mapping or function.
|
Index.map(mapper, na_action=None)[source]#
Map values using an input mapping or function.
Parameters
mapperfunction, dict, or SeriesMapping correspondence.
na_action{None, ‘ignore’}If ‘ignore’, propagate NA values, without passing them to the
mapping correspondence.
Returns
appliedUnion[Index, MultiIndex], inferredThe output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
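As a brief illustration of the behavior described above (this example is a sketch, not part of the original docstring), `Index.map` accepts either a function or a dict-like mapping:

```python
import pandas as pd

idx = pd.Index([1, 2, 3])

# Map with a function: each index value is transformed element-wise.
doubled = idx.map(lambda x: x * 2)

# Map with a dict: values are looked up by key.
named = idx.map({1: "a", 2: "b", 3: "c"})

print(list(doubled))  # [2, 4, 6]
print(list(named))    # ['a', 'b', 'c']
```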
|
reference/api/pandas.Index.map.html
|
pandas.DataFrame.pow
|
`pandas.DataFrame.pow`
Get Exponential power of dataframe and other, element-wise (binary operator pow).
Equivalent to dataframe ** other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rpow.
```
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
```
|
DataFrame.pow(other, axis='columns', level=None, fill_value=None)[source]#
Get Exponential power of dataframe and other, element-wise (binary operator pow).
Equivalent to dataframe ** other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rpow.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which return the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
|
reference/api/pandas.DataFrame.pow.html
|
pandas.Index.is_type_compatible
|
`pandas.Index.is_type_compatible`
Whether the index type is compatible with the provided type.
|
Index.is_type_compatible(kind)[source]#
Whether the index type is compatible with the provided type.
|
reference/api/pandas.Index.is_type_compatible.html
|
pandas.tseries.offsets.Second.apply
|
pandas.tseries.offsets.Second.apply
|
Second.apply()#
|
reference/api/pandas.tseries.offsets.Second.apply.html
|
pandas.core.resample.Resampler.quantile
|
`pandas.core.resample.Resampler.quantile`
Return value at the given quantile.
|
Resampler.quantile(q=0.5, **kwargs)[source]#
Return value at the given quantile.
Parameters
qfloat or array-like, default 0.5 (50% quantile)
Returns
DataFrame or SeriesQuantile of values within each group.
See also
Series.quantileReturn a series, where the index is q and the values are the quantiles.
DataFrame.quantileReturn a DataFrame, where the columns are the columns of self, and the values are the quantiles.
DataFrameGroupBy.quantileReturn a DataFrame, where the columns are groupby columns, and the values are its quantiles.
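A minimal sketch of the behavior (not part of the original docstring): resampling a daily series into two-day bins and taking the median (q=0.5) of each bin.

```python
import pandas as pd

s = pd.Series(
    [1, 2, 3, 4],
    index=pd.date_range("2023-01-01", periods=4, freq="D"),
)

# Each 2-day bin holds two values; quantile(0.5) returns their median:
# [1, 2] -> 1.5 and [3, 4] -> 3.5
result = s.resample("2D").quantile(0.5)
print(result.tolist())  # [1.5, 3.5]
```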
|
reference/api/pandas.core.resample.Resampler.quantile.html
|
pandas.errors.CategoricalConversionWarning
|
`pandas.errors.CategoricalConversionWarning`
Warning is raised when reading a partial labeled Stata file using an iterator.
```
>>> from pandas.io.stata import StataReader
>>> with StataReader('dta_file', chunksize=2) as reader:
... for i, block in enumerate(reader):
... print(i, block)
... # CategoricalConversionWarning: One or more series with value labels...
```
|
exception pandas.errors.CategoricalConversionWarning[source]#
Warning is raised when reading a partial labeled Stata file using an iterator.
Examples
>>> from pandas.io.stata import StataReader
>>> with StataReader('dta_file', chunksize=2) as reader:
... for i, block in enumerate(reader):
... print(i, block)
... # CategoricalConversionWarning: One or more series with value labels...
|
reference/api/pandas.errors.CategoricalConversionWarning.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start
|
`pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
CustomBusinessMonthBegin.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.is_year_start.html
|
pandas.tseries.offsets.Day.__call__
|
`pandas.tseries.offsets.Day.__call__`
Call self as a function.
|
Day.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.Day.__call__.html
|
pandas.Timedelta.is_populated
|
`pandas.Timedelta.is_populated`
Is_populated property.
|
Timedelta.is_populated#
Is_populated property.
Deprecated since version 1.5.0: This argument is deprecated.
|
reference/api/pandas.Timedelta.is_populated.html
|
pandas.Timestamp.year
|
pandas.Timestamp.year
|
Timestamp.year#
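The original entry carries no example; a small sketch of the attribute's use (added here, not from the source docstring):

```python
import pandas as pd

ts = pd.Timestamp("2020-03-14")
print(ts.year)  # 2020
```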
|
reference/api/pandas.Timestamp.year.html
|
pandas.Flags
|
`pandas.Flags`
Flags that apply to pandas objects.
```
>>> df = pd.DataFrame()
>>> df.flags
<Flags(allows_duplicate_labels=True)>
>>> df.flags.allows_duplicate_labels = False
>>> df.flags
<Flags(allows_duplicate_labels=False)>
```
|
class pandas.Flags(obj, *, allows_duplicate_labels)[source]#
Flags that apply to pandas objects.
New in version 1.2.0.
Parameters
objSeries or DataFrameThe object these flags are associated with.
allows_duplicate_labelsbool, default TrueWhether to allow duplicate labels in this object. By default,
duplicate labels are permitted. Setting this to False will
cause an errors.DuplicateLabelError to be raised when
index (or columns for DataFrame) is not unique, or any
subsequent operation introduces duplicates.
See Disallowing Duplicate Labels for more.
Warning
This is an experimental feature. Currently, many methods fail to
propagate the allows_duplicate_labels value. In future versions
it is expected that every method taking or returning one or more
DataFrame or Series objects will propagate allows_duplicate_labels.
Notes
Attributes can be set in two ways
>>> df = pd.DataFrame()
>>> df.flags
<Flags(allows_duplicate_labels=True)>
>>> df.flags.allows_duplicate_labels = False
>>> df.flags
<Flags(allows_duplicate_labels=False)>
>>> df.flags['allows_duplicate_labels'] = True
>>> df.flags
<Flags(allows_duplicate_labels=True)>
Attributes
allows_duplicate_labels
Whether this object allows duplicate labels.
|
reference/api/pandas.Flags.html
|
pandas.Series.isna
|
`pandas.Series.isna`
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None or numpy.NaN, get mapped to True
values.
Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
```
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
```
|
Series.isna()[source]#
Detect missing values.
Return a boolean same-sized object indicating if the values are NA.
NA values, such as None or numpy.NaN, get mapped to True
values.
Everything else gets mapped to False values. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
Returns
SeriesMask of bool values for each element in Series that
indicates whether an element is an NA value.
See also
Series.isnullAlias of isna.
Series.notnaBoolean inverse of isna.
Series.dropnaOmit axes labels with missing values.
isnaTop-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame(dict(age=[5, 6, np.NaN],
... born=[pd.NaT, pd.Timestamp('1939-05-27'),
... pd.Timestamp('1940-04-25')],
... name=['Alfred', 'Batman', ''],
... toy=[None, 'Batmobile', 'Joker']))
>>> df
age born name toy
0 5.0 NaT Alfred None
1 6.0 1939-05-27 Batman Batmobile
2 NaN 1940-04-25 Joker
>>> df.isna()
age born name toy
0 False True False True
1 False False False False
2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN])
>>> ser
0 5.0
1 6.0
2 NaN
dtype: float64
>>> ser.isna()
0 False
1 False
2 True
dtype: bool
|
reference/api/pandas.Series.isna.html
|
pandas.tseries.offsets.BQuarterBegin.copy
|
`pandas.tseries.offsets.BQuarterBegin.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
```
|
BQuarterBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
|
reference/api/pandas.tseries.offsets.BQuarterBegin.copy.html
|
pandas.Series.str.get
|
`pandas.Series.str.get`
Extract element from each component at specified position or with specified key.
```
>>> s = pd.Series(["String",
... (1, 2, 3),
... ["a", "b", "c"],
... 123,
... -456,
... {1: "Hello", "2": "World"}])
>>> s
0 String
1 (1, 2, 3)
2 [a, b, c]
3 123
4 -456
5 {1: 'Hello', '2': 'World'}
dtype: object
```
|
Series.str.get(i)[source]#
Extract element from each component at specified position or with specified key.
Extract element from lists, tuples, dict, or strings in each element in the
Series/Index.
Parameters
iint or hashable dict labelPosition or key of element to extract.
Returns
Series or Index
Examples
>>> s = pd.Series(["String",
... (1, 2, 3),
... ["a", "b", "c"],
... 123,
... -456,
... {1: "Hello", "2": "World"}])
>>> s
0 String
1 (1, 2, 3)
2 [a, b, c]
3 123
4 -456
5 {1: 'Hello', '2': 'World'}
dtype: object
>>> s.str.get(1)
0 t
1 2
2 b
3 NaN
4 NaN
5 Hello
dtype: object
>>> s.str.get(-1)
0 g
1 3
2 c
3 NaN
4 NaN
5 None
dtype: object
Return element with given key
>>> s = pd.Series([{"name": "Hello", "value": "World"},
... {"name": "Goodbye", "value": "Planet"}])
>>> s.str.get('name')
0 Hello
1 Goodbye
dtype: object
|
reference/api/pandas.Series.str.get.html
|
pandas.Timedelta.resolution_string
|
`pandas.Timedelta.resolution_string`
Return a string representing the lowest timedelta resolution.
```
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
>>> td.resolution_string
'N'
```
|
Timedelta.resolution_string#
Return a string representing the lowest timedelta resolution.
Each timedelta has a defined resolution that represents the lowest or
most granular level of precision. Each level of resolution is
represented by a short string as defined below:
Resolution: Return value
Days: ‘D’
Hours: ‘H’
Minutes: ‘T’
Seconds: ‘S’
Milliseconds: ‘L’
Microseconds: ‘U’
Nanoseconds: ‘N’
Returns
strTimedelta resolution.
Examples
>>> td = pd.Timedelta('1 days 2 min 3 us 42 ns')
>>> td.resolution_string
'N'
>>> td = pd.Timedelta('1 days 2 min 3 us')
>>> td.resolution_string
'U'
>>> td = pd.Timedelta('2 min 3 s')
>>> td.resolution_string
'S'
>>> td = pd.Timedelta(36, unit='us')
>>> td.resolution_string
'U'
|
reference/api/pandas.Timedelta.resolution_string.html
|
pandas.Series.rdivmod
|
`pandas.Series.rdivmod`
Return Integer division and modulo of series and other, element-wise (binary operator rdivmod).
Equivalent to other divmod series, but with support to substitute a fill_value for
missing data in either one of the inputs.
```
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divmod(b, fill_value=0)
(a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64,
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64)
```
|
Series.rdivmod(other, level=None, fill_value=None, axis=0)[source]#
Return Integer division and modulo of series and other, element-wise (binary operator rdivmod).
Equivalent to other divmod series, but with support to substitute a fill_value for
missing data in either one of the inputs.
Parameters
otherSeries or scalar value
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valueNone or float value, default None (NaN)Fill existing missing (NaN) values, and any new element needed for
successful Series alignment, with this value before computation.
If data in both corresponding Series locations is missing
the result of filling (at that location) will be missing.
axis{0 or ‘index’}Unused. Parameter needed for compatibility with DataFrame.
Returns
2-Tuple of SeriesThe result of the operation.
See also
Series.divmodElement-wise Integer division and modulo, see Python documentation for more details.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'])
>>> a
a 1.0
b 1.0
c 1.0
d NaN
dtype: float64
>>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e'])
>>> b
a 1.0
b NaN
d 1.0
e NaN
dtype: float64
>>> a.divmod(b, fill_value=0)
(a 1.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64,
a 0.0
b NaN
c NaN
d 0.0
e NaN
dtype: float64)
|
reference/api/pandas.Series.rdivmod.html
|
pandas.Timestamp.day_of_year
|
`pandas.Timestamp.day_of_year`
Return the day of the year.
```
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.day_of_year
74
```
|
Timestamp.day_of_year#
Return the day of the year.
Examples
>>> ts = pd.Timestamp(2020, 3, 14)
>>> ts.day_of_year
74
|
reference/api/pandas.Timestamp.day_of_year.html
|
pandas.plotting.parallel_coordinates
|
`pandas.plotting.parallel_coordinates`
Parallel coordinates plotting.
```
>>> df = pd.read_csv(
... 'https://raw.githubusercontent.com/pandas-dev/'
... 'pandas/main/pandas/tests/io/data/csv/iris.csv'
... )
>>> pd.plotting.parallel_coordinates(
... df, 'Name', color=('#556270', '#4ECDC4', '#C7F464')
... )
<AxesSubplot: xlabel='y(t)', ylabel='y(t + 1)'>
```
|
pandas.plotting.parallel_coordinates(frame, class_column, cols=None, ax=None, color=None, use_columns=False, xticks=None, colormap=None, axvlines=True, axvlines_kwds=None, sort_labels=False, **kwargs)[source]#
Parallel coordinates plotting.
Parameters
frameDataFrame
class_columnstrColumn name containing class names.
colslist, optionalA list of column names to use.
axmatplotlib.axis, optionalMatplotlib axis object.
colorlist or tuple, optionalColors to use for the different classes.
use_columnsbool, optionalIf true, columns will be used as xticks.
xtickslist or tuple, optionalA list of values to use for xticks.
colormapstr or matplotlib colormap, default NoneColormap to use for line colors.
axvlinesbool, optionalIf true, vertical lines will be added at each xtick.
axvlines_kwdskeywords, optionalOptions to be passed to axvline method for vertical lines.
sort_labelsbool, default FalseSort class_column labels, useful when assigning colors.
**kwargsOptions to pass to matplotlib plotting method.
Returns
matplotlib.axes.Axes
Examples
>>> df = pd.read_csv(
... 'https://raw.githubusercontent.com/pandas-dev/'
... 'pandas/main/pandas/tests/io/data/csv/iris.csv'
... )
>>> pd.plotting.parallel_coordinates(
... df, 'Name', color=('#556270', '#4ECDC4', '#C7F464')
... )
<AxesSubplot: xlabel='y(t)', ylabel='y(t + 1)'>
|
reference/api/pandas.plotting.parallel_coordinates.html
|
pandas.core.groupby.DataFrameGroupBy.fillna
|
`pandas.core.groupby.DataFrameGroupBy.fillna`
Fill NA/NaN values using the specified method.
Value to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). Values not
in the dict/Series/DataFrame will not be filled. This value cannot
be a list.
```
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
... [3, 4, np.nan, 1],
... [np.nan, np.nan, np.nan, np.nan],
... [np.nan, 3, np.nan, 4]],
... columns=list("ABCD"))
>>> df
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 NaN NaN NaN NaN
3 NaN 3.0 NaN 4.0
```
|
property DataFrameGroupBy.fillna[source]#
Fill NA/NaN values using the specified method.
Parameters
valuescalar, dict, Series, or DataFrameValue to use to fill holes (e.g. 0), alternately a
dict/Series/DataFrame of values specifying which value to use for
each index (for a Series) or column (for a DataFrame). Values not
in the dict/Series/DataFrame will not be filled. This value cannot
be a list.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default NoneMethod to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap.
axis{0 or ‘index’, 1 or ‘columns’}Axis along which to fill missing values. For Series
this parameter is unused and defaults to 0.
inplacebool, default FalseIf True, fill in-place. Note: this will modify any
other views on this object (e.g., a no-copy slice for a column in a
DataFrame).
limitint, default NoneIf method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
downcastdict, default is NoneA dict of item->dtype of what to downcast if possible,
or the string ‘infer’ which will try to downcast to an appropriate
equal type (e.g. float64 to int64 if possible).
Returns
DataFrame or NoneObject with missing values filled or None if inplace=True.
See also
interpolateFill NaN values using interpolation.
reindexConform object to new index.
asfreqConvert TimeSeries to specified frequency.
Examples
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0],
... [3, 4, np.nan, 1],
... [np.nan, np.nan, np.nan, np.nan],
... [np.nan, 3, np.nan, 4]],
... columns=list("ABCD"))
>>> df
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 NaN NaN NaN NaN
3 NaN 3.0 NaN 4.0
Replace all NaN elements with 0s.
>>> df.fillna(0)
A B C D
0 0.0 2.0 0.0 0.0
1 3.0 4.0 0.0 1.0
2 0.0 0.0 0.0 0.0
3 0.0 3.0 0.0 4.0
We can also propagate non-null values forward or backward.
>>> df.fillna(method="ffill")
A B C D
0 NaN 2.0 NaN 0.0
1 3.0 4.0 NaN 1.0
2 3.0 4.0 NaN 1.0
3 3.0 3.0 NaN 4.0
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1,
2, and 3 respectively.
>>> values = {"A": 0, "B": 1, "C": 2, "D": 3}
>>> df.fillna(value=values)
A B C D
0 0.0 2.0 2.0 0.0
1 3.0 4.0 2.0 1.0
2 0.0 1.0 2.0 3.0
3 0.0 3.0 2.0 4.0
Only replace the first NaN element.
>>> df.fillna(value=values, limit=1)
A B C D
0 0.0 2.0 2.0 0.0
1 3.0 4.0 NaN 1.0
2 NaN 1.0 NaN 3.0
3 NaN 3.0 NaN 4.0
When filling using a DataFrame, replacement happens along
the same column names and same indices
>>> df2 = pd.DataFrame(np.zeros((4, 4)), columns=list("ABCE"))
>>> df.fillna(df2)
A B C D
0 0.0 2.0 0.0 0.0
1 3.0 4.0 0.0 1.0
2 0.0 0.0 0.0 NaN
3 0.0 3.0 0.0 4.0
Note that column D is not affected since it is not present in df2.
|
reference/api/pandas.core.groupby.DataFrameGroupBy.fillna.html
|
pandas.ExcelFile.parse
|
`pandas.ExcelFile.parse`
Parse specified sheet(s) into a DataFrame.
Equivalent to read_excel(ExcelFile, …) See the read_excel
docstring for more info on accepted parameters.
|
ExcelFile.parse(sheet_name=0, header=0, names=None, index_col=None, usecols=None, squeeze=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, parse_dates=False, date_parser=None, thousands=None, comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, **kwds)[source]#
Parse specified sheet(s) into a DataFrame.
Equivalent to read_excel(ExcelFile, …) See the read_excel
docstring for more info on accepted parameters.
Returns
DataFrame or dict of DataFramesDataFrame from the passed in Excel file.
|
reference/api/pandas.ExcelFile.parse.html
|
pandas.MultiIndex.get_indexer
|
`pandas.MultiIndex.get_indexer`
Compute indexer and mask for new index given the current index.
```
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
```
|
MultiIndex.get_indexer(target, method=None, limit=None, tolerance=None)[source]#
Compute indexer and mask for new index given the current index.
The indexer should be then used as an input to ndarray.take to align the
current data to the new index.
Parameters
targetIndex
method{None, ‘pad’/’ffill’, ‘backfill’/’bfill’, ‘nearest’}, optional
default: exact matches only.
pad / ffill: find the PREVIOUS index value if no exact match.
backfill / bfill: use NEXT index value if no exact match
nearest: use the NEAREST index value if no exact match. Tied
distances are broken by preferring the larger index value.
limitint, optionalMaximum number of consecutive labels in target to match for
inexact matches.
toleranceoptionalMaximum distance between original and new labels for inexact
matches. The values of the index at the matching locations must
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
indexernp.ndarray[np.intp]Integers from 0 to n - 1 indicating that the index at these
positions matches the corresponding target values. Missing values
in the target are marked by -1.
Notes
Returns -1 for unmatched values, for further explanation see the
example below.
Examples
>>> index = pd.Index(['c', 'a', 'b'])
>>> index.get_indexer(['a', 'b', 'x'])
array([ 1, 2, -1])
Notice that the return value is an array of locations in index
and x is marked by -1, as it is not in index.
|
reference/api/pandas.MultiIndex.get_indexer.html
|
pandas.Series.T
|
`pandas.Series.T`
Return the transpose, which is by definition self.
|
property Series.T[source]#
Return the transpose, which is by definition self.
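A short illustration of the point above: transposing a one-dimensional Series is a no-op, so the result compares equal to the original.

```python
import pandas as pd

s = pd.Series([1, 2, 3])
# For a 1-D Series the transpose is, by definition, the Series itself.
assert s.T.equals(s)
```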
|
reference/api/pandas.Series.T.html
|
pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start
|
`pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start`
Return boolean whether a timestamp occurs on the month start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
```
|
CustomBusinessMonthEnd.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.is_month_start.html
|
pandas.Series.expanding
|
`pandas.Series.expanding`
Provide expanding window calculations.
```
>>> df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
```
|
Series.expanding(min_periods=1, center=None, axis=0, method='single')[source]#
Provide expanding window calculations.
Parameters
min_periodsint, default 1Minimum number of observations in window required to have a value;
otherwise, result is np.nan.
centerbool, default FalseIf False, set the window labels as the right edge of the window index.
If True, set the window labels as the center of the window index.
Deprecated since version 1.1.0.
axisint or str, default 0If 0 or 'index', roll across the rows.
If 1 or 'columns', roll across the columns.
For Series this parameter is unused and defaults to 0.
methodstr {‘single’, ‘table’}, default ‘single’Execute the rolling operation per single column or row ('single')
or over the entire object ('table').
This argument is only implemented when specifying engine='numba'
in the method call.
New in version 1.3.0.
Returns
Expanding subclass
See also
rollingProvides rolling window calculations.
ewmProvides exponential weighted functions.
Notes
See Windowing Operations for further usage details
and examples.
Examples
>>> df = pd.DataFrame({"B": [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
min_periods
Expanding sum with 1 vs 3 observations needed to calculate a value.
>>> df.expanding(1).sum()
B
0 0.0
1 1.0
2 3.0
3 3.0
4 7.0
>>> df.expanding(3).sum()
B
0 NaN
1 NaN
2 3.0
3 3.0
4 7.0
|
reference/api/pandas.Series.expanding.html
|
pandas.tseries.offsets.CustomBusinessDay.is_year_end
|
`pandas.tseries.offsets.CustomBusinessDay.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
CustomBusinessDay.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.is_year_end.html
|
pandas.tseries.offsets.Week.is_year_start
|
`pandas.tseries.offsets.Week.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
Week.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.Week.is_year_start.html
|
pandas.tseries.offsets.CustomBusinessDay.is_quarter_end
|
`pandas.tseries.offsets.CustomBusinessDay.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
CustomBusinessDay.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.is_quarter_end.html
|
pandas.io.formats.style.Styler.template_latex
|
pandas.io.formats.style.Styler.template_latex
|
Styler.template_latex = <Template 'latex.tpl'>#
|
reference/api/pandas.io.formats.style.Styler.template_latex.html
|
pandas.plotting.bootstrap_plot
|
`pandas.plotting.bootstrap_plot`
Bootstrap plot on mean, median and mid-range statistics.
The bootstrap plot is used to estimate the uncertainty of a statistic
by relying on random sampling with replacement [1]. This function will
generate bootstrapping plots for mean, median and mid-range statistics
for the given number of samples of the given size.
```
>>> s = pd.Series(np.random.uniform(size=100))
>>> pd.plotting.bootstrap_plot(s)
<Figure size 640x480 with 6 Axes>
```
|
pandas.plotting.bootstrap_plot(series, fig=None, size=50, samples=500, **kwds)[source]#
Bootstrap plot on mean, median and mid-range statistics.
The bootstrap plot is used to estimate the uncertainty of a statistic
by relying on random sampling with replacement [1]. This function will
generate bootstrapping plots for mean, median and mid-range statistics
for the given number of samples of the given size.
[1] “Bootstrapping (statistics)”, https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
Parameters
seriespandas.SeriesSeries from where to get the samplings for the bootstrapping.
figmatplotlib.figure.Figure, default NoneIf given, it will use the fig reference for plotting instead of
creating a new one with default parameters.
sizeint, default 50Number of data points to consider during each sampling. It must be
less than or equal to the length of the series.
samplesint, default 500Number of times the bootstrap procedure is performed.
**kwdsOptions to pass to matplotlib plotting method.
Returns
matplotlib.figure.FigureMatplotlib figure.
See also
DataFrame.plotBasic plotting for DataFrame objects.
Series.plotBasic plotting for Series objects.
Examples
This example draws a basic bootstrap plot for a Series.
>>> s = pd.Series(np.random.uniform(size=100))
>>> pd.plotting.bootstrap_plot(s)
<Figure size 640x480 with 6 Axes>
|
reference/api/pandas.plotting.bootstrap_plot.html
|
pandas.DataFrame.ffill
|
`pandas.DataFrame.ffill`
Synonym for DataFrame.fillna() with method='ffill'.
|
DataFrame.ffill(*, axis=None, inplace=False, limit=None, downcast=None)[source]#
Synonym for DataFrame.fillna() with method='ffill'.
Returns
Series/DataFrame or NoneObject with missing values filled or None if inplace=True.
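This entry has no Examples section; a minimal sketch of the ffill behavior (data here is illustrative):

```python
import numpy as np
import pandas as pd

# Forward fill: each NaN takes the last valid value above it.
df = pd.DataFrame({"A": [1.0, np.nan, np.nan, 4.0, np.nan]})
filled = df.ffill()
print(filled["A"].tolist())  # [1.0, 1.0, 1.0, 4.0, 4.0]
```

Note that a leading NaN would stay NaN, since there is no prior observation to propagate.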
|
reference/api/pandas.DataFrame.ffill.html
|
pandas.tseries.offsets.DateOffset.name
|
`pandas.tseries.offsets.DateOffset.name`
Return a string representing the base frequency.
```
>>> pd.offsets.Hour().name
'H'
```
|
DateOffset.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
|
reference/api/pandas.tseries.offsets.DateOffset.name.html
|
pandas.tseries.offsets.BusinessMonthBegin.nanos
|
pandas.tseries.offsets.BusinessMonthBegin.nanos
|
BusinessMonthBegin.nanos#
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.nanos.html
|
pandas.Series.str.lower
|
`pandas.Series.str.lower`
Convert strings in the Series/Index to lowercase.
```
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
```
|
Series.str.lower()[source]#
Convert strings in the Series/Index to lowercase.
Equivalent to str.lower().
Returns
Series or Index of object
See also
Series.str.lowerConverts all characters to lowercase.
Series.str.upperConverts all characters to uppercase.
Series.str.titleConverts first character of each word to uppercase and remaining to lowercase.
Series.str.capitalizeConverts first character to uppercase and remaining to lowercase.
Series.str.swapcaseConverts uppercase to lowercase and lowercase to uppercase.
Series.str.casefoldRemoves all case distinctions in the string.
Examples
>>> s = pd.Series(['lower', 'CAPITALS', 'this is a sentence', 'SwApCaSe'])
>>> s
0 lower
1 CAPITALS
2 this is a sentence
3 SwApCaSe
dtype: object
>>> s.str.lower()
0 lower
1 capitals
2 this is a sentence
3 swapcase
dtype: object
>>> s.str.upper()
0 LOWER
1 CAPITALS
2 THIS IS A SENTENCE
3 SWAPCASE
dtype: object
>>> s.str.title()
0 Lower
1 Capitals
2 This Is A Sentence
3 Swapcase
dtype: object
>>> s.str.capitalize()
0 Lower
1 Capitals
2 This is a sentence
3 Swapcase
dtype: object
>>> s.str.swapcase()
0 LOWER
1 capitals
2 THIS IS A SENTENCE
3 sWaPcAsE
dtype: object
|
reference/api/pandas.Series.str.lower.html
|
pandas.io.formats.style.Styler.template_string
|
pandas.io.formats.style.Styler.template_string
|
Styler.template_string = <Template 'string.tpl'>#
|
reference/api/pandas.io.formats.style.Styler.template_string.html
|
pandas.Index.array
|
`pandas.Index.array`
The ExtensionArray of the data backing this Series or Index.
```
>>> pd.Series([1, 2, 3]).array
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
```
|
Index.array[source]#
The ExtensionArray of the data backing this Series or Index.
Returns
ExtensionArrayAn ExtensionArray of the values stored within. For extension
types, this is the actual array. For NumPy native types, this
is a thin (no copy) wrapper around numpy.ndarray.
.array differs from .values, which may require converting
the data to a different form.
See also
Index.to_numpySimilar method that always returns a NumPy array.
Series.to_numpySimilar method that always returns a NumPy array.
Notes
This table lays out the different array types for each extension
dtype within pandas.
dtype
array type
category
Categorical
period
PeriodArray
interval
IntervalArray
IntegerNA
IntegerArray
string
StringArray
boolean
BooleanArray
datetime64[ns, tz]
DatetimeArray
For any 3rd-party extension types, the array type will be an
ExtensionArray.
For all remaining dtypes .array will be a
arrays.NumpyExtensionArray wrapping the actual ndarray
stored within. If you absolutely need a NumPy array (possibly with
copying / coercing data), then use Series.to_numpy() instead.
Examples
For regular NumPy types like int, and float, a PandasArray
is returned.
>>> pd.Series([1, 2, 3]).array
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
For extension types, like Categorical, the actual ExtensionArray
is returned
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.array
['a', 'b', 'a']
Categories (2, object): ['a', 'b']
|
reference/api/pandas.Index.array.html
|
pandas.tseries.offsets.BYearEnd.apply
|
pandas.tseries.offsets.BYearEnd.apply
|
BYearEnd.apply()#
|
reference/api/pandas.tseries.offsets.BYearEnd.apply.html
|
pandas.tseries.offsets.Hour.is_year_end
|
`pandas.tseries.offsets.Hour.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
```
|
Hour.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
|
reference/api/pandas.tseries.offsets.Hour.is_year_end.html
|
pandas.DataFrame.align
|
`pandas.DataFrame.align`
Align two objects on their axes with the specified join method.
Join method is specified for each axis Index.
```
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
```
|
DataFrame.align(other, join='outer', axis=None, level=None, copy=True, fill_value=None, method=None, limit=None, fill_axis=0, broadcast_axis=None)[source]#
Align two objects on their axes with the specified join method.
Join method is specified for each axis Index.
Parameters
otherDataFrame or Series
join{‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axisallowed axis of the other object, default NoneAlign on index (0), columns (1), or both (None).
levelint or level name, default NoneBroadcast across a level, matching Index values on the
passed MultiIndex level.
copybool, default TrueAlways returns new objects. If copy=False and no reindexing is
required then original objects are returned.
fill_valuescalar, default np.NaNValue to use for missing values. Defaults to NaN, but can be any
“compatible” value.
method{‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default NoneMethod to use for filling holes in reindexed Series:
pad / ffill: propagate last valid observation forward to next valid.
backfill / bfill: use NEXT valid observation to fill gap.
limitint, default NoneIf method is specified, this is the maximum number of consecutive
NaN values to forward/backward fill. In other words, if there is
a gap with more than this number of consecutive NaNs, it will only
be partially filled. If method is not specified, this is the
maximum number of entries along the entire axis where NaNs will be
filled. Must be greater than 0 if not None.
fill_axis{0 or ‘index’, 1 or ‘columns’}, default 0Filling axis, method and limit.
broadcast_axis{0 or ‘index’, 1 or ‘columns’}, default NoneBroadcast values along this axis, if aligning two objects of
different dimensions.
Returns
(left, right)(DataFrame, type of other)Aligned objects.
Examples
>>> df = pd.DataFrame(
... [[1, 2, 3, 4], [6, 7, 8, 9]], columns=["D", "B", "E", "A"], index=[1, 2]
... )
>>> other = pd.DataFrame(
... [[10, 20, 30, 40], [60, 70, 80, 90], [600, 700, 800, 900]],
... columns=["A", "B", "C", "D"],
... index=[2, 3, 4],
... )
>>> df
D B E A
1 1 2 3 4
2 6 7 8 9
>>> other
A B C D
2 10 20 30 40
3 60 70 80 90
4 600 700 800 900
Align on columns:
>>> left, right = df.align(other, join="outer", axis=1)
>>> left
A B C D E
1 4 2 NaN 1 3
2 9 7 NaN 6 8
>>> right
A B C D E
2 10 20 30 40 NaN
3 60 70 80 90 NaN
4 600 700 800 900 NaN
We can also align on the index:
>>> left, right = df.align(other, join="outer", axis=0)
>>> left
D B E A
1 1.0 2.0 3.0 4.0
2 6.0 7.0 8.0 9.0
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
>>> right
A B C D
1 NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0
3 60.0 70.0 80.0 90.0
4 600.0 700.0 800.0 900.0
Finally, the default axis=None will align on both index and columns:
>>> left, right = df.align(other, join="outer", axis=None)
>>> left
A B C D E
1 4.0 2.0 NaN 1.0 3.0
2 9.0 7.0 NaN 6.0 8.0
3 NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN
>>> right
A B C D E
1 NaN NaN NaN NaN NaN
2 10.0 20.0 30.0 40.0 NaN
3 60.0 70.0 80.0 90.0 NaN
4 600.0 700.0 800.0 900.0 NaN
|
reference/api/pandas.DataFrame.align.html
|
pandas.tseries.offsets.LastWeekOfMonth.is_on_offset
|
`pandas.tseries.offsets.LastWeekOfMonth.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
```
|
LastWeekOfMonth.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
|
reference/api/pandas.tseries.offsets.LastWeekOfMonth.is_on_offset.html
|
pandas.tseries.offsets.MonthEnd.n
|
pandas.tseries.offsets.MonthEnd.n
|
MonthEnd.n#
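The page above is a bare attribute stub; as an illustration (not part of the original page), n is the integer multiple of the base frequency the offset represents:

```python
import pandas as pd

me = pd.offsets.MonthEnd(3)  # a step of three month-ends
assert me.n == 3
```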
|
reference/api/pandas.tseries.offsets.MonthEnd.n.html
|
pandas.tseries.offsets.BusinessHour.offset
|
`pandas.tseries.offsets.BusinessHour.offset`
Alias for self._offset.
|
BusinessHour.offset#
Alias for self._offset.
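A brief sketch of reading this attribute back (the start and offset values are arbitrary):

```python
from datetime import timedelta

import pandas as pd

# The offset keyword shifts the anchored business hours;
# the .offset attribute echoes that timedelta back.
bh = pd.offsets.BusinessHour(start="09:00", offset=timedelta(minutes=30))
print(bh.offset)  # 0:30:00
```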
|
reference/api/pandas.tseries.offsets.BusinessHour.offset.html
|
pandas.CategoricalIndex.remove_unused_categories
|
`pandas.CategoricalIndex.remove_unused_categories`
Remove categories which are not used.
Whether or not to drop unused categories inplace or return a copy of
this categorical with unused categories dropped.
```
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
```
|
CategoricalIndex.remove_unused_categories(*args, **kwargs)[source]#
Remove categories which are not used.
Parameters
inplacebool, default FalseWhether or not to drop unused categories inplace or return a copy of
this categorical with unused categories dropped.
Deprecated since version 1.2.0.
Returns
catCategorical or NoneCategorical with unused categories dropped or None if inplace=True.
See also
rename_categoriesRename categories.
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
set_categoriesSet the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'c', 'b', 'c', 'd'])
>>> c
['a', 'c', 'b', 'c', 'd']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c[2] = 'a'
>>> c[4] = 'c'
>>> c
['a', 'c', 'a', 'c', 'c']
Categories (4, object): ['a', 'b', 'c', 'd']
>>> c.remove_unused_categories()
['a', 'c', 'a', 'c', 'c']
Categories (2, object): ['a', 'c']
|
reference/api/pandas.CategoricalIndex.remove_unused_categories.html
|
pandas.errors.DataError
|
`pandas.errors.DataError`
Exception raised when performing an operation on non-numerical data.
|
exception pandas.errors.DataError[source]#
Exception raised when performing an operation on non-numerical data.
For example, calling ohlc on a non-numerical column or a function
on a rolling window.
|
reference/api/pandas.errors.DataError.html
|
pandas.Index.searchsorted
|
`pandas.Index.searchsorted`
Find indices where elements should be inserted to maintain order.
```
>>> ser = pd.Series([1, 2, 3])
>>> ser
0 1
1 2
2 3
dtype: int64
```
|
Index.searchsorted(value, side='left', sorter=None)[source]#
Find indices where elements should be inserted to maintain order.
Find the indices into a sorted Index self such that, if the
corresponding elements in value were inserted before the indices,
the order of self would be preserved.
Note
The Index must be monotonically sorted, otherwise
wrong locations will likely be returned. Pandas does not
check this for you.
Parameters
valuearray-like or scalarValues to insert into self.
side{‘left’, ‘right’}, optionalIf ‘left’, the index of the first suitable location found is given.
If ‘right’, return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of self).
sorter1-D array-like, optionalOptional array of integer indices that sort self into ascending
order. They are typically the result of np.argsort.
Returns
int or array of intA scalar or array of insertion points with the
same shape as value.
See also
sort_valuesSort by the values along either axis.
numpy.searchsortedSimilar method from NumPy.
Notes
Binary search is used to find the required insertion points.
Examples
>>> ser = pd.Series([1, 2, 3])
>>> ser
0 1
1 2
2 3
dtype: int64
>>> ser.searchsorted(4)
3
>>> ser.searchsorted([0, 4])
array([0, 3])
>>> ser.searchsorted([1, 3], side='left')
array([0, 2])
>>> ser.searchsorted([1, 3], side='right')
array([1, 3])
>>> ser = pd.Series(pd.to_datetime(['3/11/2000', '3/12/2000', '3/13/2000']))
>>> ser
0 2000-03-11
1 2000-03-12
2 2000-03-13
dtype: datetime64[ns]
>>> ser.searchsorted('3/14/2000')
3
>>> ser = pd.Categorical(
... ['apple', 'bread', 'bread', 'cheese', 'milk'], ordered=True
... )
>>> ser
['apple', 'bread', 'bread', 'cheese', 'milk']
Categories (4, object): ['apple' < 'bread' < 'cheese' < 'milk']
>>> ser.searchsorted('bread')
1
>>> ser.searchsorted(['bread'], side='right')
array([3])
If the values are not monotonically sorted, wrong locations
may be returned:
>>> ser = pd.Series([2, 1, 3])
>>> ser
0 2
1 1
2 3
dtype: int64
>>> ser.searchsorted(1)
0 # wrong result, correct would be 1
|
reference/api/pandas.Index.searchsorted.html
|
pandas.tseries.offsets.BusinessMonthBegin.__call__
|
`pandas.tseries.offsets.BusinessMonthBegin.__call__`
Call self as a function.
|
BusinessMonthBegin.__call__(*args, **kwargs)#
Call self as a function.
|
reference/api/pandas.tseries.offsets.BusinessMonthBegin.__call__.html
|
pandas.Series.str.isspace
|
`pandas.Series.str.isspace`
Check whether all characters in each string are whitespace.
This is equivalent to running the Python string method
str.isspace() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
```
>>> s1 = pd.Series(['one', 'one1', '1', ''])
```
|
Series.str.isspace()[source]#
Check whether all characters in each string are whitespace.
This is equivalent to running the Python string method
str.isspace() for each element of the Series/Index. If a string
has zero characters, False is returned for that check.
Returns
Series or Index of boolSeries or Index of boolean values with the same length as the original
Series/Index.
See also
Series.str.isalphaCheck whether all characters are alphabetic.
Series.str.isnumericCheck whether all characters are numeric.
Series.str.isalnumCheck whether all characters are alphanumeric.
Series.str.isdigitCheck whether all characters are digits.
Series.str.isdecimalCheck whether all characters are decimal.
Series.str.isspaceCheck whether all characters are whitespace.
Series.str.islowerCheck whether all characters are lowercase.
Series.str.isupperCheck whether all characters are uppercase.
Series.str.istitleCheck whether all characters are titlecase.
Examples
Checks for Alphabetic and Numeric Characters
>>> s1 = pd.Series(['one', 'one1', '1', ''])
>>> s1.str.isalpha()
0 True
1 False
2 False
3 False
dtype: bool
>>> s1.str.isnumeric()
0 False
1 False
2 True
3 False
dtype: bool
>>> s1.str.isalnum()
0 True
1 True
2 True
3 False
dtype: bool
Note that checks against characters mixed with any additional punctuation
or whitespace will evaluate to false for an alphanumeric check.
>>> s2 = pd.Series(['A B', '1.5', '3,000'])
>>> s2.str.isalnum()
0 False
1 False
2 False
dtype: bool
More Detailed Checks for Numeric Characters
There are several different but overlapping sets of numeric characters that
can be checked for.
>>> s3 = pd.Series(['23', '³', '⅕', ''])
The s3.str.isdecimal method checks for characters used to form numbers
in base 10.
>>> s3.str.isdecimal()
0 True
1 False
2 False
3 False
dtype: bool
The s.str.isdigit method is the same as s3.str.isdecimal but also
includes special digits, like superscripted and subscripted digits in
unicode.
>>> s3.str.isdigit()
0 True
1 True
2 False
3 False
dtype: bool
The s.str.isnumeric method is the same as s3.str.isdigit but also
includes other characters that can represent quantities such as unicode
fractions.
>>> s3.str.isnumeric()
0 True
1 True
2 True
3 False
dtype: bool
Checks for Whitespace
>>> s4 = pd.Series([' ', '\t\r\n ', ''])
>>> s4.str.isspace()
0 True
1 True
2 False
dtype: bool
Checks for Character Case
>>> s5 = pd.Series(['leopard', 'Golden Eagle', 'SNAKE', ''])
>>> s5.str.islower()
0 True
1 False
2 False
3 False
dtype: bool
>>> s5.str.isupper()
0 False
1 False
2 True
3 False
dtype: bool
The s5.str.istitle method checks for whether all words are in title
case (whether only the first letter of each word is capitalized). Words are
assumed to be any sequence of non-numeric characters separated by
whitespace characters.
>>> s5.str.istitle()
0 False
1 True
2 False
3 False
dtype: bool
|
reference/api/pandas.Series.str.isspace.html
|
pandas.tseries.offsets.BYearEnd.month
|
pandas.tseries.offsets.BYearEnd.month
|
BYearEnd.month#
|
reference/api/pandas.tseries.offsets.BYearEnd.month.html
|
pandas.tseries.offsets.CustomBusinessMonthBegin.nanos
|
pandas.tseries.offsets.CustomBusinessMonthBegin.nanos
|
CustomBusinessMonthBegin.nanos#
|
reference/api/pandas.tseries.offsets.CustomBusinessMonthBegin.nanos.html
|
pandas.plotting.deregister_matplotlib_converters
|
`pandas.plotting.deregister_matplotlib_converters`
Remove pandas formatters and converters.
|
pandas.plotting.deregister_matplotlib_converters()[source]#
Remove pandas formatters and converters.
Removes the custom converters added by register(). This
attempts to set the state of the registry back to the state before
pandas registered its own units. Converters for pandas’ own types like
Timestamp and Period are removed completely. Converters for types
pandas overwrites, like datetime.datetime, are restored to their
original value.
See also
register_matplotlib_convertersRegister pandas formatters and converters with matplotlib.
|
reference/api/pandas.plotting.deregister_matplotlib_converters.html
|
pandas.Series.str
|
`pandas.Series.str`
Vectorized string functions for Series and Index.
```
>>> s = pd.Series(["A_Str_Series"])
>>> s
0 A_Str_Series
dtype: object
```
|
Series.str()[source]#
Vectorized string functions for Series and Index.
NAs stay NA unless handled otherwise by a particular method.
Patterned after Python’s string methods, with some inspiration from
R’s stringr package.
Examples
>>> s = pd.Series(["A_Str_Series"])
>>> s
0 A_Str_Series
dtype: object
>>> s.str.split("_")
0 [A, Str, Series]
dtype: object
>>> s.str.replace("_", "")
0 AStrSeries
dtype: object
|
reference/api/pandas.Series.str.html
|
pandas.tseries.offsets.YearEnd.isAnchored
|
pandas.tseries.offsets.YearEnd.isAnchored
|
YearEnd.isAnchored()#
|
reference/api/pandas.tseries.offsets.YearEnd.isAnchored.html
|
pandas.TimedeltaIndex.ceil
|
`pandas.TimedeltaIndex.ceil`
Perform ceil operation on the data to the specified freq.
The frequency level to ceil the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
```
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
```
|
TimedeltaIndex.ceil(*args, **kwargs)[source]#
Perform ceil operation on the data to the specified freq.
Parameters
freqstr or OffsetThe frequency level to ceil the index to. Must be a fixed
frequency like ‘S’ (second) not ‘ME’ (month end). See
frequency aliases for
a list of possible freq values.
ambiguous‘infer’, bool-ndarray, ‘NaT’, default ‘raise’Only relevant for DatetimeIndex:
‘infer’ will attempt to infer fall dst-transition hours based on
order
bool-ndarray where True signifies a DST time, False designates
a non-DST time (note that this flag is only applicable for
ambiguous times)
‘NaT’ will return NaT where there are ambiguous times
‘raise’ will raise an AmbiguousTimeError if there are ambiguous
times.
nonexistent‘shift_forward’, ‘shift_backward’, ‘NaT’, timedelta, default ‘raise’A nonexistent time does not exist in a particular timezone
where clocks moved forward due to DST.
‘shift_forward’ will shift the nonexistent time forward to the
closest existing time
‘shift_backward’ will shift the nonexistent time backward to the
closest existing time
‘NaT’ will return NaT where there are nonexistent times
timedelta objects will shift nonexistent times by the timedelta
‘raise’ will raise an NonExistentTimeError if there are
nonexistent times.
Returns
DatetimeIndex, TimedeltaIndex, or SeriesIndex of the same type for a DatetimeIndex or TimedeltaIndex,
or a Series with the same index for a Series.
Raises
ValueError if the freq cannot be converted.
Notes
If the timestamps have a timezone, ceiling will take place relative to the
local (“wall”) time and re-localized to the same timezone. When ceiling
near daylight savings time, use nonexistent and ambiguous to
control the re-localization behavior.
Examples
DatetimeIndex
>>> rng = pd.date_range('1/1/2018 11:59:00', periods=3, freq='min')
>>> rng
DatetimeIndex(['2018-01-01 11:59:00', '2018-01-01 12:00:00',
'2018-01-01 12:01:00'],
dtype='datetime64[ns]', freq='T')
>>> rng.ceil('H')
DatetimeIndex(['2018-01-01 12:00:00', '2018-01-01 12:00:00',
'2018-01-01 13:00:00'],
dtype='datetime64[ns]', freq=None)
Series
>>> pd.Series(rng).dt.ceil("H")
0 2018-01-01 12:00:00
1 2018-01-01 12:00:00
2 2018-01-01 13:00:00
dtype: datetime64[ns]
When rounding near a daylight savings time transition, use ambiguous or
nonexistent to control how the timestamp should be re-localized.
>>> rng_tz = pd.DatetimeIndex(["2021-10-31 01:30:00"], tz="Europe/Amsterdam")
>>> rng_tz.ceil("H", ambiguous=False)
DatetimeIndex(['2021-10-31 02:00:00+01:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
>>> rng_tz.ceil("H", ambiguous=True)
DatetimeIndex(['2021-10-31 02:00:00+02:00'],
dtype='datetime64[ns, Europe/Amsterdam]', freq=None)
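All of the examples above use a DatetimeIndex; since this entry documents TimedeltaIndex.ceil, a minimal sketch on a TimedeltaIndex may help (the values are illustrative):

```python
import pandas as pd

# Ceil each timedelta up to the next full hour.
tdi = pd.TimedeltaIndex(["1 days 01:30:00", "2 days 04:15:00"])
print(tdi.ceil("H"))  # values become 1 days 02:00:00 and 2 days 05:00:00
```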
|
reference/api/pandas.TimedeltaIndex.ceil.html
|
pandas.io.formats.style.Styler.to_latex
|
`pandas.io.formats.style.Styler.to_latex`
Write Styler to a file, buffer or string in LaTeX format.
```
>>> df = pd.DataFrame([[1,2], [3,4]])
>>> s = df.style.highlight_max(axis=None,
... props='background-color:red; font-weight:bold;')
>>> s.to_html()
```
|
Styler.to_latex(buf=None, *, column_format=None, position=None, position_float=None, hrules=None, clines=None, label=None, caption=None, sparse_index=None, sparse_columns=None, multirow_align=None, multicol_align=None, siunitx=False, environment=None, encoding=None, convert_css=False)[source]#
Write Styler to a file, buffer or string in LaTeX format.
New in version 1.3.0.
Parameters
bufstr, path object, file-like object, or None, default NoneString, path object (implementing os.PathLike[str]), or file-like
object implementing a string write() function. If None, the result is
returned as a string.
column_formatstr, optionalThe LaTeX column specification placed in location:
\begin{tabular}{<column_format>}
Defaults to ‘l’ for index and
non-numeric data columns, and, for numeric data columns,
to ‘r’ by default, or ‘S’ if siunitx is True.
positionstr, optionalThe LaTeX positional argument (e.g. ‘h!’) for tables, placed in location:
\\begin{table}[<position>].
position_float{“centering”, “raggedleft”, “raggedright”}, optionalThe LaTeX float command placed in location:
\begin{table}[<position>]
\<position_float>
Cannot be used if environment is “longtable”.
hrulesboolSet to True to add \toprule, \midrule and \bottomrule from the
{booktabs} LaTeX package.
Defaults to pandas.options.styler.latex.hrules, which is False.
Changed in version 1.4.0.
clinesstr, optionalUse to control adding \cline commands for the index labels separation.
Possible values are:
None: no cline commands are added (default).
“all;data”: a cline is added for every index value extending the
width of the table, including data entries.
“all;index”: as above with lines extending only the width of the
index entries.
“skip-last;data”: a cline is added for each index value except the
last level (which is never sparsified), extending the width of the
table.
“skip-last;index”: as above with lines extending only the width of the
index entries.
New in version 1.4.0.
labelstr, optionalThe LaTeX label included as: \label{<label>}.
This is used with \ref{<label>} in the main .tex file.
captionstr, tuple, optionalIf string, the LaTeX table caption included as: \caption{<caption>}.
If tuple, i.e. (“full caption”, “short caption”), the caption included
as: \caption[<caption[1]>]{<caption[0]>}.
sparse_indexbool, optionalWhether to sparsify the display of a hierarchical index. Setting to False
will display each explicit level element in a hierarchical key for each row.
Defaults to pandas.options.styler.sparse.index, which is True.
sparse_columnsbool, optionalWhether to sparsify the display of a hierarchical index. Setting to False
will display each explicit level element in a hierarchical key for each
column. Defaults to pandas.options.styler.sparse.columns, which
is True.
multirow_align{“c”, “t”, “b”, “naive”}, optionalIf sparsifying hierarchical MultiIndexes whether to align text centrally,
at the top or bottom using the multirow package. If not given defaults to
pandas.options.styler.latex.multirow_align, which is “c”.
If “naive” is given renders without multirow.
Changed in version 1.4.0.
multicol_align{“r”, “c”, “l”, “naive-l”, “naive-r”}, optionalIf sparsifying hierarchical MultiIndex columns whether to align text at
the left, centrally, or at the right. If not given defaults to
pandas.options.styler.latex.multicol_align, which is “r”.
If a naive option is given renders without multicol.
Pipe decorators can also be added to non-naive values to draw vertical
rules, e.g. “|r” will draw a rule on the left side of right aligned merged
cells.
Changed in version 1.4.0.
siunitxbool, default FalseSet to True to structure LaTeX compatible with the {siunitx} package.
environmentstr, optionalIf given, the environment that will replace ‘table’ in \\begin{table}.
If ‘longtable’ is specified then a more suitable template is
rendered. If not given defaults to
pandas.options.styler.latex.environment, which is None.
New in version 1.4.0.
encodingstr, optionalCharacter encoding setting. Defaults
to pandas.options.styler.render.encoding, which is “utf-8”.
convert_cssbool, default FalseConvert simple cell-styles from CSS to LaTeX format. Any CSS not found in
conversion table is dropped. A style can be forced by adding option
--latex. See notes.
Returns
str or NoneIf buf is None, returns the result as a string. Otherwise returns None.
See also
Styler.formatFormat the text display value of cells.
Notes
Latex Packages
For the following features we recommend the following LaTeX inclusions:
Feature
Inclusion
sparse columns
none: included within default {tabular} environment
sparse rows
\usepackage{multirow}
hrules
\usepackage{booktabs}
colors
\usepackage[table]{xcolor}
siunitx
\usepackage{siunitx}
bold (with siunitx)
\usepackage{etoolbox}
\robustify\bfseries
\sisetup{detect-all = true} (within {document})
italic (with siunitx)
\usepackage{etoolbox}
\robustify\itshape
\sisetup{detect-all = true} (within {document})
environment
\usepackage{longtable} if arg is “longtable”
or any other relevant environment package
hyperlinks
\usepackage{hyperref}
Cell Styles
LaTeX styling can only be rendered if the accompanying styling functions have
been constructed with appropriate LaTeX commands. All styling
functionality is built around the concept of a CSS (<attribute>, <value>)
pair (see Table Visualization), and this
should be replaced by a LaTeX
(<command>, <options>) approach. Each cell will be styled individually
using nested LaTeX commands with their accompanied options.
For example the following code will highlight and bold a cell in HTML-CSS:
>>> df = pd.DataFrame([[1,2], [3,4]])
>>> s = df.style.highlight_max(axis=None,
... props='background-color:red; font-weight:bold;')
>>> s.to_html()
The equivalent using LaTeX only commands is the following:
>>> s = df.style.highlight_max(axis=None,
... props='cellcolor:{red}; bfseries: ;')
>>> s.to_latex()
Internally these structured LaTeX (<command>, <options>) pairs
are translated to the
display_value with the default structure:
\<command><options> <display_value>.
Where there are multiple commands the latter is nested recursively, so that
the highlighted cell in the above example is rendered as
\cellcolor{red} \bfseries 4.
Occasionally this format does not suit the applied command, or
combination of LaTeX packages that is in use, so additional flags can be
added to the <options>, within the tuple, to result in different
positions of required braces (the default being the same as --nowrap):
Tuple Format
Output Structure
(<command>,<options>)
\<command><options> <display_value>
(<command>,<options> --nowrap)
\<command><options> <display_value>
(<command>,<options> --rwrap)
\<command><options>{<display_value>}
(<command>,<options> --wrap)
{\<command><options> <display_value>}
(<command>,<options> --lwrap)
{\<command><options>} <display_value>
(<command>,<options> --dwrap)
{\<command><options>}{<display_value>}
For example the textbf command for font-weight
should always be used with --rwrap so ('textbf', '--rwrap') will render a
working cell, wrapped with braces, as \textbf{<display_value>}.
A more comprehensive example is as follows:
>>> df = pd.DataFrame([[1, 2.2, "dogs"], [3, 4.4, "cats"], [2, 6.6, "cows"]],
... index=["ix1", "ix2", "ix3"],
... columns=["Integers", "Floats", "Strings"])
>>> s = df.style.highlight_max(
... props='cellcolor:[HTML]{FFFF00}; color:{red};'
... 'textit:--rwrap; textbf:--rwrap;'
... )
>>> s.to_latex()
Table Styles
Internally Styler uses its table_styles object to parse the
column_format, position, position_float, and label
input arguments. These arguments are added to table styles in the format:
set_table_styles([
{"selector": "column_format", "props": f":{column_format};"},
{"selector": "position", "props": f":{position};"},
{"selector": "position_float", "props": f":{position_float};"},
{"selector": "label", "props": f":{{{label.replace(':','§')}}};"}
], overwrite=False)
Exception is made for the hrules argument which, in fact, controls all three
commands: toprule, bottomrule and midrule simultaneously. Instead of
setting hrules to True, it is also possible to set each
individual rule definition, by manually setting the table_styles,
for example below we set a regular toprule, set an hline for
bottomrule and exclude the midrule:
set_table_styles([
{'selector': 'toprule', 'props': ':toprule;'},
{'selector': 'bottomrule', 'props': ':hline;'},
], overwrite=False)
If other commands are added to table styles they will be detected, and
positioned immediately above the ‘\begin{tabular}’ command. For example to
add odd and even row coloring, from the {colortbl} package, in format
\rowcolors{1}{pink}{red}, use:
set_table_styles([
{'selector': 'rowcolors', 'props': ':{1}{pink}{red};'}
], overwrite=False)
A more comprehensive example using these arguments is as follows:
>>> df.columns = pd.MultiIndex.from_tuples([
... ("Numeric", "Integers"),
... ("Numeric", "Floats"),
... ("Non-Numeric", "Strings")
... ])
>>> df.index = pd.MultiIndex.from_tuples([
... ("L0", "ix1"), ("L0", "ix2"), ("L1", "ix3")
... ])
>>> s = df.style.highlight_max(
... props='cellcolor:[HTML]{FFFF00}; color:{red}; itshape:; bfseries:;'
... )
>>> s.to_latex(
... column_format="rrrrr", position="h", position_float="centering",
... hrules=True, label="table:5", caption="Styled LaTeX Table",
... multirow_align="t", multicol_align="r"
... )
Formatting
To format values Styler.format() should be used prior to calling
Styler.to_latex, as well as other methods such as Styler.hide()
for example:
>>> s.clear()
>>> s.table_styles = []
>>> s.caption = None
>>> s.format({
... ("Numeric", "Integers"): '\${}',
... ("Numeric", "Floats"): '{:.3f}',
... ("Non-Numeric", "Strings"): str.upper
... })
Numeric Non-Numeric
Integers Floats Strings
L0 ix1 $1 2.200 DOGS
ix2 $3 4.400 CATS
L1 ix3 $2 6.600 COWS
>>> s.to_latex()
\begin{tabular}{llrrl}
{} & {} & \multicolumn{2}{r}{Numeric} & {Non-Numeric} \\
{} & {} & {Integers} & {Floats} & {Strings} \\
\multirow[c]{2}{*}{L0} & ix1 & \$1 & 2.200 & DOGS \\
& ix2 & \$3 & 4.400 & CATS \\
L1 & ix3 & \$2 & 6.600 & COWS \\
\end{tabular}
CSS Conversion
This method can convert a Styler constructed with HTML-CSS to LaTeX using
the following limited conversions.
CSS Attribute
CSS value
LaTeX Command
LaTeX Options
font-weight
bold
bolder
bfseries
bfseries
font-style
italic
oblique
itshape
slshape
background-color
red
#fe01ea
#f0e
rgb(128,255,0)
rgba(128,0,0,0.5)
rgb(25%,255,50%)
cellcolor
{red}--lwrap
[HTML]{FE01EA}--lwrap
[HTML]{FF00EE}--lwrap
[rgb]{0.5,1,0}--lwrap
[rgb]{0.5,0,0}--lwrap
[rgb]{0.25,1,0.5}--lwrap
color
red
#fe01ea
#f0e
rgb(128,255,0)
rgba(128,0,0,0.5)
rgb(25%,255,50%)
color
{red}
[HTML]{FE01EA}
[HTML]{FF00EE}
[rgb]{0.5,1,0}
[rgb]{0.5,0,0}
[rgb]{0.25,1,0.5}
It is also possible to add user-defined LaTeX only styles to a HTML-CSS Styler
using the --latex flag, and to add LaTeX parsing options that the
converter will detect within a CSS-comment.
>>> df = pd.DataFrame([[1]])
>>> df.style.set_properties(
... **{"font-weight": "bold /* --dwrap */", "Huge": "--latex--rwrap"}
... ).to_latex(convert_css=True)
\begin{tabular}{lr}
{} & {0} \\
0 & {\bfseries}{\Huge{1}} \\
\end{tabular}
Examples
Below we give a complete step by step example adding some advanced features
and noting some common gotchas.
First we create the DataFrame and Styler as usual, including MultiIndex rows
and columns, which allow for more advanced formatting options:
>>> cidx = pd.MultiIndex.from_arrays([
... ["Equity", "Equity", "Equity", "Equity",
... "Stats", "Stats", "Stats", "Stats", "Rating"],
... ["Energy", "Energy", "Consumer", "Consumer", "", "", "", "", ""],
... ["BP", "Shell", "H&M", "Unilever",
... "Std Dev", "Variance", "52w High", "52w Low", ""]
... ])
>>> iidx = pd.MultiIndex.from_arrays([
... ["Equity", "Equity", "Equity", "Equity"],
... ["Energy", "Energy", "Consumer", "Consumer"],
... ["BP", "Shell", "H&M", "Unilever"]
... ])
>>> styler = pd.DataFrame([
... [1, 0.8, 0.66, 0.72, 32.1678, 32.1678**2, 335.12, 240.89, "Buy"],
... [0.8, 1.0, 0.69, 0.79, 1.876, 1.876**2, 14.12, 19.78, "Hold"],
... [0.66, 0.69, 1.0, 0.86, 7, 7**2, 210.9, 140.6, "Buy"],
... [0.72, 0.79, 0.86, 1.0, 213.76, 213.76**2, 2807, 3678, "Sell"],
... ], columns=cidx, index=iidx).style
Second we will format the display and, since our table is quite wide, will
hide the repeated level-0 of the index:
>>> styler.format(subset="Equity", precision=2)
... .format(subset="Stats", precision=1, thousands=",")
... .format(subset="Rating", formatter=str.upper)
... .format_index(escape="latex", axis=1)
... .format_index(escape="latex", axis=0)
... .hide(level=0, axis=0)
Note that one of the string entries of the index and column headers is “H&M”.
Without applying the escape=”latex” option to the format_index method the
resultant LaTeX will fail to render, and the error returned is quite
difficult to debug. Using the appropriate escape, the “&” is converted to “\&”.
Thirdly we will apply some (CSS-HTML) styles to our object. We will use a
builtin method and also define our own method to highlight the stock
recommendation:
>>> def rating_color(v):
... if v == "Buy": color = "#33ff85"
... elif v == "Sell": color = "#ff5933"
... else: color = "#ffdd33"
... return f"color: {color}; font-weight: bold;"
>>> styler.background_gradient(cmap="inferno", subset="Equity", vmin=0, vmax=1)
... .applymap(rating_color, subset="Rating")
All the above styles will work with HTML (see below) and LaTeX upon conversion:
However, we finally want to add one LaTeX only style
(from the {graphicx} package), that is not easy to convert from CSS and
pandas does not support it. Notice the --latex flag used here,
as well as --rwrap to ensure this is formatted correctly and
not ignored upon conversion.
>>> styler.applymap_index(
... lambda v: "rotatebox:{45}--rwrap--latex;", level=2, axis=1
... )
Finally we render our LaTeX adding in other options as required:
>>> styler.to_latex(
... caption="Selected stock correlation and simple statistics.",
... clines="skip-last;data",
... convert_css=True,
... position_float="centering",
... multicol_align="|c|",
... hrules=True,
... )
\begin{table}
\centering
\caption{Selected stock correlation and simple statistics.}
\begin{tabular}{llrrrrrrrrl}
\toprule
& & \multicolumn{4}{|c|}{Equity} & \multicolumn{4}{|c|}{Stats} & Rating \\
& & \multicolumn{2}{|c|}{Energy} & \multicolumn{2}{|c|}{Consumer} &
\multicolumn{4}{|c|}{} & \\
& & \rotatebox{45}{BP} & \rotatebox{45}{Shell} & \rotatebox{45}{H\&M} &
\rotatebox{45}{Unilever} & \rotatebox{45}{Std Dev} & \rotatebox{45}{Variance} &
\rotatebox{45}{52w High} & \rotatebox{45}{52w Low} & \rotatebox{45}{} \\
\midrule
\multirow[c]{2}{*}{Energy} & BP & {\cellcolor[HTML]{FCFFA4}}
\color[HTML]{000000} 1.00 & {\cellcolor[HTML]{FCA50A}} \color[HTML]{000000}
0.80 & {\cellcolor[HTML]{EB6628}} \color[HTML]{F1F1F1} 0.66 &
{\cellcolor[HTML]{F68013}} \color[HTML]{F1F1F1} 0.72 & 32.2 & 1,034.8 & 335.1
& 240.9 & \color[HTML]{33FF85} \bfseries BUY \\
& Shell & {\cellcolor[HTML]{FCA50A}} \color[HTML]{000000} 0.80 &
{\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 &
{\cellcolor[HTML]{F1731D}} \color[HTML]{F1F1F1} 0.69 &
{\cellcolor[HTML]{FCA108}} \color[HTML]{000000} 0.79 & 1.9 & 3.5 & 14.1 &
19.8 & \color[HTML]{FFDD33} \bfseries HOLD \\
\cline{1-11}
\multirow[c]{2}{*}{Consumer} & H\&M & {\cellcolor[HTML]{EB6628}}
\color[HTML]{F1F1F1} 0.66 & {\cellcolor[HTML]{F1731D}} \color[HTML]{F1F1F1}
0.69 & {\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 &
{\cellcolor[HTML]{FAC42A}} \color[HTML]{000000} 0.86 & 7.0 & 49.0 & 210.9 &
140.6 & \color[HTML]{33FF85} \bfseries BUY \\
& Unilever & {\cellcolor[HTML]{F68013}} \color[HTML]{F1F1F1} 0.72 &
{\cellcolor[HTML]{FCA108}} \color[HTML]{000000} 0.79 &
{\cellcolor[HTML]{FAC42A}} \color[HTML]{000000} 0.86 &
{\cellcolor[HTML]{FCFFA4}} \color[HTML]{000000} 1.00 & 213.8 & 45,693.3 &
2,807.0 & 3,678.0 & \color[HTML]{FF5933} \bfseries SELL \\
\cline{1-11}
\bottomrule
\end{tabular}
\end{table}
|
reference/api/pandas.io.formats.style.Styler.to_latex.html
|
pandas.tseries.offsets.MonthEnd.base
|
`pandas.tseries.offsets.MonthEnd.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
|
MonthEnd.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
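A minimal sketch of the property (the offset multiple is chosen for illustration):

```python
import pandas as pd

off = pd.offsets.MonthEnd(3)
# .base is a copy with n reset to 1; all other attributes are unchanged.
assert off.base == pd.offsets.MonthEnd(1)
assert off.base.n == 1
```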
|
reference/api/pandas.tseries.offsets.MonthEnd.base.html
|
pandas.tseries.offsets.BYearBegin.kwds
|
`pandas.tseries.offsets.BYearBegin.kwds`
Return a dict of extra parameters for the offset.
```
>>> pd.DateOffset(5).kwds
{}
```
|
BYearBegin.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
|
reference/api/pandas.tseries.offsets.BYearBegin.kwds.html
|
pandas.tseries.offsets.CustomBusinessDay.is_year_start
|
`pandas.tseries.offsets.CustomBusinessDay.is_year_start`
Return boolean whether a timestamp occurs on the year start.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
```
|
CustomBusinessDay.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
|
reference/api/pandas.tseries.offsets.CustomBusinessDay.is_year_start.html
|
pandas.PeriodIndex.day_of_week
|
`pandas.PeriodIndex.day_of_week`
The day of the week with Monday=0, Sunday=6.
|
property PeriodIndex.day_of_week[source]#
The day of the week with Monday=0, Sunday=6.
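A minimal sketch (the dates are illustrative; 2023-01-02 falls on a Monday):

```python
import pandas as pd

idx = pd.period_range("2023-01-02", periods=3, freq="D")
print(list(idx.day_of_week))  # [0, 1, 2] for Mon, Tue, Wed
```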
|
reference/api/pandas.PeriodIndex.day_of_week.html
|
pandas.PeriodIndex.start_time
|
`pandas.PeriodIndex.start_time`
Get the Timestamp for the start of the period.
See also
```
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
```
|
property PeriodIndex.start_time[source]#
Get the Timestamp for the start of the period.
Returns
Timestamp
See also
Period.end_timeReturn the end Timestamp.
Period.dayofyearReturn the day of year.
Period.daysinmonthReturn the days in that month.
Period.dayofweekReturn the day of the week.
Examples
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
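The examples above use a scalar Period; on a PeriodIndex the property is applied elementwise. A minimal sketch (dates illustrative):

```python
import pandas as pd

idx = pd.period_range("2023-01", periods=2, freq="M")
# Each monthly period starts at midnight on the first of the month.
print(idx.start_time)
```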
|
reference/api/pandas.PeriodIndex.start_time.html
|
pandas.arrays.IntervalArray.mid
|
`pandas.arrays.IntervalArray.mid`
Return the midpoint of each Interval in the IntervalArray as an Index.
|
property IntervalArray.mid[source]#
Return the midpoint of each Interval in the IntervalArray as an Index.
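A minimal sketch (breaks chosen for illustration):

```python
import pandas as pd

# Intervals (0, 1] and (1, 3] built from shared breaks.
arr = pd.arrays.IntervalArray.from_breaks([0, 1, 3])
print(list(arr.mid))  # [0.5, 2.0]
```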
|
reference/api/pandas.arrays.IntervalArray.mid.html
|
pandas.PeriodIndex.month
|
`pandas.PeriodIndex.month`
The month as January=1, December=12.
|
property PeriodIndex.month[source]#
The month as January=1, December=12.
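A minimal sketch (dates illustrative):

```python
import pandas as pd

idx = pd.period_range("2023-01", periods=3, freq="M")
print(list(idx.month))  # [1, 2, 3]
```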
|
reference/api/pandas.PeriodIndex.month.html
|
pandas.tseries.offsets.FY5253.variation
|
pandas.tseries.offsets.FY5253.variation
|
FY5253.variation#
|
reference/api/pandas.tseries.offsets.FY5253.variation.html
|
pandas.Categorical.codes
|
`pandas.Categorical.codes`
The category codes of this categorical.
|
property Categorical.codes[source]#
The category codes of this categorical.
Codes are an array of integers which are the positions of the actual
values in the categories array.
There is no setter, use the other categorical methods and the normal item
setter to change values in the categorical.
Returns
ndarray[int]A non-writable view of the codes array.
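A minimal sketch of how codes map into the categories array (values illustrative):

```python
import pandas as pd

cat = pd.Categorical(["a", "b", "a", "c"], categories=["a", "b", "c"])
# Each code is the position of the value within `categories`.
print(list(cat.codes))  # [0, 1, 0, 2]

# Missing values are encoded as -1.
cat2 = pd.Categorical(["a", None], categories=["a"])
print(list(cat2.codes))  # [0, -1]
```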
|
reference/api/pandas.Categorical.codes.html
|
pandas.Series.interpolate
|
`pandas.Series.interpolate`
Fill NaN values using an interpolation method.
```
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
```
|
Series.interpolate(method='linear', *, axis=0, limit=None, inplace=False, limit_direction=None, limit_area=None, downcast=None, **kwargs)[source]#
Fill NaN values using an interpolation method.
Please note that only method='linear' is supported for
DataFrame/Series with a MultiIndex.
Parameters
methodstr, default ‘linear’Interpolation technique to use. One of:
‘linear’: Ignore the index and treat the values as equally
spaced. This is the only method supported on MultiIndexes.
‘time’: Works on daily and higher resolution data to interpolate
given length of interval.
‘index’, ‘values’: use the actual numerical values of the index.
‘pad’: Fill in NaNs using existing values.
‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘spline’,
‘barycentric’, ‘polynomial’: Passed to
scipy.interpolate.interp1d. These methods use the numerical
values of the index. Both ‘polynomial’ and ‘spline’ require that
you also specify an order (int), e.g.
df.interpolate(method='polynomial', order=5).
‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’, ‘akima’,
‘cubicspline’: Wrappers around the SciPy interpolation methods of
similar names. See Notes.
‘from_derivatives’: Refers to
scipy.interpolate.BPoly.from_derivatives which
replaces ‘piecewise_polynomial’ interpolation method in
scipy 0.18.
axis{0 or ‘index’, 1 or ‘columns’, None}, default NoneAxis to interpolate along. For Series this parameter is unused
and defaults to 0.
limitint, optionalMaximum number of consecutive NaNs to fill. Must be greater than
0.
inplacebool, default FalseUpdate the data in place if possible.
limit_direction{‘forward’, ‘backward’, ‘both’}, optionalConsecutive NaNs will be filled in this direction.
If limit is specified:
If ‘method’ is ‘pad’ or ‘ffill’, ‘limit_direction’ must be ‘forward’.
If ‘method’ is ‘backfill’ or ‘bfill’, ‘limit_direction’ must be
‘backward’.
If ‘limit’ is not specified:
If ‘method’ is ‘backfill’ or ‘bfill’, the default is ‘backward’
else the default is ‘forward’
Changed in version 1.1.0: raises ValueError if limit_direction is ‘forward’ or ‘both’ and
method is ‘backfill’ or ‘bfill’.
raises ValueError if limit_direction is ‘backward’ or ‘both’ and
method is ‘pad’ or ‘ffill’.
limit_area{None, ‘inside’, ‘outside’}, default NoneIf limit is specified, consecutive NaNs will be filled with this
restriction.
None: No fill restriction.
‘inside’: Only fill NaNs surrounded by valid values
(interpolate).
‘outside’: Only fill NaNs outside valid values (extrapolate).
downcastoptional, ‘infer’ or None, defaults to NoneDowncast dtypes if possible.
**kwargsoptionalKeyword arguments to pass on to the interpolating function.
Returns
Series or DataFrame or NoneReturns the same object type as the caller, interpolated at
some or all NaN values or None if inplace=True.
See also
fillnaFill missing values using different methods.
scipy.interpolate.Akima1DInterpolatorPiecewise cubic polynomials (Akima interpolator).
scipy.interpolate.BPoly.from_derivativesPiecewise polynomial in the Bernstein basis.
scipy.interpolate.interp1dInterpolate a 1-D function.
scipy.interpolate.KroghInterpolatorInterpolate polynomial (Krogh interpolator).
scipy.interpolate.PchipInterpolatorPCHIP 1-d monotonic cubic interpolation.
scipy.interpolate.CubicSplineCubic spline data interpolator.
Notes
The ‘krogh’, ‘piecewise_polynomial’, ‘spline’, ‘pchip’ and ‘akima’
methods are wrappers around the respective SciPy implementations of
similar names. These use the actual numerical values of the index.
For more information on their behavior, see the
SciPy documentation.
Examples
Filling in NaN in a Series via linear
interpolation.
>>> s = pd.Series([0, 1, np.nan, 3])
>>> s
0 0.0
1 1.0
2 NaN
3 3.0
dtype: float64
>>> s.interpolate()
0 0.0
1 1.0
2 2.0
3 3.0
dtype: float64
Filling in NaN in a Series by padding, but filling at most two
consecutive NaN at a time.
>>> s = pd.Series([np.nan, "single_one", np.nan,
... "fill_two_more", np.nan, np.nan, np.nan,
... 4.71, np.nan])
>>> s
0 NaN
1 single_one
2 NaN
3 fill_two_more
4 NaN
5 NaN
6 NaN
7 4.71
8 NaN
dtype: object
>>> s.interpolate(method='pad', limit=2)
0 NaN
1 single_one
2 single_one
3 fill_two_more
4 fill_two_more
5 fill_two_more
6 NaN
7 4.71
8 4.71
dtype: object
Filling in NaN in a Series via polynomial interpolation or splines:
Both ‘polynomial’ and ‘spline’ methods require that you also specify
an order (int).
>>> s = pd.Series([0, 2, np.nan, 8])
>>> s.interpolate(method='polynomial', order=2)
0 0.000000
1 2.000000
2 4.666667
3 8.000000
dtype: float64
Fill the DataFrame forward (that is, going down) along each column
using linear interpolation.
Note how the last entry in column ‘a’ is interpolated differently,
because there is no entry after it to use for interpolation.
Note how the first entry in column ‘b’ remains NaN, because there
is no entry before it to use for interpolation.
>>> df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
... (np.nan, 2.0, np.nan, np.nan),
... (2.0, 3.0, np.nan, 9.0),
... (np.nan, 4.0, -4.0, 16.0)],
... columns=list('abcd'))
>>> df
a b c d
0 0.0 NaN -1.0 1.0
1 NaN 2.0 NaN NaN
2 2.0 3.0 NaN 9.0
3 NaN 4.0 -4.0 16.0
>>> df.interpolate(method='linear', limit_direction='forward', axis=0)
a b c d
0 0.0 NaN -1.0 1.0
1 1.0 2.0 -2.0 5.0
2 2.0 3.0 -3.0 9.0
3 2.0 4.0 -4.0 16.0
Using polynomial interpolation.
>>> df['d'].interpolate(method='polynomial', order=2)
0 1.0
1 4.0
2 9.0
3 16.0
Name: d, dtype: float64
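The limit_area parameter is not exercised in the examples above; a minimal sketch (values illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 1.0, np.nan, 3.0, np.nan])
# 'inside' fills only NaNs surrounded by valid values, so the
# leading and trailing NaN are left untouched.
print(s.interpolate(limit_area="inside").tolist())  # [nan, 1.0, 2.0, 3.0, nan]
```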
|
reference/api/pandas.Series.interpolate.html
|
pandas.Series.apply
|
`pandas.Series.apply`
Invoke function on values of Series.
```
>>> s = pd.Series([20, 21, 12],
... index=['London', 'New York', 'Helsinki'])
>>> s
London 20
New York 21
Helsinki 12
dtype: int64
```
|
Series.apply(func, convert_dtype=True, args=(), **kwargs)[source]#
Invoke function on values of Series.
Can be ufunc (a NumPy function that applies to the entire Series)
or a Python function that only works on single values.
Parameters
funcfunctionPython function or NumPy ufunc to apply.
convert_dtypebool, default TrueTry to find better dtype for elementwise function results. If
False, leave as dtype=object. Note that the dtype is always
preserved for some extension array dtypes, such as Categorical.
argstuplePositional arguments passed to func after the series value.
**kwargsAdditional keyword arguments passed to func.
Returns
Series or DataFrameIf func returns a Series object the result will be a DataFrame.
See also
Series.mapFor element-wise operations.
Series.aggOnly perform aggregating type operations.
Series.transformOnly perform transforming type operations.
Notes
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
Create a series with typical summer temperatures for each city.
>>> s = pd.Series([20, 21, 12],
... index=['London', 'New York', 'Helsinki'])
>>> s
London 20
New York 21
Helsinki 12
dtype: int64
Square the values by defining a function and passing it as an
argument to apply().
>>> def square(x):
... return x ** 2
>>> s.apply(square)
London 400
New York 441
Helsinki 144
dtype: int64
Square the values by passing an anonymous function as an
argument to apply().
>>> s.apply(lambda x: x ** 2)
London 400
New York 441
Helsinki 144
dtype: int64
Define a custom function that needs additional positional
arguments and pass these additional arguments using the
args keyword.
>>> def subtract_custom_value(x, custom_value):
... return x - custom_value
>>> s.apply(subtract_custom_value, args=(5,))
London 15
New York 16
Helsinki 7
dtype: int64
Define a custom function that takes keyword arguments
and pass these arguments to apply.
>>> def add_custom_values(x, **kwargs):
... for month in kwargs:
... x += kwargs[month]
... return x
>>> s.apply(add_custom_values, june=30, july=20, august=25)
London 95
New York 96
Helsinki 87
dtype: int64
Use a function from the Numpy library.
>>> s.apply(np.log)
London 2.995732
New York 3.044522
Helsinki 2.484907
dtype: float64
|
reference/api/pandas.Series.apply.html
|
pandas.DataFrame.convert_dtypes
|
`pandas.DataFrame.convert_dtypes`
Convert columns to best possible dtypes using dtypes supporting pd.NA.
```
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
```
|
DataFrame.convert_dtypes(infer_objects=True, convert_string=True, convert_integer=True, convert_boolean=True, convert_floating=True)[source]#
Convert columns to best possible dtypes using dtypes supporting pd.NA.
New in version 1.0.0.
Parameters
infer_objectsbool, default TrueWhether object dtypes should be converted to the best possible types.
convert_stringbool, default TrueWhether object dtypes should be converted to StringDtype().
convert_integerbool, default TrueWhether, if possible, conversion can be done to integer extension types.
convert_booleanbool, default TrueWhether object dtypes should be converted to BooleanDtype().
convert_floatingbool, default TrueWhether, if possible, conversion can be done to floating extension types.
If convert_integer is also True, preference will be given to integer
dtypes if the floats can be faithfully cast to integers.
New in version 1.2.0.
Returns
Series or DataFrameCopy of input object with new dtype.
See also
infer_objectsInfer dtypes of objects.
to_datetimeConvert argument to datetime.
to_timedeltaConvert argument to timedelta.
to_numericConvert argument to a numeric type.
Notes
By default, convert_dtypes will attempt to convert a Series (or each
Series in a DataFrame) to dtypes that support pd.NA. By using the options
convert_string, convert_integer, convert_boolean and
convert_floating, it is possible to turn off individual conversions
to StringDtype, the integer extension types, BooleanDtype
or floating extension types, respectively.
For object-dtyped columns, if infer_objects is True, use the inference
rules as during normal Series/DataFrame construction. Then, if possible,
convert to StringDtype, BooleanDtype or an appropriate integer
or floating extension type, otherwise leave as object.
If the dtype is integer, convert to an appropriate integer extension type.
If the dtype is numeric, and consists of all integers, convert to an
appropriate integer extension type. Otherwise, convert to an
appropriate floating extension type.
Changed in version 1.2: Starting with pandas 1.2, this method also converts float columns
to the nullable floating extension type.
In the future, as new dtypes are added that support pd.NA, the results
of this method will change to support those new dtypes.
Examples
>>> df = pd.DataFrame(
... {
... "a": pd.Series([1, 2, 3], dtype=np.dtype("int32")),
... "b": pd.Series(["x", "y", "z"], dtype=np.dtype("O")),
... "c": pd.Series([True, False, np.nan], dtype=np.dtype("O")),
... "d": pd.Series(["h", "i", np.nan], dtype=np.dtype("O")),
... "e": pd.Series([10, np.nan, 20], dtype=np.dtype("float")),
... "f": pd.Series([np.nan, 100.5, 200], dtype=np.dtype("float")),
... }
... )
Start with a DataFrame with default dtypes.
>>> df
a b c d e f
0 1 x True h 10.0 NaN
1 2 y False i NaN 100.5
2 3 z NaN NaN 20.0 200.0
>>> df.dtypes
a int32
b object
c object
d object
e float64
f float64
dtype: object
Convert the DataFrame to use best possible dtypes.
>>> dfn = df.convert_dtypes()
>>> dfn
a b c d e f
0 1 x True h 10 <NA>
1 2 y False i <NA> 100.5
2 3 z <NA> <NA> 20 200.0
>>> dfn.dtypes
a Int32
b string
c boolean
d string
e Int64
f Float64
dtype: object
Start with a Series of strings and missing data represented by np.nan.
>>> s = pd.Series(["a", "b", np.nan])
>>> s
0 a
1 b
2 NaN
dtype: object
Obtain a Series with dtype StringDtype.
>>> s.convert_dtypes()
0 a
1 b
2 <NA>
dtype: string
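The individual conversions can also be disabled. As a minimal sketch (assuming pandas >= 1.2, where the floating extension types exist), compare the default behaviour with convert_floating=False:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.5, 2.5, np.nan])

# Default: the float column is converted to the nullable Float64 dtype
converted = s.convert_dtypes()

# Turning off the floating conversion keeps the NumPy float64 dtype
kept = s.convert_dtypes(convert_floating=False)

print(converted.dtype, kept.dtype)
```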
|
reference/api/pandas.DataFrame.convert_dtypes.html
|
pandas.tseries.offsets.MonthBegin.is_quarter_end
|
`pandas.tseries.offsets.MonthBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
```
|
MonthBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
|
reference/api/pandas.tseries.offsets.MonthBegin.is_quarter_end.html
|
pandas.tseries.offsets.MonthBegin.apply
|
pandas.tseries.offsets.MonthBegin.apply
|
MonthBegin.apply()#
|
reference/api/pandas.tseries.offsets.MonthBegin.apply.html
|
pandas.Series.dt.day_of_week
|
`pandas.Series.dt.day_of_week`
The day of the week with Monday=0, Sunday=6.
Return the day of the week. It is assumed the week starts on
Monday, which is denoted by 0 and ends on Sunday which is denoted
by 6. This method is available on both Series with datetime
values (using the dt accessor) or DatetimeIndex.
```
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
>>> s.dt.dayofweek
2016-12-31 5
2017-01-01 6
2017-01-02 0
2017-01-03 1
2017-01-04 2
2017-01-05 3
2017-01-06 4
2017-01-07 5
2017-01-08 6
Freq: D, dtype: int64
```
|
Series.dt.day_of_week[source]#
The day of the week with Monday=0, Sunday=6.
Return the day of the week. It is assumed the week starts on
Monday, which is denoted by 0 and ends on Sunday which is denoted
by 6. This method is available on both Series with datetime
values (using the dt accessor) or DatetimeIndex.
Returns
Series or Index : Containing integers indicating the day number.
See also
Series.dt.dayofweek : Alias.
Series.dt.weekday : Alias.
Series.dt.day_name : Returns the name of the day of the week.
Examples
>>> s = pd.date_range('2016-12-31', '2017-01-08', freq='D').to_series()
>>> s.dt.dayofweek
2016-12-31 5
2017-01-01 6
2017-01-02 0
2017-01-03 1
2017-01-04 2
2017-01-05 3
2017-01-06 4
2017-01-07 5
2017-01-08 6
Freq: D, dtype: int64
|
reference/api/pandas.Series.dt.day_of_week.html
|
pandas.core.window.ewm.ExponentialMovingWindow.mean
|
`pandas.core.window.ewm.ExponentialMovingWindow.mean`
Calculate the ewm (exponential weighted moment) mean.
Include only float, int, boolean columns.
|
ExponentialMovingWindow.mean(numeric_only=False, *args, engine=None, engine_kwargs=None, **kwargs)[source]#
Calculate the ewm (exponential weighted moment) mean.
Parameters
numeric_only : bool, default False. Include only float, int, boolean columns.
New in version 1.5.0.
*args : For NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
engine : str, default None
'cython' : Runs the operation through C-extensions from cython.
'numba' : Runs the operation through JIT compiled code from numba.
None : Defaults to 'cython' or the global setting compute.use_numba.
New in version 1.3.0.
engine_kwargs : dict, default None
For the 'cython' engine, there are no accepted engine_kwargs.
For the 'numba' engine, the engine can accept nopython, nogil
and parallel dictionary keys. The values must either be True or
False. The default engine_kwargs for the 'numba' engine is
{'nopython': True, 'nogil': False, 'parallel': False}.
New in version 1.3.0.
**kwargs : For NumPy compatibility and will not have an effect on the result.
Deprecated since version 1.5.0.
Returns
Series or DataFrame : Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.ewm : Calling ewm with Series data.
pandas.DataFrame.ewm : Calling ewm with DataFrames.
pandas.Series.mean : Aggregating mean for Series.
pandas.DataFrame.mean : Aggregating mean for DataFrame.
Notes
See Numba engine and Numba (JIT compilation) for extended documentation and performance considerations for the Numba engine.
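This entry has no example, so here is a small sketch of the default (adjusted) exponentially weighted mean; the span value is an illustrative choice, with alpha = 2 / (span + 1):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# span=2 gives alpha = 2 / (span + 1) = 2/3; with the default
# adjust=True each value is a normalized weighted sum of the history
result = s.ewm(span=2).mean()
print(result)
```

For instance, the second value is (2 + (1/3) * 1) / (1 + 1/3) = 1.75.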
|
reference/api/pandas.core.window.ewm.ExponentialMovingWindow.mean.html
|
pandas.plotting.register_matplotlib_converters
|
`pandas.plotting.register_matplotlib_converters`
Register pandas formatters and converters with matplotlib.
|
pandas.plotting.register_matplotlib_converters()[source]#
Register pandas formatters and converters with matplotlib.
This function modifies the global matplotlib.units.registry
dictionary. pandas adds custom converters for
pd.Timestamp
pd.Period
np.datetime64
datetime.datetime
datetime.date
datetime.time
See also
deregister_matplotlib_converters : Remove pandas formatters and converters.
|
reference/api/pandas.plotting.register_matplotlib_converters.html
|
pandas.PeriodIndex.second
|
`pandas.PeriodIndex.second`
The second of the period.
|
property PeriodIndex.second[source]#
The second of the period.
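A minimal sketch of accessing this property (the dates and frequency are illustrative choices, not from the docs):

```python
import pandas as pd

# Three consecutive periods at second frequency,
# starting 30 seconds past the minute
idx = pd.period_range("2023-01-01 00:00:30", periods=3, freq="S")

# .second returns an integer Index with the second of each period
seconds = idx.second
print(list(seconds))
```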
|
reference/api/pandas.PeriodIndex.second.html
|
pandas.tseries.offsets.Easter.normalize
|
pandas.tseries.offsets.Easter.normalize
|
Easter.normalize#
|
reference/api/pandas.tseries.offsets.Easter.normalize.html
|
pandas.tseries.offsets.BusinessHour.start
|
pandas.tseries.offsets.BusinessHour.start
|
BusinessHour.start#
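This attribute has no description here; start holds the opening time(s) passed at construction (a tuple of datetime.time values in recent pandas versions, which support multiple start times). A small sketch with illustrative hours:

```python
import pandas as pd

# Business hours from 09:00 to 17:00 (illustrative values)
bh = pd.offsets.BusinessHour(start="09:00", end="17:00")
print(bh.start)
```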
|
reference/api/pandas.tseries.offsets.BusinessHour.start.html
|