ladybug.datacollectionimmutable module¶
Immutable versions of the Ladybug Data Collections.
Note that all of the methods or properties on an immutable collection that return another data collection will return a collection that is mutable.
The only exceptions to this rule are:
duplicate() - which will always return an exact copy of the collection including its mutability.
get_aligned_collection() - which follows the mutability of the starting collection by default but includes a parameter to override this.
to_immutable() - which always returns an immutable version of the collection.
Note that the to_mutable() method on the immutable collections can always be used to get a mutable version of an immutable collection.
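As a minimal sketch of these rules (assuming a Temperature data type, the default full-year AnalysisPeriod, and placeholder constant values):

from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.temperature import Temperature
from ladybug.datacollection import HourlyContinuousCollection

# build a simple mutable hourly collection (8760 values of 20 C)
header = Header(Temperature(), 'C', AnalysisPeriod())
mutable_coll = HourlyContinuousCollection(header, [20] * 8760)

immutable_coll = mutable_coll.to_immutable()
print(immutable_coll.is_mutable)  # False

# methods that return a new collection give back a mutable one
filtered = immutable_coll.filter_by_conditional_statement('a > 10')
print(filtered.is_mutable)  # True

# duplicate() preserves mutability; to_mutable() gives an editable copy
print(immutable_coll.duplicate().is_mutable)  # False
editable = immutable_coll.to_mutable()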
- class ladybug.datacollectionimmutable.DailyCollectionImmutable(header, values, datetimes)[source]¶
Bases: _ImmutableCollectionBase, DailyCollection
Immutable Daily Data Collection.
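A minimal construction sketch (assuming a Temperature data type over the first week of the year; the values and days-of-year below are placeholders):

from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.temperature import Temperature
from ladybug.datacollectionimmutable import DailyCollectionImmutable

a_per = AnalysisPeriod(st_month=1, st_day=1, end_month=1, end_day=7)
header = Header(Temperature(), 'C', a_per)
values = [10, 11, 12, 13, 14, 15, 16]
doys = list(range(1, 8))  # datetimes are days-of-year integers
daily = DailyCollectionImmutable(header, values, doys)
print(daily.is_mutable)  # False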
- ToString()¶
Overwrite .NET ToString method.
- aggregate_by_area(area, area_unit)¶
Get a Data Collection that is aggregated by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection is not a normalized_type of another data type.
- Parameters:
area – Number representing area by which all of the data is aggregated.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of destination datacollection’s data type.
- static arange(start, stop, step)¶
Return evenly spaced fractional or whole values within a given interval.
This function acts like the Python range method, but can also account for fractional values. It is equivalent to the numpy.arange function.
- Parameters:
start – Number for inclusive start of interval.
stop – Number for exclusive end of interval.
step – Number for step size of interval.
- Returns:
Generator of evenly spaced values.
Usage:
from BaseCollection import arange

arange(1, 351, 50)
# >> [1, 51, 101, 151, 201, 251, 301]
- static are_collections_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections are aligned with one another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collections – A list of Data Collections for which alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collections are not aligned with one another.
- Returns:
True if collections are aligned, False if not aligned
- static are_metadatas_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections have aligned metadata.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collections – A list of Data Collections for which metadata alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collection metadatas are not aligned with one another.
- Returns:
True if metadatas are aligned, False if not aligned
- average_monthly()¶
Return a monthly collection of values averaged for each month.
- static compute_function_aligned(funct, data_collections, data_type, unit)¶
Compute a function with a list of aligned data collections or values.
- Parameters:
funct – A function with a single numerical value as output and one or more numerical values as input.
data_collections – A list with a length equal to the number of arguments for the function. Items of the list can be either Data Collections or individual values to be used at each datetime of other collections.
data_type – An instance of a Ladybug data type that describes the results of the funct.
unit – The units of the funct results.
- Returns:
A Data Collection with the results of the function. If all items in the list of data_collections are individual values, only a single value will be returned.
Usage:
from ladybug.datacollection import HourlyContinuousCollection
from ladybug.epw import EPW
from ladybug.psychrometrics import humid_ratio_from_db_rh
from ladybug.datatype.percentage import HumidityRatio

epw_file_path = './epws/denver.epw'
denver_epw = EPW(epw_file_path)
pressure_at_denver = 85000
hr_inputs = [denver_epw.dry_bulb_temperature,
             denver_epw.relative_humidity,
             pressure_at_denver]
humid_ratio = HourlyContinuousCollection.compute_function_aligned(
    humid_ratio_from_db_rh, hr_inputs, HumidityRatio(), 'fraction')
# humid_ratio will be a Data Collection of humidity ratios at Denver
- convert_to_ip()¶
Convert the Data Collection to IP units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_ip to get a new instance of a collection without mutating this one.
- convert_to_si()¶
Convert the Data Collection to SI units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_si to get a new instance of a collection without mutating this one.
- convert_to_unit(unit)¶
Convert the Data Collection to the input unit.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_unit to get a new instance of a collection without mutating this one.
- duplicate()¶
Get a copy of this Data Collection.
- filter_by_analysis_period(analysis_period)¶
Filter the Data Collection based on an analysis period.
- Parameters:
analysis_period – A Ladybug analysis period.
- Returns:
A new Data Collection with filtered data
- filter_by_conditional_statement(statement)¶
Filter the Data Collection based on a conditional statement.
- Parameters:
statement – A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as ‘a’ (without quotations).
- Returns:
A new Data Collection containing only the filtered data.
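For example, continuing the construction sketch above, one could keep only the days above 12 C; per the rules at the top of this module, the returned collection is mutable:

warm_days = daily.filter_by_conditional_statement('a > 12')
print(warm_days.values)      # >> (13, 14, 15, 16)
print(warm_days.datetimes)   # >> (4, 5, 6, 7)
print(warm_days.is_mutable)  # True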
- filter_by_doys(doys)¶
Filter the Data Collection based on a list of days of the year (as integers).
- Parameters:
doys – A List of days of the year [1..365]
- Returns:
A new Data Collection with filtered data
- filter_by_pattern(pattern)¶
Filter the Data Collection based on a list of booleans.
- Parameters:
pattern – A list of True/False values. Typically, this is a list with a length matching the length of the Data Collection's values but it can also be a pattern to be repeated over the Data Collection.
- Returns:
A new Data Collection with filtered data.
- filter_by_range(greater_than=-inf, less_than=inf)¶
Filter the Data Collection based on whether values fall within a given range.
This is similar to the filter_by_conditional_statement but is often much faster since it does not have all of the flexibility of the conditional statement and uses native Python operators instead of eval() statements.
- Parameters:
greater_than – A number which the data collection values should be greater than in order to be included in the output collection. (Default: Negative Infinity).
less_than – A number which the data collection values should be less than in order to be included in the output collection. (Default: Infinity).
- Returns:
A new Data Collection with filtered data.
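The same daily sketch filtered with native comparisons instead of a statement string:

mild_days = daily.filter_by_range(greater_than=10, less_than=15)
print(mild_days.values)  # >> (11, 12, 13, 14)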
- static filter_collections_by_statement(data_collections, statement)¶
Generate filtered data collections according to a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
collections – A list of Data Collections that have been filtered based on the statement.
- classmethod from_dict(data)¶
Create a Data Collection from a dictionary.
- Parameters:
data – A python dictionary in the following format
{ "header": {}, # Ladybug Header "values": [], # array of values "datetimes": [], # array of datetimes "validated_a_period": True # boolean for valid analysis_period }
- get_aligned_collection(value=0, data_type=None, unit=None, mutable=None)¶
Get a collection aligned with this one composed of one repeated value.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
value – A value to be repeated in the aligned collection values or a list of values that has the same length as this collection. Default: 0.
data_type – The data type of the aligned collection. Default is to use the data type of this collection.
unit – The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists).
mutable – An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
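A short sketch of the mutable override, again using the daily collection built above:

# same header and datetimes, every value replaced with 0
zeros = daily.get_aligned_collection(0)
print(zeros.is_mutable)  # False (follows the starting collection)

# explicitly request a mutable aligned collection
zeros_mut = daily.get_aligned_collection(0, mutable=True)
print(zeros_mut.is_mutable)  # True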
- group_by_month()¶
Return a dictionary of this collection’s values grouped by each month.
Key values are between 1-12.
- highest_values(count)¶
Get a list of the highest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the largest values of a data collection occur. For example, there is a European daylight code that requires an analysis for the hours of the year with the greatest exterior illuminance level. This method can be used to help build a schedule for such a study.
- Parameters:
count – Integer representing the number of highest values to account for.
- Returns:
A tuple with two elements.
highest_values: The n highest values in data list, ordered from highest to lowest.
highest_values_index: Indices of the n highest values in data list, ordered from highest to lowest.
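For example, to find the three largest values in the daily sketch above:

hottest, hottest_i = daily.highest_values(3)
print(hottest)    # the 3 largest values, ordered highest to lowest (16, 15, 14)
print(hottest_i)  # their indices in the values list (6, 5, 4)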
- static histogram(values, bins, key=None)¶
Compute the frequency histogram from a list of values.
The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals. See the usage example below, where the last number in the dataset is dropped because of the exclusive upper bound.
- Parameters:
values – Set of numerical data as a list.
bins – A monotonically increasing array of uniform-width bin edges, excluding the rightmost edge.
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram

# Simple example
histogram([0, 0, 0.9, 1, 1.5, 1.99, 2, 3], (0, 1, 2, 3))
# >> [[0, 0, 0.9], [1, 1.5, 1.99], [2]]

# With key parameter
histogram(
    zip([0, 0, 0.9, 1, 1.5, 1.99], ['a', 'b', 'c', 'd', 'e', 'f']),
    (0, 1, 2), key=lambda k: k[0])
# >> [[], [(0, a), (0, b), (0.9, c)], [(1, d), (1.5, e), (1.99, f)], []]
- static histogram_circular(values, bins, hist_range=None, key=None)¶
Compute the frequency histogram from a list of circular values.
Circular values refers to a set of values where there is no distinction between values at the lower or upper end of the range, for example angles in a circle, or time. The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals.
- Parameters:
values – Set of numerical data as a list.
bins – An array of uniform-width bin edges, excluding the rightmost edge. These values do not have to be monotonically increasing.
hist_range – Optional parameter to define the lower and upper range of the histogram as a tuple of numbers. If not provided, the range is (min(key(values)), max(key(values)) + 1).
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram_circular

histogram_circular([358, 359, 0, 1, 2, 3], (358, 0, 3))
# >> [[358, 359], [0, 1, 2]]
- is_collection_aligned(data_collection)¶
Check if this Data Collection is aligned with another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collection – The Data Collection for which alignment will be tested.
- Returns:
True if collections are aligned, False if not aligned
- is_in_data_type_range(raise_exception=True)¶
Check if collection values are in the range for the data_type.
If this method returns False, the collection’s values are physically or mathematically impossible for the data_type (eg. temperature below absolute zero).
- Parameters:
raise_exception – Boolean to note whether an exception should be raised if an impossible value is found. (Default: True).
- is_metadata_aligned(data_collection)¶
Check if the metadata in this Data Collection header is aligned with another.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collection – The Data Collection for which metadata alignment will be tested.
- Returns:
True if the metadata in the collections are aligned, False if not aligned.
- static linspace(start, stop, num)¶
Get evenly spaced numbers calculated over the interval start, stop.
This method is similar to native Python range except that it takes a number of divisions instead of a step. It is also equivalent to numpy’s linspace method.
- Parameters:
start – Start interval index as integer or float.
stop – Stop interval index as integer or float.
num – Number of divisions as integer.
- Returns:
A list of numbers.
Usage:
from BaseCollection import linspace

linspace(0, 5, 6)
# >> [0., 1., 2., 3., 4., 5.]
- lowest_values(count)¶
Get a list of the lowest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the smallest values of a data collection occur.
- Parameters:
count – Integer representing the number of lowest values to account for.
- Returns:
A tuple with two elements.
lowest_values: The n lowest values in data list, ordered from lowest to highest.
lowest_values_index: Indices of the n lowest values in data list, ordered from lowest to highest.
- normalize_by_area(area, area_unit)¶
Get a Data Collection that is normalized by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection does not have a normalized_type. Also note that a ZeroDivisionError will be raised if the input area is equal to 0.
- Parameters:
area – Number representing area by which all of the data is normalized.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of this datacollection’s data type.
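A sketch assuming an Energy data type (which has a normalized_type) and a hypothetical 100 m2 floor area:

from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.energy import Energy
from ladybug.datacollection import DailyCollection

a_per = AnalysisPeriod(st_month=1, st_day=1, end_month=1, end_day=3)
energy = DailyCollection(Header(Energy(), 'kWh', a_per), [50, 60, 70], [1, 2, 3])

eui = energy.normalize_by_area(100, 'm2')  # every value divided by 100 m2
print(eui.values)       # >> (0.5, 0.6, 0.7)
print(eui.header.unit)  # the unit of the normalized_type (kWh/m2)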
- static pattern_from_collections_and_statement(data_collections, statement)¶
Generate a list of booleans from data collections and a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
pattern – A list of True/False booleans with the length of the Data Collections where True meets the conditional statement and False does not.
- percentile(percentile)¶
Get a value representing the input percentile of the Data Collection.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- Returns:
The Data Collection value at the input percentile
- percentile_monthly(percentile)¶
Return a monthly collection of values at the input percentile of each month.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- to_dict()¶
Convert Data Collection to a dictionary.
- to_immutable()¶
Get an immutable version of this collection.
- to_ip()¶
Get a Data Collection in IP units.
- to_si()¶
Get a Data Collection in SI units.
- to_time_aggregated()¶
Get a collection where data has been aggregated over the collection timestep.
For example, if the collection has a Power data type in W, this method will return a collection with an Energy data type in kWh.
- to_time_rate_of_change()¶
Get a collection that has been converted to time rate of change units.
For example, if the collection has an Energy data type in kWh, this method will return a collection with a Power data type in W.
- to_unit(unit)¶
Get a Data Collection in the input unit.
- Parameters:
unit – Text for the unit to convert the data to (eg. ‘C’ or ‘kWh’). This unit must appear under the data collection’s header.data_type.units.
- total_monthly()¶
Return a monthly collection of values totaled over each month.
- validate_analysis_period()¶
Get a collection where the header analysis_period aligns with datetimes.
This means that checks for four criteria will be performed:
All days in the data collection are chronological starting from the analysis_period start day to the end day.
No duplicate days exist in the data collection.
There are no days that lie outside of the analysis_period time range.
February 29th is excluded if is_leap_year is False on the analysis_period.
Note that there is no need to run this check any time that a discontinuous data collection has been derived from a continuous one or when the validated_a_period attribute of the collection is True.
- property average¶
Get the average of the Data Collection values.
- property bounds¶
Get a tuple of two values as (min, max) of the data.
- property datetime_strings¶
Get a list of datetime strings for this collection.
These provide a human-readable way to interpret the datetimes.
- property datetimes¶
Get a tuple of datetimes for this collection, which align with the values.
- property header¶
Get the header for this collection.
- property is_continuous¶
Boolean denoting whether the data collection is continuous.
- property is_mutable¶
Boolean denoting whether the data collection is mutable.
- property max¶
Get the max of the Data Collection values.
- property median¶
Get the median of the Data Collection values.
- property min¶
Get the min of the Data Collection values.
- property total¶
Get the total of the Data Collection values.
- property validated_a_period¶
Boolean for whether the header analysis_period is validated against datetimes.
This will always be True when a collection is derived from a continuous one.
- property values¶
The Data Collection’s list of numerical values.
- class ladybug.datacollectionimmutable.HourlyContinuousCollectionImmutable(header, values)[source]¶
Bases: _ImmutableCollectionBase, HourlyContinuousCollection
Immutable Continuous Data Collection at hourly or sub-hourly intervals.
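Immutable hourly collections are rarely built by hand; they more often come from calling to_immutable() on an existing collection, for example one imported from an EPW file (the file path below is only a placeholder):

from ladybug.epw import EPW

epw = EPW('./epws/denver.epw')  # placeholder path
dbt = epw.dry_bulb_temperature   # HourlyContinuousCollection
dbt_locked = dbt.to_immutable()  # HourlyContinuousCollectionImmutable
print(dbt_locked.is_mutable)     # False
print(dbt_locked.header.unit)    # 'C'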
- ToString()¶
Overwrite .NET ToString method.
- aggregate_by_area(area, area_unit)¶
Get a Data Collection that is aggregated by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection is not a normalized_type of another data type.
- Parameters:
area – Number representing area by which all of the data is aggregated.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of destination datacollection’s data type.
- static arange(start, stop, step)¶
Return evenly spaced fractional or whole values within a given interval.
This function acts like the Python range method, but can also account for fractional values. It is equivalent to the numpy.arange function.
- Parameters:
start – Number for inclusive start of interval.
stop – Number for exclusive end of interval.
step – Number for step size of interval.
- Returns:
Generator of evenly spaced values.
Usage:
from BaseCollection import arange

arange(1, 351, 50)
# >> [1, 51, 101, 151, 201, 251, 301]
- static are_collections_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections are aligned with one another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collections – A list of Data Collections for which alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collections are not aligned with one another.
- Returns:
True if collections are aligned, False if not aligned
- static are_metadatas_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections have aligned metadata.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collections – A list of Data Collections for which metadata alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collection metadatas are not aligned with one another.
- Returns:
True if metadatas are aligned, False if not aligned
- average_daily()¶
Return a daily collection of values averaged for each day.
- average_monthly()¶
Return a monthly collection of values averaged for each month.
- average_monthly_per_hour()¶
Return a monthly per hour data collection of average values.
- static compute_function_aligned(funct, data_collections, data_type, unit)¶
Compute a function with a list of aligned data collections or values.
- Parameters:
funct – A function with a single numerical value as output and one or more numerical values as input.
data_collections – A list with a length equal to the number of arguments for the function. Items of the list can be either Data Collections or individual values to be used at each datetime of other collections.
data_type – An instance of a Ladybug data type that describes the results of the funct.
unit – The units of the funct results.
- Returns:
A Data Collection with the results of the function. If all items in the list of data_collections are individual values, only a single value will be returned.
Usage:
from ladybug.datacollection import HourlyContinuousCollection
from ladybug.epw import EPW
from ladybug.psychrometrics import humid_ratio_from_db_rh
from ladybug.datatype.percentage import HumidityRatio

epw_file_path = './epws/denver.epw'
denver_epw = EPW(epw_file_path)
pressure_at_denver = 85000
hr_inputs = [denver_epw.dry_bulb_temperature,
             denver_epw.relative_humidity,
             pressure_at_denver]
humid_ratio = HourlyContinuousCollection.compute_function_aligned(
    humid_ratio_from_db_rh, hr_inputs, HumidityRatio(), 'fraction')
# humid_ratio will be a Data Collection of humidity ratios at Denver
- convert_to_culled_timestep(timestep=1)[source]¶
This method is not available for immutable collections.
- convert_to_ip()¶
Convert the Data Collection to IP units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_ip to get a new instance of a collection without mutating this one.
- convert_to_si()¶
Convert the Data Collection to SI units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_si to get a new instance of a collection without mutating this one.
- convert_to_unit(unit)¶
Convert the Data Collection to the input unit.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_unit to get a new instance of a collection without mutating this one.
- cull_to_timestep(timestep=1)¶
Get a collection with only datetimes that fit a timestep.
- filter_by_analysis_period(analysis_period)¶
Filter the Data Collection based on an analysis period.
- Parameters:
analysis_period – A Ladybug analysis period.
- Returns:
A new Data Collection with filtered data
- filter_by_conditional_statement(statement)¶
Filter the Data Collection based on a conditional statement.
- Parameters:
statement – A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as ‘a’ (without quotations).
- Returns:
A new Data Collection containing only the filtered data
- filter_by_hoys(hoys)¶
Filter the Data Collection using a list of hours of the year (hoys).
- Parameters:
hoys – A List of hours of the year 0..8759
- Returns:
A new Data Collection with filtered data
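A sketch that uses the hoys of an AnalysisPeriod, continuing the EPW sketch above:

from ladybug.analysisperiod import AnalysisPeriod

summer_afternoons = AnalysisPeriod(
    st_month=6, st_day=1, st_hour=12, end_month=8, end_day=31, end_hour=17)
dbt_summer = dbt_locked.filter_by_hoys(summer_afternoons.hoys)
print(len(dbt_summer))  # one value per hour within the period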
- filter_by_moys(moys)¶
Filter the Data Collection based on a list of minutes of the year.
- Parameters:
moys – A List of minutes of the year [0..8759 * 60]
- Returns:
A new Data Collection with filtered data
- filter_by_pattern(pattern)¶
Filter the Data Collection based on a list of booleans.
- Parameters:
pattern – A list of True/False values. Typically, this is a list with a length matching the length of the Data Collection's values but it can also be a pattern to be repeated over the Data Collection.
- Returns:
A new Data Collection with filtered data
- filter_by_range(greater_than=-inf, less_than=inf)¶
Filter the Data Collection based on whether values fall within a given range.
This is similar to the filter_by_conditional_statement but is often much faster since it does not have all of the flexibility of the conditional statement and uses native Python operators instead of eval() statements.
- Parameters:
greater_than – A number which the data collection values should be greater than in order to be included in the output collection. (Default: Negative Infinity).
less_than – A number which the data collection values should be less than in order to be included in the output collection. (Default: Infinity).
- Returns:
A new Data Collection with filtered data.
- static filter_collections_by_statement(data_collections, statement)¶
Generate filtered data collections according to a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
collections – A list of Data Collections that have been filtered based on the statement.
- classmethod from_dict(data)¶
Create a Data Collection from a dictionary.
- Parameters:
data – A python dictionary in the following format
{ "type": HourlyContinuousCollection, "header": {}, # A Ladybug Header "values": [] # An array of values }
- get_aligned_collection(value=0, data_type=None, unit=None, mutable=None)¶
Return a Collection aligned with this one composed of one repeated value.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
value – A value to be repeated in the aligned collection values or a list of values that has the same length as this collection. Default: 0.
data_type – The data type of the aligned collection. Default is to use the data type of this collection.
unit – The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists).
mutable – An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
- group_by_day()¶
Return a dictionary of this collection’s values grouped by each day of year.
Key values are between 1-365.
- group_by_month()¶
Return a dictionary of this collection’s values grouped by each month.
Key values are between 1-12.
- group_by_month_per_hour()¶
Return a dictionary of this collection’s values grouped by each month per hour.
Key values are tuples of 3 integers.
The first represents the month of the year between 1-12.
The second represents the hour of the day between 0-24.
The third represents the minute of the hour between 0-59.
- highest_values(count)¶
Get a list of the highest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the largest values of a data collection occur. For example, there is a European daylight code that requires an analysis for the hours of the year with the greatest exterior illuminance level. This method can be used to help build a schedule for such a study.
- Parameters:
count – Integer representing the number of highest values to account for.
- Returns:
A tuple with two elements.
highest_values: The n highest values in data list, ordered from highest to lowest.
highest_values_index: Indices of the n highest values in data list, ordered from highest to lowest.
- static histogram(values, bins, key=None)¶
Compute the frequency histogram from a list of values.
The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals. See the usage example below, where the last number in the dataset is dropped because of the exclusive upper bound.
- Parameters:
values – Set of numerical data as a list.
bins – A monotonically increasing array of uniform-width bin edges, excluding the rightmost edge.
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram

# Simple example
histogram([0, 0, 0.9, 1, 1.5, 1.99, 2, 3], (0, 1, 2, 3))
# >> [[0, 0, 0.9], [1, 1.5, 1.99], [2]]

# With key parameter
histogram(
    zip([0, 0, 0.9, 1, 1.5, 1.99], ['a', 'b', 'c', 'd', 'e', 'f']),
    (0, 1, 2), key=lambda k: k[0])
# >> [[], [(0, a), (0, b), (0.9, c)], [(1, d), (1.5, e), (1.99, f)], []]
- static histogram_circular(values, bins, hist_range=None, key=None)¶
Compute the frequency histogram from a list of circular values.
Circular values refers to a set of values where there is no distinction between values at the lower or upper end of the range, for example angles in a circle, or time. The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals.
- Parameters:
values – Set of numerical data as a list.
bins – An array of uniform-width bin edges, excluding the rightmost edge. These values do not have to be monotonically increasing.
hist_range – Optional parameter to define the lower and upper range of the histogram as a tuple of numbers. If not provided, the range is (min(key(values)), max(key(values)) + 1).
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram_circular

histogram_circular([358, 359, 0, 1, 2, 3], (358, 0, 3))
# >> [[358, 359], [0, 1, 2]]
- interpolate_holes()¶
Continuous collections never have holes in the data set.
Therefore, there is no need to run this method on a continuous collection.
- interpolate_to_timestep(timestep, cumulative=None)¶
Interpolate data for a finer timestep using a linear interpolation.
- Parameters:
timestep – Target timestep as an integer. Target timestep must be divisible by current timestep.
cumulative – A boolean that sets whether the interpolation should treat the data collection values as cumulative, in which case the value at each timestep is the value over that timestep (instead of over the hour). The default will check the DataType to see if this type of data is typically cumulative over time.
- Returns:
A continuous hourly data collection with data interpolated to the input timestep.
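For example, continuing the EPW sketch above, the hourly dry bulb collection can be interpolated to a 15-minute timestep (the result is a mutable continuous collection):

fine_dbt = dbt_locked.interpolate_to_timestep(4)  # 4 steps per hour = 15 minutes
print(len(fine_dbt))  # 8760 * 4 = 35040 values
print(fine_dbt.header.analysis_period.timestep)  # 4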
- is_collection_aligned(data_collection)¶
Check if this Data Collection is aligned with another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collection – The Data Collection for which alignment will be tested.
- Returns:
True if collections are aligned, False if not aligned
- is_in_data_type_range(raise_exception=True)¶
Check if collection values are in the range for the data_type.
If this method returns False, the collection’s values are physically or mathematically impossible for the data_type (eg. temperature below absolute zero).
- Parameters:
raise_exception – Boolean to note whether an exception should be raised if an impossible value is found. (Default: True).
- is_metadata_aligned(data_collection)¶
Check if the metadata in this Data Collection header is aligned with another.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collection – The Data Collection for which metadata alignment will be tested.
- Returns:
True if the metadata in the collections are aligned, False if not aligned.
- static linspace(start, stop, num)¶
Get evenly spaced numbers calculated over the interval start, stop.
This method is similar to native Python range except that it takes a number of divisions instead of a step. It is also equivalent to numpy’s linspace method.
- Parameters:
start – Start interval index as integer or float.
stop – Stop interval index as integer or float.
num – Number of divisions as integer.
- Returns:
A list of numbers.
Usage:
from BaseCollection import linspace

linspace(0, 5, 6)
# >> [0., 1., 2., 3., 4., 5.]
- lowest_values(count)¶
Get a list of the lowest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the smallest values of a data collection occur.
- Parameters:
count – Integer representing the number of lowest values to account for.
- Returns:
A tuple with two elements.
lowest_values: The n lowest values in data list, ordered from lowest to highest.
lowest_values_index: Indices of the n lowest values in data list, ordered from lowest to highest.
- normalize_by_area(area, area_unit)¶
Get a Data Collection that is normalized by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection does not have a normalized_type. Also note that a ZeroDivisionError will be raised if the input area is equal to 0.
- Parameters:
area – Number representing area by which all of the data is normalized.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of this datacollection’s data type.
- static pattern_from_collections_and_statement(data_collections, statement)¶
Generate a list of booleans from data collections and a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
pattern – A list of True/False booleans with the length of the Data Collections where True meets the conditional statement and False does not.
- percentile(percentile)¶
Get a value representing the input percentile of the Data Collection.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- Returns:
The Data Collection value at the input percentile
- percentile_daily(percentile)¶
Return a daily collection of values at the input percentile of each day.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- percentile_monthly(percentile)¶
Return a monthly collection of values at the input percentile of each month.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- percentile_monthly_per_hour(percentile)¶
Return a monthly per hour collection of values at the input percentile.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- to_dict()¶
Convert Data Collection to a dictionary.
- to_discontinuous()¶
Return a discontinuous version of the current collection.
- to_immutable()¶
Get an immutable version of this collection.
- to_ip()¶
Get a Data Collection in IP units.
- to_si()¶
Get a Data Collection in SI units.
- to_time_aggregated()¶
Get a collection where data has been aggregated over the collection timestep.
For example, if the collection has a Power data type in W, this method will return a collection with an Energy data type in kWh.
- to_time_rate_of_change()¶
Get a collection that has been converted to time-rate-of-change units.
For example, if the collection has an Energy data type in kWh, this method will return a collection with a Power data type in W.
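A sketch of the round trip between these two methods, assuming an hourly Power collection in W with a constant placeholder value:

from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.power import Power
from ladybug.datacollection import HourlyContinuousCollection

power = HourlyContinuousCollection(
    Header(Power(), 'W', AnalysisPeriod()), [1000] * 8760)

energy = power.to_time_aggregated()     # Energy data type in kWh
print(energy.header.unit)               # 'kWh'
back = energy.to_time_rate_of_change()  # back to a Power data type in W
print(back.header.unit)                 # 'W'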
- to_unit(unit)¶
Get a Data Collection in the input unit.
- Parameters:
unit – Text for the unit to convert the data to (eg. ‘C’ or ‘kWh’). This unit must appear under the data collection’s header.data_type.units.
- total_daily()¶
Return a daily collection of values totaled over each day.
- total_monthly()¶
Return a monthly collection of values totaled over each month.
- total_monthly_per_hour()¶
Return a monthly per hour collection of totaled values.
- validate_analysis_period(overwrite_period=False)¶
All continuous collections already have valid header analysis_periods.
Therefore, this method just returns a copy of the current collection.
- property average¶
Get the average of the Data Collection values.
- property bounds¶
Get a tuple of two values as (min, max) of the data.
- property datetime_strings¶
Get a list of datetime strings for this collection.
These provide a human-readable way to interpret the datetimes.
- property datetimes¶
Return datetimes for this collection as a tuple.
- property header¶
Get the header for this collection.
- property is_continuous¶
Boolean denoting whether the data collection is continuous.
- property is_mutable¶
Boolean denoting whether the data collection is mutable.
- property max¶
Get the max of the Data Collection values.
- property median¶
Get the median of the Data Collection values.
- property min¶
Get the min of the Data Collection values.
- property moys_dict¶
Return a dictionary of this collection’s values where the keys are the moys.
This is useful for aligning the values with another list of datetimes.
- property timestep_text¶
Return a text string representing the timestep of the collection.
- property total¶
Get the total of the Data Collection values.
- property validated_a_period¶
Boolean for whether the header analysis_period is validated against datetimes.
This will always be True when a collection is derived from a continuous one.
- property values¶
The Data Collection’s list of numerical values.
- class ladybug.datacollectionimmutable.HourlyDiscontinuousCollectionImmutable(header, values, datetimes)[source]¶
Bases: _ImmutableCollectionBase, HourlyDiscontinuousCollection
Immutable Discontinuous Data Collection at hourly or sub-hourly intervals.
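Discontinuous immutable collections usually result from filtering a continuous collection and then locking the result, for example (the EPW path is only a placeholder):

from ladybug.epw import EPW

epw = EPW('./epws/denver.epw')  # placeholder path
above_freezing = epw.dry_bulb_temperature.filter_by_conditional_statement('a > 0')
locked = above_freezing.to_immutable()  # HourlyDiscontinuousCollectionImmutable
print(locked.is_continuous)  # False
print(locked.is_mutable)     # False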
- ToString()¶
Overwrite .NET ToString method.
- aggregate_by_area(area, area_unit)¶
Get a Data Collection that is aggregated by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection is not a normalized_type of another data type.
- Parameters:
area – Number representing area by which all of the data is aggregated.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of destination datacollection’s data type.
- static arange(start, stop, step)¶
Return evenly spaced fractional or whole values within a given interval.
This function acts like the Python range method, but can also account for fractional values. It is equivalent to the numpy.arange function.
- Parameters:
start – Number for inclusive start of interval.
stop – Number for exclusive end of interval.
step – Number for step size of interval.
- Returns:
Generator of evenly spaced values.
Usage:
from BaseCollection import arange

arange(1, 351, 50)
# >> [1, 51, 101, 151, 201, 251, 301]
- static are_collections_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections are aligned with one another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collections – A list of Data Collections for which alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collections are not aligned with one another.
- Returns:
True if collections are aligned, False if not aligned
- static are_metadatas_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections have aligned metadata.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collections – A list of Data Collections for which metadata alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collection metadatas are not aligned with one another.
- Returns:
True if metadatas are aligned, False if not aligned
- average_daily()¶
Return a daily collection of values averaged for each day.
- average_monthly()¶
Return a monthly collection of values averaged for each month.
- average_monthly_per_hour()¶
Return a monthly per hour data collection of average values.
- static compute_function_aligned(funct, data_collections, data_type, unit)¶
Compute a function with a list of aligned data collections or values.
- Parameters:
funct – A function with a single numerical value as output and one or more numerical values as input.
data_collections – A list with a length equal to the number of arguments for the function. Items of the list can be either Data Collections or individual values to be used at each datetime of other collections.
data_type – An instance of a Ladybug data type that describes the results of the funct.
unit – The units of the funct results.
- Returns:
A Data Collection with the results of the function. If all items in the list of data_collections are individual values, only a single value will be returned.
Usage:
from ladybug.datacollection import HourlyContinuousCollection
from ladybug.epw import EPW
from ladybug.psychrometrics import humid_ratio_from_db_rh
from ladybug.datatype.percentage import HumidityRatio

epw_file_path = './epws/denver.epw'
denver_epw = EPW(epw_file_path)
pressure_at_denver = 85000
hr_inputs = [denver_epw.dry_bulb_temperature,
             denver_epw.relative_humidity,
             pressure_at_denver]
humid_ratio = HourlyContinuousCollection.compute_function_aligned(
    humid_ratio_from_db_rh, hr_inputs, HumidityRatio(), 'fraction')
# humid_ratio will be a Data Collection of humidity ratios at Denver
- convert_to_culled_timestep(timestep=1)[source]¶
This method is not available for immutable collections.
- convert_to_ip()¶
Convert the Data Collection to IP units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_ip to get a new instance of a collection without mutating this one.
- convert_to_si()¶
Convert the Data Collection to SI units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_si to get a new instance of a collection without mutating this one.
- convert_to_unit(unit)¶
Convert the Data Collection to the input unit.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_unit to get a new instance of a collection without mutating this one.
- cull_to_timestep(timestep=1)¶
Get a collection with only datetimes that fit a timestep.
- duplicate()¶
Get a copy of this Data Collection.
- filter_by_analysis_period(analysis_period)¶
Filter a Data Collection based on an analysis period.
- Parameters:
analysis_period – A Ladybug analysis period.
- Returns:
A new Data Collection with filtered data.
- filter_by_conditional_statement(statement)¶
Filter the Data Collection based on a conditional statement.
- Parameters:
statement – A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as ‘a’ (without quotations).
- Returns:
A new Data Collection containing only the filtered data.
- filter_by_hoys(hoys)¶
Filter the Data Collection using a list of hours of the year (hoys).
- Parameters:
hoys – A List of hours of the year 0..8759
- Returns:
A new Data Collection with filtered data
- filter_by_moys(moys)¶
Filter the Data Collection based on a list of minutes of the year.
- Parameters:
moys – A List of minutes of the year [0..8759 * 60]
- Returns:
A new Data Collection with filtered data
- filter_by_pattern(pattern)¶
Filter the Data Collection based on a list of booleans.
- Parameters:
pattern – A list of True/False values. Typically, this is a list with a length matching the length of the Data Collection's values but it can also be a pattern to be repeated over the Data Collection.
- Returns:
A new Data Collection with filtered data.
- filter_by_range(greater_than=-inf, less_than=inf)¶
Filter the Data Collection based on whether values fall within a given range.
This is similar to the filter_by_conditional_statement but is often much faster since it does not have all of the flexibility of the conditional statement and uses native Python operators instead of eval() statements.
- Parameters:
greater_than – A number which the data collection values should be greater than in order to be included in the output collection. (Default: Negative Infinity).
less_than – A number which the data collection values should be less than in order to be included in the output collection. (Default: Infinity).
- Returns:
A new Data Collection with filtered data.
- static filter_collections_by_statement(data_collections, statement)¶
Generate filtered data collections according to a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
collections – A list of Data Collections that have been filtered based on the statement.
- classmethod from_dict(data)¶
Create a Data Collection from a dictionary.
- Parameters:
data – A python dictionary in the following format
{ "type": "HourlyDiscontinuous", "header": {}, # Ladybug Header "values": [], # array of values "datetimes": [], # array of datetimes "validated_a_period": True # boolean for valid analysis_period }
- get_aligned_collection(value=0, data_type=None, unit=None, mutable=None)¶
Get a collection aligned with this one composed of one repeated value.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
value – A value to be repeated in the aligned collection values or a list of values that has the same length as this collection. Default: 0.
data_type – The data type of the aligned collection. Default is to use the data type of this collection.
unit – The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists).
mutable – An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
- group_by_day()¶
Return a dictionary of this collection’s values grouped by each day of year.
Key values are between 1-365.
- group_by_month()¶
Return a dictionary of this collection’s values grouped by each month.
Key values are between 1-12.
- group_by_month_per_hour()¶
Return a dictionary of this collection’s values grouped by each month per hour.
Key values are tuples of 3 integers.
The first represents the month of the year between 1-12.
The second represents the hour of the day between 0-24.
The third represents the minute of the hour between 0-59.
- highest_values(count)¶
Get a list of the highest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the largest values of a data collection occur. For example, there is a European daylight code that requires an analysis for the hours of the year with the greatest exterior illuminance level. This method can be used to help build a schedule for such a study.
- Parameters:
count – Integer representing the number of highest values to account for.
- Returns:
A tuple with two elements.
highest_values: The n highest values in data list, ordered from highest to lowest.
highest_values_index: Indices of the n highest values in data list, ordered from highest to lowest.
- static histogram(values, bins, key=None)¶
Compute the frequency histogram from a list of values.
The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals. See the usage example below, where the last number in the dataset is dropped because of the exclusive upper bound.
- Parameters:
values – Set of numerical data as a list.
bins – A monotonically increasing array of uniform-width bin edges, excluding the rightmost edge.
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram

# Simple example
histogram([0, 0, 0.9, 1, 1.5, 1.99, 2, 3], (0, 1, 2, 3))
# >> [[0, 0, 0.9], [1, 1.5, 1.99], [2]]

# With key parameter
histogram(
    zip([0, 0, 0.9, 1, 1.5, 1.99], ['a', 'b', 'c', 'd', 'e', 'f']),
    (0, 1, 2), key=lambda k: k[0])
# >> [[], [(0, a), (0, b), (0.9, c)], [(1, d), (1.5, e), (1.99, f)], []]
- static histogram_circular(values, bins, hist_range=None, key=None)¶
Compute the frequency histogram from a list of circular values.
Circular values refers to a set of values where there is no distinction between values at the lower or upper end of the range, for example angles in a circle, or time. The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals.
- Parameters:
values – Set of numerical data as a list.
bins – An array of uniform-width bin edges, excluding the rightmost edge. These values do not have to be monotonically increasing.
hist_range – Optional parameter to define the lower and upper range of the histogram as a tuple of numbers. If not provided, the range is (min(key(values)), max(key(values)) + 1).
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram_circular

histogram_circular([358, 359, 0, 1, 2, 3], (358, 0, 3))
# >> [[358, 359], [0, 1, 2]]
- interpolate_holes()¶
Linearly interpolate over holes in this collection to make it continuous.
- Returns:
continuous_collection – A HourlyContinuousCollection with the same data as this collection but with missing data filled by means of a linear interpolation.
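Continuing the filtering sketch above, the hours that were removed can be filled back in by linear interpolation:

continuous_again = locked.interpolate_holes()
print(continuous_again.is_continuous)  # True
print(len(continuous_again))  # one value per hour of the header analysis_period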
- is_collection_aligned(data_collection)¶
Check if this Data Collection is aligned with another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collection – The Data Collection for which alignment will be tested.
- Returns:
True if collections are aligned, False if not aligned
- is_in_data_type_range(raise_exception=True)¶
Check if collection values are in the range for the data_type.
If this method returns False, the collection’s values are physically or mathematically impossible for the data_type (eg. temperature below absolute zero).
- Parameters:
raise_exception – Boolean to note whether an exception should be raised if an impossible value is found. (Default: True).
- is_metadata_aligned(data_collection)¶
Check if the metadata in this Data Collection header is aligned with another.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collection – The Data Collection for which metadata alignment will be tested.
- Returns:
True if the metadata in the collections are aligned, False if not aligned.
- static linspace(start, stop, num)¶
Get evenly spaced numbers calculated over the interval start, stop.
This method is similar to native Python range except that it takes a number of divisions instead of a step. It is also equivalent to numpy’s linspace method.
- Parameters:
start – Start interval index as integer or float.
stop – Stop interval index as integer or float.
num – Number of divisions as integer.
- Returns:
A list of numbers.
Usage:
from BaseCollection import linspace

linspace(0, 5, 6)
# >> [0., 1., 2., 3., 4., 5.]
- lowest_values(count)¶
Get a list of the lowest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the smallest values of a data collection occur.
- Parameters:
count – Integer representing the number of lowest values to account for.
- Returns:
A tuple with two elements.
lowest_values: The n lowest values in data list, ordered from lowest to highest.
lowest_values_index: Indices of the n lowest values in data list, ordered from lowest to highest.
- normalize_by_area(area, area_unit)¶
Get a Data Collection that is normalized by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection does not have a normalized_type. Also note that a ZeroDivisionError will be raised if the input area is equal to 0.
- Parameters:
area – Number representing area by which all of the data is normalized.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of this datacollection’s data type.
- static pattern_from_collections_and_statement(data_collections, statement)¶
Generate a list of booleans from data collections and a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
pattern – A list of True/False booleans with the length of the Data Collections where True meets the conditional statement and False does not.
- percentile(percentile)¶
Get a value representing the input percentile of the Data Collection.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- Returns:
The Data Collection value at the input percentile
- percentile_daily(percentile)¶
Return a daily collection of values at the input percentile of each day.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- percentile_monthly(percentile)¶
Return a monthly collection of values at the input percentile of each month.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- percentile_monthly_per_hour(percentile)¶
Return a monthly per hour collection of values at the input percentile.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- to_dict()¶
Convert Data Collection to a dictionary.
- to_immutable()¶
Get an immutable version of this collection.
- to_ip()¶
Get a Data Collection in IP units.
- to_si()¶
Get a Data Collection in SI units.
- to_time_aggregated()¶
Get a collection where data has been aggregated over the collection timestep.
For example, if the collection has a Power data type in W, this method will return a collection with an Energy data type in kWh.
- to_time_rate_of_change()¶
Get a collection that has been converted to time-rate-of-change units.
For example, if the collection has an Energy data type in kWh, this method will return a collection with a Power data type in W.
- to_unit(unit)¶
Get a Data Collection in the input unit.
- Parameters:
unit – Text for the unit to convert the data to (eg. ‘C’ or ‘kWh’). This unit must appear under the data collection’s header.data_type.units.
- total_daily()¶
Return a daily collection of values totaled over each day.
- total_monthly()¶
Return a monthly collection of values totaled over each month.
- total_monthly_per_hour()¶
Return a monthly per hour collection of totaled values.
- validate_analysis_period()¶
Get a collection where the header analysis_period aligns with datetimes.
This means that checks for five criteria will be performed:
All datetimes in the data collection are in chronological order starting from the analysis_period start hour to the end hour.
No duplicate datetimes exist in the data collection.
There are no datetimes that lie outside of the analysis_period time range.
There are no datetimes that do not align with the analysis_period timestep.
Datetimes for February 29th are excluded if is_leap_year is False on the analysis_period.
Note that there is no need to run this check any time that a discontinuous data collection has been derived from a continuous one or when the validated_a_period attribute of the collection is True. Furthermore, most methods on this data collection will still run without a validated analysis_period.
- property average¶
Get the average of the Data Collection values.
- property bounds¶
Get a tuple of two values as (min, max) of the data.
- property datetime_strings¶
Get a list of datetime strings for this collection.
These provide a human-readable way to interpret the datetimes.
- property datetimes¶
Get a tuple of datetimes for this collection, which align with the values.
- property header¶
Get the header for this collection.
- property is_continuous¶
Boolean denoting whether the data collection is continuous.
- property is_mutable¶
Boolean denoting whether the data collection is mutable.
- property max¶
Get the max of the Data Collection values.
- property median¶
Get the median of the Data Collection values.
- property min¶
Get the min of the Data Collection values.
- property moys_dict¶
Return a dictionary of this collection’s values where the keys are the moys (minutes of the year).
This is useful for aligning the values with another list of datetimes.
- property timestep_text¶
Return a text string representing the timestep of the collection.
- property total¶
Get the total of the Data Collection values.
- property validated_a_period¶
Boolean for whether the header analysis_period is validated against datetimes.
This will always be True when a collection is derived from a continuous one.
- property values¶
The Data Collection’s list of numerical values.
- class ladybug.datacollectionimmutable.MonthlyCollectionImmutable(header, values, datetimes)[source]¶
Bases:
_ImmutableCollectionBase
,MonthlyCollection
Immutable Monthly Data Collection.
- ToString()¶
Overwrite .NET ToString method.
- aggregate_by_area(area, area_unit)¶
Get a Data Collection that is aggregated by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection is not a normalized_type of another data type.
- Parameters:
area – Number representing area by which all of the data is aggregated.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of destination datacollection’s data type.
- static arange(start, stop, step)¶
Return evenly spaced fractional or whole values within a given interval.
This function acts like the Python range method, but can also account for fractional values. It is equivalent to the numpy.arange function.
- Parameters:
start – Number for inclusive start of interval.
stop – Number for exclusive end of interval.
step – Number for step size of interval.
- Returns:
Generator of evenly spaced values.
Usage:
from BaseCollection import arange

arange(1, 351, 50)
# >> [1, 51, 101, 151, 201, 251, 301]
- static are_collections_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections are aligned with one another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collections – A list of Data Collections for which alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collections are not aligned with one another.
- Returns:
True if collections are aligned, False if not aligned
- static are_metadatas_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections have aligned metadata.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collections – A list of Data Collections for which metadata alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collection metadatas are not aligned with one another.
- Returns:
True if metadatas are aligned, False if not aligned
- static compute_function_aligned(funct, data_collections, data_type, unit)¶
Compute a function with a list of aligned data collections or values.
- Parameters:
funct – A function with a single numerical value as output and one or more numerical values as input.
data_collections – A list with a length equal to the number of arguments for the function. Items of the list can be either Data Collections or individual values to be used at each datetime of other collections.
data_type – An instance of a Ladybug data type that describes the results of the funct.
unit – The units of the funct results.
- Returns:
A Data Collection with the results of the function. If all items in this list of data_collections are individual values, only a single value will be returned.
Usage:
from ladybug.datacollection import HourlyContinuousCollection
from ladybug.epw import EPW
from ladybug.psychrometrics import humid_ratio_from_db_rh
from ladybug.datatype.percentage import HumidityRatio

epw_file_path = './epws/denver.epw'
denver_epw = EPW(epw_file_path)
pressure_at_denver = 85000
hr_inputs = [denver_epw.dry_bulb_temperature,
             denver_epw.relative_humidity,
             pressure_at_denver]
humid_ratio = HourlyContinuousCollection.compute_function_aligned(
    humid_ratio_from_db_rh, hr_inputs, HumidityRatio(), 'fraction')
# humid_ratio will be a Data Collection of humidity ratios at Denver
- convert_to_ip()¶
Convert the Data Collection to IP units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_ip to get a new instance of a collection without mutating this one.
- convert_to_si()¶
Convert the Data Collection to SI units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_si to get a new instance of a collection without mutating this one.
- convert_to_unit(unit)¶
Convert the Data Collection to the input unit.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_unit to get a new instance of a collection without mutating this one.
- duplicate()¶
Get a copy of this Data Collection.
- filter_by_analysis_period(analysis_period)¶
Filter the Data Collection based on an analysis period.
- Parameters:
analysis_period – A Ladybug analysis period.
- Returns:
A new Data Collection with filtered data
- filter_by_conditional_statement(statement)¶
Filter the Data Collection based on a conditional statement.
- Parameters:
statement – A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as ‘a’ (without quotations).
- Returns:
A new Data Collection containing only the filtered data.
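A minimal sketch, assuming an EPW file at the path shown:
from ladybug.epw import EPW

dbt = EPW('./epws/denver.epw').dry_bulb_temperature
mild_hours = dbt.filter_by_conditional_statement('a > 18 and a < 24')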
- filter_by_months(months)¶
Filter the Data Collection based on a list of months of the year (as integers).
- Parameters:
months – A List of months of the year [1..12]
- Returns:
A new Data Collection with filtered data
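A minimal sketch with assumed monthly values (not from the library docs):
from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.energy import Energy
from ladybug.datacollection import MonthlyCollection

header = Header(Energy(), 'kWh', AnalysisPeriod())
energy = MonthlyCollection(header, [120.0] * 12, list(range(1, 13)))
winter = energy.filter_by_months([12, 1, 2])   # December, January, February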
- filter_by_pattern(pattern)¶
Filter the Data Collection based on a list of booleans.
- Parameters:
pattern – A list of True/False values. Typically, this is a list with a length matching the length of the Data Collection’s values, but it can also be a pattern to be repeated over the Data Collection.
- Returns:
A new Data Collection with filtered data.
- filter_by_range(greater_than=-inf, less_than=inf)¶
Filter the Data Collection based on whether values fall within a given range.
This is similar to the filter_by_conditional_statement but is often much faster since it does not have all of the flexibility of the conditional statement and uses native Python operators instead of eval() statements.
- Parameters:
greater_than – A number which the data collection values should be greater than in order to be included in the output collection. (Default: Negative Infinity).
less_than – A number which the data collection values should be less than in order to be included in the output collection. (Default: Infinity).
- Returns:
A new Data Collection with filtered data.
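A minimal sketch, assuming an EPW file at the path shown; it produces the same result as the conditional-statement example above without relying on eval():
from ladybug.epw import EPW

dbt = EPW('./epws/denver.epw').dry_bulb_temperature
mild_hours = dbt.filter_by_range(greater_than=18, less_than=24)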
- static filter_collections_by_statement(data_collections, statement)¶
Generate filtered data collections according to a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
collections – A list of Data Collections that have been filtered based on the statement.
- classmethod from_dict(data)¶
Create a Data Collection from a dictionary.
- Parameters:
data – A python dictionary in the following format
{ "header": {}, # Ladybug Header "values": [], # array of values "datetimes": [], # array of datetimes "validated_a_period": True # boolean for valid analysis_period }
- get_aligned_collection(value=0, data_type=None, unit=None, mutable=None)¶
Get a collection aligned with this one composed of one repeated value.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
value – A value to be repeated in the aligned collection values or a list of values that has the same length as this collection. Default: 0.
data_type – The data type of the aligned collection. Default is to use the data type of this collection.
unit – The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists).
mutable – An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
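A minimal sketch with assumed monthly values, showing the default mutability and the override:
from ladybug.header import Header
from ladybug.analysisperiod import AnalysisPeriod
from ladybug.datatype.energy import Energy
from ladybug.datacollectionimmutable import MonthlyCollectionImmutable

header = Header(Energy(), 'kWh', AnalysisPeriod())
energy = MonthlyCollectionImmutable(header, [120.0] * 12, list(range(1, 13)))

baseline = energy.get_aligned_collection(value=0)                 # follows source mutability
editable = energy.get_aligned_collection(value=0, mutable=True)   # explicit override
print(baseline.is_mutable, editable.is_mutable)   # expected: False True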
- highest_values(count)¶
Get a list of the highest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the largest values of a data collection occur. For example, there is a European daylight code that requires an analysis for the hours of the year with the greatest exterior illuminance level. This method can be used to help build a schedule for such a study.
- Parameters:
count – Integer representing the number of highest values to account for.
- Returns:
A tuple with two elements.
highest_values: The n highest values in data list, ordered from highest to lowest.
highest_values_index: Indices of the n highest values in data list, ordered from highest to lowest.
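A minimal sketch, assuming an EPW file at the path shown:
from ladybug.epw import EPW

dbt = EPW('./epws/denver.epw').dry_bulb_temperature
hot_vals, hot_idx = dbt.highest_values(10)
hot_hours = [dbt.datetimes[i] for i in hot_idx]   # datetimes of the 10 hottest hours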
- static histogram(values, bins, key=None)¶
Compute the frequency histogram from a list of values.
The data is binned inclusive of the lower bound but exclusive of the upper bound of each interval. See the usage below for an example where the last number in the dataset is dropped because of the exclusive upper bound.
- Parameters:
values – Set of numerical data as a list.
bins – A monotonically increasing array of uniform-width bin edges, excluding the rightmost edge.
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram

# Simple example
histogram([0, 0, 0.9, 1, 1.5, 1.99, 2, 3], (0, 1, 2, 3))
# >> [[0, 0, 0.9], [1, 1.5, 1.99], [2]]

# With key parameter
histogram(
    zip([0, 0, 0.9, 1, 1.5, 1.99], ['a', 'b', 'c', 'd', 'e', 'f']),
    (0, 1, 2),
    key=lambda k: k[0])
# >> [[], [(0, a), (0, b), (0.9, c)], [(1, d), (1.5, e), (1.99, f)], []]
- static histogram_circular(values, bins, hist_range=None, key=None)¶
Compute the frequency histogram from a list of circular values.
Circular values refer to a set of values where there is no distinction between values at the lower or upper end of the range, for example angles in a circle, or time. The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals.
- Parameters:
values – Set of numerical data as a list.
bins – An array of uniform-width bin edges, excluding the rightmost edge. These values do not have to be monotonically increasing.
hist_range – Optional parameter to define the lower and upper range of the histogram as a tuple of numbers. If not provided, the range is (min(key(values)), max(key(values)) + 1).
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram_circular

histogram_circular([358, 359, 0, 1, 2, 3], (358, 0, 3))
# >> [[358, 359], [0, 1, 2]]
- is_collection_aligned(data_collection)¶
Check if this Data Collection is aligned with another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collection – The Data Collection for which alignment will be tested.
- Returns:
True if collections are aligned, False if not aligned
- is_in_data_type_range(raise_exception=True)¶
Check if collection values are in the range for the data_type.
If this method returns False, the collection’s values are physically or mathematically impossible for the data_type (eg. temperature below absolute zero).
- Parameters:
raise_exception – Boolean to note whether an exception should be raised if an impossible value is found. (Default: True).
- is_metadata_aligned(data_collection)¶
Check if the metadata in this Data Collection header is aligned with another.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collection – The Data Collection for which metadata alignment will be tested.
- Returns:
True if the metadata in the collections are aligned, False if not aligned.
- static linspace(start, stop, num)¶
Get evenly spaced numbers calculated over the interval start, stop.
This method is similar to native Python range except that it takes a number of divisions instead of a step. It is also equivalent to numpy’s linspace method.
- Parameters:
start – Start interval index as integer or float.
stop – Stop interval index as integer or float.
num – Number of divisions as integer.
- Returns:
A list of numbers.
Usage:
from BaseCollection import linspace

linspace(0, 5, 6)
# >> [0., 1., 2., 3., 4., 5.]
- lowest_values(count)¶
Get a list of the lowest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the smallest values of a data collection occur.
- Parameters:
count – Integer representing the number of lowest values to account for.
- Returns:
A tuple with two elements.
lowest_values: The n lowest values in data list, ordered from lowest to highest.
lowest_values_index: Indices of the n lowest values in data list, ordered from lowest to highest.
- normalize_by_area(area, area_unit)¶
Get a Data Collection that is normalized by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection does not have a normalized_type. Also note that a ZeroDivisionError will be raised if the input area is equal to 0.
- Parameters:
area – Number representing area by which all of the data is normalized.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of this datacollection’s data type.
- static pattern_from_collections_and_statement(data_collections, statement)¶
Generate a list of booleans from data collections and a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
pattern – A list of True/False booleans with the length of the Data Collections where True meets the conditional statement and False does not.
- percentile(percentile)¶
Get a value representing the input percentile of the Data Collection.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- Returns:
The Data Collection value at the input percentile
- to_dict()¶
Convert Data Collection to a dictionary.
- to_immutable()¶
Get an immutable version of this collection.
- to_ip()¶
Get a Data Collection in IP units.
- to_si()¶
Get a Data Collection in SI units.
- to_unit(unit)¶
Get a Data Collection in the input unit.
- Parameters:
unit – Text for the unit to convert the data to (eg. ‘C’ or ‘kWh’). This unit must appear under the data collection’s header.data_type.units.
- validate_analysis_period()¶
Get a collection where the header analysis_period aligns with datetimes.
This means that checks for three criteria will be performed:
All months in the data collection are chronological starting from the analysis_period start month to the end month.
No duplicate months exist in the data collection.
There are no months that lie outside of the analysis_period range.
Note that there is no need to run this check any time that a data collection has been derived from a continuous one or when the validated_a_period attribute of the collection is True.
- property average¶
Get the average of the Data Collection values.
- property bounds¶
Get a tuple of two values as (min, max) of the data.
- property datetime_strings¶
Get a list of datetime strings for this collection.
These provide a human-readable way to interpret the datetimes.
- property datetimes¶
Get a tuple of datetimes for this collection, which align with the values.
- property header¶
Get the header for this collection.
- property is_continuous¶
Boolean denoting whether the data collection is continuous.
- property is_mutable¶
Boolean denoting whether the data collection is mutable.
- property max¶
Get the max of the Data Collection values.
- property median¶
Get the median of the Data Collection values.
- property min¶
Get the min of the Data Collection values.
- property total¶
Get the total of the Data Collection values.
- property validated_a_period¶
Boolean for whether the header analysis_period is validated against datetimes.
This will always be True when a collection is derived from a continuous one.
- property values¶
The Data Collection’s list of numerical values.
- class ladybug.datacollectionimmutable.MonthlyPerHourCollectionImmutable(header, values, datetimes)[source]¶
Bases:
_ImmutableCollectionBase
,MonthlyPerHourCollection
Immutable Monthly Per Hour Data Collection.
- ToString()¶
Overwrite .NET ToString method.
- aggregate_by_area(area, area_unit)¶
Get a Data Collection that is aggregated by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection is not a normalized_type of another data type.
- Parameters:
area – Number representing area by which all of the data is aggregated.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of destination datacollection’s data type.
- static arange(start, stop, step)¶
Return evenly spaced fractional or whole values within a given interval.
This function acts like the Python range method, but can also account for fractional values. It is equivalent to the numpy.arange function.
- Parameters:
start – Number for inclusive start of interval.
stop – Number for exclusive end of interval.
step – Number for step size of interval.
- Returns:
Generator of evenly spaced values.
Usage:
from BaseCollection import arange

arange(1, 351, 50)
# >> [1, 51, 101, 151, 201, 251, 301]
- static are_collections_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections are aligned with one another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collections – A list of Data Collections for which alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collections are not aligned with one another.
- Returns:
True if collections are aligned, False if not aligned
- static are_metadatas_aligned(data_collections, raise_exception=True)¶
Test if a series of Data Collections have aligned metadata.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collections – A list of Data Collections for which metadata alignment will be tested.
raise_exception – Boolean to note if an exception should be raised when collection metadatas are not aligned with one another.
- Returns:
True if metadatas are aligned, False if not aligned
- static compute_function_aligned(funct, data_collections, data_type, unit)¶
Compute a function with a list of aligned data collections or values.
- Parameters:
funct – A function with a single numerical value as output and one or more numerical values as input.
data_collections – A list with a length equal to the number of arguments for the function. Items of the list can be either Data Collections or individual values to be used at each datetime of other collections.
data_type – An instance of a Ladybug data type that describes the results of the funct.
unit – The units of the funct results.
- Returns:
A Data Collection with the results of the function. If all items in this list of data_collections are individual values, only a single value will be returned.
Usage:
from ladybug.datacollection import HourlyContinuousCollection
from ladybug.epw import EPW
from ladybug.psychrometrics import humid_ratio_from_db_rh
from ladybug.datatype.percentage import HumidityRatio

epw_file_path = './epws/denver.epw'
denver_epw = EPW(epw_file_path)
pressure_at_denver = 85000
hr_inputs = [denver_epw.dry_bulb_temperature,
             denver_epw.relative_humidity,
             pressure_at_denver]
humid_ratio = HourlyContinuousCollection.compute_function_aligned(
    humid_ratio_from_db_rh, hr_inputs, HumidityRatio(), 'fraction')
# humid_ratio will be a Data Collection of humidity ratios at Denver
- convert_to_ip()¶
Convert the Data Collection to IP units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_ip to get a new instance of a collection without mutating this one.
- convert_to_si()¶
Convert the Data Collection to SI units.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_si to get a new instance of a collection without mutating this one.
- convert_to_unit(unit)¶
Convert the Data Collection to the input unit.
Note that this mutates the data collection object, which can have unintended consequences depending on how the data collection is used. Use to_unit to get a new instance of a collection without mutating this one.
- duplicate()¶
Get a copy of this Data Collection.
- filter_by_analysis_period(analysis_period)¶
Filter the Data Collection based on an analysis period.
- Parameters:
analysis_period – A Ladybug analysis period.
- Returns:
A new Data Collection with filtered data
- filter_by_conditional_statement(statement)¶
Filter the Data Collection based on a conditional statement.
- Parameters:
statement – A conditional statement as a string (e.g. a > 25 and a%5 == 0). The variable should always be named as ‘a’ (without quotations).
- Returns:
A new Data Collection containing only the filtered data.
- filter_by_months_per_hour(months_per_hour)¶
Filter the Data Collection based on a list of months per hour (as tuples).
- Parameters:
months_per_hour – A list of tuples representing months per hour. Each tuple should possess three values: the first is the month, the second is the hour and the third is the minute. (eg. (12, 23, 30) = December at 11:30 PM)
- Returns:
A new Data Collection with filtered data
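A minimal sketch, assuming an EPW file at the path shown and the (month, hour, minute) tuple format described in the parameter above:
from ladybug.epw import EPW

dbt = EPW('./epws/denver.epw').dry_bulb_temperature
dbt_mph = dbt.average_monthly_per_hour()    # monthly-per-hour averages of the hourly data

# December evening hours (6, 7 and 8 PM)
dec_evenings = dbt_mph.filter_by_months_per_hour(
    [(12, 18, 0), (12, 19, 0), (12, 20, 0)])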
- filter_by_pattern(pattern)¶
Filter the Data Collection based on a list of booleans.
- Parameters:
pattern – A list of True/False values. Typically, this is a list with a length matching the length of the Data Collection’s values, but it can also be a pattern to be repeated over the Data Collection.
- Returns:
A new Data Collection with filtered data.
- filter_by_range(greater_than=-inf, less_than=inf)¶
Filter the Data Collection based on whether values fall within a given range.
This is similar to the filter_by_conditional_statement but is often much faster since it does not have all of the flexibility of the conditional statement and uses native Python operators instead of eval() statements.
- Parameters:
greater_than – A number which the data collection values should be greater than in order to be included in the output collection. (Default: Negative Infinity).
less_than – A number which the data collection values should be less than in order to be included in the output collection. (Default: Infinity).
- Returns:
A new Data Collection with filtered data.
- static filter_collections_by_statement(data_collections, statement)¶
Generate filtered data collections according to a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
collections – A list of Data Collections that have been filtered based on the statement.
- classmethod from_dict(data)¶
Create a Data Collection from a dictionary.
- Parameters:
data – A python dictionary in the following format
{ "header": {}, # Ladybug Header "values": [], # array of values "datetimes": [], # array of datetimes "validated_a_period": True # boolean for valid analysis_period }
- get_aligned_collection(value=0, data_type=None, unit=None, mutable=None)¶
Get a collection aligned with this one composed of one repeated value.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
value – A value to be repeated in the aligned collection values or a list of values that has the same length as this collection. Default: 0.
data_type – The data type of the aligned collection. Default is to use the data type of this collection.
unit – The unit of the aligned collection. Default is to use the unit of this collection or the base unit of the input data_type (if it exists).
mutable – An optional Boolean to set whether the returned aligned collection is mutable (True) or immutable (False). The default is None, which will simply set the aligned collection to have the same mutability as the starting collection.
- highest_values(count)¶
Get a list of the highest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the largest values of a data collection occur. For example, there is a European daylight code that requires an analysis for the hours of the year with the greatest exterior illuminance level. This method can be used to help build a schedule for such a study.
- Parameters:
count – Integer representing the number of highest values to account for.
- Returns:
A tuple with two elements.
highest_values: The n highest values in data list, ordered from highest to lowest.
highest_values_index: Indices of the n highest values in data list, ordered from highest to lowest.
- static histogram(values, bins, key=None)¶
Compute the frequency histogram from a list of values.
The data is binned inclusive of the lower bound but exclusive of the upper bound of each interval. See the usage below for an example where the last number in the dataset is dropped because of the exclusive upper bound.
- Parameters:
values – Set of numerical data as a list.
bins – A monotonically increasing array of uniform-width bin edges, excluding the rightmost edge.
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram

# Simple example
histogram([0, 0, 0.9, 1, 1.5, 1.99, 2, 3], (0, 1, 2, 3))
# >> [[0, 0, 0.9], [1, 1.5, 1.99], [2]]

# With key parameter
histogram(
    zip([0, 0, 0.9, 1, 1.5, 1.99], ['a', 'b', 'c', 'd', 'e', 'f']),
    (0, 1, 2),
    key=lambda k: k[0])
# >> [[], [(0, a), (0, b), (0.9, c)], [(1, d), (1.5, e), (1.99, f)], []]
- static histogram_circular(values, bins, hist_range=None, key=None)¶
Compute the frequency histogram from a list of circular values.
Circular values refer to a set of values where there is no distinction between values at the lower or upper end of the range, for example angles in a circle, or time. The data is binned inclusive of the lower bound but exclusive of the upper bound for intervals.
- Parameters:
values – Set of numerical data as a list.
bins – An array of uniform-width bin edges, excluding the rightmost edge. These values do not have to be monotonically increasing.
hist_range – Optional parameter to define the lower and upper range of the histogram as a tuple of numbers. If not provided, the range is (min(key(values)), max(key(values)) + 1).
key – Optional parameter to define a key to bin values by, as a function. If not provided, the histogram will be binned by the value.
- Returns:
A list of lists representing the ordered values binned by frequency.
Usage:
from BaseCollection import histogram_circular

histogram_circular([358, 359, 0, 1, 2, 3], (358, 0, 3))
# >> [[358, 359], [0, 1, 2]]
- is_collection_aligned(data_collection)¶
Check if this Data Collection is aligned with another.
Aligned Data Collections are of the same Data Collection class, have the same number of values and have matching datetimes.
- Parameters:
data_collection – The Data Collection for which alignment will be tested.
- Returns:
True if collections are aligned, False if not aligned
- is_in_data_type_range(raise_exception=True)¶
Check if collection values are in the range for the data_type.
If this method returns False, the collection’s values are physically or mathematically impossible for the data_type (eg. temperature below absolute zero).
- Parameters:
raise_exception – Boolean to note whether an exception should be raised if an impossible value is found. (Default: True).
- is_metadata_aligned(data_collection)¶
Check if the metadata in this Data Collection header is aligned with another.
Aligned metadata means that the number of metadata items is the same between the two collections.
- Parameters:
data_collection – The Data Collection for which metadata alignment will be tested.
- Returns:
True if the metadata in the collections are aligned, False if not aligned.
- static linspace(start, stop, num)¶
Get evenly spaced numbers calculated over the interval start, stop.
This method is similar to native Python range except that it takes a number of divisions instead of a step. It is also equivalent to numpy’s linspace method.
- Parameters:
start – Start interval index as integer or float.
stop – Stop interval index as integer or float.
num – Number of divisions as integer.
- Returns:
A list of numbers.
Usage:
from BaseCollection import linspace

linspace(0, 5, 6)
# >> [0., 1., 2., 3., 4., 5.]
- lowest_values(count)¶
Get a list of the lowest values of the Data Collection and their indices.
This is useful for situations where one needs to know the times of the year when the smallest values of a data collection occur.
- Parameters:
count – Integer representing the number of lowest values to account for.
- Returns:
A tuple with two elements.
lowest_values: The n lowest values in data list, ordered from lowest to highest.
lowest_values_index: Indices of the n lowest values in data list, ordered from lowest to highest.
- normalize_by_area(area, area_unit)¶
Get a Data Collection that is normalized by an area value.
Note that this method will raise a ValueError if the data type in the header of the data collection does not have a normalized_type. Also note that a ZeroDivisionError will be raised if the input area is equal to 0.
- Parameters:
area – Number representing area by which all of the data is normalized.
area_unit – Text for the units that the area value is in. Acceptable inputs include ‘m2’, ‘ft2’ and any other unit that is supported in the normalized_type of this datacollection’s data type.
- static pattern_from_collections_and_statement(data_collections, statement)¶
Generate a list of booleans from data collections and a conditional statement.
- Parameters:
data_collections – A list of aligned Data Collections to be evaluated against the statement.
statement – A conditional statement as a string (e.g. a>25 and a%5==0). The variable should always be named as ‘a’ (without quotations).
- Returns:
pattern – A list of True/False booleans with the length of the Data Collections where True meets the conditional statement and False does not.
- percentile(percentile)¶
Get a value representing the input percentile of the Data Collection.
- Parameters:
percentile – A float value from 0 to 100 representing the requested percentile.
- Returns:
The Data Collection value at the input percentile
- to_dict()¶
Convert Data Collection to a dictionary.
- to_immutable()¶
Get an immutable version of this collection.
- to_ip()¶
Get a Data Collection in IP units.
- to_si()¶
Get a Data Collection in SI units.
- to_unit(unit)¶
Get a Data Collection in the input unit.
- Parameters:
unit – Text for the unit to convert the data to (eg. ‘C’ or ‘kWh’). This unit must appear under the data collection’s header.data_type.units.
- validate_analysis_period()¶
Get a collection where the header analysis_period aligns with datetimes.
This means that checks for three criteria will be performed:
All datetimes in the data collection are chronological starting from the analysis_period start datetime to the end datetime.
No duplicate datetimes exist in the data collection.
There are no datetimes that lie outside of the analysis_period range.
Note that there is no need to run this check any time that a data collection has been derived from a continuous one or when the validated_a_period attribute of the collection is True.
- property average¶
Get the average of the Data Collection values.
- property bounds¶
Get a tuple of two values as (min, max) of the data.
- property datetime_strings¶
Get a list of datetime strings for this collection.
These provide a human-readable way to interpret the datetimes.
- property datetimes¶
Get a tuple of datetimes for this collection, which align with the values.
- property header¶
Get the header for this collection.
- property is_continuous¶
Boolean denoting whether the data collection is continuous.
- property is_mutable¶
Boolean denoting whether the data collection is mutable.
- property max¶
Get the max of the Data Collection values.
- property median¶
Get the median of the Data Collection values.
- property min¶
Get the min of the Data Collection values.
- property total¶
Get the total of the Data Collection values.
- property validated_a_period¶
Boolean for whether the header analysis_period is validated against datetimes.
This will always be True when a collection is derived from a continuous one.
- property values¶
The Data Collection’s list of numerical values.