Commands¶
General information¶
Most of the commands require access to PostgreSQL/PostGIS. For this reason we
recommend setting the DATABASE_URL environment variable, even though it is
possible to specify it on the CLI. This will keep the commands shorter and
reduce the chance of error in the connection URL.
All the commands follow a very similar pattern and therefore share most of the same parameters across the board.
Environment variables¶
DATABASE_URL: Define the connection URL to the database. For instance: postgresql://postgres:postgres@localhost:5432/postgres.
BNA_OSMNX_CACHE: Set it to 0 to disable the OSMNX cache. This is useful in an ephemeral environment where there is no real benefit to caching the downloads.
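For example, both variables can be exported once for the whole shell session before invoking any command (the credentials below are illustrative placeholders, not real values):

```shell
# Point every subsequent bna command at the same database.
export DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres"

# Disable the OSMNX download cache, e.g. in a throwaway CI container.
export BNA_OSMNX_CACHE=0
```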
Configure¶
Configure a database for an analysis.
Configure is a helper command: it is completely optional to the process, but it may help configure the PostgreSQL instance that will be used for the analysis.
bna configure [OPTIONS] COMMAND [ARGS]
configure docker¶
Configure a database running in a Docker container.
bna configure docker [OPTIONS]
The most common use case is to configure a PostgreSQL instance which is running in a Docker container (via the Docker Compose file that we provide for example).
The command will autodetect the number of cores and the memory allocated to the Docker daemon, and will use this information to compute the configuration values for the PostgreSQL instance.
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
configure custom¶
Configure a database with custom values.
bna configure custom [OPTIONS] CORES MEMORY_MB PGUSER
If for some reason a more fine-grained configuration is preferred, this command lets the user specify the information manually.
The parameters are:
the number of cores to allocate
the amount of memory to allocate, in MB
the name of the PostgreSQL user to connect as
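As a sketch, the pieces could be assembled as follows; the 4 cores, 8192 MB, and postgres user are illustrative values only, and the final command is echoed rather than executed:

```shell
# Illustrative values; adjust to the resources PostgreSQL should actually use.
CORES=4
MEMORY_MB=8192
PGUSER=postgres

# Print the resulting command line (run it manually once the values look right).
echo bna configure custom "$CORES" "$MEMORY_MB" "$PGUSER"
# prints: bna configure custom 4 8192 postgres
```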
configure reset¶
The reset command is a convenience command that resets the database. It deletes the tables associated with an analysis and recreates the necessary schema. Its main use case is local development/debugging, when you need to try out another analysis without having to switch to your host system and use Docker to stop/remove the database associated with the previous analysis.
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
configure system¶
Configure the database system parameters.
bna configure system [OPTIONS] CORES MEMORY_MB
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
configure extensions¶
Configure the database extensions.
bna configure extensions [OPTIONS]
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
configure schemas¶
Configure the database schemas.
bna configure schemas [OPTIONS] PGUSER
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
Prepare¶
Prepare all the input files required for an analysis.
bna prepare [OPTIONS] COUNTRY CITY [STATE] [FIPS_CODE]
For US cities, the full name of the state as well as the city FIPS code are required:
bna prepare "united states" "santa rosa" "new mexico" 3570670
For non US cities, only the name and the country are required:
bna prepare malta valletta
However, specifying a region can speed up the process since it reduces the
size of the map to download. For instance, this command will download the map of
the province of Québec in Canada. If québec were omitted, it would download the
map of the full country instead.
bna prepare canada "ancienne-lorette" québec
For non US cities, the FIPS code is always ignored.
By default the files will be saved in their own sub-directory in the ./data
directory, relative to where the command was executed. This can be changed with
the --data-dir option flag.
For the 3 previous examples, the files will be located in:
data
├── ancienne-lorette-quebec-canada
├── santa-rosa-new-mexico-united-states
└── valletta-malta
All of this should already be enough to gather the information required to perform an analysis, but a few more knobs are available to override the default values in the options.
options¶
--block-population <block-population>: Population of a synthetic block for non-US cities. Defaults to 100.
--block-size <block-size>: Size of a synthetic block for non-US cities (in meters). Defaults to 500.
--cache-dir <cache-dir>: Path to the custom cache directory. Defaults to ./data. When set, it replaces the default user cache directory (platform specific, see bna cache dir).
--city-speed-limit <city-speed-limit>: Override the default speed limit (in mph). Defaults to 30.
--data-dir <data-dir>: Directory where to store the files required for the analysis. Defaults to ./data.
--lodes-year <lodes-year>: Year to use to retrieve US job data. Defaults to 2022.
--mirror <mirror>: Use a mirror to fetch the US census files. Defaults to None, meaning it fetches the data from the US census sites.
--no-cache: Disable the cache folder. Defaults to False.
--retries <retries>: Number of times to retry downloading files. Defaults to 2.
Import¶
Import files from the prepare command into the database.
The most commonly used sub-command is all, but for exploration purposes, a
particular type of import can be specified: jobs, neighborhood, or osm.
import all¶
Import all files into the database.
bna import all [OPTIONS] COUNTRY CITY [STATE] [FIPS_CODE]
Same conditions as before, country and city arguments are mandatory, while
the region and the FIPS code are required only for US cities.
Attention
The same parameters as for the prepare command must be used to
guarantee correct results.
In addition to these parameters, the directory where the input files were stored
is also required and must be specified with the --data-dir option flag.
bna import all "united states" "santa rosa" "new mexico" 3570670 --data-dir data/santa-rosa-new-mexico-united-states
options¶
--buffer <buffer>: Define the buffer area. Defaults to 2680.
--data-dir <data-dir>: Directory where the files to import are located. This is usually the output directory of the prepare command.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--lodes-year <lodes-year>: Year to use to retrieve US job data. Defaults to 2022.
import neighborhood¶
Import neighborhood data.
bna import neighborhood [OPTIONS] COUNTRY CITY [REGION]
import jobs¶
Import US census job data.
bna import jobs [OPTIONS] STATE_ABBREVIATION
options¶
--buffer <buffer>: Define the buffer area. Defaults to 2680.
--data-dir <data-dir>: Directory where the files to import are located. This is usually the output directory of the prepare command.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
import osm¶
Import OSM data.
bna import osm [OPTIONS] COUNTRY CITY [REGION] [FIPS_CODE]
options¶
--data-dir <data-dir>: Directory where the files to import are located. This is usually the output directory of the prepare command.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
Compute¶
Compute the numbers.
This is the command which actually computes the scores and generates the geojson files resulting from the analysis.
bna compute [OPTIONS] COUNTRY CITY [REGION]
Attention
The same parameters as for the prepare command must be used to
guarantee correct results.
In addition to these parameters, the directory where the files were stored is
also required and must be specified with the --data-dir option flag.
bna compute "united states" "santa rosa" "new mexico" --data-dir data/santa-rosa-new-mexico-united-states
Several parts are available for computing:
features
stress
connectivity
measure
It is possible to use only some parts for the analysis. In this case, the
--with-parts option can be used to specify which part to compute.
bna compute --with-parts stress "united states" "santa rosa" "new mexico" --data-dir data/santa-rosa-new-mexico-united-states
You can also specify multiple parts by repeating the --with-parts option:
bna compute --with-parts stress --with-parts connectivity "united states" "santa rosa" "new mexico"
--data-dir data/santa-rosa-new-mexico-united-states
If the --with-parts option is not specified, all the parts will be computed.
All the results will be stored in various tables in the database.
options¶
--buffer <buffer>: Define the buffer area. Defaults to 2680.
--data-dir <data-dir>: Directory where the files to import are located. This is usually the output directory of the prepare command.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--with-parts <parts>: Parts of the analysis to compute. Valid values are: features, stress, connectivity, and measure. This option can be repeated if multiple parts are needed. Defaults to all the parts (features, stress, connectivity, measure).
Export¶
Export the tables from the database.
Several exporters are available to export the results that were previously computed.
The following files will be created from the PostgreSQL tables:
.
├── neighborhood_census_blocks.cpg
├── neighborhood_census_blocks.dbf
├── neighborhood_census_blocks.geojson
├── neighborhood_census_blocks.prj
├── neighborhood_census_blocks.shp
├── neighborhood_census_blocks.shx
├── neighborhood_colleges.geojson
├── neighborhood_community_centers.geojson
├── neighborhood_connected_census_blocks.csv
├── neighborhood_dentists.geojson
├── neighborhood_doctors.geojson
├── neighborhood_hospitals.geojson
├── neighborhood_overall_scores.csv
├── neighborhood_parks.geojson
├── neighborhood_pharmacies.geojson
├── neighborhood_retail.geojson
├── neighborhood_schools.geojson
├── neighborhood_score_inputs.csv
├── neighborhood_social_services.geojson
├── neighborhood_supermarkets.geojson
├── neighborhood_transit.geojson
├── neighborhood_universities.geojson
├── neighborhood_ways.cpg
├── neighborhood_ways.dbf
├── neighborhood_ways.prj
├── neighborhood_ways.shp
├── neighborhood_ways.shx
└── residential_speed_limit.csv
export local¶
Export the results to a local directory following the PeopleForBikes calver convention.
bna export local [OPTIONS] COUNTRY CITY [REGION] [EXPORT_DIR]
The final directory structure follows the PeopleForBikes convention
<export_dir>/<country>/<region>/<city>/<calver_version>.
The directories will be created if they do not exist.
The calver scheme used here is YY.0M[.MINOR],
similar to what Ubuntu
does.
Example¶
Running:
bna export local "united states" "santa rosa" "new mexico" ~/bna/
Would export the results into ~/bna/united states/new mexico/santa rosa/25.06
if the analysis was run in June 2025 for the first time.
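The YY.0M part of that version can be derived from the current date; here is a minimal sketch (the optional .MINOR suffix, used when the same month already has a release, is not covered):

```shell
# Build the YY.0M calver component, e.g. "25.06" for a run in June 2025.
CALVER="$(date +%y.%m)"
echo "$CALVER"
```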
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
export local-custom¶
Export results to a custom directory.
bna export local-custom [OPTIONS] EXPORT_DIR
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
export s3¶
Export the results to an AWS S3 bucket, respecting the calver representation.
bna export s3 [OPTIONS] BUCKET_NAME COUNTRY CITY [REGION]
Therefore the output is similar to the local export:
my_s3_bucket
└── united states
└── new mexico
└── santa rosa
└── 23.9
└── ...
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
export s3-custom¶
Export the results to a custom AWS S3 bucket.
bna export s3-custom [OPTIONS] BUCKET_NAME
The output could look like this:
my_s3_bucket
└── united-states-new mexico-santa rosa-23.9
└── ...
options¶
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--s3-dir <s3-dir>: Directory where to store the results within the S3 bucket. Defaults to the root of the bucket.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
Run¶
Run the full analysis in one command.
bna run [OPTIONS] COUNTRY CITY [STATE] [FIPS_CODE]
This command is essentially a combination of the prepare, import, compute,
and export sub-commands.
It still requires a configured, up-and-running database in order to complete.
bna run "united states" "santa rosa" "new mexico" 3570670
options¶
--block-population <block-population>: Population of a synthetic block for non-US cities. Defaults to 100.
--block-size <block-size>: Size of a synthetic block for non-US cities (in meters). Defaults to 500.
--buffer <buffer>: Define the buffer area. Defaults to 2680.
--cache-dir <cache-dir>: Path to the custom cache directory. When set, it replaces the default user cache directory (platform specific, see bna cache dir).
--city-speed-limit <city-speed-limit>: Override the default speed limit (in mph). Defaults to 30.
--data-dir <data-dir>: Directory where to store the files required for the analysis. Defaults to ./data.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--lodes-year <lodes-year>: Year to use to retrieve US job data. Defaults to 2022.
--max-trip-distance <max-trip-distance>: Maximal distance of a trip. Defaults to 2680.
--mirror <mirror>: Use a mirror to fetch the US census files. Defaults to None, meaning it fetches the data from the US census sites.
--no-cache: Disable the cache folder. Defaults to False.
--retries <retries>: Number of times to retry downloading files. Defaults to 2.
--s3-bucket <s3-bucket>: S3 bucket to use to store the result files.
--s3-dir <s3-dir>: Directory where to store the results within the S3 bucket. Defaults to the root of the bucket.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
--with-export <with-export>: Export strategy. Valid values are: none, local, s3, and s3_custom. Defaults to local.
--with-parts <parts>: Parts of the analysis to compute. Valid values are: features, stress, connectivity, and measure. This option can be repeated if multiple parts are needed. Defaults to all the parts (features, stress, connectivity, measure).
Run-with¶
Provide alternative ways to run the analysis.
run-with compose¶
Manage the Docker Compose environment automatically.
bna run-with compose [OPTIONS] COUNTRY CITY [STATE] [FIPS_CODE]
It combines the configure, prepare, import, compute, and export
sub-commands and wraps them in the setup and tear-down of the Docker Compose
environment.
options¶
--block-population <block-population>: Population of a synthetic block for non-US cities. Defaults to 100.
--block-size <block-size>: Size of a synthetic block for non-US cities (in meters). Defaults to 500.
--buffer <buffer>: Define the buffer area. Defaults to 2680.
--cache-dir <cache-dir>: Path to the custom cache directory. When set, it replaces the default user cache directory (platform specific, see bna cache dir).
--city-speed-limit <city-speed-limit>: Override the default speed limit (in mph). Defaults to 30.
--data-dir <data-dir>: Directory where to store the files required for the analysis. Defaults to ./data.
--database-url <database-url>: Set the database URL. May also be set with the DATABASE_URL environment variable.
--export-dir <export-dir>: Directory where to export the results. Defaults to ./results.
--lodes-year <lodes-year>: Year to use to retrieve US job data. Defaults to 2022.
--max-trip-distance <max-trip-distance>: Maximal distance of a trip. Defaults to 2680.
--mirror <mirror>: Use a mirror to fetch the US census files. Defaults to None, meaning it fetches the data from the US census sites.
--no-cache: Disable the cache folder. Defaults to False.
--retries <retries>: Number of times to retry downloading files. Defaults to 2.
--s3-bucket <s3-bucket>: S3 bucket to use to store the result files.
--s3-dir <s3-dir>: Directory where to store the results within the S3 bucket. Defaults to the root of the bucket.
--with-bundle: Add a zip archive which bundles the result files altogether. Defaults to no bundle.
--with-export <with-export>: Export strategy. Valid values are: none, local, s3, and s3_custom. Defaults to local.
--with-parts <parts>: Parts of the analysis to compute. Valid values are: features, stress, connectivity, and measure. This option can be repeated if multiple parts are needed. Defaults to all the parts (features, stress, connectivity, measure).
Cache¶
Manage the cache.
clean¶
Clean the cache directory.
bna cache clean [OPTIONS]
options¶
--dry-run, -n: Dry run. Does not actually perform any action, but shows the simulated results.
--quiet, -q: Quiet mode. Does not display any information on the output.
dir¶
Show the cache directory.
bna cache dir [OPTIONS]