A list of all the Belgian stations and their properties used within the iRail project

Overview

All stations in Belgium


We try to maintain a list of all the stations in Belgium using CSV so everyone can help maintain it on GitHub. Furthermore, we provide a PHP composer/packagist library to convert a station name to its ID and vice versa, and we convert the CSV file to JSON-LD for maximum semantic interoperability.

Fields we collect

stations.csv

This file describes all NMBS/SNCB stations in Belgium. A station can have multiple platforms (stops), which are described in stops.csv.

  • URI: this is the URI where we can find more information (such as the real-time departures) about this station (this already contains the ID of the NMBS/SNCB as well)
  • longitude: the longitude of the station
  • latitude: the latitude of the station
  • name: the most neutral name of the station (e.g., in Wallonia use the French name, for Brussels use both, for Flanders use nl name)
  • alternative-fr: alt. name in French, if available
  • alternative-nl: alt. name in Dutch, if available
  • alternative-de: alt. name in German, if available
  • alternative-en: alt. name in English, if available
  • country-code: the code of the country the station belongs to
  • avg_stop_times: the average number of vehicles stopping each day in this station (computed field)
  • official_transfer_time: the time needed for an average person to make a transfer in this station, according to official sources (NMBS/SNCB) (computed field)
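
A minimal sketch of reading these fields, assuming the column order shown above (the actual header row of stations.csv is authoritative) and no quoted commas inside values:

```javascript
// Sketch: parse one stations.csv row into an object.
// The column order below follows the field list above and is an
// assumption; always read the real header row of stations.csv.
const FIELDS = [
  'URI', 'name', 'alternative-fr', 'alternative-nl',
  'alternative-de', 'alternative-en', 'country-code',
  'longitude', 'latitude', 'avg_stop_times', 'official_transfer_time',
];

function parseStationRow(line) {
  const values = line.split(','); // naive: assumes no quoted commas
  const station = {};
  FIELDS.forEach((field, i) => { station[field] = values[i]; });
  // Coordinates are numeric; everything else stays a string.
  station.longitude = parseFloat(station.longitude);
  station.latitude = parseFloat(station.latitude);
  return station;
}
```

A real-world reader should use a proper CSV parser to handle quoting and the header row.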

stops.csv

This file describes all NMBS/SNCB stops in Belgium. Each platform is a separate stop location. All fields are computed using gtfs_data_extractor.php.

  • URI: this is the URI where we can find more information about this stop/platform (it consists of the URI of the parent station + '#' + the platform code)
  • parent_stop: this is the URI of the parent stop defined in stations.csv
  • longitude: the longitude of the stop
  • latitude: the latitude of the stop
  • name: stop name
  • alternative-fr: alt. name in French, if available
  • alternative-nl: alt. name in Dutch, if available
  • alternative-de: alt. name in German, if available
  • alternative-en: alt. name in English, if available
  • platform: the platform code (can also consist of letters, so do not treat this as a number!)
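
The URI convention described above (parent station URI + '#' + platform code) can be sketched as:

```javascript
// Build a stop URI from its parent station URI and platform code,
// following the convention described above. The platform code is
// kept as a string, since it may contain letters.
function stopUri(parentStationUri, platformCode) {
  return `${parentStationUri}#${String(platformCode)}`;
}

// Example: platform 12 of Gent-Sint-Pieters
// stopUri('http://irail.be/stations/NMBS/008892007', '12')
// → 'http://irail.be/stations/NMBS/008892007#12'
```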

facilities.csv

This file describes facilities available in NMBS/SNCB stations. All fields are computed using web_facilities_extractor.php.

  • URI: The URI identifying this station.
  • name: The name of this station.
  • street: The street of this station's address.
  • zip: The postal code of this station's address.
  • city: The city of this station's address.
  • ticket_vending_machine: Whether or not ticket vending machines are available. Note: Ticket vending machines might be located inside a building (and can be locked when the station is closed).
  • luggage_lockers: Whether or not luggage lockers are available.
  • free_parking: Whether or not free parking spots are available.
  • taxi: Whether or not parking spots for taxis / waiting taxis are available.
  • bicycle_spots: Whether or not bicycle parking spots are available.
  • blue-bike: Whether or not the station has blue-bikes (rental bikes).
  • bus: Whether or not transferring to a bus line is possible in this station.
  • tram: Whether or not transferring to a tram line is possible in this station.
  • metro: Whether or not transferring to a metro line is possible in this station.
  • wheelchair_available: Whether or not the station has wheelchairs available.
  • ramp: Whether or not the station has a ramp for wheelchair users to board a train.
  • disabled_parking_spots: The number of reserved parking spots for travellers with a disability.
  • elevated_platform: Whether or not the station has elevated platforms.
  • escalator_up: Whether or not the station has an ascending escalator from or to the platform(s).
  • escalator_down: Whether or not the station has a descending escalator from or to the platform(s).
  • elevator_platform: Whether or not the station has an elevator to the platform(s).
  • audio_induction_loop: Whether or not an audio induction loop (Dutch: ringleiding) is available.
  • sales_open_monday - sales_open_sunday: The time at which ticket booths open on this day of the week.
  • sales_close_monday - sales_close_sunday: The time at which ticket booths close on this day of the week.
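
As an illustration of how the opening-hours fields can be used, here is a small sketch; it assumes the sales_open_*/sales_close_* values are "HH:MM" strings, which is an assumption about the file format:

```javascript
// Convert an assumed "HH:MM" string to minutes since midnight.
function toMinutes(hhmm) {
  const [h, m] = hhmm.split(':').map(Number);
  return h * 60 + m;
}

// Check whether the ticket booth of a facilities.csv record is open
// at a given time on a given day ('monday' … 'sunday').
function boothOpenAt(facility, day, hhmm) {
  const open = facility[`sales_open_${day}`];
  const close = facility[`sales_close_${day}`];
  if (!open || !close) return false; // no booth hours for this day
  const t = toMinutes(hhmm);
  return t >= toMinutes(open) && t < toMinutes(close);
}
```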

How we collect data

This repository contains two PHP scripts which can load all data from the NMBS GTFS public data and the NMBS website. These scripts can be used to generate all CSV files from scratch, and to update existing files.

Manual changes and corrections can be made to stations.csv. It is recommended to use the stations.csv file in this repository as a starting point instead of generating this file with the scripts, as the repository version includes manual fixes to station names and translations.

Any changes made to stops.csv or facilities.csv will be overwritten by the scripts. Therefore, pull requests with the sole purpose of updating/modifying these files won't be accepted.

Missing stations and missing fields in stations.csv are automatically added when the gtfs_data_extractor tool runs.

How to make a correction

Corrections to names, translations and locations can be made by adjusting fields in stations.csv:

  • Names or translations will never be overwritten by the scripts.
  • Names in facilities.csv or stops.csv are derived from the names in stations.csv, meaning you only need to update stations.csv.
  • The GTFS data extractor script will warn on wrong locations, but won't correct them.

If you want to make a correction to facilities.csv or stops.csv, don't fix the files, but fix the scripts instead, and let these scripts run to update the file for you.

Build the RDF or JSON-LD

We use a Node.js script to convert the CSV files to JSON-LD.

The first time, run this in your terminal (Node.js needs to be installed on your system):

npm install

Or install it globally using the npm package (you will need to run this again when there's an update to the stations file):

npm install -g irail-stations

From then on, you can always run:

# using this repo
./bin/build.js
# or with the global package:
irail-stations

For extra commands, check:

./bin/build.js --help
# or
irail-stations --help

We currently support the output formats TRiG, N-Quads and JSON-LD (default).

In case you just want to reuse the data

Latest update over HTTP

JSON-LD is available at https://irail.be/stations/NMBS if you add the right accept header. For example, using curl on the command line, you would do this:

curl -H "accept: application/json" https://irail.be/stations/NMBS

If you want to change this output, please change the CSV files over here first (we love pull requests).

In PHP project

Using composer (note that we also require Node.js to be installed on your system):

composer require irail/stations

Then you can use the stations in your code as follows:

use irail\stations\Stations;
// getStations() returns a JSON-LD document
$brusselsnorth = Stations::getStations("Brussels North")->{"@graph"}[0];
// getStationByID($id) returns a simple object with the station or null
$ghentstpieters = Stations::getStationByID("http://irail.be/stations/NMBS/008892007");

Don't forget to run composer update from time to time to update the data.

License

CC0: This dataset belongs to the public domain. You're free to reuse it without any restrictions whatsoever.

If you contribute to this repository, you agree that your contributions will be licensed under the CC0 open data license.

We do appreciate a link back to this repository, or a mention of the iRail project.

Comments
  • Use APC as in-memory store for extreme performance gains. #111

    As tested in #111, in-memory caching provides serious performance gains.

    This PR includes APC support for getStations and getStationById. Results of both functions are cached. This results in extreme performance gains up to 99%. This might have serious (positive) consequences for projects depending on this library.

    See issue #111 for an in-depth review of multiple possible solutions and why APC was chosen.

    This PR will require APC available on the server. APC keys have a prefix to prevent key collisions.

    Testing requires the argument '-d apc.enable_cli=1', or apc.enable_cli should be set in php.ini.

    opened by Bertware 23
  • Giving stations a magnitude indication


    Gent Sint Pieters is more important than Gentbrugge. Brussels Schuman is less important than Brussels Central, North or Midi. To optimize user experience, we could use an indication of this importance in our CSV file.

    Use case: autocompletion

    When autocompleting Brussels, Brussels Luxemburg is the first option. This is one of the smaller stations in Brussels and should appear lower.

    Which data should we use?

    This importance field can be made on the basis of different parameters:

    • number of platforms at a station
    • number of passengers per day in a station
    • number of trains per day in that station
    • number of times the station was requested over the past year in the iRail API compared to others
    • etc.

    All of these are valid indications. We can use these indicators to categorize stations.

    Implementation

    Add a field importance: a number between 0 (low) and 1 (high).

    To start, we can make this a normalized percentage based on the access logs of iRail.
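
    The proposed normalization could be sketched as follows (requestCounts is a hypothetical map from station name to request count, e.g. derived from the iRail access logs):

```javascript
// Sketch of the proposal above: turn raw request counts per station
// into an importance score between 0 (low) and 1 (high) by
// normalizing against the busiest station.
function importanceScores(requestCounts) {
  const max = Math.max(...Object.values(requestCounts));
  const scores = {};
  for (const [station, count] of Object.entries(requestCounts)) {
    scores[station] = max > 0 ? count / max : 0;
  }
  return scores;
}
```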

    Problems I see

    • Adding a station will become a difficult operation
    • This percentage will be difficult to compare in a global setting and will only be usable within projects for the NMBS/SNCB using the iRail API
    enhancement help wanted question 
    opened by pietercolpaert 9
  • shapes.geojson


    What do we need: a list of all coordinates of the track shape between station X and station Y. Platforms must be excluded so we can calculate the first/last point to the correct platform. Inside the properties key of the GeoJSON, you specify the start and stop station by using their URIs.

    TODO

    • [x] Make geojson-ld file with context
    • [x] Add shape between two stations as example (you can use geojson.io for the coordinates)
    • [x] Generate todo.csv of all combinations of stations (one direction), so people know what has to be done.
    opened by brechtvdv 7
  • Updated stations, stop and facilities using the PHP scripts.


    Automated update using GTFS data, the NMBS website, and our GTFS parser/website scraper scripts.

    • 5 new stations (see below)
    • updated avg_stop_times
    • updated facilities (addresses, opening times)

    New stations:

    • Aulnoye aymeries
    • Quevy-frontiere
    • Melle PW (Polyvalente werkplaats, Dutch for multi-purpose workshop; open to the public on March 30th 2019: https://www.belgiantrain.be/nl/about-sncb/themes/opendoors)
    • Petange
    • Maubeuge
    enhancement Missing-station Incorrect-station 
    opened by Bertware 5
  • Added transfer times


    This commit adds transfer times inside a station, in seconds (official from NMBS GTFS) to stations.csv, so anyone building a routeplanner can use this data. Transfer time is set to zero when no data is available. Naming and integration in other data formats is up for discussion.

    enhancement 
    opened by Bertware 5
  • Use in-memory store for station lookups.


    The problem: Too long execution time for station lookups

    At this moment, both getStations and getStationFromID loop over the entire stations list. This consumes valuable time. getStations also does quite some text replacing to match different spellings of the same station.

    Given that a single liveboard can easily contain 25 entries, every reduction of the execution time by 10ms would result in a possible reduction of up to 250ms. This means even small gains are worth it. As a possible side effect, the PHP code might run more efficiently, offloading the server a little.

    Option 1: SQLite

    The entire matching could be moved to an SQLite database. Here, we would have a table, which contains the standard name and the alternative name. A station could have multiple entries here, one for every special spelling of that station. If a station doesn't have special/different spellings, it's not in the table.

    1. getStations($name) would perform a query, checking this table to see if we're handling a special name. If so, the original name would be returned. If not, we continue working with the original name parameter.

    2. we now need to look up the station with its data. For this purpose, we could load the JSON-LD into SQLite too. We could set the id as primary key and index the station name. Searching would be done on the name parameter, using LIKE $name%, loading the full rows. We parse the rows and return. This is expected to be a performance boost compared to looping over the entire file.

    3. searching by id would happen on the same table mentioned in step 2. Only, we'd now search on id. Again, we load the entire row, parse and return.

    We could load the stations in an sqlite database (containing both tables) using a php script, which could be manually run after running the build script. (Or call it from the build script).

    To load the special spellings in SQLite, we could use the same PHP script and a station-alternatives.csv file, which would contain the same data as the table (original name | special spelling). This way, we can easily add names and run the script to generate a database table. An alternative would be to allow an unlimited number of special names on one line, so you'd have the original name followed by all special spellings.

    This would also clean up the code a little, since all matching code would be replaced by a single query.

    Option 2: In-memory store

    We could consider keeping the results in a memory cache. This cache could be filled by a cron job, so we would always have a cache hit. This seems an easier, but less flexible solution. This also seems to be more server and config dependent, compared to the guaranteed, but likely lower performance of sqlite.
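
    A minimal sketch of such an in-memory store, here as a simple memoization wrapper (the function names are illustrative, not part of the library):

```javascript
// Sketch of Option 2: wrap a (slow) lookup function with an
// in-memory cache keyed on the query, so repeated lookups for the
// same station name hit the cache. The cache-warming cron job
// mentioned above would simply call the wrapped function for the
// most common queries.
function withCache(lookupFn) {
  const cache = new Map();
  return function cached(query) {
    if (!cache.has(query)) cache.set(query, lookupFn(query));
    return cache.get(query);
  };
}
```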

    Option 3: Combine

    The best of 2 worlds: we could use the sqlite followed by an in-memory store. Every cache miss would still get a good performance, the memory store would give a little extra boost.


    These performance improvements might help #88 a little. (Not a duplicate, as this is one specific approach for a specific speedup, whereas #88 is a lot broader).

    Also related to https://github.com/iRail/iRail/issues/265

    I could implement the SQLite version in a fork to compare performance, after which we could evaluate whether to switch. Input and discussion wanted. @pietercolpaert

    opened by Bertware 5
  • Using dat for version control instead of git


    This is a question, and not a suggestion. I would like to raise the attention of this tool:

    https://github.com/maxogden/dat - http://dat-data.com

    It looks like a very useful tool for exactly what we want to achieve. However, I'm not sure what to use it for and whether it will raise the bar for potential contributors.

    question 
    opened by pietercolpaert 4
  • Train station Goebelsmühle (BE.NMBS.008200129) has X and Y location switched


    When visualizing the train stations returned by the request https://api.irail.be/stations/?format=json, I spotted the following anomaly. I presume that the X and Y locations are mixed up, as the current coordinates put the train station in the ocean in front of Somalia.
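
    A sanity check for this class of error could look like the sketch below; the bounding box for the Belgian/Luxembourg area is an approximation chosen for illustration:

```javascript
// Sketch of a sanity check for the bug described above: stations in
// the Belgian/Luxembourg area should have latitude roughly 49-52 and
// longitude roughly 2-7, so a point whose values only make sense
// when swapped is flagged. The bounding box is an approximation.
function looksSwapped(latitude, longitude) {
  const inBox = (lat, lon) => lat >= 49 && lat <= 52 && lon >= 2 && lon <= 7;
  return !inBox(latitude, longitude) && inBox(longitude, latitude);
}
```

    Running such a check in the GTFS data extractor would catch coordinates that were stored in (longitude, latitude) order by mistake.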

    opened by tobias-wilfert 3
  • Inconsistent IBNR's for German stations


    For example, for Duesseldorf Hbf your data has station ID 008008094, but Deutsche Bahn data has station ID 8000080. In my view it makes sense to use the German ID to be consistent.

    question 
    opened by eg-ops 3
  • Update names of “Liège-Jonfosse” and “Liège-Palais”


    This fixes issue #134.

    I did some tests with the stations autocomplete on the SNCB / NMBS website but they don’t really seem to have any “official” new names for these two stations (they just translate “Liège” to “Lüttich” in German).

    Edit: what I mean is no “official” new names for languages other than Dutch and French (i.e. English and German).

    Incorrect-station 
    opened by miclf 2
  • Two stations renamed in Liège / Luik


    Hi there!

    Since yesterday SNCB / NMBS changed the name of two stations in Liège / Luik:

    • “Liège-Jonfosse / Luik-Jonfosse” became “Liège-Carré / Luik-Carré”
    • “Liège-Palais / Luik-Paleis” became “Liège-Saint-Lambert / Luik-Sint-Lambertus”

    Press release (in French): https://www.belgiantrain.be/fr/about-sncb/news/press-releases/2018/28-08-2018-3 It seems that no press release was issued in other languages.

    Wikipedia pages: FR: Liège-Carré and Liège-Saint-Lambert NL: Luik-Carré and Luik-Sint-Lambertus

    The name of Liège-Palais / Luik-Paleis has already been updated in PR #132 but unfortunately the French name has been set to “Liege-Saint-Lambert”, with a missing grave accent in “Liège”.

    Liège-Jonfosse / Luik-Jonfosse has not been updated yet.

    I’m not sure if stations.csv is supposed to be edited manually or if it has to be updated in a different way, so this is why I open an issue instead of directly submitting a pull request. If you tell me it’s safe to update the file, I can make a PR to rename these two stations.

    Incorrect-station 
    opened by miclf 2
  • Bump qs from 6.5.2 to 6.5.3


    Bumps qs from 6.5.2 to 6.5.3.

    Changelog

    Sourced from qs's changelog.

    6.5.3

    • [Fix] parse: ignore __proto__ keys (#428)
    • [Fix] utils.merge: avoid a crash with a null target and a truthy non-array source
    • [Fix] correctly parse nested arrays
    • [Fix] stringify: fix a crash with strictNullHandling and a custom filter/serializeDate (#279)
    • [Fix] utils: merge: fix crash when source is a truthy primitive & no options are provided
    • [Fix] when parseArrays is false, properly handle keys ending in []
    • [Fix] fix for an impossible situation: when the formatter is called with a non-string value
    • [Fix] utils.merge: avoid a crash with a null target and an array source
    • [Refactor] utils: reduce observable [[Get]]s
    • [Refactor] use cached Array.isArray
    • [Refactor] stringify: Avoid arr = arr.concat(...), push to the existing instance (#269)
    • [Refactor] parse: only need to reassign the var once
    • [Robustness] stringify: avoid relying on a global undefined (#427)
    • [readme] remove travis badge; add github actions/codecov badges; update URLs
    • [Docs] Clean up license text so it’s properly detected as BSD-3-Clause
    • [Docs] Clarify the need for "arrayLimit" option
    • [meta] fix README.md (#399)
    • [meta] add FUNDING.yml
    • [actions] backport actions from main
    • [Tests] always use String(x) over x.toString()
    • [Tests] remove nonexistent tape option
    • [Dev Deps] backport from main
    Commits
    • 298bfa5 v6.5.3
    • ed0f5dc [Fix] parse: ignore __proto__ keys (#428)
    • 691e739 [Robustness] stringify: avoid relying on a global undefined (#427)
    • 1072d57 [readme] remove travis badge; add github actions/codecov badges; update URLs
    • 12ac1c4 [meta] fix README.md (#399)
    • 0338716 [actions] backport actions from main
    • 5639c20 Clean up license text so it’s properly detected as BSD-3-Clause
    • 51b8a0b add FUNDING.yml
    • 45f6759 [Fix] fix for an impossible situation: when the formatter is called with a no...
    • f814a7f [Dev Deps] backport from main
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump xmldom and jsonld


    Removes xmldom. It's no longer used after updating ancestor dependency jsonld. These dependencies need to be updated together.

    Removes xmldom

    Updates jsonld from 1.8.0 to 8.1.0

    Changelog

    Sourced from jsonld's changelog.

    8.1.0 - 2022-08-29

    Fixed

    • relative property reference event renamed to relative predicate reference.
    • relative type reference event renamed to relative object reference.

    8.0.0 - 2022-08-23

    Changed

    • BREAKING: By default, set safe mode to true and base to null in canonize. Applications that were previously canonizing data may see new errors if their data did not fully define terms or used relative URLs that would be dropped when converting to canonized RDF. Now these situations are caught via safe mode by default, informing the developer that they need to fix their data.

    7.0.0 - 2022-08-16

    Fixed

    • compact t0111 test: "Keyword-like relative IRIs"

    Changed

    • Change EARL Assertor to Digital Bazaar, Inc.
    • Update eslint dependencies.

    Added

    • Support benchmarks in Karma tests.
    • Support test environment in EARL output.
    • Support benchmark output in EARL output.
    • Benchmark comparison tool.
    • Add "safe mode" to all APIs. Enable by adding {safe: true} to API options. This mode causes processing to fail when data constructs are encountered that result in lossy behavior or other data warnings. This is intended to be the common way that digital signing and similar applications use this library.

    Removed

    • Experimental non-standard protectedMode option.
    • BREAKING: Various console warnings were removed. The newly added "safe mode" can stop processing where these warnings occurred.
    • BREAKING: Remove compactionMap and expansionMap. Their known use cases are addressed with "safe mode" and future planned features.

    6.0.0 - 2022-06-06

    Changed

    • BREAKING: Drop testing and support for Node.js 12.x. The majority of the code will still run on Node.js 12.x. However, the @digitalbazaar/http-client@3 update uses a newer ky-universal which uses

    ... (truncated)

    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump node-forge from 0.10.0 to 1.3.0


    Bumps node-forge from 0.10.0 to 1.3.0.

    Changelog

    Sourced from node-forge's changelog.

    1.3.0 - 2022-03-17

    Security

    • Three RSA PKCS#1 v1.5 signature verification issues were reported by Moosa Yahyazadeh ([email protected]).
    • HIGH: Leniency in checking digestAlgorithm structure can lead to signature forgery.
    • HIGH: Failing to check tailing garbage bytes can lead to signature forgery.
    • MEDIUM: Leniency in checking type octet.
      • DigestInfo is not properly checked for proper ASN.1 structure. This can lead to successful verification with signatures that contain invalid structures but a valid digest.
      • CVE ID: CVE-2022-24773
      • GHSA ID: GHSA-2r2c-g63r-vccr

    Fixed

    • [asn1] Add fallback to pretty print invalid UTF8 data.
    • [asn1] fromDer is now more strict and will default to ensuring all input bytes are parsed or throw an error. A new option parseAllBytes can disable this behavior.
      • NOTE: The previous behavior is being changed since it can lead to security issues with crafted inputs. It is possible that code doing custom DER parsing may need to adapt to this new behavior and optional flag.
    • [rsa] Add and use a validator to check for proper structure of parsed ASN.1 RSASSA-PKCS-v1_5 DigestInfo data. Additionally check that the hash algorithm identifier is a known value from RFC 8017 PKCS1-v1-5DigestAlgorithms. An invalid DigestInfo or algorithm identifier will now throw an error.
      • NOTE: The previous lenient behavior is being changed to be more strict since it could lead to security issues with crafted inputs. It is possible that code may have to handle the errors from these stricter checks.

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Compare data with Infrabel open data, correct the data if we detect errors


    @mushpah pointed out in iRail that Infrabel has its own dataset of stops, which includes traveller stations as well as technical stops and cargo stations. This data could be valuable for correcting and enriching this dataset.

    https://opendata.infrabel.be/explore/dataset/operationele-punten-van-het-newterk/information/?disjunctive.classification

    The IDs do not match, and Infrabel has information about additional stations, so I suggest:

    • We try to match the data based on the station name
    • Add abbreviations to stations.csv
    • Improve/add translations if infrabel has a better translation
    • We add an additional file, for example infrabel.csv or infrastructure.csv, where we include the iRail/NMBS id, the infrabel id (TAF/TAP), abbreviation (symbolic name), station name (for human readability) and station type. This file could be used by 3rd parties to combine the datasets, to use data that references internal stations (such as composition data) and by us to keep the stations dataset up-to-date.
    enhancement 
    opened by Bertware 0
  • Add demo queries on wikidata with our identifiers to get more information


    E.g., I found more information about Ghent Sint Pieters on the digital heritage site of the Flemish government: https://inventaris.onroerenderfgoed.be/erfgoedobjecten/18369

    opened by pietercolpaert 0
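    Since it is not confirmed that Wikidata stores our identifiers directly, a demo query could start from a label match. A minimal sketch of building such a SPARQL query string (the query shape is illustrative; it would be sent to https://query.wikidata.org/sparql):

    ```python
    def wikidata_station_query(station_name, lang="nl"):
        """Build a SPARQL query that finds Wikidata items labelled with the
        given station name. Extending the SELECT with identifier properties
        is left open, as the exact properties to use are not verified here."""
        return f'''
    SELECT ?item ?itemLabel WHERE {{
      ?item rdfs:label "{station_name}"@{lang} .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en,nl,fr". }}
    }}'''
    ```

    From a matched item, further links such as the Flemish heritage record above could then be pulled in via the item's external identifiers.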
  • Identifiers for the sales API

    The SNCB sales API has its own identifiers and doesn't use global URIs. E.g., for Ostend, the sales API ID is 108891702, while ours is http://irail.be/stations/NMBS/008891702.

    I propose adding a column to stations.csv that holds the SNCB sales API identifier. However, I don't have access to this list. Is someone else able to open a pull request for this?

    help wanted 
    opened by pietercolpaert 5
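    Until the full list is available, here is a sketch of how such a column could be consumed: extract the 9-digit NMBS ID from the iRail URI and look it up in a mapping table (Python for illustration; the only known pair is the Ostend example from this issue):

    ```python
    def nmbs_id_from_uri(uri):
        """The iRail URI embeds the 9-digit NMBS/SNCB ID as its last path segment."""
        return uri.rsplit("/", 1)[-1]

    # Hypothetical mapping keyed by NMBS ID; the only pair known from the issue
    # is Ostend (008891702 -> 108891702). The full list must come from SNCB.
    sales_api_ids = {"008891702": "108891702"}

    def sales_api_id(uri):
        """Return the sales API identifier for an iRail station URI, if known."""
        return sales_api_ids.get(nmbs_id_from_uri(uri))
    ```

    A lookup table rather than a derived formula is used deliberately: one example pair is not enough to assume the sales API IDs follow a fixed pattern.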
Releases
1.6.14

Owner
Open transport data in Belgium