A tool for creating configurable dumps of large MySQL databases.

Overview

slimdump

slimdump is a little tool to help you create configurable dumps of large MySQL databases. It works off one or several configuration files. For every table you specify, it can dump only the schema (the CREATE TABLE ... statement), the full table data, the data without BLOBs, and more.

Why?

We created slimdump because we often need to dump parts of MySQL databases in a convenient and reproducible way. Also, when you need to analyze problems with data from your production databases, you might want to pull only relevant parts of data and hide personal data (user names, for example).

mysqldump is a great tool, probably much more battle-tested when it comes to edge cases, and it comes with a lot of switches. But there is no easy way to create a simple configuration file that describes a particular type of dump (e.g. a subset of your tables) and to share it with your co-workers, let alone dump tables while omitting BLOB-type columns.

Installation

When PHP is your everyday programming language, you probably have Composer installed. You can then easily install slimdump as a global package. Just run composer global require webfactory/slimdump. In order to use it like any other Unix command, make sure $COMPOSER_HOME/vendor/bin is in your $PATH.
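
For reference, a global installation might look like the following; the vendor/bin location varies between Composer setups (often ~/.composer or ~/.config/composer), so adjust the path accordingly:

composer global require webfactory/slimdump

# Make Composer's global bin directory available on your PATH (the exact path varies by setup):
export PATH="$HOME/.composer/vendor/bin:$PATH"

# slimdump can now be called like any other command:
slimdump --help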

Of course, you can also add slimdump as a local (per-project) Composer dependency.

We're also working on providing a .phar package of slimdump for those not using PHP regularly. With that solution, all you need in order to use slimdump is a PHP interpreter and a single downloadable archive file. You can help us by opening a pull request for that :-)!

Usage

slimdump needs the DSN for the database to dump and one or more config files:

slimdump {DSN} {config-file} [...more config files...]

slimdump writes to STDOUT. If you want your dump written to a file, just redirect the output:

slimdump {DSN} {config-file} > dump.sql

If you want to use an environment variable for the DSN, replace the first parameter with -:

MYSQL_DSN={DSN} slimdump - {config file(s)}

The DSN has to be in the following format:

mysql://[user[:password]@]host[:port]/dbname

For further explanations, have a look at the Doctrine documentation.
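
For example (all credentials and names below are made up), a concrete call could look like this; quoting the DSN protects special characters from the shell:

slimdump "mysql://dbuser:secret@127.0.0.1:3306/shop_db" slimdump-config.xml > dump.sql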

Optional parameters and command line switches

no-progress

This turns off printing progress information on STDERR. Useful in scripting contexts.

Example: slimdump --no-progress {DSN} {config-file}

buffer-size

You can also specify the buffer size, which can be useful in shared environments where your max_allowed_packet value is low. Do this with the optional --buffer-size option. Add a suffix (KB, MB or GB) to the value for better readability.

Example: slimdump --buffer-size=16MB {DSN} {config-file}

single-line-insert-statements

If you have tables with a large number of rows to dump and you are not planning to keep your dumps under version control, you might consider writing each INSERT INTO statement on a single line instead of one line per row. You can do this with the --single-line-insert-statements option. This can speed up the import significantly.

Example: slimdump --single-line-insert-statements {DSN} {config-file}

Configuration

Configuration is stored in XML format somewhere in your filesystem. As a benefit, you can add the configuration to your repository and share a ready-to-use dump setup for your database with your co-workers.

Example:

<?xml version="1.0" ?>
<slimdump>
  <!-- Create a full dump (schema + data) of "some_table" -->
  <table name="some_table" dump="full" />

  <!-- Dump the "media" table, omit BLOB fields. -->
  <table name="media" dump="noblob" />

  <!-- Dump the "user" table, hide names and email addresses. -->
  <table name="user" dump="full">
      <column name="username" dump="masked" />
      <column name="email" dump="masked" />
      <column name="password" dump="replace" replacement="test" />
  </table>

  <!-- Dump the "document" table but do not pass the "AUTO_INCREMENT" parameter to the SQL query.
       Instead start to increment from the beginning -->
  <table name="document" dump="full" keep-auto-increment="false" />

  <!--
    Trigger handling:

    By default, CREATE TRIGGER statements will be dumped for all tables, but the "DEFINER=..."
    clause will be removed to make it easier to re-import the database, e.g. in development
    environments.

    You can change this by setting 'dump-triggers' to one of:
        - 'false' or 'none': Do not dump triggers at all
        - 'true' or 'no-definer': Dump trigger creation statement but remove DEFINER=... clause
        - 'keep-definer': Keep the DEFINER=... clause
  -->
  <table name="events" dump="schema" dump-triggers="false" />

  <!--
    View handling:

    A configured <table> may also be a database view. A CREATE VIEW statement will be issued
    in that case, but the "DEFINER=..." clause will be removed to make it easier to re-import
    the database, e.g. in development environments.

    You can change this by setting 'view-definers' to one of:
        - 'no-definer': Dump view creation statement but remove DEFINER=... clause
        - 'keep-definer': Keep the DEFINER=... clause
    'no-definer' is the default if the 'view-definers' attribute is omitted.
  -->
  <table name="aggregated_data_view" dump="schema" view-definers="no-definer" />
</slimdump>

Conditions

You may want to select only some rows. In that case you can define a condition on a table.

<?xml version="1.0" ?>
<slimdump>
  <!-- Dump all users whose usernames begin with foo -->
  <table name="user" dump="full" condition="`username` LIKE 'foo%'" />
</slimdump>

In this example, only users with a username starting with 'foo' are exported. A simple way to export roughly a percentage of the users is this:

<?xml version="1.0" ?>
<slimdump>
  <!-- Dump every tenth user -->
  <table name="user" dump="full" condition="id % 10 = 0" />
</slimdump>

This will export only the users with an id divisible by ten without a remainder, i.e. roughly one tenth of the user rows (given that the ids are evenly distributed).

If you want to keep referential integrity, you might have to configure a more complex condition like this:

<?xml version="1.0" ?>
<slimdump>
  <!-- Dump all users whose usernames begin with foo -->
  <table name="user" dump="full" condition="id IN (SELECT author_id FROM blog_posts UNION SELECT author_id from comments)" />
</slimdump>

In this case, we export only users that are referenced in other tables, e.g. that are authors of blog posts or comments.

Dump modes

The following modes are supported for the dump attribute:

  • none - Table is not dumped at all. Makes sense if you use broad wildcards (see below) and then want to exclude a specific table.
  • schema - Only the table schema will be dumped.
  • noblob - Will dump a NULL value for BLOB fields.
  • full - The whole table will be dumped.
  • masked - Replaces all chars with "x". Mostly makes sense when applied on the column level, for example for email addresses or user names.
  • replace - When applied on a column element, it replaces the values in this column with either a static value or a nice dummy value generated by Faker. Useful e.g. to replace passwords with a static one, or to replace personal data like first and last names with realistic-sounding dummy data.

Wildcards

Of course, you can use wildcards for table names (* for multiple characters, ? for a single character).

Example:

<?xml version="1.0" ?>
<slimdump>
  <!-- Default: dump all tables -->
  <table name="*" dump="full" />

  <!-- Dump all tables beginning with "a_" as schema -->
  <table name="a_*" dump="schema" />

  <!-- Dump "big_blob_table" without blobs -->
  <table name="big_blob_table" dump="noblob" />

  <!-- Do not dump any tables ending with "_test" -->
  <table name="*_test" dump="none" />
</slimdump>

This is a valid configuration. If more than one instruction matches a specific table name, the most specific one is used. E.g. if you have definitions for blog_* and blog_author, the latter will be used for your author table, regardless of the order in which they appear in the config.
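
To illustrate that precedence rule with the blog_* / blog_author example from above, a config along these lines would dump the author table in full while every other matching table is reduced to its schema:

<?xml version="1.0" ?>
<slimdump>
  <!-- Schema only for all tables starting with "blog_"... -->
  <table name="blog_*" dump="schema" />

  <!-- ...but the more specific rule wins for "blog_author", so its data is dumped in full -->
  <table name="blog_author" dump="full" />
</slimdump>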

Replacements

You probably don't want real personal data from your database to end up in your dumps. Therefore, slimdump allows you to replace data at the column level - a great instrument not only for General Data Protection Regulation (GDPR) compliance.

The simplest replacement is a static one:

<?xml version="1.0" ?>
<slimdump>
    <table name="users" dump="full">
        <column name="password" dump="replace" replacement="test" />
    </table>
</slimdump>

This replaces the password values of all users with "test" (in clear text - but surely you have some sort of hashing in place, don't you?).

To achieve realistic-sounding dummy data, slimdump also supports basic Faker formatters. You can use every Faker formatter that needs no arguments, as well as modifiers such as unique (just separate the modifier with an object operator (->), as you would do in PHP). This is especially useful if your table has a unique constraint on a column containing personal information, like the email address.

<?xml version="1.0" ?>
<slimdump>
    <table name="users" dump="full">
        <column name="username" dump="replace" replacement="FAKER_word" />
        <column name="password" dump="replace" replacement="test" />
        <column name="firstname" dump="replace" replacement="FAKER_firstName" />
        <column name="lastname" dump="replace" replacement="FAKER_lastName" />
        <column name="email" dump="replace" replacement="FAKER_unique->safeEmail" />
    </table>
</slimdump>

Other databases

Currently, only MySQL is supported. Feel free to port it to the database of your choice.

Development

Building the Phar

  • Make sure Phive is installed
  • Run phive install to install tools, including Box
  • Run composer install --no-dev to make sure the vendor/ folder is up to date
  • Run tools/box compile to build slimdump.phar.
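
Put together (and assuming Phive installs Box into tools/, as the compile step above implies), the build boils down to:

phive install
composer install --no-dev
tools/box compile

# The resulting archive:
ls -lh slimdump.phar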

Tests

You can execute the PHPUnit tests by calling vendor/bin/phpunit.

Credits, Copyright and License

This tool was written by webfactory GmbH, Bonn, Germany. We're a software development agency with a focus on PHP (mostly Symfony). We're big fans of automation, DevOps, CI and CD, and of open source in general.

If you're a developer looking for new challenges, we'd like to hear from you! Otherwise, if this tool is useful for you, add a ⭐️.

Copyright 2014-2020 webfactory GmbH, Bonn. Code released under the MIT license.

Comments
  • Feature/add faker integration

    This PR adds faker integration to slimdump with two options:

    • All "official" faker properties which need no arguments like lastname or word (see https://github.com/fzaninotto/Faker#basic-usage ) when prefix FAKER_ is used (e.g. FAKER_word)
    • One additional (can be extended) option "FAKER_name" which returns firstname and lastname separated by space.

    Please comment before merging because I would like to add user documentation if everything else was discussed!

    Of course, it is still possible to replace a column value with static content:

    <?xml version="1.0" ?>
    <slimdump>
        <table name="users" dump="full" keep-auto-increment="false" condition="`signup_date` >= '2017-01-20 00:00:00'">
            <column name="password" dump="replace" replacement="XXXXX" />
            <column name="username" dump="replace" replacement="FAKER_name" />
        </table>
    </slimdump>
    
    opened by franziskahahn 14
  • add-faker-integration: removed 'name' because it is an existing faker option

    Hey guys, I saw that "name" is already (is it new, I never saw it before?! :D) an "official" faker formatter.

    So I deleted the self-written formatter. I decided to keep the configuration skeleton because I am sure we will need individual stuff in the further development of slimdump.

    opened by franziskahahn 9
  • Suppress progressbar output

    When I use slimdump to dump the database SQL and write the contents to a file, I cannot import the SQL file because of the output from the ProgressBar inside Dumper. If I use verbosity 'quiet' or the option --no-ansi, the contents will be completely empty.

    Is it possible to make the ProgressBar "listen" to the Output object to determine whether it may display its contents?

    Thanks in advance!

    opened by isneep 8
  • Add option to avoid dumping AUTO_INCREMENT values

    We're using SHOW CREATE TABLE to get the table creation SQL, but that includes the AUTO_INCREMENT value (at least for some storage engines).

    Depending on the use case, it would be nice to be able to get rid of that.

    opened by mpdude 8
  • Tidy up util.php

    util.php only contains unused functions plus the connect function, which is used exactly once, in the SlimdumpCommand.php file - and I think that is where the function belongs.

    opened by janopae 7
  • --buffer-size feature was lost during a merge

    In dumptask.php's dump() function, a new instance of Dumper is created with the output but without the buffer-size variable. This causes an issue with larger queries and timeouts.

    opened by Foepe 4
  • keep-auto-increment="false" removes not only the auto_increment value, but also the auto_increment property

    I've used keep-auto-increment="false" in schema dumps to keep things clean. My false expectation was that keep-auto-increment="false" would only remove the AUTO_INCREMENT=x part in the table creation DDL where the offset is set. But it also removes the AUTO_INCREMENT property from the column.

    Hence, if the AUTO_INCREMENT column has a unique constraint (which it has to have at least in MySQL 5.x), INSERTs on tables based on such schema dumps fail if they don't specify a value for the AUTO_INCREMENT column. This seems to be a subtle source of errors.

    Is this the intended behaviour? In this case, I'd like to see a warning in the documentation; otherwise I suggest changing the behaviour / adding a new option.

    opened by MalteWunsch 4
  • Postpone adding foreign key constraints to when all tables have been created

    Sometimes, I end up with dumps like this:

    CREATE TABLE A (
      `id` int(11) NOT NULL,
      `b_id` int(11) DEFAULT NULL,
      UNIQUE KEY (`b_id`),
      CONSTRAINT FOREIGN KEY (`b_id`) REFERENCES `B` (`id`)
    ) ENGINE=InnoDB;
    
    CREATE TABLE B (
      `id` int(11) NOT NULL
    ) ENGINE=InnoDB;
    

    As table B does not yet exist while creating table A, which references it in a constraint, the import of such dumps fails. At least for the time being, this cannot be fixed with a transaction.

    Hence, I propose to postpone adding the constraints until all tables have been created - as phpMyAdmin does. A working dump could look like this:

    CREATE TABLE A (
      `id` int(11) NOT NULL,
      `b_id` int(11) DEFAULT NULL,
      UNIQUE KEY (`b_id`)
    ) ENGINE=InnoDB;
    
    CREATE TABLE B (
      `id` int(11) NOT NULL
    ) ENGINE=InnoDB;
    
    ALTER TABLE `A` ADD CONSTRAINT FOREIGN KEY (`b_id`) REFERENCES `B` (`id`);
    
    opened by MalteWunsch 4
  • Apply `WHERE` condition when counting the number of rows to export

    In our project, it is not possible to count all rows in our biggest table (very bad performance) before exporting a small subset of this table.

    We need to apply the same condition as the one in the XML configuration file which for some reason is ignored when estimating the number of rows that will be exported.

    opened by marine-sivade-woonoz 4
  • Fix support for special characters for database Password

    There is already a pull request for this problem in the doctrine/dbal project: https://github.com/doctrine/dbal/pull/2504

    An example: bin/slimdump mysql://user:U/js!jds*js(epupi@host:port/datebase live2dev.xml

    returns: -bash: !jds*js: event not found

    This would not work because of Bash alone. So if we escape the special characters: bin/slimdump mysql://user:U/js!jds*js(epupi@host:port/datebase live2dev.xml

    we get: Database error: Malformed parameter "url".

    because the URL is not encoded. However, this problem is specific to phlough.

    HERE IS THE PART THAT CONCERNS SLIMDUMP: assuming phlough gives us back an encoded URL, our slimdump command would look like this: bin/slimdump mysql://user:U%2Fjs%21jds%2Ajs%28epupi@host:port/datebase live2dev.xml

    Then we get: [PDOException] SQLSTATE[28000] [1045] Access denied for user '...'@'localhost' (using password: YES)

    This is because, as described in the doctrine/dbal pull request mentioned above, the URL is not decoded. So we have to wait for that pull request to be merged.

    opened by xkons 4
  • Use OutputInterface::OUTPUT_RAW to reduce memory consumption

    If OutputInterface::OUTPUT_RAW is not added, the strings to be written are passed through output processors that do some fancy replacements/additions. That greatly increases the memory footprint, so we avoid it.

    opened by mpdude 4
  • Unable to deal with custom Doctrine types

    Assume your database contains a table with a definition like so:

    CREATE TABLE `test` (
      `some_col` int NOT NULL COMMENT '(DC2Type:some_type)'
    )
    

    In your application, you may have configured Doctrine with some special (custom) mapping type, thus the comment (DC2Type:some_type).

    When trying to dump this database, slimdump fails with an exception when trying to load the schema information, since the some_type type is not known to DBAL.

    It should not be necessary to configure this type either; in fact, we (probably) don't care about types that much, we just need to decide whether or not we're dealing with BLOB values.

    Probably we need to come up with a clever way of how/when to register placeholder types dynamically?

    opened by mpdude 0
  • Add Replacement element for conditional value replacements

    I found myself in a scenario where I only wanted to replace column values if a condition was met (e.g. replace email column value only if domain is not 'example.com').

    So I hacked together a Replacement element that you can use as child of a Column element. Example:

    <?xml version="1.0" ?>
    <column name="test" dump="replace">
        <replacement value="updated"/>
    </column>
    

    For testing the column value against a condition, the "strategy" and "constraint" attributes are available. In the example below, columns with value "baby yoda" would be replaced by "grogu":

    <?xml version="1.0" ?>
    <column name="test" dump="replace">
        <replacement strategy="eq" constraint="baby yoda" value="grogu"/>
    </column>
    

    At the moment, 3 comparison strategies are defined: "eq" (equals), "neq" (not equals), and "regex".

    A Column can have many Replacement elements. The first one where the column value matches the constraint defines the replacement value. In the example below, "pattern" would be replaced by "A", "something" by "B" and all other values by "fallback".

    <?xml version="1.0" ?>
    <column name="test" dump="replace">
        <replacement strategy="regex" constraint="/^pattern$/" value="A"/>
        <replacement strategy="eq" constraint="something" value="B"/>
        <replacement value="fallback"/>
    </column>
    

    A Column with "replacement" attribute still works like it used to and it overrides any Replacement elements.

    I am not sure about the API (Replacement elements with strategy and constraint attributes) nor if more people will find this useful, but it did the trick for me and I decided to share it here. Please let me know what you think.

    Best.

    opened by karkowg 1
  • Allow calling faker methods with arguments

    This PR adds the ability to call more complex faker methods.

    It follows an easy convention: FAKER_methodName:arguments. The arguments of the faker method can be separated by commas.

    Examples:

    <?xml version="1.0" ?>
    <slimdump>
        <table name="users" dump="full">
            <column name="username" dump="replace" replacement="FAKER_word" />
            <column name="password" dump="replace" replacement="test" />
            <column name="amount" dump="replace" replacement="FAKER_numberBetween:1,100" />
            <column name="lastname" dump="replace" replacement="FAKER_numerify:'Helo ###'" />
            <column name="email" dump="replace" replacement="FAKER_unique->randomDigitNot:0" />
        </table>
    </slimdump>
    

    Edit: I haven't changed the README yet, as I wasn't sure if you want this PR. If this gets merged, I can prepare a PR that modifies the documentation.

    opened by mpociot 1
  • Take care of dependencies between VIEW definitions

    Since v1.10, slimdump will also dump VIEW definitions.

    One possible issue with that is that one VIEW might be built on top of another one. To make it possible to load the dump into an empty database schema, we need to take these dependencies between VIEWs into consideration.

    One approach might be to build something like a dependency graph and dump the VIEWs in the right order. I'd suspect that there cannot be cycles for logical reasons.

    The other solution would be to have a look at how mysqldump solves this. In fact, it creates something like temporary placeholders for views and replaces those in a final phase:

    /* many lines removed for clarity ... */
    CREATE TABLE `test` (
      `id` int(11) NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    /*!50001 CREATE VIEW `view1` AS SELECT 1 AS `id`*/;
    
    /*!50001 CREATE VIEW `view2` AS SELECT 1 AS `id`*/;
    
    --
    -- Final view structure for view `view1`
    --
    
    /*!50001 DROP VIEW IF EXISTS `view1`*/;
    /*!50001 CREATE ALGORITHM=UNDEFINED */
    /*!50013 DEFINER=`...`@`...` SQL SECURITY DEFINER */
    /*!50001 VIEW `view1` AS select `view2`.`id` AS `id` from `view2` */;
    
    --
    -- Final view structure for view `view2`
    --
    
    /*!50001 DROP VIEW IF EXISTS `view2`*/;
    /*!50001 CREATE ALGORITHM=UNDEFINED */
    /*!50013 DEFINER=`...`@`...` SQL SECURITY DEFINER */
    /*!50001 VIEW `view2` AS select `test`.`id` AS `id` from `test` */;
    
    help wanted 
    opened by mpdude 0
  • Show version information

    It would be helpful if we could have some version information printed on startup or with a command line switch like --version.

    Not sure how to best obtain that information, though.

    opened by mpdude 0
  • Add a replacement map

    Use case:

    Given you have a table from which you select which records to export, but in the end result they all need to be set to other values.

    It would be awesome to have some kind of replacement table:

    a —> b
    c —> d
    

    And similar.

    opened by kaystrobach 0
Releases (latest: 1.14.1)
  • 1.14.1(Dec 21, 2022)

    Since the addition of the CSV output mode in the 1.14.0 release, TRIGGER definitions were (unintentionally) dumped before the data for a table. In previous releases, they followed the INSERT statements.

    Creating triggers before loading the data may be a problem when the trigger fires and not all tables have been created or locked at that time, so this change was reverted (#100).

    What's Changed

    • Update to actions/checkout@v3 by @mpdude in https://github.com/webfactory/slimdump/pull/97
    • Update PHP-CS-Fixer to v3.11 by @mpdude in https://github.com/webfactory/slimdump/pull/98
    • Update actions/cache to v3 by @mpdude in https://github.com/webfactory/slimdump/pull/99
    • Dump TRIGGER definitions after data by @mpdude in https://github.com/webfactory/slimdump/pull/100

    Full Changelog: https://github.com/webfactory/slimdump/compare/1.14.0...1.14.1

    Source code(tar.gz)
    Source code(zip)
    slimdump-1.14.1.phar(12.49 MB)
  • 1.14.0(Sep 16, 2022)

    The main new feature of this release is a new output mode for CSV files (#92).

    The new command option --output-csv=... enables CSV output mode, and must be given the path to a directory. The output mode will create .csv files in the given directory, named according to the dumped tables.
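
    A hypothetical invocation (the directory path, DSN and config file name below are made up) could look like this:

    mkdir -p ./csv-export
    slimdump --output-csv=./csv-export "mysql://dbuser:secret@localhost/shop_db" slimdump-config.xml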

    This output format does not support output redirection from STDOUT the way the default MySQL SQL format does.

    CSV files will be created only for tables that contain data. In other words, schema-type dumps will skip the table in question. Also, dumping views/triggers makes no sense for CSV files, so they will be skipped as well.

    How to best write binary (BLOB) data and/or NULL values in CSV files is probably highly controversial. For now, we'll just go with the 0x... hex literals supported by MySQL. Maybe having binary data in CSV files is not a sane idea in the first place.

    Also, the exact details of the CSV format might still change, so consider this feature as experimental.

    Full list of changes

    • Run GitHub Actions Workflows on Ubuntu 20.04 by @mpdude in https://github.com/webfactory/slimdump/pull/87
    • Bump dependencies, update test matrix by @mpdude in https://github.com/webfactory/slimdump/pull/91
    • Remove DumperTest by @mpdude in https://github.com/webfactory/slimdump/pull/90
    • Refactor output-format related code into a dedicated class + interface by @FabianSchmick in https://github.com/webfactory/slimdump/pull/88
    • Refactor the SlimdumpCommand, DumpTask and Dumper classes by @mpdude in https://github.com/webfactory/slimdump/pull/93
    • Remove the Dumper::keepalive() method by @mpdude in https://github.com/webfactory/slimdump/pull/94
    • Ensure PHP 7.2 compatibility for the composer.lock file by @mpdude in https://github.com/webfactory/slimdump/pull/95
    • Implement CSV output mode by @FabianSchmick in https://github.com/webfactory/slimdump/pull/92

    New Contributors

    • @FabianSchmick made their first contribution in https://github.com/webfactory/slimdump/pull/88

    Full Diff: https://github.com/webfactory/slimdump/compare/1.13.0...1.14.0

    Source code(tar.gz)
    Source code(zip)
    slimdump-1.14.0.phar(12.49 MB)
  • 1.13.0(Dec 20, 2021)

    The max_execution_time setting might cause rows to be missing from dumps on PHP < 7.4.13, but could also lead to dumps being aborted on newer PHP versions (#84).

    This release disables the max_execution_time timeout value for slimdump's database connection (#86).

    Additionally, a warning is printed (#85) when the number of rows dumped does not match the number expected (which is based on a COUNT(*) query).

    Source code(tar.gz)
    Source code(zip)
    slimdump-1.13.0.phar(12.11 MB)
  • 1.12.2(Apr 21, 2021)

  • 1.12.0(Sep 15, 2020)

    This release comes with the following changes:

    • Optional parameter --single-line-insert-statements: This option can speed up imports significantly if you dump tables with a very large number of rows.
    • KEYS are being DISABLED before inserting data and ENABLED afterwards.
    Source code(tar.gz)
    Source code(zip)
  • 1.11.0(Sep 7, 2020)

  • 1.10.0(Jun 3, 2020)

  • 1.9.0(Jan 2, 2020)

  • 1.6.1(Oct 11, 2017)

  • 1.5.0(Jun 27, 2016)

  • 1.4.1(Dec 9, 2015)

  • 1.4.0(Dec 4, 2015)

  • 1.3.0(Dec 4, 2015)

  • 1.2.0(Nov 28, 2015)

  • 1.1.0(Nov 27, 2015)
