One of the major benefits of this package is that it lets you quickly and easily generate datasets of reference problems and test your algorithms against existing datasets.

This allows simple reproducibility, especially when benchmarking a novel algorithm against commonly used reference datasets. A collection of some reference datasets can be found at

The following example shows how a repository for a research paper could be structured.

Research Paper Repository

The file structure can be as simple as the following.

├── dataset/
│   ├── problem1.txt
│   ├── problem2.txt
│   └── ...

The dataset/ directory contains all problem instances of the reference dataset, which are saved by one of the functions in
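Before the benchmark script can run, the dataset has to exist. The following is a minimal sketch of generating one random problem instance with NumPy. It assumes the common QMKP convention of a symmetric profit matrix whose diagonal holds the linear (single-item) profits; the chosen sizes and value ranges are arbitrary, and actually writing the instance to dataset/ would be done with one of the package's save functions.

```python
import numpy as np

# Reproducible random source for the sketch
rng = np.random.default_rng(seed=42)

num_items, num_knapsacks = 10, 3

# Symmetric profit matrix; the diagonal holds the linear profits
# (assumed convention -- check the package documentation for the exact format)
profits = rng.integers(1, 10, size=(num_items, num_items)).astype(float)
profits = (profits + profits.T) / 2

# Item weights and knapsack capacities
weights = rng.integers(1, 5, size=num_items)
capacities = rng.integers(5, 15, size=num_knapsacks)
```

The instance defined by these three arrays would then be saved to a file like dataset/problem1.txt using the package's save functionality.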

The file contains the implementation of your algorithm. It could look something like the following. Details on how to implement new algorithms can also be found on the Implementing a Novel Algorithm page.

import os

import numpy as np

import qmkpy


def my_algorithm(profits, weights, capacities):
    # Placeholder: implement your algorithm here. It needs to return a
    # binary assignment matrix of shape (num_items, num_knapsacks).
    assignments = np.zeros((len(weights), len(capacities)))
    return assignments


def main():
    results = []
    for root, dirnames, filenames in os.walk("dataset"):
        for problem in filenames:
            # Join the directory path so files in subdirectories load correctly
            filepath = os.path.join(root, problem)
            qmkp = qmkpy.QMKProblem.load(filepath, strategy="txt")
            qmkp.algorithm = my_algorithm
            solution, profit = qmkp.solve()
            results.append(profit)
    print(f"Average profit: {np.mean(results):.2f}")


if __name__ == "__main__":
    main()

This simple script solves all problems in the dataset using your algorithm and prints the average total profit at the end.
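To make the profit values reported by such a script concrete, here is a sketch of how the total profit of a QMKP solution is computed. The function `total_profit` is a hypothetical helper, not part of the package's API; it assumes the convention used above, i.e. `profits` is a symmetric N x N matrix with the linear profits on its diagonal, and `assignments` is a binary N x K item-to-knapsack matrix.

```python
import numpy as np


def total_profit(profits, assignments):
    """Total profit of a QMKP solution (hypothetical helper).

    For each knapsack, sum the linear profits of its items plus the
    joint profit of every pair of items placed in that knapsack.
    """
    total = 0.0
    for k in range(assignments.shape[1]):
        a = assignments[:, k]
        # a @ P @ a counts every pair profit twice and each diagonal entry once,
        # so averaging it with the plain diagonal sum yields the objective value
        quad = a @ profits @ a
        lin = a @ np.diag(profits)
        total += (quad + lin) / 2
    return total


# Two items, one knapsack: p_1 = 1, p_2 = 3, joint profit p_12 = 2
profits = np.array([[1.0, 2.0], [2.0, 3.0]])
assignments = np.array([[1.0], [1.0]])
print(total_profit(profits, assignments))  # 1 + 3 + 2 = 6.0
```

The averaging trick works because for a binary vector a, the quadratic form counts each diagonal term once and each off-diagonal pair twice.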