Now also tests the LRU mechanism in a rudimentary fashion, which at the same
time makes the name a misnomer. These tests will be split into separate
programs so that they can be run individually.
With the addition of filter capabilities, the tee filter should work with
all sorts of routers that require at most the RCAP_TYPE_CONTIGUOUS_INPUT
capability.
Due to a recent discovery of the server's capability to process multiple
requests, the filter can safely send data from one service to another
without waiting for the earlier replies.
This also fixes a minor problem with the cloning of DCBs where the backend
DCBs could end up in the wrong thread's pool.
In order to test the LRU mechanism properly, the head and tail need to be
accessible from the outside, and the same applies to the current size and
the number of items in the cache. Without access to those values it is not
possible to verify that the guarantees are upheld.
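As a sketch of the idea (the names below are invented, not the actual accessors), the storage can expose its internal state read-only so that a test can verify the ordering and the size/count guarantees after each operation:

```cpp
#include <cstdint>

// Illustrative only; the real LRUStorage differs in both layout and naming.
class LRUStorage
{
public:
    struct Node { /* key, value, prev/next links */ };

    // Test-only, read-only accessors into the internal LRU state.
    const Node* head() const  { return m_head; }   // most recently used
    const Node* tail() const  { return m_tail; }   // least recently used
    uint64_t    size() const  { return m_size; }   // bytes currently stored
    uint64_t    items() const { return m_items; }  // number of entries

private:
    Node*    m_head = nullptr;
    Node*    m_tail = nullptr;
    uint64_t m_size = 0;
    uint64_t m_items = 0;
};
```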
It would seem that the likelihood of different threads accessing
the same items at the same time is greater if each thread continuously
loops across all items from beginning to end. That also ensures
that the head and tail are certain to be accessed. In addition, some
function names have been disambiguated.
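A sketch of that access pattern, using a stand-in storage interface rather than the real one:

```cpp
#include <string>
#include <vector>

// Stand-in for the storage under test; only get() matters for this pattern.
struct Storage
{
    virtual bool get(const std::string& key, std::string* value) = 0;
    virtual ~Storage() {}
};

// Every thread walks the full key set in order, round after round. The threads
// therefore keep touching the same items at roughly the same time, and the
// head and tail of the LRU list are guaranteed to be exercised.
void lru_walker(Storage& storage, const std::vector<std::string>& keys, int rounds)
{
    for (int r = 0; r < rounds; ++r)
    {
        for (const auto& key : keys)
        {
            std::string value;
            storage.get(key, &value);
        }
    }
}
```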
If the underlying storage does not support max_count and/or max_size
and it accordingly is decorated with LRUStorage, then the real storage
is created as single-threaded. Since LRUStorage does the locking, there
is no point in locking in the real storage as well.
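A minimal sketch of the decision, with invented type and flag names standing in for the real storage API:

```cpp
#include <cstdint>
#include <memory>

// Stand-ins; the real Storage/LRUStorage interfaces are richer than this.
struct Storage { virtual ~Storage() {} };

struct LRUStorage : Storage
{
    LRUStorage(std::unique_ptr<Storage> real, uint64_t max_count, uint64_t max_size)
        : m_real(std::move(real)), m_max_count(max_count), m_max_size(max_size) {}

    std::unique_ptr<Storage> m_real;  // the decorated, single-threaded storage
    uint64_t m_max_count;
    uint64_t m_max_size;
};

enum ThreadModel { SINGLE_THREADED, MULTI_THREADED };

std::unique_ptr<Storage> create_storage(bool supports_max_count,
                                        bool supports_max_size,
                                        uint64_t max_count,
                                        uint64_t max_size,
                                        std::unique_ptr<Storage> (*create_real)(ThreadModel))
{
    bool needs_lru = (max_count != 0 && !supports_max_count)
                  || (max_size != 0 && !supports_max_size);

    if (needs_lru)
    {
        // LRUStorage serializes access itself, so the real storage is created
        // single-threaded; locking in both layers would only add overhead.
        return std::make_unique<LRUStorage>(create_real(SINGLE_THREADED),
                                            max_count, max_size);
    }

    // The storage enforces its own limits and must also handle its own locking.
    return create_real(MULTI_THREADED);
}
```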
Here we create a number of threads and then randomly get, put and delete
values. The intent is to test that the locking behaviour of the storage
modules is implemented correctly.
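A sketch of such a stress test against a stand-in store; the real test exercises the storage module through its own API:

```cpp
#include <map>
#include <mutex>
#include <random>
#include <string>
#include <thread>
#include <vector>

// Stand-in for the storage under test, with the locking it is supposed to have.
class Store
{
public:
    void put(int key, const std::string& value)
    {
        std::lock_guard<std::mutex> guard(m_lock);
        m_items[key] = value;
    }

    bool get(int key, std::string* value) const
    {
        std::lock_guard<std::mutex> guard(m_lock);
        auto it = m_items.find(key);
        if (it == m_items.end())
        {
            return false;
        }
        *value = it->second;
        return true;
    }

    void del(int key)
    {
        std::lock_guard<std::mutex> guard(m_lock);
        m_items.erase(key);
    }

private:
    mutable std::mutex m_lock;
    std::map<int, std::string> m_items;
};

// Each thread issues a random mix of get/put/delete; the test passes if the
// store survives without crashes or corrupted state.
void hammer(Store& store, int iterations)
{
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> key(0, 99);
    std::uniform_int_distribution<int> op(0, 2);

    for (int i = 0; i < iterations; ++i)
    {
        int k = key(rng);
        switch (op(rng))
        {
        case 0: store.put(k, "value"); break;
        case 1: { std::string v; store.get(k, &v); } break;
        case 2: store.del(k); break;
        }
    }
}

int main()
{
    Store store;
    std::vector<std::thread> threads;

    for (int i = 0; i < 8; ++i)
    {
        threads.emplace_back(hammer, std::ref(store), 10000);
    }

    for (auto& t : threads)
    {
        t.join();
    }

    return 0;
}
```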
0 is now the default for all cache configuration parameters, and in every
case the meaning is the same: no limit. Internally, all limits except ttl
are now 64-bit for the sake of consistency.
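For illustration, a cache filter section could then look roughly as follows; the storage module name is assumed, and max_count, max_size and ttl are the parameters referred to above:

```ini
[Cache]
type=filter
module=cache
storage=storage_inmemory
# 0, the default, means "no limit" for each of these.
max_count=0
max_size=0
ttl=0
```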
Smoke test for detecting errors in key generation. The input file
is a number of .test files from the server combined into a single
file. We simply check that a unique key is generated for each
statement.
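A sketch of the check, with a stand-in hash in place of the cache's real key generation and one statement per input line as a simplification:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Stand-in; the real test feeds each statement to the cache's key generation.
uint64_t generate_key(const std::string& statement)
{
    return std::hash<std::string>()(statement);
}

// Returns the number of statements whose key collided with an earlier one.
int check_unique_keys(const std::vector<std::string>& statements)
{
    std::set<uint64_t> seen;
    int collisions = 0;

    for (const auto& stmt : statements)
    {
        if (!seen.insert(generate_key(stmt)).second)
        {
            std::cerr << "Duplicate key for: " << stmt << "\n";
            ++collisions;
        }
    }

    return collisions;
}

int main()
{
    std::vector<std::string> statements;
    std::string line;

    while (std::getline(std::cin, line))   // simplified: one statement per line
    {
        statements.push_back(line);
    }

    return check_unique_keys(statements) == 0 ? 0 : 1;
}
```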
Diagnostics can be requested at both the instance and the session level,
so both cases need to be handled explicitly.
In addition, some reinterpret_casts were changed into static_casts:
a reinterpret_cast is needed for the instance, which is a void**, but a
static_cast is sufficient for the session, which is a void*.
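To illustrate the difference with invented types (not the actual filter structs):

```cpp
#include <cstdio>

struct Instance { const char* name; };
struct Session  { int id; };

// Hypothetical diagnostics hook: the instance arrives as void**, the session
// as void*.
void diagnostics(void** instance, void* session)
{
    // void** to Instance** is not a void*-to-object-pointer conversion, so
    // static_cast does not apply; reinterpret_cast is required.
    Instance* inst = *reinterpret_cast<Instance**>(instance);

    // void* to Session* is exactly what static_cast is meant for.
    Session* ses = static_cast<Session*>(session);

    std::printf("instance=%s, session=%d\n", inst->name, ses ? ses->id : -1);
}

int main()
{
    Instance inst = { "cache" };
    Session ses = { 42 };

    void* handle = &inst;          // the instance is stored behind a void*
    diagnostics(&handle, &ses);
    return 0;
}
```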
Now everything needed for cleanly transferring control between the
C filter API of MaxScale and a C++ filter implementation is handled
automatically. None of the earlier boilerplate code is needed.
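A minimal sketch of the technique, not the actual MaxScale template: static trampoline functions fill in the C callback table and forward to the C++ classes, so a new filter only implements the classes.

```cpp
// Stand-in for the C filter object table; the real table in MaxScale has more
// entry points and different signatures.
struct FILTER_API
{
    void* (*createInstance)(const char* name);
    void* (*newSession)(void* instance);
    int   (*routeQuery)(void* session, void* packet);
};

template <class FilterType, class SessionType>
struct FilterApi
{
    static void* createInstance(const char* name)
    {
        return FilterType::create(name);   // FilterType provides create()
    }

    static void* newSession(void* instance)
    {
        return static_cast<FilterType*>(instance)->newSession();
    }

    static int routeQuery(void* session, void* packet)
    {
        return static_cast<SessionType*>(session)->routeQuery(packet);
    }

    // The table handed to the C API; every slot forwards to the C++ side.
    static FILTER_API s_api;
};

template <class FilterType, class SessionType>
FILTER_API FilterApi<FilterType, SessionType>::s_api =
{
    &FilterApi<FilterType, SessionType>::createInstance,
    &FilterApi<FilterType, SessionType>::newSession,
    &FilterApi<FilterType, SessionType>::routeQuery,
};
```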
cache_storage_api.h contains pure C declarations, while
cache_storage_api.hh contains C++ declarations. Functions dealing
with CACHE_KEY have been moved to these headers.
Initial POC implementation that is capable of outputting the
rules of the cache.
In order to do that conveniently, the json object is retained for
the lifetime of the CACHE_RULES object.
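A sketch of the idea, with an invented struct layout and function names; only the jansson calls (json_dumps, json_decref) are the real library API:

```cpp
#include <jansson.h>

// The parsed JSON document is kept alongside the compiled rules so that the
// rules can later be printed without having to re-serialize them.
struct CACHE_RULES
{
    json_t* root;                 // retained for the lifetime of the object
    /* compiled rule structures ... */
};

char* cache_rules_as_string(const CACHE_RULES* rules)
{
    return json_dumps(rules->root, JSON_INDENT(2));   // caller frees the string
}

void cache_rules_free(CACHE_RULES* rules)
{
    if (rules)
    {
        json_decref(rules->root); // released together with the rules object
        delete rules;
    }
}
```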
MXS-848 (partially). The QLA-filter now has additional options
to control the printing.
1. "append"
This toggles append mode, where the filter opens the log files in
update mode (if the file already exists) and only adds text to the end.
2. "print_service"
This toggles writing the service name to each row. Mostly useful
with the unified_file setting.
3. "print_session"
This toggles writing the session number to each row. Mostly useful
with the unified_file setting.
Also, the filter now writes a header to the beginning of the file
when creating it.
The printing has been separated into its own helper function, in case
more fine-grained control is added in the future.
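For example, a filter section using the new settings might look as follows; how exactly they are specified (here assumed to be via the options list) may differ from the actual syntax:

```ini
[QLA]
type=filter
module=qlafilter
filebase=/var/log/maxscale/qla/query
options=append,print_service,print_session
```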
The code prevented scaling by imposing global spinlocks around the DCBs and
SESSIONs. Removing the global list means that a thread-local list must be
taken into use to replace it.
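A sketch of the replacement pattern; the DCB type and function names are invented:

```cpp
#include <list>

// Stand-in for a descriptor control block.
struct DCB
{
    int fd;
};

// Every worker thread owns its own list, so adding and removing entries needs
// no global spinlock: only the owning thread ever touches the list. Anything
// that must visit all DCBs now has to iterate each thread's list on that
// thread instead of walking one global list.
thread_local std::list<DCB*> this_thread_dcbs;

void register_dcb(DCB* dcb)
{
    this_thread_dcbs.push_back(dcb);
}

void unregister_dcb(DCB* dcb)
{
    this_thread_dcbs.remove(dcb);
}
```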
Unnecessary methods were also removed from CachePT and CacheMT,
as it does not make sense to create more than a single instance
of those per filter instance. Consequently there is no need for
them to be able to use an existing StorageFactory (and CacheRules).