
Logging

Level | Mnemonic | Description
0 | DEVELOP | Can make the code work really slowly; should be used only when debugging specific places in the code, like binary search in files, etc.
1 | DEBUG | Logs values of variables. Temporary files (in the MEM and TMP subdirectories), like base64 encoded/decoded TMX files and archives for import/export, are not deleted.
2 | INFO | Logs top-level function entrances, return codes, etc. Default value.
3 | WARNING | Logged if some commented-out or hardcoded code is reached. Usually commented code here is replaced with new code, and if not, it is marked at ERROR level.
4 | ERROR | Errors: why and where something fails during parsing, search, etc.
5 | FATAL | You shouldn't reach this code, something is really wrong.
6 | TRANSACTION | Logs only things like begin/end of a request etc. There is no purpose in setting the threshold this high.

Other values are ignored. The set level stays the same until you change it in a new request or close the app. Logs are written into a file with a date/time name under ~/.OtmMemoryService/Logs, and ERROR/FATAL messages are duplicated in another log file with a FATAL suffix.

--v is a glog flag; for t5memory it makes sense to set it to 0 (the default) for production or to 2 for debugging. glog has its own log levels and flags, but we do not touch them (the defaults are fine); glog only knows INFO, WARNING and ERROR. t5memory has its own system, implemented before proxygen was introduced, with 6 log levels (0=DEVELOP, 1=DEBUG, 2=INFO, 3=WARNING, 4=ERROR, 5=FATAL, 6=TRANSACTION), which are streamed to the glog streams this way:
1. DEVELOP, DEBUG, INFO and TRANSACTION are streamed to glog's INFO stream
2. WARNING goes to WARNING
3. ERROR and FATAL go to the ERROR stream; additionally, when the first error log of a request happens, the cached info about the TM name and the body of the request that caused the error is flushed once per request. For subsequent errors in the same request you will not see the "...with body..." part in the log.

--v and the t5memory log level are two separate filters for the logs. When you set --v=0, glog lets only the ERROR stream through, so setting --t5loglevel to 0, 1, 2, 3 or 4 makes no difference; you can, however, set it to 5 to skip regular errors and keep only fatal errors (in that mode the TRANSACTION level is also downgraded to a plain info log). When you set --v=2 you disable the glog filter, so you get a lot of logs and can now control the verbosity with --t5loglevel; TRANSACTION then becomes the highest log level, so you can skip all info logs (with --t5loglevel=4) and still get transaction logs (which are not warnings or errors, usually just notes that request handling began or ended).
Shortly: for production just leave the defaults (--v=0, --t5loglevel=2 (INFO)); for debugging set --v=2 and --t5loglevel to 0, 1 or 2. Sometimes it makes sense to use 1 or 2, because with 0 t5memory prints a ton of logs and becomes slow.
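For example, a debugging run could be started as ./t5memory --v=2 --t5loglevel=1, using the same flags described above; adjust --t5loglevel between 0 and 2 depending on how much detail is needed.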


Logging can impact application speed a lot, especially during import or export. t5memory has two systems of logs: one from the glog library, configured at launch via command line parameters, and an internal one that filters logs by their level; the internal level can be set with every request that has a JSON body by adding the optional "loggingThreshold" parameter, or at startup with a flag.
"loggingThreshold": "2"
Like here:

POST http://localhost:4040/t5memory/example_tm/

{
    "sourceLang": "en",        // the source language is required for a new TM
    "name": "TM Name",
    "loggingThreshold": "2"
}
This would set the logging level to INFO just before the main work of the create-TM endpoint starts. DEVELOP can be used for really low-level debugging, but most of the time DEBUG is more useful, since DEVELOP produces a huge amount of logs. Transaction logs have the highest level of severity, but their severity also changes with the --v parameter: with --v=2 TRANSACTION is the highest log level (it is not used often, only to track things like the start or end of a request), while with the default --v=0 its severity is below WARNING.

Alternatively, in the t5memory.conf file (the config file is obsolete now) the line
logLevel=0
would set the log level to DEVELOP; this is applied only after restarting the service.

glog part: it has its own configuration via command line flags; you can see all possible flags for t5memory with the ./t5memory --help command.
The main parameter here is --v, which you can set to 2 or 0 (default).
By default it is set to 0; in that case everything that is not an error is left out of the logs, except startup messages.
The idea of --v=1 was to keep a log buffer per request and, in case of an error, show the previous logs for that request; it turned out not to be very useful, so it was never finished and does not work properly.
--v=2 basically disables that buffering.
In case of an ERROR or FATAL, the log entry includes info about the request that caused it (truncated to 3000 symbols, which is important for importTMX); if a second error occurs for the same request, the new log entries do not repeat that request info.

Some parameter combinations:
Default: --t5loglevel=2 (T5INFO), --v=0. You see only init messages and errors, with info about the requests that caused the errors.
Only --v=2 changed: t5loglevel stays at its default of 2 (T5INFO), so you see T5INFO, T5WARNING, T5ERROR, T5FATAL and T5TRANSACTION messages.
Debug production: --t5loglevel=1 (T5DEBUG), --v=2. Usually enough to get some info about issues; a lot of logs, but not as many as with DEVELOP.
Develop: --t5loglevel=0 (T5DEVELOP), --v=2. All possible logs, including entries into some functions and step-by-step mechanism logs (like how t5memory parses and hashes strings). Useful only when you can reproduce an issue, so you don't get lost in logs from normal behaviour, or when the service is crashing.

It's possible to change t5loglevel with individual requests, so for example for one specific update request you can set a lower log level and then set it back. It affects other threads too, but since every log line contains the thread info, it can still be a useful tool.
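A rough sketch of that pattern, assuming a local service on port 4040 and the Python requests package, and using the import endpoint described below, whose JSON body accepts the extra "loggingThreshold" field:

Code Block
languagepy
titlePer-request log level (sketch)
collapsetrue
import base64
import requests

BASE = "http://localhost:4040/t5memory"

with open("example.tmx", "rb") as f:
    tmx_b64 = base64.b64encode(f.read()).decode("ascii")

# lower the threshold to DEBUG for this import; the level stays in effect
# until another JSON-body request changes it or the service is restarted
requests.post(f"{BASE}/examle_tm/import",
              json={"tmxData": tmx_b64, "loggingThreshold": 1})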

The --v parameter does not seem very useful and could probably be refactored, since with --v=0 you don't get any messages with severity lower than T5ERROR, except during the init process.
The glog library could, however, be connected to some other libs in the proxygen package.

Here are all glog flags:
Flags from src/logging.cc:
    -alsologtoemail (log messages go to these email addresses in addition to
      logfiles) type: string default: ""
    -alsologtostderr (log messages go to stderr in addition to logfiles)
      type: bool default: false 
    -colorlogtostderr (color messages logged to stderr (if supported by
      terminal)) type: bool default: false
    -drop_log_memory (Drop in-memory buffers of log contents. Logs can grow
      very quickly and they are rarely read before they need to be evicted from
      memory. Instead, drop them from memory as soon as they are flushed to
      disk.) type: bool default: true
    -log_backtrace_at (Emit a backtrace when logging at file:linenum.)
      type: string default: ""
    -log_dir (If specified, logfiles are written into this directory instead of
      the default logging directory.) type: string default: ""
      currently: "/root/.t5memory/LOG/"
    -log_link (Put additional links to the log files in this directory)
      type: string default: ""
    -log_prefix (Prepend the log prefix to the start of each log line)
      type: bool default: true
    -logbuflevel (Buffer log messages logged at this level or lower (-1 means
      don't buffer; 0 means buffer INFO only; ...)) type: int32 default: 0
    -logbufsecs (Buffer log messages for at most this many seconds) type: int32
      default: 30
    -logemaillevel (Email log messages logged at this level or higher (0 means
      email all; 3 means email FATAL only; ...)) type: int32 default: 999
    -logfile_mode (Log file mode/permissions.) type: int32 default: 436
    -logmailer (Mailer used to send logging email) type: string
      default: "/bin/mail"
    -logtostderr (log messages go to stderr instead of logfiles) type: bool
      default: false
    -max_log_size (approx. maximum log file size (in MB). A value of 0 will be
      silently overridden to 1.) type: int32 default: 1800
    -minloglevel (Messages logged at a lower level than this don't actually get
      logged anywhere) type: int32 default: 0
    -stderrthreshold (log messages at or above this level are copied to stderr
      in addition to logfiles.  This flag obsoletes --alsologtostderr.)
      type: int32 default: 2
    -stop_logging_if_full_disk (Stop attempting to log to disk if the disk is
      full.) type: bool default: false

 

...



Values

%service% - name of the service (default: t5memory; can be changed in the t5memory.conf file)
%tm_name% - name of the Translation Memory

Example: http://localhost:4040/t5memory/examle_tm/fuzzysearch/?


Endpoints overview

Each entry lists the endpoint name, what it does, the HTTP method, the default endpoint with a /t5memory/ example, and whether the call is async.

1. Get the list of TMs: returns a JSON list of TMs. GET /%service%/ (example: /t5memory/).
2. Create TM: creates a TM with the provided name. POST /%service%/ (example: /t5memory/).
3. Create/Import TM in internal format: imports and unpacks a base64 encoded archive of .TMD, .TMI, .MEM files and renames it to the provided name. POST /%service%/ (example: /t5memory/).
4. Clone TM locally: makes a clone of an existing TM. POST /%service%/%tm_name%/clone (example: /t5memory/my+TM/clone; '+' is a placeholder for whitespace in the TM name, so there should be 'my TM.TMD' and 'my TM.TMI' files on the disk, and in pre-0.5.x also 'my TM.MEM'). The TM name IS case sensitive in the URL.
5. Reorganize TM: reorganizes the TM (replaces the TM with a new one and reimports the segments from the TMD). GET /%service%/%tm_name%/reorganize (example: /t5memory/my+other_tm/reorganize). Async in 0.5.x and up.
5. Delete TM: deletes the .TMD and .TMI files. DELETE /%service%/%tm_name%/ (example: /t5memory/%tm_name%/).
6. Import TMX into TM: imports the provided base64 encoded TMX file into the TM. POST /%service%/%tm_name%/import (example: /t5memory/%tm_name%/import). Async.
7. Export TMX from TM: creates a TMX from the TM, encoded in base64. GET /%service%/%tm_name%/ (example: /t5memory/%tm_name%/).
8. Export in internal format: creates and exports an archive with the .TMD and .TMI files of the TM. GET /%service%/%tm_name%/ (example: /t5memory/%tm_name%/).
9. Status of TM: returns the status / import status of the TM. GET /%service%/%tm_name%/status (example: /t5memory/%tm_name%/status).
10. Fuzzy search: returns entries/translations with small differences from the requested segment. POST /%service%/%tm_name%/fuzzysearch (example: /t5memory/%tm_name%/fuzzysearch).
11. Concordance search: returns entries/translations that contain the requested segment. POST /%service%/%tm_name%/concordancesearch (example: /t5memory/%tm_name%/concordancesearch).
12. Entry update: updates an entry/translation. POST /%service%/%tm_name%/entry (example: /t5memory/%tm_name%/entry).
13. Entry delete: deletes an entry/translation. POST /%service%/%tm_name%/entrydelete (example: /t5memory/%tm_name%/entrydelete).
14. Save all TMs: flushes all file buffers (TMD, TMI files) to the filesystem. GET /%service%_service/savetms (example: /t5memory_service/savetms).
15. Shutdown service: flushes all file buffers to the filesystem and shuts down the service. GET /%service%_service/shutdown (example: /t5memory_service/shutdown).
16. Test tag replacement call: for testing tag replacement. POST /%service%_service/tagreplacement (example: /t5memory_service/tagreplacement).
17. Resources: returns resources and service data. GET /%service%_service/resources (example: /t5memory_service/resources).
18. Import TMX from local file (in the removing-lookuptable git branch): similar to import TMX, but uses a local file path instead of a base64 encoded file. POST /%service%/%tm_name%/importlocal (example: /t5memory/%tm_name%/importlocal). Async.
19. Mass deletion of entries (from v0.6.0): like reorganize, but skips the import of segments for which the provided filters, combined with logical AND, return true. POST /%service%/%tm_name%/entriesdelete (example: /t5memory/tm1/entriesdelete). Async.
20. New concordance search (from v0.6.0): extended concordance search where you can search in different fields of the segment. POST /%service%/%tm_name%/search (example: /t5memory/tm1/search).
21. Get segment by internal key: extracts a segment by its location in the TMD file. POST /%service%/%tm_name%/getentry (example: /t5memory/tm1/getentry).
22. NEW Import TMX: imports TMX in non-base64 format. POST /%service%/%tm_name%/importtmx (example: /t5memory/tm1/importtmx). Async.
23. NEW import in internal format (tm): extracts the TM zip attached to the request (it should contain TMD and TMI files) into the MEM folder. POST /%service%/%tm_name%/ with "multipart/form-data" (example: /t5memory/tm1/ with "multipart/form-data").
24. NEW export TMX: exports the TMX as a file. Can be used to export a selected number of segments starting from a selected position. GET, optionally with a body, /%service%/%tm_name%/download.tmx (example: /t5memory/tm1/download.tmx).
25. NEW export TM (internal format): exports the TM archive. GET /%service%/%tm_name%/download.tm (example: /t5memory/tm1/download.tm).
26. Flush TM: if the TM is open, flushes it to the disk (implemented in 0.6.33). GET /%service%/%tm_name%/flush (example: /t5memory/tm1/flush).
27. Flags: returns all available command line flags (implemented in 0.6.47). Do not call it too often, because the gflags documentation says it is slow. Useful for collecting t5memory configuration data when debugging. GET /%service%_service/flags (example: /t5memory_service/flags).



Available endpoints

List of TMs

Purpose: Returns a JSON list of TMs
Request: GET /%service%/
Params:

-

Returns the list of open TMs and then the list of TMs available on disk (excluding the open ones) in the app.

Code Block
languagejs
titleResponse
collapsetrue
Response example:
{
    "Open": [
        {
            "name": "mem2"
        }
    ],
    "Available on disk": [
        {
            "name": "mem_internal_format"
        },
        {
            "name": "mem1"
        },
        {
            "name": "newBtree3"
        },
        {
            "name": "newBtree3_cloned"
        }
    ]
}

"Open" - the TM is loaded in RAM; "Available on disk" - the TM is not yet loaded from disk.
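A minimal request sketch, assuming the service runs locally on port 4040 (as in the example URL on this page) and that the Python requests package is installed:

Code Block
languagepy
titleList TMs (sketch)
collapsetrue
import requests

# GET /%service%/ returns the "Open" and "Available on disk" lists shown above
resp = requests.get("http://localhost:4040/t5memory/")
resp.raise_for_status()
tms = resp.json()
print([tm["name"] for tm in tms.get("Open", [])])
print([tm["name"] for tm in tms.get("Available on disk", [])])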




Create TM

Purpose: Creates a TM with the provided name (TMD and TMI files in the /MEM/ folder)
Request: POST /%service%/%tm_name%/
Params:

Required: name, sourceLang


Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "name": "examle_tm",      // this name is used as the filename for the .TMD and .TMI files
    "sourceLang": "bg-BG",    // required when no data is provided; should match a lang in languages.xml
    "data": "base64_encoded_archive_see_import_in_internal_format",   // optional, see Create/Import TM in internal format
    "loggingThreshold": 0     // optional
}
This endpoint works in two ways: creation of a new TM (then sourceLang is required and data can be skipped) or import of an archived .tm (then sourceLang can be skipped, but data is required). It is possible to add a memDescription at this stage, but this should be explored more if needed.

Response example:
Success:
{
    "name": "examle_tm"
}
TM already exists: 
{
    "ReturnValue": 7272,
    "ErrorMsg": "::ERROR_MEM_NAME_EXISTS:: TM with this name already exists: examle_tm1; res = 0"
}
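A minimal sketch of the create call, assuming a local service on port 4040 and the Python requests package; the endpoint and body follow the description above:

Code Block
languagepy
titleCreate TM (sketch)
collapsetrue
import requests

body = {
    "name": "examle_tm",      # used as the filename for the .TMD and .TMI files
    "sourceLang": "bg-BG",    # must match a language in languages.xml
}
resp = requests.post("http://localhost:4040/t5memory/examle_tm/", json=body)
print(resp.status_code, resp.json())   # success returns {"name": "examle_tm"}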



Create/Import TM in internal format

Purpose: Import and unpack a base64 encoded archive of the .TMD, .TMI (and, in pre-0.5.x versions, .MEM) files and rename it to the provided name
Request: POST /%service%/
Params:

{ "name": "examle_tm", "sourceLang": "bg-BG", "data": "base64EncodedArchive" }

Do not import TMs created in another version of t5memory. Starting from 0.5.x the TMD and TMI files carry, in the file header, the t5memory version they were created with, and a different minor version (0.5.x) or major version is reported as a version mismatch. Instead, export a TMX from the corresponding version, create a new empty TM and import the TMX in the new version.

This request creates example_tm.TMD (data file) and example_tm.TMI (index file) in the MEM folder.
If "data" is provided, no "sourceLang" is required and vice versa; the base64 data should be a base64 encoded .tm file, which is just an archive containing the .TMD and .TMI files.
If there is no "data", a new TM is created; "sourceLang" should be provided and should match a language in languages.xml.

In 0.6.20 and up the data can be sent as an attachment instead of base64 encoded. The Content-Type must then be set to "multipart/form-data" and the JSON (with the name of the new TM) must be provided under the json_data key (the part is looked up via part.headers.at("Content-Disposition").find("name=\"json_data\"")).

curl command example:
curl -X POST \
-H "Content-Type: application/json" \
-F "file=@/path/to/12434615271d732fvd7te3.tm;filename=myfile.tm" \
-F "json_data={\"name\": \"TM name\", \"sourceLang\": \"en-GB\"}" \
http://t5memory:4045/t5memory

Starting from 0.6.52 the import in internal format supports multipart/form-data as well, so you can send both the file and a JSON part; only the "name" attribute is required there (sourceLang is ignored anyway). Send it the same way as the streaming TMX import: the JSON body should be pretty-printed and placed in a part called json_body to be parsed correctly.
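A rough multipart sketch with the Python requests package (file paths are placeholders; whether the JSON part must be named json_data or json_body depends on the version notes above, so check against your build):

Code Block
languagepy
titleImport .tm as multipart (sketch)
collapsetrue
import json
import requests

parts = {
    # the .tm archive (zip containing the .TMD and .TMI files)
    "file": ("myfile.tm", open("/path/to/example.tm", "rb"), "application/octet-stream"),
    # pretty-printed JSON metadata part; rename the key to "json_body" for builds that expect it
    "json_data": (None, json.dumps({"name": "TM name"}, indent=4), "application/json"),
}
resp = requests.post("http://t5memory:4045/t5memory", files=parts)
print(resp.status_code, resp.text)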

Code Block
languagejs
titleResponse
collapsetrue
Request example:{ "name": "mem_internal_format", "data":"UEsDBBQACAgIAPmrhVQAAAAAAAAAAAAAAAAWAAQAT1RNXy1JRDE3NS0wXzJfNV9iLk1FTQEAAADtzqEKgDAQgOFTEHwNWZ5swrAO0SBys6wfWxFBDILv6uOI2WZQw33lr38GbvRIsm91baSiigzFEjuEb6XHEK\/myX0PXtXsyxS2OazwhLDWeVTaWgEFMMYYY\/9wAlBLBwhEWTaSXAAAAAAAAAAACAAAAAAAAFBLAwQUAAgICAD5q4VUAAAAAAAAAAAAAAAAFgAEAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUQBAAAA7d3Pa5JxHMDxz+Ns09phDAYdPfaDyQqWRcYjS9nGpoYZhBeZMCISW2v2g5o6VkqQONk\/0KVzh4IoKAovnboUo1PHbuuwU8dSn8c9Pk2yTbc53y+R5\/P9fL7P1wf5Ps9zep5vIOy3iMiSiPLn0yPrQ7In+rStTQARi\/bV9chEyHcxGPIKAGDnPonl21SsHNmUYNgfHZ70nnKNDo9ET0dHozFn2L+Ll9uxZPzazPz1mYQAAAAAAAAAAAAAAAAAAAAAAAAAANDtBkXRoj5Zk7OqSFZ9q35Vn6khNa6W2wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdBKbKHK4Em1omT5DxV6J7FrmkKFypBKt9FczvYaKtr+2DLpiqPTWVayGiq2uYjFUpC7VI6aElN8F8JPn\/QEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA2ANW7U0Ag9Iv60MnT4j8uLBZ\/X5+7dxn1ztX6Uy5AgAAAAAAAAAAAAAAAAAAgA6nL1qFjmc1rAO2IwNN9bL9u4ulVUeEfcQqQAfxSNtltshZaytB7jalZZ2a5KhFGT3Qr\/ztv1pkzAnP1v06+F7UxL22tRzSNf6aFq08MdoiY078\/znmkTZo5Qm2YdoOSLSyDdbaVUop\/Cj3cDm14I6\/uqf++nDUN1u4lS+k9MbKXL4QK72+775U+phOpp8sucdK728X5nK5hVT+weJqbTiHjMiNzWG1yNxWvI8rvxZ9cTfycj71NH1nsZgbf54uJlKryWy6GFlueBT6xHrzJRupDqkPXc9eyyduJmbLkf6\/mlYRDgQDPtO++3\/uYvsazANfYHx68vLEsSvOKedxqa\/hAGowD4Jh\/1X\/dH1X5sEBZpoH6E6\/AVBLBwj3gRyzjAIAAAAAAAAAAAEAAAAAAFBLAwQUAAgICAD5q4VUAAAAAAAAAAAAAAAAFgAEAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUkBAAAA7d3PS9NhHMDxz\/Y1nbp0zfw2Vw6CEjooJkkFPs9DZZaFCiIRHRxKoJUIFXk06iB0kS5Fvw6dhDp28FDgOSqiIKQ\/ICQMhIIuYVnJt2f7eK2M2Ps1xp49b8Y+fP6ArXegJy4iV0RiPx6BNAXyT6ysrKhXlLZ49PwlkKP9hw\/19XcKAOD3PZX42+PDP0+JWN9AT765u3P33vbm1nxbvj0\/3DLQ0y3r5uClsZGhC2eGxgUAAAAAAAAAAAAAAAAAAAAAAAAAgFKXllh0ahQbLHeInDb3Xc6NWrF77Jibcr22zC2YY6bVLNoX5qp97Pa5SbPc8ci8sqHpd1k7a2+ZN+6eFQAAAAAAAAAAAAAAAAAAAAAAAAAAAAD4YxISk8bVUyq6eVa905dtqtxO3fBlqyqnkrW+ZFVZCGp8aVDl9ZeELxlVjhRNsEWVa+UffAlVuf78rC\/1eoK20JfNqnzt3OhLnSp1DZW+bFJl\/467vqRUuVxV5UutKts\/JX2pUWUyXvie9OopE5U7QWEHSfWZXdmPvlSr8i75xJcqVT7fPOdLpSqj5+t9Sahy8UBhOxWqLEph6nJVHhZNvUFPXbS3MlXyYWFvgSon3xf2FldlpGiCmCoPiiYQVbLR3or\/ZT0tS04AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMC6K4t+ZSAtOWkKQpOSeTfnZty0m3CDrsu1uNB9swv2pZ21IlN23J6w1uZsuV0y82bOzJhpM2EGTZdpMaERAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPjrUmteK0RypXifid5n1tyX6j7+9\/vvUEsHCGo104BhAgAAAAAAAAAAAQAAAAAAUEsBAgAAFAAICAgA912FVERZNpJcAAAAAAgAABYABAAAAAAAAAAAALSBAAAAAE9UTV8tSUQxNzUtMF8yXzVfYi5NRU0BAAAAUEsBAgAAFAAICAgA\/F2FVPeBHLOMAgAAAAABABYABAAAAAAAAAAAALSBrAAAAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUQBAAAAUEsBAgAAFAAICAgA\/F2FVGo104BhAgAAAAABABYABAAAAAAAAAAAALSBiAMAAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUkBAAAAUEsGBiwAAAAAAAAAHgMtAAAAAAAAAAAAAwAAAAAAAAADAAAAAAAAANgAAAAAAAAAOQYAAAAAAABQSwYHAAAAABEHAAAAAAAAAQAAAFBLBQYAAAAAAwADANgAAAA5BgAAAAA=" }
Response example:
{
    "name": "examle_tm"
}

TM already exists:
{
  "ReturnValue": 65535,
  "ErrorMsg": ""
}
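A sketch of the base64 variant, assuming a local service on port 4040 and the Python requests package; the .tm file is read from disk and encoded on the fly:

Code Block
languagepy
titleImport .tm as base64 (sketch)
collapsetrue
import base64
import requests

with open("/path/to/example.tm", "rb") as f:            # .tm = archive with the .TMD and .TMI files
    data = base64.b64encode(f.read()).decode("ascii")

body = {"name": "mem_internal_format", "data": data}    # sourceLang can be omitted when data is given
resp = requests.post("http://localhost:4040/t5memory/", json=body)
print(resp.status_code, resp.json())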





Clone TM locally

Purpose: Makes a clone of an existing TM under the provided name
Request: POST /%service%/%tm_name%/clone
Params:

Required: newName (see the request example below)

Endpoint is sync (blocking).

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "newName": "examle_tm"    // when cloning, the cloned TM is renamed to this name (the source TM is given in the URL)
}

Response example:
Success: 
{
    "msg": "newBtree3_cloned2 was cloned successfully",
    "time": "5 ms"
}

 Failure: 
{
    "ReturnValue": -1,
    "ErrorMsg": "'dstTmdPath' = /home/or/.t5memory/MEM/newBtree3_cloned.TMD already exists; for request for mem newBtree3; with body = {\n    \"newName\": \"newBtree3_cloned\"\n}"
}
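A minimal sketch (local service on port 4040 and the Python requests package assumed), reusing the TM names from the examples above:

Code Block
languagepy
titleClone TM (sketch)
collapsetrue
import requests

resp = requests.post("http://localhost:4040/t5memory/newBtree3/clone",
                     json={"newName": "newBtree3_cloned"})
print(resp.status_code, resp.json())   # e.g. {"msg": "... was cloned successfully", "time": "..."}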




Flush TM 

Purpose: If the TM is open, flushes it to the disk
Request: GET /%service%/%tm_name%/flush
Params:

Endpoint is sync (blocking).

If the TM is not found on the disk, it returns 404.
If the TM is not open, it returns 400 with a message.
Otherwise t5memory requests the write pointer to the TM (so it waits until other requests working with the TM have finished) and then flushes it to the disk.
It can also return an error if the flush itself ran into an issue.
It does not open the TM if it is not open yet; instead it returns an error.
Code Block
languagejs
titleResponse
collapsetrue
Response example:
Success:  {
    "msg": "Mem test1 was flushed to the disk successfully"
}   
Failure:  
{
    "ReturnValue": -1,
    "ErrorMsg": "FlushMemRequestData::checkData -> tm is not found"
}
// or
{
"ReturnValue": -1,
"ErrorMsg": "FlushMemRequestData::checkData -> tm is not open"
}
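A minimal sketch (local service on port 4040 and the Python requests package assumed):

Code Block
languagepy
titleFlush TM (sketch)
collapsetrue
import requests

resp = requests.get("http://localhost:4040/t5memory/examle_tm/flush")
print(resp.status_code, resp.json())   # 404 if the TM is not on disk, 400 if it is not open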




Delete TM

Purpose: Deletes the .TMD, .TMI, .MEM files
Request: DELETE /%service%/%tm_name%/
Params:

-


Code Block
languagejs
titleResponse
collapsetrue
Response example:
success:
{
    "newBtree3_cloned2": "deleted"
},


Code Block
languagejs
titleResponse
collapsetrue
Response example:
failed:
{
    "newBtree3_cloned2": "not found"
}
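A minimal sketch (local service on port 4040 and the Python requests package assumed):

Code Block
languagepy
titleDelete TM (sketch)
collapsetrue
import requests

resp = requests.delete("http://localhost:4040/t5memory/newBtree3_cloned2/")
print(resp.status_code, resp.json())   # {"newBtree3_cloned2": "deleted"} or {"newBtree3_cloned2": "not found"}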



Import provided base64 encoded TMX file into TM

Purpose: Imports the provided base64 encoded TMX file into the TM. Starts another thread for the import; to check the import status, use the status call
Request: POST /%service%/%tm_name%/import
Params:

{ "tmxData": "base64EncodedTmxFile" }

  • additional:
    "framingTags":
       "saveAll" - default behaviour, do nothing
       "skipAll" - skip all enclosing tags, including standalone tags
       "skipPaired" - skip only paired enclosing tags

The TM must exist.
The call is async, so check the progress using the status endpoint, like with reorganize in 0.5.x and up.

Handling when the framing tag situation differs between source and target (for skipAll or skipPaired):

If the framing tag situation is the same in source and target, both sides are treated as described above.

If framing tags exist only in the source, they are still treated as described above.

If they exist only in the target, nothing is removed.

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
   "framingTags": "skipAll",    // optional; one of "skipAll", "skipPaired", "saveAll"
   "tmxData":   "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPHRteCB2ZXJzaW9uPSIxLjQiPgogIDxoZWFkZXIgY3JlYXRpb250b29sPSJTREwgTGFuZ3VhZ2UgUGxhdGZvcm0iIGNyZWF0aW9udG9vbHZlcnNpb249IjguMCIgby10bWY9IlNETCBUTTggRm9ybWF0IiBkYXRhdHlwZT0ieG1sIiBzZWd0eXBlPSJzZW50ZW5jZSIgYWRtaW5sYW5nPSJlbi1HQiIgc3JjbGFuZz0iYmctQkciIGNyZWF0aW9uZGF0ZT0iMjAxNTA4MjFUMDkyNjE0WiIgY3JlYXRpb25pZD0idGVzdCIvPgogIDxib2R5PgoJPHR1IGNyZWF0aW9uZGF0ZT0iMjAxODAyMTZUMTU1MTA1WiIgY3JlYXRpb25pZD0iREVTS1RPUC1SNTlCT0tCXFBDMiIgY2hhbmdlZGF0ZT0iMjAxODAyMTZUMTU1MTA4WiIgY2hhbmdlaWQ9IkRFU0tUT1AtUjU5Qk9LQlxQQzIiIGxhc3R1c2FnZWRhdGU9IjIwMTgwMjE2VDE2MTMwNVoiIHVzYWdlY291bnQ9IjEiPgogICAgICA8dHV2IHhtbDpsYW5nPSJiZy1CRyI+CiAgICAgICAgPHNlZz5UaGUgPHBoIC8+IGVuZDwvc2VnPgogICAgICA8L3R1dj4KICAgICAgPHR1diB4bWw6bGFuZz0iZW4tR0IiPgogICAgICAgIDxzZWc+RXRoIDxwaCAvPiBkbmU8L3NlZz4KICAgICAgPC90dXY+CiAgICA8L3R1PgogIDwvYm9keT4KPC90bXg+Cg=="
}
Response example: an error in case of error (from v0_2_15);
{ "%tm_name%": "" } in case of success.
Check the status of the import using the status call.
The TMX import can be interrupted in case of invalid XML or the TM reaching its limit. In both cases check the status request for info about the position in the TMX file where the import was interrupted.
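A sketch of the full flow, assuming a local service on port 4040 and the Python requests package; the status endpoint URL comes from the overview above, but the exact field names in the status response are an assumption, so inspect the real payload:

Code Block
languagepy
titleImport TMX and poll status (sketch)
collapsetrue
import base64
import time
import requests

BASE = "http://localhost:4040/t5memory"
TM = "examle_tm"

with open("example.tmx", "rb") as f:
    tmx_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(f"{BASE}/{TM}/import",
                     json={"tmxData": tmx_b64, "framingTags": "saveAll"})
resp.raise_for_status()

# the import runs in another thread, so poll GET /%service%/%tm_name%/status
while True:
    status = requests.get(f"{BASE}/{TM}/status").json()
    print(status)                              # field names are version dependent; check the actual response
    if "import" not in str(status).lower():    # crude completion check, adjust to the real status fields
        break
    time.sleep(1)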



   




Testing TCP backlog options

Related to issue T5TMS-281

The most up-to-date version for this ticket is 0.6.75, which adds new flags and functionality to manipulate the TCP stack.
--http_listen_backlog: default was 1024, in 0.6.75 it is 128. Supposed to set the TCP backlog for the proxygen server, but in reality it seems to be just a hint, because requests over that limit are not dropped, except by timeout.

--add_premade_socket: creates a socket and binds it to the proxygen server instead of just providing an IP address to the server to open the socket internally. Should be set to true to enable the log_tcp_backog_events and socket_backlog flags.

--log_tcp_backog_events: if set to true, allows testing the TCP backlog; for that it is also recommended to set --v=2 --t5loglevel=4. Requires add_premade_socket to be set to true. You will then see the TCP backlog behaviour in the logs.

--socket_backlog: similar to http_listen_backlog, but for the socket. Also requires add_premade_socket to be set to true.

--limit_num_of_active_requests: limits the number of requests that can be handled at the same time, in the sense that only n-1 of the n created worker threads can execute simultaneously; the last one responds with a 503 error and a message that the service is busy. It makes sense to play with the number of worker threads and measure performance, for example to try the service with 32 threads on 8 cores: the service then handles 31 threads properly, but the 32nd is answered with an error.

--debug_sleep_in_request_run: sleeps n microseconds (1/1000000 s) in every request to artificially slow requests down.

To test the behaviour of the TCP backlog you can use the attached Python script via the command:
python(3) sendNrequests4.py -n 40
This sends 40 requests to the default local t5memory address.
Feel free to edit the script if needed.

To test the TCP backlog you can set --add_premade_socket=1 --t5loglevel=4 --v=2 --debug_sleep_in_request_run=10000000 --log_tcp_backog_events=true --log_every_request_end=1 --log_every_request_start=1 --http_listen_backlog=4 --socket_backlog=2, plus other flags as you wish.

This makes every request at least 10 seconds longer, logs every TCP backlog action as well as the start and end of request handler execution, sets proxygen's HTTP TCP backlog to 4 (or set it to some other value) and the socket's backlog to 2.
add_premade_socket is required for setting the socket backlog and for the TCP backlog event logs.


Another approach is to set the Docker container's sysctls, but this also seems to be just a hint and can be ignored by the OS. In docker-compose.yaml:
  myt5m:
    image: translate5/t5memory:0.6.75
    sysctls:
      net.core.somaxconn: 1
      net.ipv4.tcp_max_syn_backlog: 1
      net.ipv4.tcp_abort_on_overflow: 1
    ports:
      - '4086:4086'


Code Block
languagepy
titlesendNRequests.py
collapsetrue
import asyncio
import aiohttp
import argparse
import time
import traceback

async def fetch(session, url, request_id):
    try:
        async with session.get(url, timeout=60) as response:
            text = await response.text()
            if response.status != 200:
                print(f"Request {request_id}: Error with status {response.status}. Response:")
                print(text)
            else:
                print(f"Request {request_id}: Success with status {response.status}")
            return response.status, text
    except Exception as e:
        print(f"Request {request_id}: Exception occurred: {e}")
        traceback.print_exc()  # Print the full traceback for the exception
        return e  # Return the exception for further handling

async def main(num_requests, url, delay):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for i in range(num_requests):
            tasks.append(asyncio.create_task(fetch(session, url, i)))
            if delay > 0:
                await asyncio.sleep(delay)
        results = await asyncio.gather(*tasks, return_exceptions=True)

    success_count = 0
    failure_count = 0
    for idx, result in enumerate(results):
        if isinstance(result, Exception):
            failure_count += 1
            print(f"Request {idx} raised an exception: {result}")
        else:
            status, text = result
            if status is None or status != 200:
                failure_count += 1
                print(f"Request {idx}: Failed. Status: {status}. Response: {text}")
            else:
                success_count += 1

    print(f"\nTotal successes: {success_count}")
    print(f"Total failures: {failure_count}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Send multiple HTTP GET requests concurrently with an optional delay between requests"
    )
    parser.add_argument("-n", "--num_requests", type=int, default=200,
                        help="Number of parallel requests to send (default: 200)")
    parser.add_argument("-u", "--url", type=str, default="http://127.0.0.1:4080/t5memory",
                        help="URL to send requests to (default: http://127.0.0.1:4080/t5memory)")
    parser.add_argument("-d", "--delay", type=float, default=0.1,
                        help="Delay in seconds between starting each request (default: 0.1)")
    args = parser.parse_args()
    
    asyncio.run(main(args.num_requests, args.url, args.delay))