
...

List of TMs

Purpose: Returns a JSON list of TMs
Request: GET /%service%/
Params: -

Returns the list of open TMs followed by the list of available TMs (excluding open ones) in the app.

Code Block
languagejs
titleResponse
collapsetrue
Response example:
{
    "Open": [
        {
            "name": "mem2"
        }
    ],
    "Available on disk": [
        {
            "name": "mem_internal_format"
        },
        {
            "name": "mem1"
        },
        {
            "name": "newBtree3"
        },
        {
            "name": "newBtree3_cloned"
        }
    ]
}

Open - the TM is in RAM; Available on disk - the TM is not yet loaded from disk.
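A minimal sketch of calling this endpoint from Python (the base URL and the use of the requests library are assumptions, not part of the API description):

Code Block
languagepy
titleList TMs (sketch)
collapsetrue
import requests

BASE_URL = "http://localhost:4080/t5memory"  # assumed local t5memory address

resp = requests.get(f"{BASE_URL}/", timeout=30)
resp.raise_for_status()
tms = resp.json()

# "Open" - TM is in RAM; "Available on disk" - TM is not yet loaded from disk
for tm in tms.get("Open", []):
    print("open:", tm["name"])
for tm in tms.get("Available on disk", []):
    print("on disk:", tm["name"])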




Create TM

Purpose: Creates a TM with the provided name (.TMD and .TMI files in the /MEM/ folder)
Request: POST /%service%/%tm_name%/
Params: Required: name, sourceLang


Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "name": "examle_tm",          // this name would be used as the filename for the .TMD and .TMI files
    "sourceLang": "bg-BG",        // should match a lang in languages.xml
    "data": "base64_encoded_archive_see_import_in_internal_format",
    "loggingThreshold": 0         // optional
}
This endpoint can work in two ways: creating a new TM (then sourceLang is required and data can be skipped), or importing an archived .tm (then sourceLang can be skipped, but data is required). It's possible to add memDescription at this stage, but this should be explored more if needed.

Response example:

Success:
{
    "name": "examle_tm"
}
TM already exists: 
{
    "ReturnValue": 7272,
    "ErrorMsg": "::ERROR_MEM_NAME_EXISTS:: TM with this name already exists: examle_tm1; res = 0"
}
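A minimal sketch of the two request modes (new empty TM vs. import of an archived .tm), assuming the requests library and the local base URL used elsewhere on this page; the path follows the Request line above:

Code Block
languagepy
titleCreate TM (sketch)
collapsetrue
import requests

BASE_URL = "http://localhost:4080/t5memory"  # assumed local t5memory address
TM_NAME = "examle_tm"

# Mode 1: create a new empty TM - sourceLang is required, data can be skipped
payload = {
    "name": TM_NAME,          # used as the filename for the .TMD and .TMI files
    "sourceLang": "bg-BG",    # must match a lang in languages.xml
}

# Mode 2: import an archived .tm - data is required, sourceLang can be skipped
# payload = {"name": TM_NAME, "data": base64_encoded_tm_archive}

resp = requests.post(f"{BASE_URL}/{TM_NAME}/", json=payload, timeout=60)
print(resp.status_code, resp.json())  # success returns {"name": "examle_tm"}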



Create/Import TM in internal format

Purpose: Imports and unpacks a base64-encoded archive of .TMD, .TMI, .MEM (in pre-0.5.x versions) files, and renames it to the provided name
Request: POST /%service%/
Params:

{
    "name": "examle_tm",
    "sourceLang": "bg-BG",
    "data": "base64EncodedArchive"
}

Do not import TMs created in another version of t5memory. Starting from 0.5.x, the .TMD and .TMI files have the t5memory version they were created with in the file header, and a different middle version (0.5.x) or global version (0.5.x) would be reported as a
version mismatch. Instead, export a TMX in the corresponding version, then create a new empty TM and import the TMX in the new version.

This would create example_tm.TMD (data file) and example_tm.TMI (index file) in the MEM folder.
If "data" is provided, no "sourceLang" is required and vice versa. The base64 data should be a base64-encoded .tm file (which is just an archive that contains the .tmd and .tmi files).
If there is no "data", a new TM would be created; "sourceLang" should be provided and should match a lang in languages.xml.
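A sketch of building the "data" field in Python, assuming the .tm archive is a plain zip of the .tmd and .tmi files (an assumption based on the ZIP header visible at the start of the example data below):

Code Block
languagepy
titleBuild base64 "data" field (sketch)
collapsetrue
import base64
import zipfile

tm_name = "examle_tm"

# Pack the data and index files into a .tm archive (assumed to be a plain zip)
with zipfile.ZipFile(f"{tm_name}.tm", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write(f"{tm_name}.TMD")
    archive.write(f"{tm_name}.TMI")

# Base64-encode the archive for the "data" field of the request body
with open(f"{tm_name}.tm", "rb") as f:
    data_b64 = base64.b64encode(f.read()).decode("ascii")

body = {"name": tm_name, "data": data_b64}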

Starting from 0.6.52, import in internal format supports multipart/form-data, so you can send both the file and a json_body. In the json_body only the "name" attribute is required (sourceLang would be ignored anyway).

Send it the same way as the streaming TMX import. The JSON body should be pretty-formatted and placed in a part called json_body to be parsed correctly (see the Python sketch after the examples below).

Code Block
languagejs
titleResponse
collapsetrue
Request example:{ "name": "mem_internal_format", "data":"UEsDBBQACAgIAPmrhVQAAAAAAAAAAAAAAAAWAAQAT1RNXy1JRDE3NS0wXzJfNV9iLk1FTQEAAADtzqEKgDAQgOFTEHwNWZ5swrAO0SBys6wfWxFBDILv6uOI2WZQw33lr38GbvRIsm91baSiigzFEjuEb6XHEK\/myX0PXtXsyxS2OazwhLDWeVTaWgEFMMYYY\/9wAlBLBwhEWTaSXAAAAAAAAAAACAAAAAAAAFBLAwQUAAgICAD5q4VUAAAAAAAAAAAAAAAAFgAEAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUQBAAAA7d3Pa5JxHMDxz+Ns09phDAYdPfaDyQqWRcYjS9nGpoYZhBeZMCISW2v2g5o6VkqQONk\/0KVzh4IoKAovnboUo1PHbuuwU8dSn8c9Pk2yTbc53y+R5\/P9fL7P1wf5Ps9zep5vIOy3iMiSiPLn0yPrQ7In+rStTQARi\/bV9chEyHcxGPIKAGDnPonl21SsHNmUYNgfHZ70nnKNDo9ET0dHozFn2L+Ll9uxZPzazPz1mYQAAAAAAAAAAAAAAAAAAAAAAAAAANDtBkXRoj5Zk7OqSFZ9q35Vn6khNa6W2wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdBKbKHK4Em1omT5DxV6J7FrmkKFypBKt9FczvYaKtr+2DLpiqPTWVayGiq2uYjFUpC7VI6aElN8F8JPn\/QEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA2ANW7U0Ag9Iv60MnT4j8uLBZ\/X5+7dxn1ztX6Uy5AgAAAAAAAAAAAAAAAAAAgA6nL1qFjmc1rAO2IwNN9bL9u4ulVUeEfcQqQAfxSNtltshZaytB7jalZZ2a5KhFGT3Qr\/ztv1pkzAnP1v06+F7UxL22tRzSNf6aFq08MdoiY078\/znmkTZo5Qm2YdoOSLSyDdbaVUop\/Cj3cDm14I6\/uqf++nDUN1u4lS+k9MbKXL4QK72+775U+phOpp8sucdK728X5nK5hVT+weJqbTiHjMiNzWG1yNxWvI8rvxZ9cTfycj71NH1nsZgbf54uJlKryWy6GFlueBT6xHrzJRupDqkPXc9eyyduJmbLkf6\/mlYRDgQDPtO++3\/uYvsazANfYHx68vLEsSvOKedxqa\/hAGowD4Jh\/1X\/dH1X5sEBZpoH6E6\/AVBLBwj3gRyzjAIAAAAAAAAAAAEAAAAAAFBLAwQUAAgICAD5q4VUAAAAAAAAAAAAAAAAFgAEAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUkBAAAA7d3PS9NhHMDxz\/Y1nbp0zfw2Vw6CEjooJkkFPs9DZZaFCiIRHRxKoJUIFXk06iB0kS5Fvw6dhDp28FDgOSqiIKQ\/ICQMhIIuYVnJt2f7eK2M2Ps1xp49b8Y+fP6ArXegJy4iV0RiPx6BNAXyT6ysrKhXlLZ49PwlkKP9hw\/19XcKAOD3PZX42+PDP0+JWN9AT765u3P33vbm1nxbvj0\/3DLQ0y3r5uClsZGhC2eGxgUAAAAAAAAAAAAAAAAAAAAAAAAAgFKXllh0ahQbLHeInDb3Xc6NWrF77Jibcr22zC2YY6bVLNoX5qp97Pa5SbPc8ci8sqHpd1k7a2+ZN+6eFQAAAAAAAAAAAAAAAAAAAAAAAAAAAAD4YxISk8bVUyq6eVa905dtqtxO3fBlqyqnkrW+ZFVZCGp8aVDl9ZeELxlVjhRNsEWVa+UffAlVuf78rC\/1eoK20JfNqnzt3OhLnSp1DZW+bFJl\/467vqRUuVxV5UutKts\/JX2pUWUyXvie9OopE5U7QWEHSfWZXdmPvlSr8i75xJcqVT7fPOdLpSqj5+t9Sahy8UBhOxWqLEph6nJVHhZNvUFPXbS3MlXyYWFvgSon3xf2FldlpGiCmCoPiiYQVbLR3or\/ZT0tS04AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMC6K4t+ZSAtOWkKQpOSeTfnZty0m3CDrsu1uNB9swv2pZ21IlN23J6w1uZsuV0y82bOzJhpM2EGTZdpMaERAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPjrUmteK0RypXifid5n1tyX6j7+9\/vvUEsHCGo104BhAgAAAAAAAAAAAQAAAAAAUEsBAgAAFAAICAgA912FVERZNpJcAAAAAAgAABYABAAAAAAAAAAAALSBAAAAAE9UTV8tSUQxNzUtMF8yXzVfYi5NRU0BAAAAUEsBAgAAFAAICAgA\/F2FVPeBHLOMAgAAAAABABYABAAAAAAAAAAAALSBrAAAAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUQBAAAAUEsBAgAAFAAICAgA\/F2FVGo104BhAgAAAAABABYABAAAAAAAAAAAALSBiAMAAE9UTV8tSUQxNzUtMF8yXzVfYi5UTUkBAAAAUEsGBiwAAAAAAAAAHgMtAAAAAAAAAAAAAwAAAAAAAAADAAAAAAAAANgAAAAAAAAAOQYAAAAAAABQSwYHAAAAABEHAAAAAAAAAQAAAFBLBQYAAAAAAwADANgAAAA5BgAAAAA=" }
Response example:
{
"name": "examle_tm"
}

TM already exists:
{
  "ReturnValue": 65535,
  "ErrorMsg": ""
}
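A sketch of the multipart/form-data variant (0.6.52+), assuming the requests library and the local base URL; the name of the file part is an assumption, only the json_body part name is given on this page:

Code Block
languagepy
titleImport .tm via multipart/form-data (sketch)
collapsetrue
import json
import requests

BASE_URL = "http://localhost:4080/t5memory"  # assumed local t5memory address

# Only "name" is required in json_body; it should be pretty-formatted
json_body = json.dumps({"name": "mem_internal_format"}, indent=4)

with open("mem_internal_format.tm", "rb") as f:
    files = {
        "json_body": (None, json_body, "application/json"),
        # the part name "file" for the .tm archive is an assumption
        "file": ("mem_internal_format.tm", f, "application/octet-stream"),
    }
    resp = requests.post(f"{BASE_URL}/", files=files, timeout=300)

print(resp.status_code, resp.text)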



...

Testing TCP backlog options

Related to: issue T5TMS-281

The most up-to-date version for this ticket is 0.6.75, which has new flags and functionality to manipulate the TCP stack.
--http_listen_backlog: the default was 1024, in 0.6.75 it's 128. It is supposed to set the TCP backlog for the proxygen server, but in reality it seems to be just a hint, because requests over that limit are not dropped, except on timeout.

--add_premade_socket: used to create a socket and bind it to the proxygen server instead of just providing an IP address to the server so it opens the socket itself. Should be set to true to enable the log_tcp_backog_events and socket_backlog flags.

--log_tcp_backog_events: if set to true, allows testing the TCP backlog; for that it is also recommended to set --v=2 --t5loglevel=4. Requires add_premade_socket to be set to true. You would then see the behaviour of the TCP backlog in the logs.

--socket_backlog: similar to http_listen_backlog, but for the socket. Also requires add_premade_socket to be set to true.

--limit_num_of_active_requests: limits the number of requests that can be handled at the same time, in such a way that only n-1 of the n created worker threads can execute simultaneously; the last one would respond with a 503 error and a message that the service is busy. It makes sense to play with the number of worker threads and measure performance, for example try the service with 32 threads on 8 cores; in that case the service would handle 31 threads properly, but the 32nd would be answered with an error.

--debug_sleep_in_request_run: just sleeps n microseconds (1/1,000,000 s) in every request to artificially slow requests down.

To test the behaviour of the TCP backlog you can use the attached Python script via the command:
python(3) sendNrequests4.py -n 40
This would send 40 requests to the default local t5memory address. Feel free to edit the script if needed.

To test the TCP backlog you can set the following flags:
--add_premade_socket=1 --t5loglevel=4 --v=2 --debug_sleep_in_request_run=10000000 --log_tcp_backog_events=true --log_every_request_end=1 --log_every_request_start=1 --http_listen_backlog=4 --socket_backlog=2

and other flags as you wish

This would make every request at least 10 seconds longer; every TCP backlog action would be logged, as well as the start and end of request handler execution; proxygen's HTTP TCP backlog would be set to 4 (or set it to some other value), and the socket's backlog to 2.
add_premade_socket is required to set the socket's backlog and also to enable the TCP backlog event logs.


Another approach is to set the Docker container's environment, but it seems this is also just a hint and could be ignored by the OS.
In docker-compose.yaml:
  myt5m:
    image: translate5/t5memory:0.6.75
    sysctls:
      net.core.somaxconn: 1
      net.ipv4.tcp_max_syn_backlog: 1
      net.ipv4.tcp_abort_on_overflow: 1
    ports:
      - '4086:4086'


Code Block
languagepy
titlesendNRequests.py
collapsetrue
import asyncio
import aiohttp
import argparse
import time
import traceback

async def fetch(session, url, request_id):
    try:
        async with session.get(url, timeout=60) as response:
            text = await response.text()
            if response.status != 200:
                print(f"Request {request_id}: Error with status {response.status}. Response:")
                print(text)
            else:
                print(f"Request {request_id}: Success with status {response.status}")
            return response.status, text
    except Exception as e:
        print(f"Request {request_id}: Exception occurred: {e}")
        traceback.print_exc()  # Print the full traceback for the exception
        return e  # Return the exception for further handling

async def main(num_requests, url, delay):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for i in range(num_requests):
            tasks.append(asyncio.create_task(fetch(session, url, i)))
            if delay > 0:
                await asyncio.sleep(delay)
        results = await asyncio.gather(*tasks, return_exceptions=True)

    success_count = 0
    failure_count = 0
    for idx, result in enumerate(results):
        if isinstance(result, Exception):
            failure_count += 1
            print(f"Request {idx} raised an exception: {result}")
        else:
            status, text = result
            if status is None or status != 200:
                failure_count += 1
                print(f"Request {idx}: Failed. Status: {status}. Response: {text}")
            else:
                success_count += 1

    print(f"\nTotal successes: {success_count}")
    print(f"Total failures: {failure_count}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Send multiple HTTP GET requests concurrently with an optional delay between requests"
    )
    parser.add_argument("-n", "--num_requests", type=int, default=200,
                        help="Number of parallel requests to send (default: 200)")
    parser.add_argument("-u", "--url", type=str, default="http://127.0.0.1:4080/t5memory",
                        help="URL to send requests to (default: http://127.0.0.1:4080/t5memory)")
    parser.add_argument("-d", "--delay", type=float, default=0.1,
                        help="Delay in seconds between starting each request (default: 0.1)")
    args = parser.parse_args()
    
    asyncio.run(main(args.num_requests, args.url, args.delay))