...

Endpoints overview

Each entry: name - description - method and default endpoint - example call. Async endpoints are marked with (async).

1. Get the list of TMs - Returns JSON list of TMs - GET /%service%/ - /t5memory/
2. Create TM - Creates a TM with the provided name - POST /%service%/ - /t5memory/
3. Create/Import TM in internal format - Imports and unpacks a base64-encoded archive of .TMD, .TMI, .MEM files and renames it to the provided name - POST /%service%/ - /t5memory/
4. Clone TM locally - Makes a clone of an existing TM - POST /%service%/%tm_name%/clone - /t5memory/my+TM/clone
   ('+' is a placeholder for whitespace in the TM name, so there should be 'my TM.TMD' and 'my TM.TMI' (and, in pre-0.5.x, also 'my TM.MEM') files on disk; the TM name IS case sensitive in the URL)
5. Reorganize TM (async in 0.5.x and up) - Reorganizes the TM (replaces it with a new one and reimports the segments from the TMD) - GET /%service%/%tm_name%/reorganize - /t5memory/my+other_tm/reorganize
6. Delete TM - Deletes the .TMD and .TMI files - DELETE /%service%/%tm_name%/ - /t5memory/%tm_name%/
7. Import TMX into TM (async) - Imports the provided base64-encoded TMX file into the TM - POST /%service%/%tm_name%/import - /t5memory/%tm_name%/import
8. Export TMX from TM - Creates a TMX from the TM, encoded in base64 - GET /%service%/%tm_name%/ - /t5memory/%tm_name%/
9. Export in internal format - Creates and exports an archive with the .TMD and .TMI files of the TM - GET /%service%/%tm_name%/ - /t5memory/%tm_name%/
10. Status of TM - Returns the status/import status of the TM - GET /%service%/%tm_name%/status - /t5memory/%tm_name%/status
11. Fuzzy search - Returns entries/translations with small differences from the requested one - POST /%service%/%tm_name%/fuzzysearch - /t5memory/%tm_name%/fuzzysearch
12. Concordance search - Returns entries/translations that contain the requested segment - POST /%service%/%tm_name%/concordancesearch - /t5memory/%tm_name%/concordancesearch
13. Entry update - Updates an entry/translation - POST /%service%/%tm_name%/entry - /t5memory/%tm_name%/entry
14. Entry delete - Deletes an entry/translation - POST /%service%/%tm_name%/entrydelete - /t5memory/%tm_name%/entrydelete
15. Save all TMs - Flushes all filebuffers (TMD, TMI files) to the filesystem - GET /%service%_service/savetms - /t5memory_service/savetms
16. Shutdown service - Flushes all filebuffers to the filesystem and shuts down the service - GET /%service%_service/shutdown - /t5memory_service/shutdown
17. Test tag replacement call - For testing tag replacement - POST /%service%_service/tagreplacement - /t5memory_service/tagreplacement
18. Resources - Returns resources and service data - GET /%service%_service/resources - /t5memory_service/resources
19. Import TMX from local file (in the removing-lookuptable git branch) (async) - Similar to Import TMX, but uses a local path to the file instead of a base64-encoded file - POST /%service%/%tm_name%/importlocal - /t5memory/%tm_name%/importlocal
20. Mass deletion of entries (from v0.6.0) (async) - Like reorganize, but skips the import of segments for which the provided filters, combined with logical AND, return true - POST /%service%/%tm_name%/entriesdelete - /t5memory/tm1/entriesdelete
21. New concordance search (from v0.6.0) - Extended concordance search where you can search in different fields of the segment - POST /%service%/%tm_name%/search - /t5memory/tm1/search



Available endpoints

List of TMs

Purpose: Returns JSON list of TMs
Request: GET /%service%/
Params

-

Returns the list of open TMs followed by the list of available (not yet opened) TMs in the app.

Code Block
languagejs
titleResponse
collapsetrue
Response example:
{
    "Open": [
        {
            "name": "mem2"
        }
    ],
    "Available on disk": [
        {
            "name": "mem_internal_format"
        },
        {
            "name": "mem1"
        },
        {
            "name": "newBtree3"
        },
        {
            "name": "newBtree3_cloned"
        }
    ]
}
Open - TM is loaded in RAM; Available on disk - TM exists on disk but is not yet loaded.
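For illustration, a minimal sketch of calling this endpoint from JavaScript (Node 18+ with built-in fetch); the base URL http://localhost:4040 and the service name t5memory are assumptions, adjust them to your deployment:

// Assumption: the service is reachable at http://localhost:4040 and %service% is "t5memory".
const BASE = "http://localhost:4040/t5memory";

async function listTms() {
  const res = await fetch(`${BASE}/`);               // GET /%service%/
  const body = await res.json();
  console.log("Open:", body["Open"]);                // TMs already loaded into RAM
  console.log("Available on disk:", body["Available on disk"]);
}

listTms().catch(console.error);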


...

Import provided base64 encoded TMX file into TM

Purpose: Import provided base64-encoded TMX file into TM. Starts another thread for the import; use the status call to check the import status.
Request: POST /%service%/%tm_name%/import
Params

{"tmxData": "base64EncodedTmxFile" }

  • additional:
    "framingTags":
       "saveAll" - default behaviour, do nothing
       "skipAll" - skip all enclosing tags, including standalone tags
       "skipPaired" - skip only paired enclosing tags 

TM must exist
It's async, so check status using status endpoint, like with reorganize in 0.5.x and up

Handling when the framing tag situation differs between source and target - for skipAll or skipPaired:

If the framing tag situation is the same in source and target, both sides are treated as described above.

If framing tags exist only in the source, they are still treated as described above.

If they exist only in the target, nothing is removed.

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    ["framingTags": "skipAll"],   // or "skipPaired" / "saveAll"
   "tmxData":   "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz4KPHRteCB2ZXJzaW9uPSIxLjQiPgogIDxoZWFkZXIgY3JlYXRpb250b29sPSJTREwgTGFuZ3VhZ2UgUGxhdGZvcm0iIGNyZWF0aW9udG9vbHZlcnNpb249IjguMCIgby10bWY9IlNETCBUTTggRm9ybWF0IiBkYXRhdHlwZT0ieG1sIiBzZWd0eXBlPSJzZW50ZW5jZSIgYWRtaW5sYW5nPSJlbi1HQiIgc3JjbGFuZz0iYmctQkciIGNyZWF0aW9uZGF0ZT0iMjAxNTA4MjFUMDkyNjE0WiIgY3JlYXRpb25pZD0idGVzdCIvPgogIDxib2R5PgoJPHR1IGNyZWF0aW9uZGF0ZT0iMjAxODAyMTZUMTU1MTA1WiIgY3JlYXRpb25pZD0iREVTS1RPUC1SNTlCT0tCXFBDMiIgY2hhbmdlZGF0ZT0iMjAxODAyMTZUMTU1MTA4WiIgY2hhbmdlaWQ9IkRFU0tUT1AtUjU5Qk9LQlxQQzIiIGxhc3R1c2FnZWRhdGU9IjIwMTgwMjE2VDE2MTMwNVoiIHVzYWdlY291bnQ9IjEiPgogICAgICA8dHV2IHhtbDpsYW5nPSJiZy1CRyI+CiAgICAgICAgPHNlZz5UaGUgPHBoIC8+IGVuZDwvc2VnPgogICAgICA8L3R1dj4KICAgICAgPHR1diB4bWw6bGFuZz0iZW4tR0IiPgogICAgICAgIDxzZWc+RXRoIDxwaCAvPiBkbmU8L3NlZz4KICAgICAgPC90dXY+CiAgICA8L3R1PgogIDwvYm9keT4KPC90bXg+Cg=="
}

Response example:
Error in case of error. From v0_2_15: { "%tm_name%":"" } in case of success.
Check the status of the import using the status call.
The TMX import can be interrupted by invalid XML or by the TM reaching its limit. In both cases check the status request for info about the position in the TMX file where the import was interrupted.
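For illustration, a minimal end-to-end sketch in JavaScript (Node 18+): read a local TMX file, base64-encode it, POST it to the import endpoint and poll the status endpoint. The base URL, TM name and file path are assumptions; the exact fields of the status payload depend on the t5memory version, so the sketch only logs them:

const fs = require("fs");

const BASE = "http://localhost:4040/t5memory";   // assumption
const TM = "my_tm";                              // assumption: the TM must already exist

async function importTmx(path) {
  const tmxData = fs.readFileSync(path).toString("base64");
  const res = await fetch(`${BASE}/${TM}/import`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ tmxData, framingTags: "saveAll" }),
  });
  console.log(await res.json());

  // The import runs in its own thread; poll the status endpoint to follow its progress.
  for (let i = 0; i < 10; i++) {
    await new Promise(resolve => setTimeout(resolve, 1000));
    const status = await (await fetch(`${BASE}/${TM}/status`)).json();
    console.log(status);
  }
}

importTmx("./sample.tmx").catch(console.error);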


...

Fuzzy search

Purpose: Returns entries/translations with small differences from the requested one.
Request: POST /%service%/%tm_name%/fuzzysearch
Params

Required: source, sourceLang, targetLang

iNumOfProposal -  limit of found proposals - max is 20, if 0 → use default value '5' 


Code Block
languagejs
titleResponse
collapsetrue
Request example:
{ // required fields
  "sourceLang":"en-GB",    // langs would be checked with languages.xml 
  "targetLang":"de",   
  "source":"For > 100 setups.", 

 // optional fields  
  ["documentName":"OBJ_DCL-0000000845-004_pt-br.xml"],   
  ["segmentNumber":15],   
  ["markupTable":"OTMXUXLF"],    //if there is no markup, default OTMXUXLF would be used. 
								 //Markup tables should be located inside ~/.t5memory/TABLE/%markup$.TBL 
  ["context":"395_408"],  
  ["numOfProposals":20],   // num of expected segments in output. By default it's 5
  ["loggingThreshold": 0]
}





Response example:

Success:
{
    "ReturnValue": 0,
    "ErrorMsg": "",
    "NumOfFoundProposals": 1,
    "results": [
        {
            "source": "The end",
            "target": "The target",
            "segmentNumber": 0,
            "id": "",
            "documentName": "Te2.xlf",
            "sourceLang": "de-DE",
            "targetLang": "EN-GB",
            "type": "Manual",
            "author": "THOMAS LAURIA",
            "timestamp": "20231228T171821Z",
            "markupTable": "OTMXUXLF",
            "context": "2_3",
            "additionalInfo": "",
            "internalKey": "7:1",
            "matchType": "Fuzzy",
            "matchRate": 50,
            "fuzzyWords": 0,
            "fuzzyDiffs": 0
        }
    ]
} 

example 2
{
  "ReturnValue": 0,
  "ErrorMsg": "",
  "NumOfFoundProposals": 1, 
  "results": [
  {
     "source": "For > 100 setups.",
     "target": "Für > 100 Aufstellungen.",
     "segmentNumber": 10906825,
     "id": "",
     "documentName": "none",
    "documentShortName": "NONE",
    "sourceLang": "en-GB",
    "targetLang": "de-DE",
    "type": "Manual",
    "matchType": "Exact", // could be exact or fuzzy
    "author": "",
    "timestamp": "20190401T084052Z",
    "matchRate": 100,
     "fuzzyWords": -1, // for exact match it would be -1 here and in diffs
     "fuzzyDiffs": -1, // otherwise here would be amount of parsed words and diffs that was 
					   // used in fuzzy matchrate calculation    
	 "markupTable": "OTMXML",
     "context": "",
     "additionalInfo": ""
   }
 ]
}

Not found:

{
"ReturnValue": 133,
"ErrorMsg": "OtmMemoryServiceWorker::concordanceSearch::"
}
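For illustration, a minimal fuzzy lookup call in JavaScript (Node 18+); the base URL and TM name are assumptions, the request fields follow the example above:

const BASE = "http://localhost:4040/t5memory";   // assumption
const TM = "my_tm";                              // assumption

async function fuzzyLookup(source) {
  const res = await fetch(`${BASE}/${TM}/fuzzysearch`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      source,                      // required
      sourceLang: "en-GB",         // required, checked against languages.xml
      targetLang: "de",            // required
      numOfProposals: 5,           // optional, defaults to 5
    }),
  });
  const body = await res.json();
  for (const p of body.results ?? []) {
    console.log(p.matchRate, p.matchType, p.target);
  }
}

fuzzyLookup("For > 100 setups.").catch(console.error);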
For exact matches a function is used that compares strings while ignoring whitespaces. First the normalized strings (without tags) are compared.
If they are the same, t5memory then checks the strings with tags and returns a 100 or 97 match rate, depending on the result.

Then it checks the context match rate and whether the document name is the same (not case sensitive).

Then it checks and modifies the exactMatchRate according to the code in the code block below.
After that it stores only exact matches with usMatchLevel >= 100. If there are no exact matches, the fuzzy match calculation begins.
If there is at least one exact match, all fuzzy matches are skipped.
If there is exactly one exact-exact match, its rate is set to 102.


For equal matches with 100% of the words matching but different whitespaces/newlines, each whitespace/newline diff counts as -1% (e.g. one extra newline gives a 99% match rate). For punctuation, at least in 0.4.50, each punctuation character counts as a word token; this will be changed in the future to count punctuation as whitespace.

For the fuzzy calculation, tags are removed from the text, except t5:np tags, which are replaced with their "r" attribute so that each counts as 1 word.

For the fuzzy rate calculation we count words and then diffs in the normalized string (without tags), using this formula:
  if (usDiff < usWords )
  {
    *pusFuzzy = (usWords != 0) ? ((usWords - usDiff)*100 / usWords) : 100;
  }
  else
  {
    *pusFuzzy = 0;
  } /* endif */

Regarding the Number Protection feature, tags from number protection are replaced with the regexHashes from their attributes, so each counts as 1 word. NP tags with the same regex are counted as equal.
To count diffs, t5memory goes through both segments looking for matching tokens to find a so-called snake - a line of matching tokens. It then marks unmatched tokens as INSERTED or DELETED, and based on that it calculates the diffs.
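As a quick illustration of the formula above, a hypothetical helper (not part of the service) that mirrors the integer arithmetic:

// words = number of tokens in the normalized segment, diffs = INSERTED/DELETED tokens
function fuzzyRate(words, diffs) {
  if (diffs < words) {
    return words !== 0 ? Math.floor(((words - diffs) * 100) / words) : 100;
  }
  return 0;
}

console.log(fuzzyRate(10, 2)); // 80 -> a 10-word segment with 2 differing tokens gives an 80% rate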

If the rate is 100%, the tags are added back and the strings are compared again.
If they are then not equal, the match rate is adjusted as shown below. This probably never happens, because the exact match test runs before the fuzzy one,
and the exact test is done even if the triplesHashes differ (the triples hash is a pre-fuzzy calculation; when it is equal, it can act as a flag that triggers the exact test):

  if ( !fStringEqual )
  {
    if ( usFuzzy > 3 )
    {
      usFuzzy -= 3;
    }
    else
    {
       usFuzzy = 0;
    } /* endif */
    usFuzzy = std::min( (USHORT)99, usFuzzy );
  } /* endif */

Then, depending on the type of translation, the rate can be tweaked:
if ( (usModifiedTranslationFlag == TRANSLFLAG_MACHINE) && (usFuzzy < 100) )
{
  // ignore machine fuzzy matches
}
else if ( usFuzzy > TM_FUZZINESS_THRESHOLD )
{
  /********************************************************/
  /* give MT flag a little less fuzziness */
  /********************************************************/
  if ( usModifiedTranslationFlag == TRANSLFLAG_MACHINE )
  {
    if ( usFuzzy > 1 )
    {
      usFuzzy -= 1;
    }
    else
    {
      usFuzzy = 0;
    } /* endif */
  } /* endif */
  if (usFuzzy == 100 && (pGetIn->ulParm & GET_RESPECTCRLF) && !fRespectCRLFStringEqual )
  { // P018279!
    usFuzzy -= 1;
  }
   add to resulting set
} /* endif */
} /* endif */


At the end, the fuzzy request replaces the tags in the proposal from the TM with the tags from the request, and if matchRate >= 100 it calculates the whitespace diffs and applies matchRate -= wsDiffs.


Code Block
languagejs
titleResponse
collapsetrue
ExactMatchRate calculation: at this point usExact equals 97 or 100, depending on whether the strings with tags are equal ignoring whitespaces, and then the code applies some tweaks.
pClb is the struct holding the proposals from the TM, pGetIn is the fuzzy request data.

 // loop over CLBs and look for best matching entry
{
  LONG lLeftClbLen; // left CLB entries in CLB list
  PTMX_TARGET_CLB pClb; // pointer for CLB list processing
  #define SEG_DOC_AND_CONTEXT_MATCH 8
  #define DOC_AND_CONTEXT_MATCH 7
  #define CONTEXT_MATCH 6
  #define SAME_SEG_AND_DOC_MATCH 5
  #define SAME_DOC_MATCH 4
  #define MULT_DOC_MATCH 3
  #define NORMAL_MATCH 2
  #define IGNORE_MATCH 1
  SHORT sCurMatch = 0;

  // loop over all target CLBs
  pClb = pTMXTargetClb;
  lLeftClbLen = RECLEN(pTMXTargetRecord) -
  pTMXTargetRecord->usClb;
  while ( ( lLeftClbLen > 0 ) && (sCurMatch < SAME_SEG_AND_DOC_MATCH) )
  {
    USHORT usTranslationFlag = pClb->bTranslationFlag;
    USHORT usCurContextRanking = 0; // context ranking of this match
    BOOL fIgnoreProposal = FALSE;
    // apply global memory option file on global memory proposals
   if ( pClb->bTranslationFlag == TRANSLFLAG_GLOBMEM ) // pClb it's segment in TM
   {
       if ( (pGetIn->pvGMOptList != NULL) && pClb->usAddDataLen ) // pGetIn it's fuzzy requests segment
       {

           USHORT usAddDataLen = NtmGetAddData( pClb, ADDDATA_ADDINFO_ID, pContextBuffer, MAX_SEGMENT_SIZE );
           if ( usAddDataLen )
           {
               GMMEMOPT GobMemOpt = GlobMemGetFlagForProposal( pGetIn->pvGMOptList, pContextBuffer );
               switch ( GobMemOpt )
               {
                  case GM_SUBSTITUTE_OPT: usTranslationFlag = TRANSLFLAG_NORMAL; break;
                  case GM_HFLAG_OPT : usTranslationFlag = TRANSLFLAG_GLOBMEM; break;
                  case GM_HFLAGSTAR_OPT : usTranslationFlag = TRANSLFLAG_GLOBMEMSTAR; break;
                  case GM_EXCLUDE_OPT : fIgnoreProposal = TRUE; break;
               } /* endswitch */
          } /* endif */
     } /* endif */ 

     if ( pClb == pTMXTargetClb )
    {
       usTargetTranslationFlag = usTranslationFlag;
    } /* endif *
  } /* endif */ 


  // check context strings (if any)
  if ((!fIgnoreProposal)
       && pGetIn->szContext[0]
       && pClb->usAddDataLen )
   {
       USHORT usContextLen = NtmGetAddData( pClb, ADDDATA_CONTEXT_ID, pContextBuffer, MAX_SEGMENT_SIZE );
       if ( usContextLen != 0 )
       {
            usCurContextRanking = NTMCompareContext( pTmClb, pGetIn->szTagTable, pGetIn->szContext, pContextBuffer );
       } /* endif */
    } /* endif */


  // check for matching document names
  if ( pGetIn->ulParm & GET_IGNORE_PATH )
  {
     // we have to compare the real document names rather than comparing the document name IDs
     PSZ pszCLBDocName = NTMFindNameForID( pTmClb, &(pClb->usFileId), (USHORT)FILE_KEY );
     if ( pszCLBDocName != NULL )
     {
        PSZ pszName = UtlGetFnameFromPath( pszCLBDocName );
        if ( pszName == NULL )
        {
           pszName = pszCLBDocName;
         } /* endif */
      fMatchingDocName = stricmp( pszName, pszDocName ) == 0;
    }
    else
    {
       // could not access the document name, we have to compare the document name IDs
      fMatchingDocName = ((pClb->usFileId == usGetFile) || (pClb->usFileId == usAlternateGetFile));
    } /* endif */
  }
  else
  {
     // we can compare the document name IDs
     fMatchingDocName = ((pClb->usFileId == usGetFile) || (pClb->usFileId == usAlternateGetFile));
  } /* endif */


  if ( fIgnoreProposal )
  {
    if ( sCurMatch == 0 )
    {
      sCurMatch = IGNORE_MATCH;
    } /* endif */
  }
  else if ( usCurContextRanking == 100 )
  {
    if ( fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1)) )
    {
      if ( sCurMatch < SEG_DOC_AND_CONTEXT_MATCH )
      {
         sCurMatch = SEG_DOC_AND_CONTEXT_MATCH;
        pTMXTargetClb = pClb; // use this target CLB for match
        usTargetTranslationFlag = usTranslationFlag;
        usContextRanking = usCurContextRanking;
      }
    }
    else if ( fMatchingDocName )
    {
    if ( sCurMatch < DOC_AND_CONTEXT_MATCH )
    {
      sCurMatch = DOC_AND_CONTEXT_MATCH;
      pTMXTargetClb = pClb; // use this target CLB for match
      usTargetTranslationFlag = usTranslationFlag;
      usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == DOC_AND_CONTEXT_MATCH )
     {
       // we have already a match of this type so check if context ranking
       if ( usCurContextRanking > usContextRanking )
       {
          pTMXTargetClb = pClb; // use newer target CLB for match
          usTargetTranslationFlag = usTranslationFlag;
          usContextRanking = usCurContextRanking;
       }
       // use time info to ensure that latest match is used
       else if ( usCurContextRanking == usContextRanking )
       {
         // GQ 2015-04-10 New approach: If we have an exact-exact match use this one, otherwise use timestamp for the comparism
         BOOL fExactExactNewCLB = fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1));
         BOOL fExactExactExistingCLB = ((pTMXTargetClb->usFileId == usGetFile) || (pTMXTargetClb->usFileId == usAlternateGetFile)) &&
         (pTMXTargetClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pTMXTargetClb->ulSegmId <= (pGetIn->ulSegmentId + 1));
         if ( fExactExactNewCLB && !fExactExactExistingCLB )
         {
           // use exact-exact CLB for match
           pTMXTargetClb = pClb;
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         }
         else if ( (fExactExactNewCLB == fExactExactExistingCLB) && (pClb->lTime > pTMXTargetClb->lTime) )
         {
           // use newer target CLB for match
           pTMXTargetClb = pClb;
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         }
       } /* endif */
     } /* endif */
   }
   else
   {
     if ( sCurMatch < CONTEXT_MATCH )
     {
     sCurMatch = CONTEXT_MATCH;
     pTMXTargetClb = pClb; // use this target CLB for match
     usTargetTranslationFlag = usTranslationFlag;
     usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == CONTEXT_MATCH )
     {
       // we have already a match of this type so check if context ranking
      if ( usCurContextRanking > usContextRanking )
      {
        pTMXTargetClb = pClb; // use newer target CLB for match
        usTargetTranslationFlag = usTranslationFlag;
        usContextRanking = usCurContextRanking;
      }
      // use time info to ensure that latest match is used
     else if ( (usCurContextRanking == usContextRanking) && (pClb->lTime > pTMXTargetClb->lTime) )
     {
       pTMXTargetClb = pClb; // use newer target CLB for match
       usTargetTranslationFlag = usTranslationFlag;
       usContextRanking = usCurContextRanking;
      } /* endif */
    } /* endif */
  } /* endif */
 }
 else if ( fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1)) )
 {
   // same segment from same document available
   sCurMatch = SAME_SEG_AND_DOC_MATCH;
   pTMXTargetClb = pClb; // use this target CLB for match
   usContextRanking = usCurContextRanking;
   usTargetTranslationFlag = usTranslationFlag;
 }
 else if ( fMatchingDocName )
 {
    // segment from same document available
    if ( sCurMatch < SAME_DOC_MATCH )
    {
       sCurMatch = SAME_DOC_MATCH;
       pTMXTargetClb = pClb; // use this target CLB for match
       usTargetTranslationFlag = usTranslationFlag;
       usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == SAME_DOC_MATCH )
     {
       // we have already a match of this type so
       // use time info to ensure that latest match is used
       if ( pClb->lTime > pTMXTargetClb->lTime )
       {
         pTMXTargetClb = pClb; // use newer target CLB for match
         usTargetTranslationFlag = usTranslationFlag;
         usContextRanking = usCurContextRanking;
       } /* endif */
     } /* endif */
   }
    else if ( pClb->bMultiple )
    {
       // multiple target segment available
       if ( sCurMatch < MULT_DOC_MATCH )
       {
         // no better match yet
         sCurMatch = MULT_DOC_MATCH;
         pTMXTargetClb = pClb; // use this target CLB for match
         usTargetTranslationFlag = usTranslationFlag;
         usContextRanking = usCurContextRanking;
       } /* endif */
     }
     else if ( usTranslationFlag == TRANSLFLAG_NORMAL )
     {
        // a 'normal' memory match is available
        if ( sCurMatch < NORMAL_MATCH )
        {
           // no better match yet
           sCurMatch = NORMAL_MATCH;
           pTMXTargetClb = pClb; // use this target CLB for match
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         } /* endif */
     } /* endif */

    // continue with next target CLB
    if ( sCurMatch < SAME_SEG_AND_DOC_MATCH )
    {
      lLeftClbLen -= TARGETCLBLEN(pClb);
      if (lLeftClbLen > 0)
      {
        usTgtNum++;
        pClb = NEXTTARGETCLB(pClb);
      }
    } /* endif */
} /* endwhile */


{
  BOOL fNormalMatch = (usTargetTranslationFlag == TRANSLFLAG_NORMAL) ||
  (usTargetTranslationFlag == TRANSLFLAG_GLOBMEM) ||
  (usTargetTranslationFlag == TRANSLFLAG_GLOBMEMSTAR);
  switch ( sCurMatch )
  {
    case IGNORE_MATCH :
      usMatchLevel = 0;
       break;
  case SAME_SEG_AND_DOC_MATCH :
      usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
      break;
   case SEG_DOC_AND_CONTEXT_MATCH :
       usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1; // exact-exact match with matching context
       break;
    case DOC_AND_CONTEXT_MATCH :
       if ( usContextRanking == 100 )
       {
         // GQ 2015/05/09: treat 100% context matches as normal exact matches
         // usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
         usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
        }
        else
        {
          usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
        } /* endif */
       break;
  case CONTEXT_MATCH :
    if ( usContextRanking == 100 )
    {
      // GQ 2015/05/09: treat 100% context matches as normal exact context matches
      // usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
      // GQ 2016/10/24: treat 100% context matches as normal exact matches
      usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
    }
    else
    {
      usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
     } /* endif */
     break;
  case SAME_DOC_MATCH :
    usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
    break;
  case MULT_DOC_MATCH :
     usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
     break;
  default :
     usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
     break;
  } /* endswitch */
}
}








New Concordance search

Purpose: Returns entries/translations that fit the selected filters.
Request: POST /%service%/%tm_name%/search
Params

Required: NONE

iNumOfProposal - limit of found proposals - max is 200, if 0 → use default value '5'

The search is made segment by segment, checking whether each segment fits the selected filters. You can search for EXACT or CONCORDANCE matches in these fields:
source, target, document, author, addInfo, context
To enable a filter, set its SearchMode field, otherwise the filter is disabled. So you have sourceSearchMode, targetSearchMode, documentSearchMode,
authorSearchMode, addInfoSearchMode, contextSearchMode.

The search mode must be set explicitly to CONTAINS/CONCORDANCE or EXACT, otherwise the filter is ignored. Each searchMode can also carry the additional search parameters "CONTAINS, caseinsensetive, WHITESPACETOLERANT, INVERTED"; these additional values are optional and the delimiter between them doesn't matter. By default the search is case sensitive. If you add the INVERTED option, the check for that filter is inverted.
To see how the filters were parsed, check the JSON in the response. The field with that info can look like this:

"Filters":"
Search filter, field: SOURCE FilterType::CONTAINS SearchStr: 'THE'; Options: SEARCH_FILTERS_NOT|SEARCH_CASEINSENSITIVE_OPT|SEARCH_WHITESPACETOLERANT_OPT|;\n
Search filter, field: TARGET FilterType::EXACT SearchStr: ''; Options: SEARCH_CASEINSENSITIVE_OPT|;\n
Search filter, field: ADDINFO FilterType::CONTAINS SearchStr: 'some add info'; Options: SEARCH_WHITESPACETOLERANT_OPT|;\n
Search filter, field: CONTEXT FilterType::EXACT SearchStr: 'context context'; Options: ;\nSearch filter, field: AUTHOR FilterType::CONTAINS SearchStr: ''; Options: ;\n
Search filter, field: DOCUMENT FilterType::CONTAINS SearchStr: 'evo3_p1137_reports_translation_properties_de_fr_20220720_094902'; Options: SEARCH_FILTERS_NOT|;\n
Search filter, field: TIMESTAMP FilterType::RANGE Range: 20000121T115234Z - 20240121T115234Z Options: ;\n"
,

It is possible to apply a filter with just its SearchMode: if you send "authorSearchMode": "exact" but no "author" field, the search looks for segments whose author field is empty.

There is also a timespan parameter; to set it, use these fields in this format:

"timestampSpanStart":"20000121T115234Z",
"timestampSpanEnd":"20240121T115234Z",

You must set both parameters to apply the filter, otherwise you get an error in return. Check the output to see how it was parsed and applied.
By default all mentioned filters are applied in a logical AND combination, but you can change that globally by adding

"logicalOr": 1
Then all mentioned filters are applied in a logical OR combination (please use 1 to set this to true; the boolean type is not supported by the JSON parser in t5memory). Supported since 0.6.5.

"onlyCountSegments":1

Instead of returning segments, this just counts them and returns the counter in

"NumOfFoundSegments":22741

There are also language filters; they are always applied to the selection of segments that passed the previous filters, so "logicalOr": 1 does not affect them.
To set the language filters, use these fields:

"sourceLang":"en-GB",
"targetLang":"de",

Language filters can be applied with the major-language feature: in this case the source language is applied as an exact filter, while the target language check passes if the languages are in the same language group. That check is done against the languages.xml file using the isPreferred flag.
Which language filters were applied, and whether the filters are combined with logical OR or logical AND, can be seen in the GlobalSearchOptions field of the response. It can look like this:

"GlobalSearchOptions":"SEARCH_FILTERS_LOGICAL_OR|SEARCH_EXACT_MATCH_OF_SRC_LANG_OPT, lang = en-GB|SEARCH_GROUP_MATCH_OF_TRG_LANG_OPT, lang = de",

Other fields you can send:

"searchPosition":"8:1",
"numResults":2,
"msSearchAfterNumResults":250
"loggingThreshold": 4 - check other requests,

searchPosition is the position where the search starts internally in the btree. The search is limited by the number of found segments (set by numResults) or by a timeout (set by msSearchAfterNumResults); the timeout is ignored as long as no segment in the TM has matched the parameters. The maximum numResults is 200.

You can send an empty JSON body and the search still works, it just returns the first 5 segments of the TM.
You can iterate over all segments by using these two fields
"searchPosition":"8:1",
"numResults":200

and updating searchPosition with the NewSearchPosition from the response on each request, as in the sketch below.
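For illustration, a minimal sketch of that iteration in JavaScript (Node 18+); the base URL and TM name are assumptions:

const BASE = "http://localhost:4040/t5memory";   // assumption
const TM = "my_tm";                              // assumption

async function walkTm() {
  let searchPosition = "";                       // empty = start from the beginning of the TM
  for (;;) {
    const res = await fetch(`${BASE}/${TM}/search`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ searchPosition, numResults: 200 }),
    });
    const body = await res.json();
    for (const seg of body.results ?? []) {
      console.log(seg.internalKey, seg.source);
    }
    if (!body.NewSearchPosition) break;          // null once the end of the TM is reached
    searchPosition = body.NewSearchPosition;     // continue where the previous request stopped
  }
}

walkTm().catch(console.error);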


Code Block
languagejs
titleResponse
collapsetrue
{     
    "logicalOr": 1,
     "source":"the",
    "sourceSearchMode":"CONTAINS, CASEINSENSETIVE, WHITESPACETOLERANT, INVERTED",
    
    "target":"",
    "targetSearchMode":"EXACT, CASEINSENSETIVE",
    
    "document":"evo3_p1137_reports_translation_properties_de_fr_20220720_094902",
    "documentSearchMode":"CONTAINS, INVERTED",
    
     "author":"some author",
     "authorSearchMode":"CONTAINS",

    "timestampSpanStart": "20000121T115234Z",
    "timestampSpanEnd": "20240121T115234Z",

    "addInfo":"some add info",
    "addInfoSearchMode":"CONCORDANCE, WHITESPACETOLERANT",


    "context":"context context",
    "contextSearchMode":"EXACT",
    
    "sourceLang":"en-GB", 
    "targetLang":"SV",  
    "searchPosition": "8:1",
    "numResults": 2,
    "msSearchAfterNumResults": 25,
    "loggingThreshold": 3
}



Here the search is done in a logical OR way: if any of the source, target, document, context, author or timestamp filters returns true, the result is added to the set, which is then filtered by sourceLang (exact match check) and targetLang (language group check).
The search starts from position "8:1" (the TM data starts at "7:1"; if you want to start from the beginning, just omit that parameter).
numResults: 2 - the search ends once 2 segments have been found.
"msSearchAfterNumResults": 25 - 25 ms after the first found segment the search ends even if more segments could be found; the response then contains "NewSearchPosition": "10:1", which can be used as searchPosition to continue the search.

Response example:Success:
example{
"Filters": "Search filter, field: SOURCE FilterType::CONTAINS SearchStr: 'THE'; Options: SEARCH_FILTERS_NOT|SEARCH_CASEINSENSITIVE_OPT|SEARCH_WHITESPACETOLERANT_OPT|;\n
Search filter, field: TARGET FilterType::EXACT SearchStr: ''; Options: SEARCH_CASEINSENSITIVE_OPT|;\n
Search filter, field: ADDINFO FilterType::CONTAINS SearchStr: 'some add info'; Options: SEARCH_WHITESPACETOLERANT_OPT|;\n
Search filter, field: CONTEXT FilterType::EXACT SearchStr: 'context context'; Options: ;\n
Search filter, field: AUTHOR FilterType::CONTAINS SearchStr: ''; Options: ;\n
Search filter, field: DOCUMENT FilterType::CONTAINS SearchStr: 'evo3_p1137_reports_translation_properties_de_fr_20220720_094902'; Options: SEARCH_FILTERS_NOT|;\n
Search filter, field: TIMESTAMP FilterType::RANGE Range: 20000121T115234Z - 20240121T115234Z Options: ;\n",
"GlobalSearchOptions": "SEARCH_FILTERS_LOGICAL_OR|SEARCH_EXACT_MATCH_OF_SRC_LANG_OPT, lang = en-GB|SEARCH_GROUP_MATCH_OF_TRG_LANG_OPT, lang = sv",
"ReturnValue": 0,
"ReturnMessage": "FOUND",
"NewSearchPosition": "10:1",
"results": [
{
"source": "Congratulations on the purchase of a <ph x=\"101\"/> machine control system.",
"target": "Gratulerar till köpet av maskinstyrningsystemet <ph x=\"101\"/>.",
"segmentNumber": 5740419,
"id": "",
"documentName": "none",
"sourceLang": "en-GB",
"targetLang": "SV-SE",
"type": "Manual",
"author": "",
"timestamp": "20170327T091814Z",
"markupTable": "OTMXUXLF",
"context": "",
"additionalInfo": "",
"internalKey": "8:1"
},
{
"source": "The <ph x=\"101\"/> System is an ideal tool for increasing productivity in all aspects of the construction earthmoving industry.",
"target": "Systemet <ph x=\"101\"/> är ett verktyg som lämpar sig perfekt för att öka produktiviteten inom alla delar av bygg- och anläggningsområdet.",
"segmentNumber": 5740420,
"id": "",
"documentName": "none",
"sourceLang": "en-GB",
"targetLang": "SV-SE",
"type": "Manual",
"author": "",
"timestamp": "20170327T091814Z",
"markupTable": "OTMXUXLF",
"context": "",
"additionalInfo": "",
"internalKey": "9:1"
}
]
}
SearchPosition / NewSearchPosition format: "7:1"
The first number is the segment/record number, the second is the target number.
The NewSearchPosition is an internal key of the memory for the next position in sequential access. Since it is an internal key, maintained and understood by the underlying memory plug-in (for EqfMemoryPlugin it is the record number and the position within that record), no assumptions should be made about its content. It is just a string that should be sent back to OpenTM2 on the next request so that the search continues from there.
This is how it is implemented in Translate5: the first request to OpenTM2 contains SearchPosition with an empty string, OpenTM2 then returns a string in NewSearchPosition, which is simply resent to OpenTM2 in the next request.

Not found:{
"ReturnValue": 0,
"NewSearchPosition": null,
"ErrorMsg": ""
}TM not found:{
"ReturnValue": 133,
"ErrorMsg": "OtmMemoryServiceWorker::concordanceSearch::"
}



Here is a search request with all possible parameters:
{
"logicalOr": 1, 

  "source":"the",

   "sourceSearchMode":"CONTAINS, CASEINSENSETIVE, WHITESPACETOLERANT, INVERTED",

   "target":"", "targetSearchMode":"EXACT, CASEINSENSETIVE",

   "document":"evo3_p1137_reports_translation_properties_de_fr_20220720_094902",

    "documentSearchMode":"CONTAINS, INVERTED", 

    "author":"some author",
    "authorSearchMode":"CONTAINS",

    "timestampSpanStart": "20000121T115234Z",

    "timestampSpanEnd": "20240121T115234Z",

    "addInfo":"some add info",

    "addInfoSearchMode":"CONCORDANCE, WHITESPACETOLERANT",

    "context":"context context",

    "contextSearchMode":"EXACT",

    "sourceLang":"en-GB",

    "targetLang":"SV",

    "searchPosition": "8:1",

    "numResults": 2,

     "msSearchAfterNumResults": 25,
     "loggingThreshold": 3
}
All fields are optional, but some depend on others, so an error is returned if a required companion field is missing.

So a request with this body would also work:
{
}

Parameters (value type / default value / possible values / required companion field / description):

sourceLang, targetLang
  Type: string, default "". Possible values: languages that can be matched to the languages in languages.xml. Requires: -
  Filters segments on the src/trg lang attribute. If the specified language is preferred (isPreferred), matching is done based on the language family, otherwise on exact match.

searchPosition
  Type: string, default "" (the search then starts from "7:1", then "8:1", etc.).
  Point where to start the search in the TMD file.

numResults
  Type: int, default 5, range (0...200].
  How many matches to return in the current request.

msSearchAfterNumResults
  Type: int, default 0 (no check).
  Sets how many ms may pass between the first found segment and the search stop, if the end has not been reached yet.

loggingThreshold
  Type: int, default -1, range [0...6].
  Additional field to set the log level on the fly.

logicalOr
  Type: int, default 0. 0 means false, any other number means true, example: "logicalOr": 1.
  By default the source, target, document, author, context, addInfo and timestamp filters are combined with logical AND; by sending 1 here you switch that to logical OR, any other value leaves the default AND state.
  Does not apply to the sourceLang and targetLang filters, they always stay in the AND state.

onlyCountSegments
  Type: int, default 0, example: "onlyCountSegments": 1.
  Instead of returning segments, the search runs to the end of the TM and returns the total number of segments that match the selected filters.

source
  Type: string, default "", any string, example: "source": "data in the segment". Requires: sourceSearchMode.
  Sets what to look for in the source of the segments, based on the type of search specified in sourceSearchMode (EXACT, CONCORDANCE). If sourceSearchMode is not specified, an error is returned.

target, document, author, context, addInfo
  The same as above, but for the corresponding fields; they require targetSearchMode, documentSearchMode, authorSearchMode, contextSearchMode, addInfoSearchMode respectively.

timestampSpanStart, timestampSpanEnd
  Type: string with a date in the format "20240121T115234Z". Each requires the other.
  Sets the time filter. You need to provide both timestamps or none, otherwise the request returns an error. Can be used in the "OR" combination with "logicalOr": 1, but maybe it's better to change that behaviour to be like the language filters (always AND).

sourceSearchMode, targetSearchMode, documentSearchMode, authorSearchMode, contextSearchMode, addInfoSearchMode
  Type: string, default "". A string with the required word EXACT or CONCORDANCE (or CONTAINS, which is equal to CONCORDANCE) plus optional ones:
    CASEINSENSETIVE - non case sensitive comparison,
    WHITESPACETOLERANT - normalize whitespaces (the result can be seen in the Filters field of the response),
    INVERTED - apply the filter in inverted state, i.e. return false on a match and true if there is no match (logical NOT).
  The attribute names are not case sensitive and the separator doesn't matter. Requires: -
  Sets the type of search for the corresponding field. If you set, for example, "authorSearchMode" = "EXACT" but don't provide an author in the request, the author field is "", so the request looks for segments where the author equals "". The same is true for the other fields.
  Examples:
  1) "source": "the  text inside",
     "sourceSearchMode": "CONTAINS, CASEINSENSETIVE, WHITESPACETOLERANT, INVERTED" - searches for all segments which do NOT contain "the text inside", case insensitive and with normalized whitespaces.
  2) "author": "Ed Sheeran",
     "authorSearchMode" = "Exact" - searches for exact, case sensitive matches of "Ed Sheeran" in the author field.
  3) "author": "Ed Sheeran",
     "authorSearchMode" = "CASEINSENSETIVE" - ERROR, the search mode (Exact\Contains) is not selected.
  4) "author": "Ed Sheeran" - ERROR, the search mode (Exact\Contains) is not selected.
  5) "authorSearchMode" = "CONTAINS" - OK, the filter checks whether the segment contains "", so every segment returns true.






Concordance search

Purpose: Returns entries/translations that contain the requested segment.
Request: POST /%service%/%tm_name%/concordancesearch
Params

Required: searchString - what we are looking for , searchType ["Source"|"Target"|"SourceAndTarget"] - where to look

iNumOfProposal -  limit of found proposals - max is 20, if 0 → use default value '5' 


Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "searchString": "The",
    "searchType": "source", // could be Source, Target, SourceAndTarget - says where to do search
    ["searchPosition": "",] 
    ["numResults": 20,]
    ["msSearchAfterNumResults": 250,]
	["loggingThreshold": 0]
}
Response example:Success:
example_new{
  "ReturnValue": "ENDREACHED_RC",
  "NewSearchPosition": null,
  "results": [
    {
      "source": "The end",
      "target": "The target",
      "segmentNumber": 0,
      "id": "",
      "documentName": "Te2.xlf",
      "sourceLang": "de-DE",
      "targetLang": "EN-GB",
     "type": "Manual",
     "author": "THOMAS LAURIA",
     "timestamp": "20231228T171821Z",
      "markupTable": "OTMXUXLF",
      "context": "2_3",
      "additionalInfo": "",
      "internalKey": "7:1"
    }
  ]
}

example_old
{
  "ReturnValue": 0,
  "NewSearchPosition": null,
  "results": [
  {
     "source": "For > 100 setups.",
     "target": "Für > 100 Aufstellungen.",
     "segmentNumber": 10906825,
     "id": "",
     "documentName": "none",
     "documentShortName": "NONE",
     "sourceLang": "en-GB",← rfc5646     
     "targetLang": "de-DE",← rfc5646
     "type": "Manual",
     "matchType": "undefined",
     "author": "",
     "timestamp": "20190401T084052Z",
     "matchRate": 0,
     "markupTable": "OTMXML",
     "context": "",
     "additionalInfo": ""
   }
  ],
 "ErrorMsg": ""
}

Success, but with NewSearchPosition - not the whole TM was checked; use this position to repeat the search:
{
  "ReturnValue": 0,
  "NewSearchPosition": "8:1",
  "results": [
  {
     "source": "For > 100 setups.",
     "target": "Für > 100 Aufstellungen.",
     "segmentNumber": 10906825,
     "id": "",
    "documentName": "none",
    "documentShortName": "NONE",
    "sourceLang": "en-GB",
    "targetLang": "de-DE",
    "type": "Manual",
     "matchType": "undefined",
     "author": "",
     "timestamp": "20190401T084052Z",
     "matchRate": 0,
     "markupTable": "OTMXML",
     "context": "",
     "additionalInfo": ""
   }
  ],
 "ErrorMsg": ""
}
SearchPosition / NewSearchPosition format: "7:1"
The first number is the segment/record number, the second is the target number.
The NewSearchPosition is an internal key of the memory for the next position in sequential access. Since it is an internal key, maintained and understood by the underlying memory plug-in (for EqfMemoryPlugin it is the record number and the position within that record), no assumptions should be made about its content. It is just a string that should be sent back to OpenTM2 on the next request so that the search continues from there.
This is how it is implemented in Translate5: the first request to OpenTM2 contains SearchPosition with an empty string, OpenTM2 then returns a string in NewSearchPosition, which is simply resent to OpenTM2 in the next request.

Not found:{
"ReturnValue": 0,
"NewSearchPosition": null,
"ErrorMsg": ""
}TM not found:{
"ReturnValue": 133,
"ErrorMsg": "OtmMemoryServiceWorker::concordanceSearch::"
}


...

Update entry

Purpose: Updates an entry/translation.
Request: POST /%service%/%tm_name%/entry
Params

Only sourceLang, targetLang, source and target are required


This request makes changes only in the filebuffer (the files on disk are not changed).
To write the changes to disk, call a request that flushes the TM to disk as part of its execution (exportTMX, exportTM, cloneTM) or use the SaveAllTms request.

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "source": "The end",
    "target": "The target",
    "sourceLang": "en", // langs would be checked with languages.xml
    "targetLang": "de", 
//additional field
    ["documentName": "Translate5 Demo Text-en-de.xlf"],
    ["segmentNumber": 8,]
    ["author": "Thomas Lauria"],
    ["timeStamp": "20210621T071042Z"], // if there is no timestamp, current time would be used
    ["context": "2_2"], // context and addInfo would be saved in TM in the same field
    ["addInfo": "2_2"], 
    ["type": "Manual"], // could be GlobalMemory, GlobalMemoryStar, MachineTranslation, Manual, by default Undefined         
    ["markupTable": "OTMXUXLF"], //if there is no markup, default OTMXUXLF would be used. 
								 //Markup tables should be located inside ~/.t5memory/TABLE/%markup$.TBL
    ["loggingThreshold": 0],
	["save2disk": 0]   // flag if we need to flush tm to disk after update. by default is true
}

Here is the data struct used for these requests, so you can see the maximum numbers of symbols:
typedef struct _LOOKUPINMEMORYDATA
{
  char szMemory[260];
  wchar_t szSource[2050];
  wchar_t szTarget[2050];
  char szIsoSourceLang[40];
  char szIsoTargetLang[40];
  int lSegmentNum;
  char szDocName[260];
  char szMarkup[128];
  wchar_t szContext[2050];
  wchar_t szAddInfo[2050];
  wchar_t szError[512];
  char szType[256];
  char szAuthor[80];
  char szDateTime[40];
  char szSearchMode[40]; // only for concordance search
  char szSearchPos[80]; // only for concordance search
  int iNumOfProposals;
  int iSearchTime;
  wchar_t szSearchString[2050];
} LOOKUPINMEMORYDATA, *PLOOKUPINMEMORYDATA;

Response example:success:
example_new{
  "source": "The end",
  "sourceNPRepl": "The end",
  "sourceNorm": "The end",
  "target": "The target",
  "segmentNumber": 0,
  "id": "",
  "documentName": "Te2.xlf",
  "sourceLang": "DE-DE",
  "targetLang": "EN-GB",
  "type": "Manual",
  "author": "THOMAS LAURIA",
  "timestamp": "",
  "markupTable": "OTMXUXLF",
  "context": "2_3",
  "additionalInfo": "addInfo2",
  "internalKey": "8:1"
}

example_old
{
"sourceLang": "de-DE",
"targetLang": "en-GB",
"source": "The end",
"target": "The target",
"documentName": "Translate5 Demo Text-en-de.xlf",
"segmentNumber": 222,
"markupTable": "OTMXUXLF",
"timeStamp": "20210621T071042Z",
"author": "Thomas Lauria"
}

If a similar record already exists, t5memory compares the source text;
if it is the same, t5memory compares the docName;
if that is also the same, t5memory compares the timestamps and keeps only the newer entry.

If the TM has already reached its limit, you get
{
"ReturnValue": 5034,
"ErrorMsg": ""
}or{
"ReturnValue": 5035,
"ErrorMsg": ""
}
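For illustration, a minimal sketch in JavaScript (Node 18+) that updates an entry and then flushes the filebuffers to disk via the savetms call; the base URL and TM name are assumptions:

const BASE = "http://localhost:4040";            // assumption
const TM = "my_tm";                              // assumption

async function updateAndFlush() {
  // The update only changes the filebuffer until the TM is flushed to disk.
  const res = await fetch(`${BASE}/t5memory/${TM}/entry`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      source: "The end",
      target: "The target",
      sourceLang: "en",
      targetLang: "de",
    }),
  });
  console.log(await res.json());

  // Flush all open TMs (their TMD/TMI filebuffers) to the filesystem.
  const flushed = await fetch(`${BASE}/t5memory_service/savetms`);
  console.log(await flushed.json());
}

updateAndFlush().catch(console.error);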




Code Block
languagejs
titleUpdateEntry Pseudo code
collapsetrue
Update entry pseudo code:update segment/import
{
  if we have triples equal match (candidate for exact match)
  {
    UpdateTmRecord
    if(updateFailed)
      AddToTMAsNewKey
      if(added) UpdateTmIndex
  }else{
    AddToTMAsNewKey
    if(added) UpdateTmIndex
  }
}

UpdateTmRecord{
  getListOfDataKeysFromIndexRecord
  sortThemByTriplesMatchesWithProposal(first have biggest match)

  foreach key untill fStop==true{
    readTmRecord // tm record is 16kB block in file, first number in "7:1"

    //compare tm record data with data passed in the get in structure
    CompareAndModifyPutData
    if(NO_ERROR) set fStop = true;
  }
}

CompareAndModifyPutData{
  if source strings are equal
    Delete old entry - with TMLoopAndDelTargetClb
  if fNewerTargetExists -> fStop = TRUE
  Loop thru target records
    loop over all target CLBs or until fStop
      if segment+file id found (exact-exact-found!)
        update time field in control block
        set fUpdate= fStop=TRUE
        update context info
      if not fStop
        goto next CLB
    endloop
    if no matching CLB has been found (if not fStop)
      add new CLB (ids, context, timestamp etc. )
    endloop
  endloop

  if fupdated, update TM record
  if !fStop (all target record have been tried & none matches )
    add new target record to end of tm record
  else
    return source_string_error // errcode for UpdateTmRecord to go to the next TM record in prepared list
}

TMLoopAndDelTargetClb{
  loop through all target records in tm record checking
    loop over all target CLBs or until fStop
      if lang + segment+file id found (exact-exact-found!)
        if entry is older
          delete it, fDel = TRUE
        else set fNewerTargetExists=TRUE(would be used in CompareAndModifyPutData)
          goon with search in next tgt CLB (control block)
      else
        goon with search in next tgt CLB (control block)
      endif
    endloop
    if not fDel
      position at next target record
  endloop
}




Delete entry

Purpose: Deletes an entry/translation.
Request: POST /%service%/%tm_name%/entrydelete
Params

Only sourceLang, targetLang, source, and target are required

Deletion is based on a strict match (including tags and whitespaces) of target and source.

This request makes changes only in the filebuffer (the files on disk are not changed).
To write the changes to disk, call a request that flushes the TM to disk as part of its execution (exportTMX, exportTM, cloneTM) or use the SaveAllTms request.

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
  
"sourceLang": "bg",
  "targetLang": "en",
  "source": "The end",
  "target": "Eth dne"
  ["documentName": "my file.sdlxliff",]
  ["segmentNumber": 1,]
  ["markupTable": "translate5",]
  ["author": "Thomas Lauria",]
  ["type": "",]
  ["timeStamp": ""],
  ["context": "",]
   ["addInfo": ""] ,  ["loggingThreshold": 0] 
}
Response example:
{
  "fileFlushed": 0,
  "results": {
     "source": "The tar",
     "target": "The target",
     "segmentNumber": 0,
     "id": "",
     "documentName": "Te2.xlf",
     "sourceLang": "de-DE",
     "targetLang": "EN-GB",
     "type": "Manual",
     "author": "THOMAS LAURIA",
     "timestamp": "20231229T125701Z",
     "markupTable": "OTMXUXLF",
     "context": "2_3",
    "additionalInfo": "",
    "internalKey": "7:1"
  }
}




Delete entries / mass deletion

Purpose: Deletes entries
Request: POST /%service%/%tm_name%/entriesdelete
Params

This starts a reorganize process which, like a regular reorganize, drops bad segments and in addition skips the import of segments for which the provided filters, combined with logical AND, return true. So if you provide timestamps and addInfo, only segments within the provided timestamp range and with that addInfo are not imported into the new TM (see the reorganize process).
Every parameter is optional, so an empty JSON body just starts a reorganize as an async process.
If you provide only one of the timestamps, you get an error - please provide both.
To activate a filter, set its SearchMode to EXACT or CONCORDANCE (not case sensitive).
If only the searched string is provided, but no search mode, you get an error.


Code Block
languagejs
titleResponse
collapsetrue
Request example: 
{
  ["addInfo": "ADD_INFO"],
  ["addInfoSearchMode": "EXACT"],
  ["context": "CONTEXT"],
  ["contextSearchMode": "concordance"],
  ["author": "AUTHOR"],
  ["authorSearchMode": "exact"],
  ["document": "document"],
  ["documentSearchMode": "CONCORDANCE"],
  ["timestampSpanStart": "20000121T115234Z"],
  ["timestampSpanEnd": "20240121T115234Z"]
}
Response example:
{
  "fileFlushed": 0,
  "results": {
     "source": "The tar",
     "target": "The target",
     "segmentNumber": 0,
     "id": "",
     "documentName": "Te2.xlf",
     "sourceLang": "de-DE",
     "targetLang": "EN-GB",
     "type": "Manual",
     "author": "THOMAS LAURIA",
     "timestamp": "20231229T125701Z",
     "markupTable": "OTMXUXLF",
     "context": "2_3",
    "additionalInfo": "",
    "internalKey": "7:1"
  }
}
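For illustration, a minimal sketch of a mass deletion restricted to a timestamp range (JavaScript, Node 18+); the base URL and TM name are assumptions. The call only starts the async reorganize, so progress has to be followed via the status endpoint:

const BASE = "http://localhost:4040/t5memory";   // assumption
const TM = "my_tm";                              // assumption

async function deleteByTimespan() {
  const res = await fetch(`${BASE}/${TM}/entriesdelete`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      timestampSpanStart: "20000121T115234Z",    // both timestamps must be provided
      timestampSpanEnd: "20240121T115234Z",
    }),
  });
  console.log(await res.json());
}

deleteByTimespan().catch(console.error);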





Save all TMs

Purpose

Flushes all filebuffers (TMD, TMI files) to the filesystem and resets the 'Modified' flags of the filebuffers.

A filebuffer is a file instance of a .TMD or .TMI file loaded into RAM. It provides better speed and safety when working with the files.

Request: GET /%service%_service/savetms
Params

-


Code Block
languagejs
titleResponse
collapsetrue
Response example:
{
   'saved 4 files': '/home/or/.t5memory/MEM/mem2.TMD, /home/or/.t5memory/MEM/mem2.TMI, /home/or/.t5memory/MEM/newBtree3.TMD, /home/or/.t5memory/MEM/newBtree3.TMI'
}
List of saved files.


...