...

Clone TM locally

Purpose: Creates a copy of the TM under the provided name
Request: POST /%service%/%tm_name%/clone
Params

Required: name, sourceLang

Endpoint is synchronous (blocking)
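
A minimal curl sketch (the host, port, and service name follow the download.tmx curl example later on this page; the memory name and newName value are placeholders):

curl --location --request POST 'http://localhost:4040/t5memory/{MEMORY_NAME}/clone' \
--header 'Content-Type: application/json' \
--data '{"newName": "example_tm_cloned"}'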

Code Block
languagejs
titleResponse
collapsetrue
Request example:
{
    "newName": "example_tm" // the cloned TM is created under this name (the source TM is given in the URL)
}

Response example:
Success: 
{
    "msg": "newBtree3_cloned2 was cloned successfully",
    "time": "5 ms"
}

 Failure: 
{
    "ReturnValue": -1,
    "ErrorMsg": "'dstTmdPath' = /home/or/.t5memory/MEM/newBtree3_cloned.TMD already exists; for request for mem newBtree3; with body = {\n    \"newName\": \"newBtree3_cloned\"\n}"
}




Flush TM 

Purpose: If the TM is open, flushes it to disk
Request: GET /%service%/%tm_name%/flush
Params

Endpoint is synchronous (blocking)

If the TM is not found on disk, returns 404.
If the TM is not open, returns 400 with a message; the endpoint does not open the TM, it returns an error instead.
Otherwise t5memory requests a write pointer to the TM (i.e. it waits until other requests working with the TM have finished) and then flushes it to disk.
An error can also be returned if the flush itself fails.
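
A minimal curl sketch (host, port, and service name as in the download.tmx example; the memory name is a placeholder):

curl --location --request GET 'http://localhost:4040/t5memory/{MEMORY_NAME}/flush'
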
Code Block
languagejs
titleResponse
collapsetrue
Response example:
Success:
{
    "msg": "Mem test1 was flushed to the disk successfully"
}
Failure:
{
    "ReturnValue": -1,
    "ErrorMsg": "FlushMemRequestData::checkData -> tm is not found"
}
// or
{
    "ReturnValue": -1,
    "ErrorMsg": "FlushMemRequestData::checkData -> tm is not open"
}




Delete TM

Purpose: Deletes the TM's .TMD, .TMI, and .MEM files
Request: DELETE /%service%/%tm_name%/
Params

-
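
A minimal curl sketch (host, port, and service name as in the download.tmx example; the memory name is a placeholder):

curl --location --request DELETE 'http://localhost:4040/t5memory/{MEMORY_NAME}/'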


Code Block
languagejs
titleResponse
collapsetrue
Response example:
Success:
{
    "newBtree3_cloned2": "deleted"
}


Code Block
languagejs
titleResponse
collapsetrue
Response example:
Failure:
{
    "newBtree3_cloned2": "not found"
}


...

Export TMX from TM - old

Purpose: Creates a TMX export from the TM.
Request: GET /%service%/%tm_name%/
Headers

Accept: application/xml


This endpoint flushes the TM before exporting.

Code Block
languagejs
titleResponse
collapsetrue
Response example:<?xml version="1.0" encoding="UTF-8" ?>
<tmx version="1.4">
<header creationtoolversion="0.2.14" gitCommit="60784cf * refactoring and cleanup" segtype="sentence" adminlang="en-us" srclang="en-GB" o-tmf="t5memory" creationtool="t5memory" datatype="xml" />
<body>
  <tu tuid="1" datatype="xml" creationdate="20190401T084052Z">
     <prop type="tmgr:segNum">10906825</prop>
     <prop type="tmgr:markup">OTMXML</prop>
     <prop type="tmgr:docname">none</prop>
     <tuv xml:lang="en-GB">
          <prop type="tmgr:language">English(U.K.)</prop>
          <seg>For > 100 setups.</seg>
     </tuv>
     <tuv xml:lang="de-DE">
          <prop type="tmgr:language">GERMAN(REFORM)</prop>
     <seg>Für > 100 Aufstellungen.</seg>
     </tuv>
     </tu>
   </body>
</tmx>





Export TMX from TM

Purpose: Exports a TMX file from the TM.
Request: GET /%service%/%tm_name%/download.tmx
Headers

Accept: application/xml

curl

curl --location --request GET 'http://localhost:4040/t5memory/{MEMORY_NAME}/download.tmx' \
--header 'Accept: application/xml' \
--header 'Content-Type: application/json' \
--data '{"startFromInternalKey": "7:1", "limit": 20}'

The request can have a body with these fields:

startFromInternalKey - sets the starting point for the export, in "recordKey:targetKey" format
limit - sets the maximum number of segments to be exported
loggingThreshold - as in other requests

The response headers contain NextInternalKey: 19:1 - the key of the next item in the memory if one exists, otherwise the same key you sent. So you can repeat the call with the new starting position (see the sketch below).

If no body is provided, the export starts from the beginning (key 7:1) and runs to the end.
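
A chunked-export sketch based on the description above (host, memory name, chunk size, and output file names are assumptions; the header parsing is simplified):

Code Block
languagebash
titleChunked export sketch
collapsetrue
#!/bin/sh
# Export the TM in chunks of 20 segments, following the NextInternalKey response header.
BASE_URL="http://localhost:4040/t5memory"   # assumption: local instance as in the curl example above
MEM_NAME="my_tm"                            # assumption: placeholder memory name
KEY="7:1"                                   # export starts at key 7:1 by default
PART=0

while true; do
  PART=$((PART + 1))
  # -D - prints the response headers to stdout, the TMX body goes to the part file
  NEXT=$(curl -s -D - -o "export_part_$PART.tmx" \
    --request GET "$BASE_URL/$MEM_NAME/download.tmx" \
    --header 'Accept: application/xml' \
    --header 'Content-Type: application/json' \
    --data "{\"startFromInternalKey\": \"$KEY\", \"limit\": 20}" \
    | tr -d '\r' | awk -F': ' '/^NextInternalKey:/ {print $2}')
  # When NextInternalKey is missing or equals the key we sent, nothing is left to export.
  if [ -z "$NEXT" ] || [ "$NEXT" = "$KEY" ]; then
    break
  fi
  KEY="$NEXT"
done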


This endpoint flushes the TM before exporting.

Code Block
languagejs
titleResponse
collapsetrue
Response example:<?xml version="1.0" encoding="UTF-8" ?>
<tmx version="1.4">
<header creationtoolversion="0.2.14" gitCommit="60784cf * refactoring and cleanup" segtype="sentence" adminlang="en-us" srclang="en-GB" o-tmf="t5memory" creationtool="t5memory" datatype="xml" />
<body>
  <tu tuid="1" datatype="xml" creationdate="20190401T084052Z">
     <prop type="tmgr:segNum">10906825</prop>
     <prop type="tmgr:markup">OTMXML</prop>
     <prop type="tmgr:docname">none</prop>
     <tuv xml:lang="en-GB">
          <prop type="tmgr:language">English(U.K.)</prop>
          <seg>For > 100 setups.</seg>
     </tuv>
     <tuv xml:lang="de-DE">
          <prop type="tmgr:language">GERMAN(REFORM)</prop>
     <seg>Für > 100 Aufstellungen.</seg>
     </tuv>
     </tu>
   </body>
</tmx>



Export in internal format 

Purpose: Creates and exports an archive with the TM's .TMD and .TMI files
Request: GET /%service%/%tm_name%/download.tm
Headers

application/zip

Returns an archive (a .tm file) consisting of the .TMD and .TMI files.
The TM is flushed before the export.
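
A minimal curl sketch (host, port, and service name as in the download.tmx example; the memory name is a placeholder, and the Accept header name is an assumption based on the Headers row above):

curl --location --request GET 'http://localhost:4040/t5memory/{MEMORY_NAME}/download.tm' \
--header 'Accept: application/zip' \
--output {MEMORY_NAME}.tm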

Code Block
languagejs
titleResponse
collapsetrue
Response example:%binary_data%




Export in internal format - OLD

Purpose: Creates and exports an archive with the TM's .TMD, .TMI, and .MEM files
Request: GET /%service%/%tm_name%/
Headers

application/zip

Returns an archive (a .tm file) consisting of the .TMD and .TMI files.
The TM is flushed before the export.

Code Block
languagejs
titleResponse
collapsetrue
Response example:%binary_data%


...

Get the status of TM

Request: GET /%service%/%tm_name%/status
Params

-

Returns the status of the TM. It can be 'not found', 'available' (the TM is on disk but not loaded into RAM yet), or 'open' with additional info. If there has been at least one attempt to import a TMX or reorganize the TM since it was loaded into RAM, additional fields appear and stay in the statistics until the memory is unloaded.

Code Block
languagejs
titleResponse
collapsetrue
Response example:
{ // just opened TM, without import/reorganize called
    "status": "open",
    "lastAccessTime": "",
    "creationTime": "20230703T122212Z",
    "tmCreatedInT5M_version": "0:5:1"
}

{ // after reorganize was called
    "status": "open",
    "reorganizeStatus": "available",
    "reorganizeTime": 100,
    "reorganizeTime": "Overall reorganize time is      : 0:00:02\n",
    "segmentsReorganized": 1112,
    "invalidSegments": 10,
    "invalidSegmentsRCs": "5005:10; ",
    "firstInvalidSegments": "123; 432; 554; 623; 659; 675; 741; 742; 753; 755; ",
    "invalidSymbolErrors": -1,
    "reorganizeErrorMsg": "",
    "lastAccessTime": "",
    "creationTime": "20230810T095233Z",
    "tmCreatedInT5M_version": "0:5:10"
}

{ // not opened, but available on the disk
	"status": "available"
}

{ // TM not found
    "status": "not found",
    "res": 48 // 48 - both TMD and TMI files not found, 16 - only the TMD file not found, 32 - only the TMI file not found
}

The tmxImportStatus can be "available", "import", or "failed" if the import had errors. If there has been at least one import into that TM, new fields appear:
{ // TM in the process of importing
    "status": "open",
    "tmxImportStatus": "import",    
	"importProgress" : 56,    
	"importTime": "00:00:13",    
	"segmentsImported": 1356,    
	"invalidSegments": 23,    
	"invalidSymbolErrors": 2,    
	"importErrorMsg": "", 
    "lastAccessTime":  "%lastAccessTime",
    "creationTime": "20230703T122212Z",
    "tmCreatedInT5M_version": "0:5:1" 
}

// If an internal error occurred, e.g. error 5034 or 5035 (which indicates that the TM has reached its size limit and you should create a new one for the new segments, or for the part of the TMX that was left when the import stopped), the status looks like this:
{
"status": "open",
"tmxImportStatus": "failed",
"importProgress": 100,
"importTime": "Overall import time is : 0:00:19\n",
"segmentsImported": 445,
"invalidSegments": 1,
"invalidSymbolErrors": 0,
"importErrorMsg": "Warning: encoding 'UTF-16' from XML declaration or manually set contradicts the auto-sensed encoding; ignoring at column 40 in line 1; \n Fatal internal Error at column 6 in line 9605, import stopped at progress = 0%, errorMsg: TM is reached it's size limit, please create another one and import segments there, rc = 5034; aciveSegment = 1834\n\nSegment 1834 not imported\r\n\nReason = \nDocument = none\nSourceLanguage = de-DE\nTargetLanguage = en-GB\nMarkup = OTMXUXLF\nSource = in Verbindung mit Befestigungswinkel MS-...-WPE-B zur Wandmontage eines Einzelgeräts\nTarget = In combination with mounting bracket MS-...-WPE-B for wall mounting an individual component ",
"lastAccessTime": "",
"ErrorMsg": " Fatal internal Error at column 6 in line 9605, import stopped at progress = 0%, errorMsg: TM is reached it's size limit, please create another one and import segments there, rc = 5034; aciveSegment = 1834\n\nSegment 1834 not imported\r\n\nReason = \nDoc"
}

So you have information about the last segment that interrupted the TM import.
In 0.6.44 and up, loading a TM is optimized. To avoid blocking the TM list for a long time, t5memory uses lazy loading, so new statuses were added. First the TM is added to the TM list, then it is loaded, and while loading it can be in the new states: waiting to be loaded, loading, failed to load (plus the old 'open' once it is loaded). If a TM failed to load, a new request will try to load it again. Below are the responses for the status request; note that the fields taken from the binary data are set to 0 or empty until loading succeeds. This version also adds new fields: sizeInRAM (in bytes) - the precalculated size of the TM in RAM, and activeRequest - information about which blocking request is currently running.
You do not need to wait until the TM is loaded; t5memory manages that. For example, you can send a few fuzzy requests against a memory that is not open: the first one will try to open the TM and the second will wait for the first. If the first fails, the second will also try to open the TM.

a)
{
"status": "available"
}

b)
{
"status": "waiting for loading",
"sizeInRAM": 48304,
"activeRequest": "",
"lastAccessTime": "20240930T093759Z",
"creationTime": "",
"tmCreatedInT5M_version": "0:0:0",
"segmentIndex": 0,
"sourceLang": "",
"internalDescription": ""
}

c)
{
"status": "failed to open",
"sizeInRAM": 48304,
"activeRequest": "",
"lastAccessTime": "20240930T094715Z",
"creationTime": "",
"tmCreatedInT5M_version": "0:0:0",
"segmentIndex": 0,
"sourceLang": "",
"internalDescription": ""
}

d)
{
"status": "loading",
"sizeInRAM": 48304,
"activeRequest": "",
"lastAccessTime": "20240930T094118Z",
"creationTime": "",
"tmCreatedInT5M_version": "0:0:0",
"segmentIndex": 0,
"sourceLang": "",
"internalDescription": ""
}

e)
{
"status": "open",
"sizeInRAM": 2686128,
"activeRequest": "",
"lastAccessTime": "20240930T093859Z",
"creationTime": "20240926T105408Z",
"tmCreatedInT5M_version": "0:6:39",
"segmentIndex": 1750,
"sourceLang": "de-DE",
"internalDescription": ""
}
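
If you do want to observe the loading state yourself, here is a minimal polling sketch against the status endpoint (host and memory name are placeholders; the grep-based JSON parsing is simplified):

Code Block
languagebash
titleStatus polling sketch
collapsetrue
#!/bin/sh
# Poll the status endpoint until the TM is open, failed to open, or not found.
BASE_URL="http://localhost:4040/t5memory"   # assumption: local instance
MEM_NAME="my_tm"                            # assumption: placeholder memory name

while true; do
  STATUS=$(curl -s "$BASE_URL/$MEM_NAME/status" | grep -o '"status": *"[^"]*"')
  echo "current: $STATUS"
  case "$STATUS" in
    *open*|*"not found"*) break ;;   # "open" also matches "failed to open"
  esac
  sleep 1
done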



Fuzzy search

Purpose: Returns entries/translations with small differences from the requested source
Request: POST /%service%/%tm_name%/fuzzysearch
Params

Required: source, sourceLang, targetLang

iNumOfProposal - limit on the number of found proposals; max is 20, if 0 → the default value of 5 is used
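
A minimal curl sketch (host, port, and service name as in the download.tmx example; the memory name is a placeholder, and the body fields follow the request example below):

curl --location --request POST 'http://localhost:4040/t5memory/{MEMORY_NAME}/fuzzysearch' \
--header 'Content-Type: application/json' \
--data '{"sourceLang": "en-GB", "targetLang": "de", "source": "For > 100 setups.", "numOfProposals": 5}'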


Code Block
languagejs
titleResponse
collapsetrue
Request example:
{ // required fields
  "sourceLang": "en-GB",    // languages are checked against languages.xml
  "targetLang": "de",
  "source": "For > 100 setups.",

 // optional fields
  ["documentName": "OBJ_DCL-0000000845-004_pt-br.xml"],
  ["segmentNumber": 15],
  ["markupTable": "OTMXUXLF"],    // if there is no markup, the default OTMXUXLF is used.
								 // Markup tables should be located inside ~/.t5memory/TABLE/%markup%.TBL
  ["context": "395_408"],
  ["numOfProposals": 20],   // number of expected segments in the output. By default it is 5
  ["loggingThreshold": 0]
}





Response example:

Success:
{
    "ReturnValue": 0,
    "ErrorMsg": "",
    "NumOfFoundProposals": 1,
    "results": [
        {
            "source": "The end",
            "target": "The target",
            "segmentNumber": 0,
            "id": "",
            "documentName": "Te2.xlf",
            "sourceLang": "de-DE",
            "targetLang": "EN-GB",
            "type": "Manual",
            "author": "THOMAS LAURIA",
            "timestamp": "20231228T171821Z",
            "markupTable": "OTMXUXLF",
            "context": "2_3",
            "additionalInfo": "",
            "internalKey": "7:1",
            "matchType": "Fuzzy",
            "matchRate": 50,
            "fuzzyWords": 0,
            "fuzzyDiffs": 0
        }
    ]
} 

Example 2:
{
  "ReturnValue": 0,
  "ErrorMsg": "",
  "NumOfFoundProposals": 1, 
  "results": [
  {
     "source": "For > 100 setups.",
     "target": "Für > 100 Aufstellungen.",
     "segmentNumber": 10906825,
     "id": "",
     "documentName": "none",
    "documentShortName": "NONE",
    "sourceLang": "en-GB",
    "targetLang": "de-DE",
    "type": "Manual",
    "matchType": "Exact", // could be exact or fuzzy
    "author": "",
    "timestamp": "20190401T084052Z",
    "matchRate": 100,
     "fuzzyWords": -1, // for exact match it would be -1 here and in diffs
     "fuzzyDiffs": -1, // otherwise here would be amount of parsed words and diffs that was 
					   // used in fuzzy matchrate calculation    
	 "markupTable": "OTMXML",
     "context": "",
     "additionalInfo": ""
   }
 ]
}

Not found:

{
"ReturnValue": 133,
"ErrorMsg": "OtmMemoryServiceWorker::concordanceSearch::"
}
For exact matches a function is used that compares strings ignoring whitespace. First the normalized strings (without tags) are compared.
If they are the same, t5memory then checks the strings with tags and returns a match rate of 100 or 97, depending on the result.

Then it checks the context match rate and whether the document name is the same (case insensitive).

Then it checks and modifies the exact match rate according to the code in the code block below.
After that, only exact matches with usMatchLevel >= 100 are stored. If there are no exact matches, the fuzzy match calculation begins.
If there is at least one exact match, all fuzzy matches are skipped.
If there is only one exact-exact match, its rate is set to 102.


For equal matches with 100% word matches but different whitespace/newlines, each whitespace/newline diff counts as -1%. For punctuation, at least as of 0.4.50, each punctuation mark counts as a word token. This will be changed in the future to count punctuation as whitespace.

For the fuzzy calculation, tags are removed from the text, except t5:np tags, which are replaced with their "r" attribute so that each counts as 1 word.
 

For the fuzzy rate calculation we count words and then diffs in the normalized string (without tags), using this formula (for example, 10 words with 3 diffs give (10 - 3) * 100 / 10 = 70%):
  if (usDiff < usWords )
  {
    *pusFuzzy = (usWords != 0) ? ((usWords - usDiff)*100 / usWords) : 100;
  }
  else
  {
    *pusFuzzy = 0;
  } /* endif */

Regarding the Number Protection feature, number protection tags are replaced with the regex hashes from their attributes, so each counts as 1 word. NP tags with the same regex are counted as equal.
To count diffs, t5memory goes through both segments to find matching tokens, looking for a so-called snake - a run of matching tokens.
It then marks unmatched tokens as INSERTED or DELETED and calculates the diffs based on that.

If the rate is 100%, we add the tags back and compare again.
If the strings are then not equal, the match rate is adjusted as shown below. This probably never happens, because the exact match test runs before the fuzzy one,
and the exact test is done even if the triples hashes differ (a pre-fuzzy calculation whose equality can act as a flag that triggers the exact test).

  if ( !fStringEqual )
  {
    if ( usFuzzy > 3 )
    {
      usFuzzy -= 3;
    }
    else
    {
       usFuzzy = 0;
    } /* endif */
    usFuzzy = std::min( (USHORT)99, usFuzzy );
  } /* endif */

Then, depending on the type of translation, the rate can be tweaked:
if ( (usModifiedTranslationFlag == TRANSLFLAG_MACHINE) && (usFuzzy < 100) )
{
  // ignore machine fuzzy matches
}
else if ( usFuzzy > TM_FUZZINESS_THRESHOLD )
{
  /********************************************************/
  /* give MT flag a little less fuzziness */
  /********************************************************/
  if ( usModifiedTranslationFlag == TRANSLFLAG_MACHINE )
  {
    if ( usFuzzy > 1 )
    {
      usFuzzy -= 1;
    }
    else
    {
      usFuzzy = 0;
    } /* endif */
  } /* endif */
  if (usFuzzy == 100 && (pGetIn->ulParm & GET_RESPECTCRLF) && !fRespectCRLFStringEqual )
  { // P018279!
    usFuzzy -= 1;
  }
   // add to resulting set
} /* endif */
} /* endif */


At the end, the fuzzy request replaces the tags in the proposal from the TM with the tags from the request, and if matchRate >= 100, it calculates the whitespace diffs and applies matchRate -= wsDiffs.


Code Block
languagecpp
titleExactMatchRate calculation
collapsetrue
ExactMatchRate calculation: before this point, usExact is 97 or 100, depending on whether the strings with tags are equal ignoring whitespace; the code below then applies some tweaks.
pClb is the struct that holds the proposals from the TM, pGetIn is the fuzzy request's data.

 // loop over CLBs and look for best matching entry
{
  LONG lLeftClbLen; // left CLB entries in CLB list
  PTMX_TARGET_CLB pClb; // pointer for CLB list processing
  #define SEG_DOC_AND_CONTEXT_MATCH 8
  #define DOC_AND_CONTEXT_MATCH 7
  #define CONTEXT_MATCH 6
  #define SAME_SEG_AND_DOC_MATCH 5
  #define SAME_DOC_MATCH 4
  #define MULT_DOC_MATCH 3
  #define NORMAL_MATCH 2
  #define IGNORE_MATCH 1
  SHORT sCurMatch = 0;

  // loop over all target CLBs
  pClb = pTMXTargetClb;
  lLeftClbLen = RECLEN(pTMXTargetRecord) -
  pTMXTargetRecord->usClb;
  while ( ( lLeftClbLen > 0 ) && (sCurMatch < SAME_SEG_AND_DOC_MATCH) )
  {
    USHORT usTranslationFlag = pClb->bTranslationFlag;
    USHORT usCurContextRanking = 0; // context ranking of this match
    BOOL fIgnoreProposal = FALSE;
    // apply global memory option file on global memory proposals
   if ( pClb->bTranslationFlag == TRANSLFLAG_GLOBMEM ) // pClb it's segment in TM
   {
       if ( (pGetIn->pvGMOptList != NULL) && pClb->usAddDataLen ) // pGetIn it's fuzzy requests segment
       {

           USHORT usAddDataLen = NtmGetAddData( pClb, ADDDATA_ADDINFO_ID, pContextBuffer, MAX_SEGMENT_SIZE );
           if ( usAddDataLen )
           {
               GMMEMOPT GobMemOpt = GlobMemGetFlagForProposal( pGetIn->pvGMOptList, pContextBuffer );
               switch ( GobMemOpt )
               {
                  case GM_SUBSTITUTE_OPT: usTranslationFlag = TRANSLFLAG_NORMAL; break;
                  case GM_HFLAG_OPT : usTranslationFlag = TRANSLFLAG_GLOBMEM; break;
                  case GM_HFLAGSTAR_OPT : usTranslationFlag = TRANSLFLAG_GLOBMEMSTAR; break;
                  case GM_EXCLUDE_OPT : fIgnoreProposal = TRUE; break;
               } /* endswitch */
          } /* endif */
     } /* endif */ 

     if ( pClb == pTMXTargetClb )
    {
       usTargetTranslationFlag = usTranslationFlag;
    } /* endif */
  } /* endif */ 


  // check context strings (if any)
  if ((!fIgnoreProposal)
       && pGetIn->szContext[0]
       && pClb->usAddDataLen )
   {
       USHORT usContextLen = NtmGetAddData( pClb, ADDDATA_CONTEXT_ID, pContextBuffer, MAX_SEGMENT_SIZE );
       if ( usContextLen != 0 )
       {
            usCurContextRanking = NTMCompareContext( pTmClb, pGetIn->szTagTable, pGetIn->szContext, pContextBuffer );
       } /* endif */
    } /* endif */


  // check for matching document names
  if ( pGetIn->ulParm & GET_IGNORE_PATH )
  {
     // we have to compare the real document names rather than comparing the document name IDs
     PSZ pszCLBDocName = NTMFindNameForID( pTmClb, &(pClb->usFileId), (USHORT)FILE_KEY );
     if ( pszCLBDocName != NULL )
     {
        PSZ pszName = UtlGetFnameFromPath( pszCLBDocName );
        if ( pszName == NULL )
        {
           pszName = pszCLBDocName;
         } /* endif */
      fMatchingDocName = stricmp( pszName, pszDocName ) == 0;
    }
    else
    {
       // could not access the document name, we have to compare the document name IDs
      fMatchingDocName = ((pClb->usFileId == usGetFile) || (pClb->usFileId == usAlternateGetFile));
    } /* endif */
  }
  else
  {
     // we can compare the document name IDs
     fMatchingDocName = ((pClb->usFileId == usGetFile) || (pClb->usFileId == usAlternateGetFile));
  } /* endif */


  if ( fIgnoreProposal )
  {
    if ( sCurMatch == 0 )
    {
      sCurMatch = IGNORE_MATCH;
    } /* endif */
  }
  else if ( usCurContextRanking == 100 )
  {
    if ( fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1)) )
    {
      if ( sCurMatch < SEG_DOC_AND_CONTEXT_MATCH )
      {
         sCurMatch = SEG_DOC_AND_CONTEXT_MATCH;
        pTMXTargetClb = pClb; // use this target CLB for match
        usTargetTranslationFlag = usTranslationFlag;
        usContextRanking = usCurContextRanking;
      }
    }
    else if ( fMatchingDocName )
    {
    if ( sCurMatch < DOC_AND_CONTEXT_MATCH )
    {
      sCurMatch = DOC_AND_CONTEXT_MATCH;
      pTMXTargetClb = pClb; // use this target CLB for match
      usTargetTranslationFlag = usTranslationFlag;
      usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == DOC_AND_CONTEXT_MATCH )
     {
       // we have already a match of this type so check if context ranking
       if ( usCurContextRanking > usContextRanking )
       {
          pTMXTargetClb = pClb; // use newer target CLB for match
          usTargetTranslationFlag = usTranslationFlag;
          usContextRanking = usCurContextRanking;
       }
       // use time info to ensure that latest match is used
       else if ( usCurContextRanking == usContextRanking )
       {
         // GQ 2015-04-10 New approach: If we have an exact-exact match use this one, otherwise use timestamp for the comparism
         BOOL fExactExactNewCLB = fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1));
         BOOL fExactExactExistingCLB = ((pTMXTargetClb->usFileId == usGetFile) || (pTMXTargetClb->usFileId == usAlternateGetFile)) &&
         (pTMXTargetClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pTMXTargetClb->ulSegmId <= (pGetIn->ulSegmentId + 1));
         if ( fExactExactNewCLB && !fExactExactExistingCLB )
         {
           // use exact-exact CLB for match
           pTMXTargetClb = pClb;
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         }
         else if ( (fExactExactNewCLB == fExactExactExistingCLB) && (pClb->lTime > pTMXTargetClb->lTime) )
         {
           // use newer target CLB for match
           pTMXTargetClb = pClb;
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         }
       } /* endif */
     } /* endif */
   }
   else
   {
     if ( sCurMatch < CONTEXT_MATCH )
     {
     sCurMatch = CONTEXT_MATCH;
     pTMXTargetClb = pClb; // use this target CLB for match
     usTargetTranslationFlag = usTranslationFlag;
     usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == CONTEXT_MATCH )
     {
       // we have already a match of this type so check if context ranking
      if ( usCurContextRanking > usContextRanking )
      {
        pTMXTargetClb = pClb; // use newer target CLB for match
        usTargetTranslationFlag = usTranslationFlag;
        usContextRanking = usCurContextRanking;
      }
      // use time info to ensure that latest match is used
     else if ( (usCurContextRanking == usContextRanking) && (pClb->lTime > pTMXTargetClb->lTime) )
     {
       pTMXTargetClb = pClb; // use newer target CLB for match
       usTargetTranslationFlag = usTranslationFlag;
       usContextRanking = usCurContextRanking;
      } /* endif */
    } /* endif */
  } /* endif */
 }
 else if ( fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1)) )
 {
   // same segment from same document available
   sCurMatch = SAME_SEG_AND_DOC_MATCH;
   pTMXTargetClb = pClb; // use this target CLB for match
   usContextRanking = usCurContextRanking;
   usTargetTranslationFlag = usTranslationFlag;
 }
 else if ( fMatchingDocName )
 {
    // segment from same document available
    if ( sCurMatch < SAME_DOC_MATCH )
    {
       sCurMatch = SAME_DOC_MATCH;
       pTMXTargetClb = pClb; // use this target CLB for match
       usTargetTranslationFlag = usTranslationFlag;
       usContextRanking = usCurContextRanking;
     }
     else if ( sCurMatch == SAME_DOC_MATCH )
     {
       // we have already a match of this type so
       // use time info to ensure that latest match is used
       if ( pClb->lTime > pTMXTargetClb->lTime )
       {
         pTMXTargetClb = pClb; // use newer target CLB for match
         usTargetTranslationFlag = usTranslationFlag;
         usContextRanking = usCurContextRanking;
       } /* endif */
     } /* endif */
   }
    else if ( pClb->bMultiple )
    {
       // multiple target segment available
       if ( sCurMatch < MULT_DOC_MATCH )
       {
         // no better match yet
         sCurMatch = MULT_DOC_MATCH;
         pTMXTargetClb = pClb; // use this target CLB for match
         usTargetTranslationFlag = usTranslationFlag;
         usContextRanking = usCurContextRanking;
       } /* endif */
     }
     else if ( usTranslationFlag == TRANSLFLAG_NORMAL )
     {
        // a 'normal' memory match is available
        if ( sCurMatch < NORMAL_MATCH )
        {
           // no better match yet
           sCurMatch = NORMAL_MATCH;
           pTMXTargetClb = pClb; // use this target CLB for match
           usTargetTranslationFlag = usTranslationFlag;
           usContextRanking = usCurContextRanking;
         } /* endif */
     } /* endif */

    // continue with next target CLB
    if ( sCurMatch < SAME_SEG_AND_DOC_MATCH )
    {
      lLeftClbLen -= TARGETCLBLEN(pClb);
      if (lLeftClbLen > 0)
      {
        usTgtNum++;
        pClb = NEXTTARGETCLB(pClb);
      }
    } /* endif */
} /* endwhile */


{
  BOOL fNormalMatch = (usTargetTranslationFlag == TRANSLFLAG_NORMAL) ||
  (usTargetTranslationFlag == TRANSLFLAG_GLOBMEM) ||
  (usTargetTranslationFlag == TRANSLFLAG_GLOBMEMSTAR);
  switch ( sCurMatch )
  {
    case IGNORE_MATCH :
      usMatchLevel = 0;
       break;
  case SAME_SEG_AND_DOC_MATCH :
      usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
      break;
   case SEG_DOC_AND_CONTEXT_MATCH :
       usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1; // exact-exact match with matching context
       break;
    case DOC_AND_CONTEXT_MATCH :
       if ( usContextRanking == 100 )
       {
         // GQ 2015/05/09: treat 100% context matches as normal exact matches
         // usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
         usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
        }
        else
        {
          usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
        } /* endif */
       break;
  case CONTEXT_MATCH :
    if ( usContextRanking == 100 )
    {
      // GQ 2015/05/09: treat 100% context matches as normal exact context matches
      // usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
      // GQ 2016/10/24: treat 100% context matches as normal exact matches
      usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
    }
    else
    {
      usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
     } /* endif */
     break;
  case SAME_DOC_MATCH :
    usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
    break;
  case MULT_DOC_MATCH :
     usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
     break;
  default :
     usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
     break;
  } /* endswitch */
}
}



Here is the structure of a segment from the responses:

{
"source": "in Verbindung 2 fds fdsa amit Befestigungswinkel fdsaf MS-...-WPE-B zur Wandmontage eines sfg Einzelgeräts", // source as it was saved
"sourceNPRepl": "in Verbindung 2 fds fdsa amit Befestigungswinkel fdsaf MS-...-WPE-B zur Wandmontage eines sfg Einzelgeräts", // NP-replaced source - used for the fuzzy and triples thresholds - NP tags are replaced by their hashes here
"sourceNorm": "in Verbindung 2 fds fdsa amit Befestigungswinkel fdsaf MS-...-WPE-B zur Wandmontage eines sfg Einzelgeräts", // normalized source - used for the fuzzy calculation - no tags at all
"target": "In combinahgfd tion with mounting bracket MS-...-WPE-B for wall mounting an individual component ", // saved target
"segmentNumber": 1, // internal id generated in the TM, or provided with the update call; can be used together with internalKey as a primary key in the TM
"id": "", // dummy field
"documentName": "Audioscript_Hybrides_Arbeiten.xlsx.sdlxliff",
"sourceLang": "DE-DE", // languages in requests are looked up in languages.xml and the best (or preferred) match is used
"targetLang": "EN-GB",
"type": "Manual",
"author": "PROJECT MANAGER",
"timestamp": "", // if empty, the current time is used
"markupTable": "OTMXUXLF", // the same all the time; could be refactored out and removed in the future
"context": "390", // context and additionalInfo are saved as extra fields in the CLB (an internal data structure that stores variants of these fields for the same translation), located at the position given in internalKey below
"additionalInfo": "",
"internalKey": "11:1" // internal position of the segment inside the .TMD file; can shift when other segments are deleted; neither number should be zero
}

...