Table of Contents
Overview and API introduction
...
# | Endpoint | Purpose | Method | Path template | Example | Async?
---|---|---|---|---|---|---
1 | Get the list of TMs | Returns JSON list of TMs | GET | /%service%/ | /t5memory/ | 
2 | Create TM | Creates TM with the provided name | POST | /%service%/ | /t5memory/ | 
3 | Create/Import TM in internal format | Imports and unpacks a base64 encoded archive of the .TMD, .TMI, .MEM files, then renames it to the provided name | POST | /%service%/ | /t5memory/ | 
4 | Clone TM locally | Makes a clone of an existing TM | POST | /%service%/%tm_name%/clone | /t5memory/my+TM/clone ('+' is a placeholder for whitespace in the TM name, so there should be 'my TM.TMD' and 'my TM.TMI' (and, in pre-0.5.x, also 'my TM.MEM') files on disk); the TM name IS case sensitive in the URL | 
5 | Reorganize TM | Reorganizes the TM (replacing the TM with a new one and reimporting segments from the TMD) - async | GET | /%service%/%tm_name%/reorganize | /t5memory/my+other_tm/reorganize | + (in 0.5.x and up)
6 | Delete TM | Deletes the .TMD, .TMI files | DELETE | /%service%/%tm_name%/ | /t5memory/%tm_name%/ | 
7 | Import TMX into TM | Imports the provided base64 encoded TMX file into the TM - async | POST | /%service%/%tm_name%/import | /t5memory/%tm_name%/import | +
8 | Export TMX from TM | Creates a TMX from the TM, encoded in base64 | GET | /%service%/%tm_name%/ | /t5memory/%tm_name%/ | 
9 | Export in internal format | Creates and exports an archive with the .TMD, .TMI files of the TM | GET | /%service%/%tm_name%/ | /t5memory/%tm_name%/ | 
10 | Status of TM | Returns status\import status of the TM | GET | /%service%/%tm_name%/status | /t5memory/%tm_name%/status | 
11 | Fuzzy search | Returns entries\translations with small differences from the requested one | POST | /%service%/%tm_name%/fuzzysearch | /t5memory/%tm_name%/fuzzysearch | 
12 | Concordance search | Returns entries\translations that contain the requested segment | POST | /%service%/%tm_name%/concordancesearch | /t5memory/%tm_name%/concordancesearch | 
13 | Entry update | Updates an entry\translation | POST | /%service%/%tm_name%/entry | /t5memory/%tm_name%/entry | 
14 | Entry delete | Deletes an entry\translation | POST | /%service%/%tm_name%/entrydelete | /t5memory/%tm_name%/entrydelete | 
15 | Save all TMs | Flushes all filebuffers (TMD, TMI files) to the filesystem | GET | /%service%_service/savetms | /t5memory_service/savetms | 
16 | Shutdown service | Flushes all filebuffers to the filesystem and shuts down the service | GET | /%service%_service/shutdown | /t5memory_service/shutdown | 
17 | Test tag replacement call | For testing tag replacement | POST | /%service%_service/tagreplacement | /t5memory_service/tagreplacement | 
18 | Resources | Returns resources and service data | GET | /%service%_service/resources | /t5memory_service/resources | 
19 | Import TMX from local file (in the "removing lookuptable" git branch) | Similar to import TMX, but uses a local path to the file instead of a base64 encoded file | POST | /%service%/%tm_name%/importlocal | /t5memory/%tm_name%/importlocal | +
20 | Mass deletion of entries (from v0.6.0) | Like reorganize, but skips the import of segments that return true when checked against the provided filters combined with logical AND | POST | /%service%/%tm_name%/entriesdelete | /t5memory/tm1/entriesdelete | +
21 | New concordance search (from v0.6.0) | An extended concordance search, where you can search in different fields of the segment | POST | /%service%/%tm_name%/search | /t5memory/tm1/search | 
Available end points
List of TMs |
---|---
Purpose | Returns JSON list of TMs
Request | GET /%service%/
Params | -
Returns the list of open TMs and then the list of available (excluding open) TMs in the app.
|
...
Create TM |
---|---
Purpose | Creates a TM with the provided name (TMD and TMI files in the /MEM/ folder)
Request | POST /%service%/
Params | Required: name, sourceLang
|
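A client only needs to send the two required fields in the JSON body. A minimal sketch of building that body (the helper name and the client-side validation are illustrative, not part of the API):

```python
import json

def build_create_tm_body(name: str, source_lang: str) -> str:
    """Build the JSON body for POST /%service%/ (create TM)."""
    if not name or not source_lang:
        # both fields are documented as required
        raise ValueError("both 'name' and 'sourceLang' are required")
    return json.dumps({"name": name, "sourceLang": source_lang})

# The body would then be sent as: POST /t5memory/ with this JSON payload
body = build_create_tm_body("example_tm", "en-GB")
```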
...
Create/Import TM in internal format |
---|---
Purpose | Imports and unpacks a base64 encoded archive of the .TMD, .TMI (and, in pre-0.5.x versions, .MEM) files, then renames it to the provided name
Request | POST /%service%/
Params | { "name": "example_tm", "sourceLang": "bg-BG", "data": "base64EncodedArchive" }
Do not import TMs created in another version of t5memory. Starting from 0.5.x, the TMD and TMI file headers store the t5memory version they were created with, and a different middle version (0.5.x) or major version is treated as incompatible. This request would create example_tm.TMD (data file) and example_tm.TMI (index file) in the MEM folder.
|
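The body for this call can be prepared client-side along these lines. This is a hedged sketch: the ZIP archive type and the helper name are assumptions for illustration; the service may expect a different archive layout:

```python
import base64
import io
import json
import zipfile

def build_internal_import_body(name: str, source_lang: str, files: dict) -> str:
    """Pack TM files into an in-memory archive, base64 encode it, and
    wrap it in the documented {name, sourceLang, data} body."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:  # ZIP is an assumption here
        for filename, content in files.items():
            zf.writestr(filename, content)
    data = base64.b64encode(buf.getvalue()).decode("ascii")
    return json.dumps({"name": name, "sourceLang": source_lang, "data": data})

body = build_internal_import_body(
    "example_tm", "bg-BG",
    {"example_tm.TMD": b"\x00data", "example_tm.TMI": b"\x00index"})
```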
...
Clone TM locally |
---|---
Purpose | Makes a clone of an existing TM under the provided name
Request | POST /%service%/%tm_name%/clone
Params | Required: name, sourceLang
The endpoint is sync (blocking).
|
...
Import TMX into TM |
---|---
Purpose | Imports the provided base64 encoded TMX file into the TM. Starts another thread for the import; use the status call to check the import status
Request | POST /%service%/%tm_name%/import
Params | {"tmxData": "base64EncodedTmxFile" }
The TM must exist. Handling when the framing tag situation differs between source and target: for skipAll or skipPairedIf, if the framing tag situation is the same in source and target, both sides are treated as described above. If framing tags exist only in the source, they are still treated as described above. If they exist only in the target, nothing is removed.
|
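Preparing the body for this call is a single base64 step; a minimal sketch (the helper name is illustrative):

```python
import base64
import json

def build_tmx_import_body(tmx_bytes: bytes) -> str:
    """Base64 encode a TMX file into the documented {"tmxData": ...} body."""
    return json.dumps({"tmxData": base64.b64encode(tmx_bytes).decode("ascii")})

tmx = b'<?xml version="1.0" encoding="UTF-8"?><tmx version="1.4"><body/></tmx>'
# Sent as: POST /t5memory/%tm_name%/import with this JSON payload
body = build_tmx_import_body(tmx)
```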
Reorganize TM |
---|---
Purpose | Reorganizes the TM and fixes issues.
Request | GET /%service%/%tm_name%/reorganize
Headers | Accept - application/xml
Up to v0.4.x reorganize is sync. During reorganize, t5memory checks each segment and, if the check passes, hands the segment to the putProposal function, which is also used by the update entry and import TMX requests.
...
Get the status of TM |
---|---
Request | GET /%service%/%tm_name%/status
Params | -
Returns the status of the TM. It can be 'not found'; 'available', if it is on disk but not yet loaded into RAM; or 'open', with additional info. If there has been at least one attempt to import a TMX or reorganize the TM since it was loaded into RAM, additional fields appear and stay in the statistics until the memory is unloaded.
| ||||||||||
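Since import and reorganize are async, clients typically poll this endpoint until the TM is ready. A hedged sketch with an injected fetch function; the exact field names and values used here ("status", "importStatus") are assumptions based on the description above, not a confirmed schema:

```python
import time

def wait_until_ready(fetch_status, poll_seconds=1.0, max_polls=600):
    """Poll GET /%service%/%tm_name%/status until the TM is open and no
    import is running. fetch_status is injected so the sketch stays
    transport-agnostic (it would normally perform the HTTP GET)."""
    for _ in range(max_polls):
        status = fetch_status()
        # assumed field names for illustration
        if status.get("status") == "open" and status.get("importStatus") != "import":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("TM did not become ready in time")
```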
Fuzzy search |
---|---
Purpose | Returns entries\translations with small differences from the requested segment
Request | POST /%service%/%tm_name%/fuzzysearch
Params | Required: source, sourceLang, targetLang. iNumOfProposal - limit of found proposals - max is 20; if 0 → use default value '5'
| |||||||||||||||||||||||||||||||||||||
Concordance search |
---|---
Purpose | Returns entries\translations that contain the requested segment
Request | POST /%service%/%tm_name%/concordancesearch
Params | Required: searchString - what we are looking for, searchType ["Source"|"Target"|"SourceAndTarget"] - where to look. iNumOfProposal - limit of found proposals - max is 20; if 0 → use default value '5'
Request example:
{
"searchString": "The",
"searchType": "source",
["searchPosition": "",]
["numResults": 20,]
["msSearchAfterNumResults": 250,]
["loggingThreshold": 0]
}
Response example:
Success:
{
"ReturnValue": 0,
"NewSearchPosition": null,
"results": [
{
"source": "For > 100 setups.",
"target": "Für > 100 Aufstellungen.",
"segmentNumber": 10906825,
"id": "",
"documentName": "none",
"documentShortName": "NONE",
"sourceLang": "en-GB", ← RFC 5646
"targetLang": "de-DE", ← RFC 5646
"type": "Manual",
"matchType": "undefined",
"author": "",
"timestamp": "20190401T084052Z",
"matchRate": 0,
"markupTable": "OTMXML",
"context": "",
"additionalInfo": ""
}
],
"ErrorMsg": ""
}
Success, but with NewSearchPosition - not all TM was checked, use this position to repeat search:
{
"ReturnValue": 0,
"NewSearchPosition": "8:1",
"results": [
{
"source": "For > 100 setups.",
"target": "Für > 100 Aufstellungen.",
"segmentNumber": 10906825,
"id": "",
"documentName": "none",
"documentShortName": "NONE",
"sourceLang": "en-GB",
"targetLang": "de-DE",
"type": "Manual",
"matchType": "undefined",
"author": "",
"timestamp": "20190401T084052Z",
"matchRate": 0,
"markupTable": "OTMXML",
"context": "",
"additionalInfo": ""
}
],
"ErrorMsg": ""
}
SearchPosition / NewSearchPosition format: "7:1"
The first number is the segment\record number, the second is the target number.
The NewSearchPosition is an internal key of the memory for the next position on sequential access. Since it is an internal key, maintained and understood by the underlying memory plug-in (for EqfMemoryPlugin it is the record number and the position in one record), no assumptions should be made regarding the content. It is just a string that should be sent back to OpenTM2 on the next request, so that the search starts from there.
This is also how Translate5 implements it: the first request to OpenTM2 contains SearchPosition as an empty string; OpenTM2 then returns a string in NewSearchPosition, which is simply resent to OpenTM2 in the next request.
Not found:
{
"ReturnValue": 0,
"NewSearchPosition": null,
"ErrorMsg": ""
}
TM not found:
{
"ReturnValue": 133,
"ErrorMsg": "OtmMemoryServiceWorker::concordanceSearch::"
} |
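The protocol above (empty SearchPosition on the first request, then resend NewSearchPosition verbatim until it comes back null) can be sketched as a loop; the search function is injected so the sketch stays transport-agnostic:

```python
def collect_all_results(search):
    """Drain a concordance search across the whole TM.

    search(position) stands in for POST /%service%/%tm_name%/concordancesearch
    with "searchPosition" set to the given value, returning the parsed JSON."""
    results = []
    position = ""  # first request: empty search position
    while True:
        response = search(position)
        results.extend(response.get("results", []))
        # the key is opaque: resend it unchanged, never parse it client-side
        position = response.get("NewSearchPosition")
        if not position:  # null → the whole TM has been searched
            return results
```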
Update entry
Only sourceLang, targetLang, source and target are required.
This request makes changes only in the filebuffer (files on disk are not changed).
To write them to disk, call a request that flushes the TM to disk as part of its execution (exportTMX, exportTM, cloneTM), or use the SaveAllTms request.
Request example:
{
"source": "The end",
"target": "The target",
"sourceLang": "en", // langs would be checked with languages.xml
"targetLang": "de",
//additional field
["documentName": "Translate5 Demo Text-en-de.xlf"],
["segmentNumber": 8,]
["author": "Thomas Lauria"],
["timeStamp": "20210621T071042Z"], // if there is no timestamp, current time would be used
["context": "2_2"], // context and addInfo would be saved in TM in the same field
["addInfo": "2_2"],
["type": "Manual"], // could be GlobalMemory, GlobalMemoryStar, MachineTranslation, Manual, by default Undefined
["markupTable": "OTMXUXLF"], //if there is no markup, default OTMXUXLF would be used.
//Markup tables should be located inside ~/.t5memory/TABLE/%markup$.TBL
["loggingThreshold": 0],
["save2disk": 0] // flag if we need to flush tm to disk after update. by default is true
}
Here is the data struct used for search, so you can see the maximum number of symbols:
typedef struct _LOOKUPINMEMORYDATA
{
char szMemory[260];
wchar_t szSource[2050];
wchar_t szTarget[2050];
char szIsoSourceLang[40];
char szIsoTargetLang[40];
int lSegmentNum;
char szDocName[260];
char szMarkup[128];
wchar_t szContext[2050];
wchar_t szAddInfo[2050];
wchar_t szError[512];
char szType[256];
char szAuthor[80];
char szDateTime[40];
char szSearchMode[40]; // only for concordance search
char szSearchPos[80]; // only for concordance search
int iNumOfProposals;
int iSearchTime;
wchar_t szSearchString[2050];
} LOOKUPINMEMORYDATA, *PLOOKUPINMEMORYDATA;
Response example:success:
{
"sourceLang": "de-DE",
"targetLang": "en-GB",
"source": "The end",
"target": "The target",
"documentName": "Translate5 Demo Text-en-de.xlf",
"segmentNumber": 222,
"markupTable": "OTMXUXLF",
"timeStamp": "20210621T071042Z",
"author": "Thomas Lauria"
}
If a similar record already exists, t5memory compares the source text;
if it is the same, t5memory compares the docName;
if that is also the same, t5memory compares the timestamps and keeps only the newer record.
|
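The comparison chain above can be sketched as follows. Field names follow the request example; treating the fixed-width "YYYYMMDDThhmmssZ" timestamps as lexically comparable is an assumption of this sketch, and the helper itself is illustrative, not t5memory code:

```python
def resolve_duplicate(existing, incoming):
    """Apply the documented newer-wins rule for two entries.

    Returns the surviving entry if the two are duplicates,
    or None if both records should be kept."""
    if existing["source"] != incoming["source"]:
        return None  # different source text: not a duplicate
    if existing.get("documentName") != incoming.get("documentName"):
        return None  # same source but different document: keep both
    # same source and document: the newer timestamp survives
    return max(existing, incoming, key=lambda e: e["timeStamp"])
```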
Delete entry
Only sourceLang, targetLang, source, and target are required.
Deletion is based on a strict match (including tags and whitespace) of target and source.
This request makes changes only in the filebuffer (files on disk are not changed).
To write them to disk, call a request that flushes the TM to disk as part of its execution (exportTMX, exportTM, cloneTM), or use the SaveAllTms request.
Request example:
{
"sourceLang": "bg",
"targetLang": "en",
"source": "The end",
"target": "Eth dne",
["documentName": "my file.sdlxliff",]
["segmentNumber": 1,]
["markupTable": "translate5",]
["author": "Thomas Lauria",]
["type": "",]
["timeStamp": ""],
["context": "",]
["addInfo": ""] , ["loggingThreshold": 0]
}
|
Save all TMs
Flushes all filebuffers (TMD, TMI files) to the filesystem and resets the 'Modified' flags of the file buffers.
A filebuffer is a file instance of a .TMD or .TMI file loaded into RAM. It provides better speed and safety when working with files.
*/
} /* endif */
} /* endif */
}
else if ( fMatchingDocName && (pClb->ulSegmId >= (pGetIn->ulSegmentId - 1)) && (pClb->ulSegmId <= (pGetIn->ulSegmentId + 1)) )
{
// same segment from same document available
sCurMatch = SAME_SEG_AND_DOC_MATCH;
pTMXTargetClb = pClb; // use this target CLB for match
usContextRanking = usCurContextRanking;
usTargetTranslationFlag = usTranslationFlag;
}
else if ( fMatchingDocName )
{
// segment from same document available
if ( sCurMatch < SAME_DOC_MATCH )
{
sCurMatch = SAME_DOC_MATCH;
pTMXTargetClb = pClb; // use this target CLB for match
usTargetTranslationFlag = usTranslationFlag;
usContextRanking = usCurContextRanking;
}
else if ( sCurMatch == SAME_DOC_MATCH )
{
// we have already a match of this type so
// use time info to ensure that latest match is used
if ( pClb->lTime > pTMXTargetClb->lTime )
{
pTMXTargetClb = pClb; // use newer target CLB for match
usTargetTranslationFlag = usTranslationFlag;
usContextRanking = usCurContextRanking;
} /* endif */
} /* endif */
}
else if ( pClb->bMultiple )
{
// multiple target segment available
if ( sCurMatch < MULT_DOC_MATCH )
{
// no better match yet
sCurMatch = MULT_DOC_MATCH;
pTMXTargetClb = pClb; // use this target CLB for match
usTargetTranslationFlag = usTranslationFlag;
usContextRanking = usCurContextRanking;
} /* endif */
}
else if ( usTranslationFlag == TRANSLFLAG_NORMAL )
{
// a 'normal' memory match is available
if ( sCurMatch < NORMAL_MATCH )
{
// no better match yet
sCurMatch = NORMAL_MATCH;
pTMXTargetClb = pClb; // use this target CLB for match
usTargetTranslationFlag = usTranslationFlag;
usContextRanking = usCurContextRanking;
} /* endif */
} /* endif */
// continue with next target CLB
if ( sCurMatch < SAME_SEG_AND_DOC_MATCH )
{
lLeftClbLen -= TARGETCLBLEN(pClb);
if (lLeftClbLen > 0)
{
usTgtNum++;
pClb = NEXTTARGETCLB(pClb);
}
} /* endif */
} /* endwhile */
{
BOOL fNormalMatch = (usTargetTranslationFlag == TRANSLFLAG_NORMAL) ||
(usTargetTranslationFlag == TRANSLFLAG_GLOBMEM) ||
(usTargetTranslationFlag == TRANSLFLAG_GLOBMEMSTAR);
switch ( sCurMatch )
{
case IGNORE_MATCH :
usMatchLevel = 0;
break;
case SAME_SEG_AND_DOC_MATCH :
usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
break;
case SEG_DOC_AND_CONTEXT_MATCH :
usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1; // exact-exact match with matching context
break;
case DOC_AND_CONTEXT_MATCH :
if ( usContextRanking == 100 )
{
// GQ 2015/05/09: treat 100% context matches as normal exact matches
// usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
}
else
{
usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
} /* endif */
break;
case CONTEXT_MATCH :
if ( usContextRanking == 100 )
{
// GQ 2015/05/09: treat 100% context matches as normal exact context matches
// usMatchLevel = fNormalMatch ? usEqual+2 : usEqual-1;
// GQ 2016/10/24: treat 100% context matches as normal exact matches
usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
}
else
{
usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
} /* endif */
break;
case SAME_DOC_MATCH :
usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
break;
case MULT_DOC_MATCH :
usMatchLevel = fNormalMatch ? usEqual+1 : usEqual-1;
break;
default :
usMatchLevel = fNormalMatch ? usEqual : usEqual-1;
break;
} /* endswitch */
}
} |
New concordance search |
---|---
Purpose | Returns entries\translations that fit the selected filters.
Request | POST /%service%/%tm_name%/search
Params | Required: NONE. iNumOfProposal - limit of found proposals - max is 200; if 0 → use default value '5'
Search is made segment by segment, checking whether each segment fits the selected filters. You can search for EXACT or CONCORDANCE matches in these fields ("Filters"):
- It is possible to apply a filter with just a SearchMode, e.g. "authorSearchMode": "exact" without an "author" field; in that case it looks for segments where the author field is empty.
- "timestampSpanStart": "20000121T115234Z" - you should set both span parameters to apply this filter, otherwise an error is returned. Check the output to see how the filter was parsed and applied.
- "logicalOr": 1 - instead of returning segments, just count them and return the counter in "NumOfFoundSegments": 22741
- "sourceLang": "en-GB" - language filters can be applied with the major-language feature: the source lang in this case is applied as an exact filter for the source lang, but the target lang checks whether the languages are in the same language group. That check is done via the languages.xml file using the isPreferred flag.
- "GlobalSearchOptions": "SEARCH_FILTERS_LOGICAL_OR|SEARCH_EXACT_MATCH_OF_SRC_LANG_OPT, lang = en-GB|SEARCH_GROUP_MATCH_OF_TRG_LANG_OPT, lang = de" (as seen in the response)
- "searchPosition": "8:1" - the position where the search starts internally in the btree.
The search is limited by the number of found segments (set by numResults) or by a timeout (set by msSearchAfterNumResults); the timeout is ignored if no segments in the TM fit the parameters. Max numResults is 200.
|
Response example:
{
'saved 4 files': '/home/or/.t5memory/MEM/mem2.TMD, /home/or/.t5memory/MEM/mem2.TMI, /home/or/.t5memory/MEM/newBtree3.TMD, /home/or/.t5memory/MEM/newBtree3.TMI'
}
List of saved files.
Shutdown service
dontsave=1 (optional, in the address) - skips saving TMs; for now the value doesn't matter, only its presence.
If saving TMs before closing, the service checks whether an import process is still running.
If there is one, it waits 1 second and checks again.
It repeats the last step for up to 10 minutes, then closes the service anyway.
Code Block | ||||||
---|---|---|---|---|---|---|
| ||||||
Response example:%Empty% |
Test tag replacement call
Required: src, trg
Optional: req
...
Here is a search request with all possible parameters:
{
"source": "the",
"sourceSearchMode": "CONTAINS, CASEINSENSETIVE, WHITESPACETOLERANT, INVERTED",
"target": "",
"targetSearchMode": "EXACT, CASEINSENSETIVE",
"document": "evo3_p1137_reports_translation_properties_de_fr_20220720_094902",
"documentSearchMode": "CONTAINS, INVERTED",
"author": "some author",
"timestampSpanStart": "20000121T115234Z",
"timestampSpanEnd": "20240121T115234Z",
"addInfo": "some add info",
"addInfoSearchMode": "CONCORDANCE, WHITESPACETOLERANT",
"context": "context context",
"contextSearchMode": "EXACT",
"sourceLang": "en-GB",
"targetLang": "SV",
"searchPosition": "8:1",
"numResults": 2,
"msSearchAfterNumResults": 25
}
A request with this body would also work:
Concordance search |
---|---
Purpose | Returns entries\translations that contain the requested segment
Request | POST /%service%/%tm_name%/concordancesearch
Params | Required: searchString - what we are looking for, searchType ["Source"|"Target"|"SourceAndTarget"] - where to look. iNumOfProposal - limit of found proposals - max is 20; if 0 → use default value '5'
|
Update entry |
---|---
Purpose | Updates an entry\translation
Request | POST /%service%/%tm_name%/entry
Params | Only sourceLang, targetLang, source and target are required
This request makes changes only in the filebuffer (files on disk are not changed).
|
Delete entry |
---|---
Purpose | Deletes an entry\translation
Request | POST /%service%/%tm_name%/entrydelete
Params | Only sourceLang, targetLang, source, and target are required. Deletion is based on a strict match (including tags and whitespace) of target and source
This request makes changes only in the filebuffer (files on disk are not changed).
|
Delete entries / mass deletion |
---|---
Purpose | Deletes entries\translations
Request | POST /%service%/%tm_name%/entriesdelete
Params | This starts a reorganize process which, like reorganize, removes bad segments, and in addition skips segments that return true when checked against the provided filters combined with logical AND. So if you provide timestamps and addInfo, only segments within the provided timestamp span and with that addInfo are not imported into the new TM (see the reorganize process).
|
Save all TMs |
---|---
Purpose | Flushes all filebuffers (TMD, TMI files) to the filesystem and resets the 'Modified' flags of the file buffers. A filebuffer is a file instance of a .TMD or .TMI file loaded into RAM; it provides better speed and safety when working with files.
Request | GET /%service%_service/savetms
Params | -
|
Shutdown service |
---|---
Purpose | Safely shuts down the service, with\without saving all loaded TM files to disk
Request | GET /%service%_service/shutdown?dontsave=1
Params | dontsave=1 (optional, in the address) - skips saving TMs; for now the value doesn't matter, only its presence
If saving TMs before closing, the service checks whether an import process is still running.
|
Test tag replacement call |
---|---
Purpose | For testing tag replacement
Request | POST /%service%_service/tagreplacement
Params | Required: src, trg. Optional: req
|
Configuration of service
You can configure the service in ~/.t5memory/t5memory.conf
Logging | |
---|---|---
Level | Mnemonic | Description |
0 | DEVELOP | Could make the code run really slowly; should be used only when debugging specific places in the code, like binary search in files, etc. |
1 | DEBUG | Logs values of variables. Doesn't delete temporary files (in the MEM and TMP subdirectories), like base64 encoded\decoded TMX files and archives for import\export |
2 | INFO | Logs top-level function entrances, return codes, etc. Default value. |
3 | WARNING | Logs when commented-out or hardcoded code is reached. Usually commented code here has been replaced with new code; if not, it is marked at ERROR level |
4 | ERROR | Errors: why and where something fails during parsing, search, etc. |
5 | FATAL | You shouldn't reach this code; something is really wrong |
6 | TRANSACTION | Logs only things like the begin\end of a request, etc. There is no purpose in setting the level this high |
Other values are ignored. The set level stays the same until you change it in a new request or close the app. Logs are written to a file with a date\time name under ~/.OtmMemoryService/Logs, and ERROR/FATAL messages are duplicated in another log file with a FATAL suffix.
Logging can impact application speed significantly, especially during import or export. In t5memory there are two systems of logs: one from the glog library, which can be set at launch as a command line parameter, and one internal, which filters out logs based on their level and can be set with every request that has a JSON body using the additional ["loggingThreshold": 0] parameter, or at startup with a flag, e.g. POST http://localhost:4040/t5memory/example_tm/ with "loggingThreshold" in the body. It could also be set in a line of the t5memory.conf file (the config file is obsolete now). |
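A minimal sketch of attaching the per-request level to any JSON body. The helper and its client-side range check are illustrative (the service itself simply ignores other values, per the table above):

```python
import json

def with_logging_threshold(body: dict, level: int) -> str:
    """Add the optional "loggingThreshold" field to a request body.
    Levels follow the table above: 0 DEVELOP ... 6 TRANSACTION."""
    if not 0 <= level <= 6:
        # client-side guard only; the service ignores out-of-range values
        raise ValueError("valid levels are 0..6")
    return json.dumps({**body, "loggingThreshold": level})

# e.g. turn on DEBUG logging just for one fuzzy search request
body = with_logging_threshold({"source": "the", "sourceLang": "en-GB"}, 1)
```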
Working directory | |
---|---|---
Path | Description |
~/.t5memory | The main directory of the service. Should always be under the home directory. Consists of nested folders and the t5memory.conf file (see Config file). All directories\files below are nested |
LOG | Includes log files. It should be cleaned up manually. One session (launch of the service) creates two files, e.g. "Log_Thu May 12 10:15:48 2022.log" and "Log_Thu May 12 10:15:48 2022.log_IMPORTANT" |
MEM | Main data directory. All TM files are stored here. One TM should include .TMD (data file), .TMI (index file) and .MEM (properties file) with the same name as the TM name |
TABLE | The service's reserved read-only folder with tag tables, languages, etc. |
TEMP | For temporary files created mainly for import\export. On low debug levels (DEVELOP, DEBUG) it should be cleaned manually |
t5memory.conf | Main config file (see config file) |
The config directory should be located in a specific place. |
...
Opening and closing TM |
---|---
In the first concept it was planned to implement routines to open and close a TM. While working on the concept, we found some problems with this approach.
This leads to the following conclusion for the implementation of opening and closing TMs: OpenTM2 has to load requested TMs automatically. OpenTM2 also has to close TMs after a TM has not been used for some time, which means OpenTM2 has to track the timestamp of when each TM was last requested.
http://opentm2/translationmemory/[TM_Name]/openHandle GET – Opens a memory for queries by OpenTM2. Note: This method is not required, as memories are automatically opened when they are accessed for the first time. http://opentm2/translationmemory/[TM_Name]/openHandle DELETE – Closes a memory for queries by OpenTM2. Note: This method is not required, as memories are automatically opened when they are accessed for the first time. For now we open a TM on the first call that works with it. The TM stays open until shutdown; we would not try to open more TMs than the RAM limit set in the config file allows. In that case we close TMs in longest-unused order until we fit within the limit, including the TM we are trying to open. TM size is calculated basically as the sum of the .TMD and .TMI files. The RAM limit doesn't include service RAM and temporary files. |
TM files structure and other related info
Starting from version 0_5_0 the .MEM file is excluded from the TM files - a TM now consists only of the .TMD and .TMI files. These files have 2kb headers containing some useful information, like the creation date and the t5memory version with which the file was created. In general, a change in the mid version number means binary incompatible files. During reorganize a new empty TM is created, segments are reimported from the previous one, then the old files are deleted and the new ones are renamed to replace them. This means that reorganize also updates the creation version of the files to the newest. A TM file is just an archive with the TMI and TMD files.
TMD and TMI files should be flushed in a safe way - saved on disk under a temporary filename which then replaces the old files. (Should be implemented)
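The safe-flush scheme described above (write under a temporary name, then atomically swap it in) can be sketched like this. This is an illustration of the idea, not the actual t5memory implementation; os.replace is atomic on POSIX when source and destination are on the same filesystem:

```python
import os
import tempfile

def safe_flush(path: str, data: bytes) -> None:
    """Write data to path so that readers never observe a partial file."""
    directory = os.path.dirname(path) or "."
    # temp file in the same directory, so the final rename stays on one filesystem
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit the disk first
        os.replace(tmp_path, path)  # atomic swap replacing the old file
    except BaseException:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)  # don't leave temp debris behind on failure
        raise
```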
There is a TMManager (a singleton) which holds the list of TMs; one TM instance has two binary trees (for the (tmd) data and (tmi) index files), each with its own filebuffer instance (before, there used to be a pool of filebuffers, and its file operation functions, like write, read, close and open, handled requests).
A request handler is an instance of a class in the request handler class hierarchy. For each type of request there is a class to handle it. In general it has the private functions "parseJSON" (parses the JSON, if provided, and returns an error if the JSON is invalid), "checkData" (checks that all required fields were provided), "requestTM" (requests a readOnly, write or service TM handler, loading the TM if it is not in RAM yet) and "execute" - the original request's code. It also has a public function "run", which is a strategy template that drives the listed private functions.
The TMs are stored in the TMManager using smart pointers (pointers which track references to themselves and call the destructor automatically). That means that on request it is possible to remove a TM from the list while it is still active in another thread (like in fuzzy search); the RAM is then freed at the end of the last request handling that TM.
If, in the middle of some request (like fuzzy search), there is a call to delete the TM, we first clear the TM list (but the smart pointer is kept in the fuzzy request's thread, so the destructor is not called yet; it runs after the fuzzy request is done). The destructor tries to flush the filebuffer to the filesystem, but because there are no files on disk, the filebuffers do not create them again and just free the RAM (in that case a log entry is written about the filebuffer flush not finding the file in the folder).
From the TMManager, a request can ask for one of 3 types of TM handlers: readonly, write or service. ReadOnly\write requests are named from the inside-TM perspective (so operations on TM files in the filesystem are service requests).
ReadOnly handlers (concordance search, fuzzy search, exportTmx) are provided if there are no write handlers; for write handlers (deleteEntry, updateEntry, importTmx) there should be no other write handlers and no readOnly handlers. Service handlers can mean different things for different requests. For example, the status request should be able to access something like a readonly handler, but it shouldn't be blocked by any write requests, since it is used for checking import\reorganize status and progress. For some filesystem requests (deleteTM, createTM, cloneTM, importTM, exportTM (internal format)) there should be another blocking mechanism, since most of them don't even require loading the TM into RAM.
If the TM is not in RAM, requesting a handler from the TMManager tries to load the TM into RAM, respecting the RAM limit explained in this document.
TAG REPLACEMENT
|
TM files structure and other related info |
---|---
The info below applies to version 0_5_x. A TM file is just an archive with the TMI and TMD files. |
NUMBER PROTECTION TAGS (NP TAG, t5:n) |
---|---
The NP feature is also implemented in the tagReplacer, but it takes a different branch in the code: for import it just saves the original id, r and n attributes without generating new ones; for fuzzy requests it just outputs the original data without searching for a matching tag in src and trg. So NP tags influence ID generation for other tags (or matching, if it is a trg segment). "Press the encodedRegex, power button to turn on <bpt id="501" rid="1"/>text<ept rid="1"/>" |
Tag replacement
Pseudocode for tag replacement in import call:
TAG_REPLACEMENT PSEUDO CODE
...