Turkish to English Translation

Model by Modzy

This model translates text from Turkish to English. It accepts UTF-8 Turkish text as input and outputs a translated version of the same text in English. 

    Product Description

    PERFORMANCE METRICS:

    This model achieves a BLEU score of 32.88%.

    BLEU is a metric commonly used for translation tasks; it compares model-generated text against gold-standard reference translations to measure how similar they are.
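    As a concrete illustration, a corpus-level BLEU score can be computed with the open-source sacreBLEU library (the card does not state which BLEU implementation was used, so sacreBLEU and the sentences below are assumptions for illustration only):

        # A minimal sketch: scoring a model translation against a reference
        # with the sacreBLEU library (which implementation produced this
        # model card's score is not stated; sacreBLEU is an assumption).
        import sacrebleu

        hypotheses = ["the cat sat on the mat"]           # model output (English)
        references = [["the cat is sitting on the mat"]]  # one reference stream

        bleu = sacrebleu.corpus_bleu(hypotheses, references)
        print(f"BLEU: {bleu.score:.2f}")                  # reported on a 0-100 scale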

    OVERVIEW:

    This model uses the Google Transformer architecture, which is currently the basis for many state-of-the-art translation models. The essence of the Transformer is an encoder-decoder architecture with attention. Multiple encoders are stacked on top of each other, each consisting of a self-attention layer, which takes the full sentence into account when translating rather than only the word currently being looked at, and a feed-forward neural network. Word embeddings are fed through these encoding layers and are then passed into the decoding layers.
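    A minimal NumPy sketch of the scaled dot-product self-attention at the heart of each encoder layer (all dimensions and weights below are illustrative, not this model's values):

        # A minimal NumPy sketch of scaled dot-product self-attention; the
        # dimensions and random weights are illustrative, not this model's.
        import numpy as np

        def self_attention(x, w_q, w_k, w_v):
            """x: (seq_len, d_model); w_*: (d_model, d_k) projections."""
            q, k, v = x @ w_q, x @ w_k, x @ w_v
            scores = q @ k.T / np.sqrt(k.shape[-1])   # every word scores every word
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sentence
            return weights @ v                        # context-aware representations

        rng = np.random.default_rng(0)
        seq_len, d_model, d_k = 5, 16, 8
        x = rng.normal(size=(seq_len, d_model))       # embeddings for 5 tokens
        w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
        print(self_attention(x, w_q, w_k, w_v).shape)  # (5, 8)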

    In the decoding layer, the self-attention sublayer attends only to earlier positions, whereas the encoder's self-attention can look in both directions. Otherwise the decoder works the same way as the encoder, except that it decodes the input into the output language. All units also have an add-and-normalize layer. Additionally, the standard Transformer uses eight attention heads; their outputs are concatenated, projected back down to the model dimension, and passed into the feed-forward network. Positional encoding is also used to account for the order of words in the input sequence. This model was trained for 200,000 steps on 4 GPUs.
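    Two of the mechanisms described above can be sketched in a few lines: the causal mask that restricts decoder self-attention to earlier positions, and the sinusoidal positional encoding from the original Transformer paper (again, purely illustrative):

        # Illustrative sketches of two mechanisms described above: the causal
        # mask that limits decoder self-attention to earlier positions, and
        # the sinusoidal positional encoding from the original Transformer.
        import numpy as np

        def causal_mask(seq_len):
            # Position i may attend to positions <= i; the -inf entries are
            # added to the attention scores before the softmax to block the future.
            return np.triu(np.full((seq_len, seq_len), -np.inf), k=1)

        def positional_encoding(seq_len, d_model):
            pos = np.arange(seq_len)[:, None]
            i = np.arange(d_model)[None, :]
            angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
            return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

        print(causal_mask(4))                   # 0 on/below the diagonal, -inf above
        print(positional_encoding(4, 8).shape)  # (4, 8), added to the word embeddings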

    The Transformer used by this model is described in the original paper, "Attention Is All You Need" (Vaswani et al., 2017). The implementation was created using the open-source OpenNMT framework, available at opennmt.net.
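    As a hedged illustration of the tooling, the OpenNMT-py distribution exposes an onmt_translate command-line entry point; the checkpoint and file names below are placeholders, not this model's actual artifacts:

        # Illustrative only: invoking OpenNMT-py's translate entry point from
        # Python. "model.pt" and the file names are placeholders, not the
        # actual artifacts behind this model.
        import subprocess

        subprocess.run(
            [
                "onmt_translate",
                "-model", "model.pt",   # a trained OpenNMT-py checkpoint (assumed name)
                "-src", "input.txt",    # UTF-8 Turkish sentences, one per line
                "-output", "pred.txt",  # English hypotheses are written here
            ],
            check=True,
        )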

    TRAINING:

    This model was trained on the Bianet, Bible, EUbookshop, GlobalVoices, GNOME, Infopankki, JW300, KDE4, OpenSubtitles, PHP, QED, SETIMES, Tanzil, Tatoeba, TED, Tilde, Ubuntu, Wikipedia, and WMT-News parallel corpora, which total 48,716,245 lines of parallel text. These corpora can be found at opus.nlpl.eu.
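    OPUS distributes these corpora as line-aligned plain-text files, with the Turkish and English sides parallel to each other, one sentence per line. A minimal sketch of pairing the two sides (the Tatoeba file names are assumed for illustration):

        # A minimal sketch of iterating over an OPUS-style parallel corpus,
        # where the Turkish and English sides are line-aligned text files.
        def read_parallel(src_path, tgt_path):
            with open(src_path, encoding="utf-8") as src, \
                 open(tgt_path, encoding="utf-8") as tgt:
                for tr_line, en_line in zip(src, tgt):
                    yield tr_line.strip(), en_line.strip()

        for tr, en in read_parallel("Tatoeba.tr-en.tr", "Tatoeba.tr-en.en"):
            print(tr, "->", en)
            break  # show only the first sentence pair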

    VALIDATION:

    This model was validated on 2,500 parallel sentences and achieves a BLEU score of 32.88%.

    INPUT SPECIFICATION

    The input(s) to this model must adhere to the following specifications:

    Filename:            input.txt
    Maximum Size:        1 MB
    Accepted Format(s):  .txt
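    For example, a valid input file can be prepared as follows (the size check is an illustrative client-side guard, reading the 1 MB maximum as 2^20 bytes, which is an assumption):

        # A small sketch of preparing a valid input file: UTF-8 Turkish text
        # written to "input.txt" and kept under the size limit. The 2**20-byte
        # threshold is an assumed reading of the "1 MB" maximum.
        import os

        text = "Merhaba dünya."  # "Hello world." in Turkish
        with open("input.txt", "w", encoding="utf-8") as f:
            f.write(text)

        assert os.path.getsize("input.txt") <= 2**20, "input.txt exceeds the 1 MB limit"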

    OUTPUT DETAILS

    This model will output the following:

    Filename:      results.json
    Maximum Size:  1 MB
    Format:        .json

    The “results.json” file will contain the translated text in the following format: {"text": "text translated from Turkish"}
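    For example, the translation can be read back out of the output file like this:

        # Reading the translated text back out of the model's results file.
        import json

        with open("results.json", encoding="utf-8") as f:
            result = json.load(f)

        print(result["text"])  # the English translation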