This model translates text from Arabic to English. It accepts UTF-8 encoded Arabic text as input and outputs English text.
34.50 BLEU score
This model was trained on the UN, MultiUN, News-Commentary, QED, and Tanzil parallel corpora; in total, the training set contained approximately 60 million parallel sentences. To characterize model performance, a BLEU score was calculated. BLEU is a commonly used metric for translation tasks that compares model-generated text against a gold-standard reference to measure how similar they are. This model achieves a BLEU score of 34.50.
BLEU is a method for assessing the quality of text that has been machine-translated from one language to another: the closer the machine translation is to an expert human translation, the higher the score.
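Concretely, BLEU combines modified n-gram precisions $p_n$ (typically up to 4-grams) with a brevity penalty BP that discounts candidate translations shorter than the reference:

$$\text{BLEU} = \text{BP} \cdot \exp\left(\sum_{n=1}^{N} w_n \log p_n\right), \qquad \text{BP} = \min\left(1,\ e^{1 - r/c}\right),$$

where $r$ is the reference length, $c$ is the candidate length, and the weights $w_n$ are usually uniform ($w_n = 1/N$).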
This model uses the Google Transformer architecture, which currently forms the base of many state-of-the-art translation models. The essence of the Transformer is an encoder-decoder architecture with attention. Multiple encoders are stacked on top of one another, each consisting of a self-attention layer, which lets the model consider the full sentence rather than only the word it is currently translating, followed by a feed-forward neural network. Word embeddings are fed through these encoding layers and then passed into the decoding layers. In the decoder, the self-attention layer attends only to earlier positions, whereas the encoder's attention can look in both directions; otherwise the decoders work the same way, except that they decode the input into the output language. All units also include an add-and-normalize (residual connection plus layer normalization) step. Additionally, the standard Transformer uses eight attention heads; their outputs are concatenated, fed into the feed-forward network, and projected back down to the correct size. Positional encoding accounts for the order of words in the input sequence. This model was trained for 200,000 steps on 4 GPUs and was implemented using the open-source OpenNMT framework.
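For intuition, the sketch below shows the two core pieces described above in PyTorch: sinusoidal positional encoding and multi-head self-attention with eight heads, including the causal mask that restricts the decoder to earlier positions. This is an illustrative toy, not the actual implementation (which uses OpenNMT); the dimensions are assumptions.

```python
# Minimal sketch of Transformer building blocks; illustrative only.
import math
import torch
import torch.nn as nn

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal encodings that inject word order into the embeddings."""
    pos = torch.arange(seq_len).unsqueeze(1).float()
    i = torch.arange(0, d_model, 2).float()
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

class MultiHeadSelfAttention(nn.Module):
    """Eight attention heads whose outputs are concatenated and projected
    back to d_model, as in the standard Transformer."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, causal: bool = False) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split into heads: (batch, heads, time, d_head).
        shape = (b, t, self.n_heads, self.d_head)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        if causal:  # decoder: attend only to earlier positions
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
            scores = scores.masked_fill(mask, float("-inf"))
        attn = scores.softmax(dim=-1) @ v
        return self.out(attn.transpose(1, 2).reshape(b, t, d))

# Toy usage: a batch of 2 sequences of length 10 with d_model = 512.
x = torch.randn(2, 10, 512) + positional_encoding(10, 512)
out = MultiHeadSelfAttention()(x, causal=True)  # shape (2, 10, 512)
```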
This model was trained using the UN, MultiUN, News-Commentary, QED, and Tanzil parallel corpora, totaling approximately 60 million parallel sentences. The bulk of these datasets are records of UN meetings that have been translated into multiple languages, along with parallel translations of religious texts. The data was tokenized, and byte-pair encoding was applied to detect sub-word units. These datasets can be found at OPUS.
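The card does not state which byte-pair encoding tool was used; the sketch below illustrates the idea with the sentencepiece library, and the file path and vocabulary size are assumptions.

```python
import sentencepiece as spm

# Learn a BPE model on the Arabic side of the corpus
# ("train.ar" is a hypothetical path; 32k is a common NMT vocab size).
spm.SentencePieceTrainer.train(
    input="train.ar",
    model_prefix="bpe_ar",
    vocab_size=32000,
    model_type="bpe",
)

# Segment a sentence into sub-word units.
sp = spm.SentencePieceProcessor(model_file="bpe_ar.model")
print(sp.encode("مرحبا بالعالم", out_type=str))
```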
This model was tested on 996 parallel sentences pulled from unseen data available at OPUS, on which it achieves a BLEU score of 34.50.
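The scorer used to produce the reported 34.50 is not stated; as an illustration, a corpus-level BLEU score can be computed with the sacrebleu library:

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]   # model translations
references = [["the cat is on the mat"]]  # one gold-standard reference stream

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {score.score:.2f}")         # reported on a 0-100 scale
```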
The input(s) to this model must adhere to the following specifications:

- UTF-8 encoded Arabic text
This model will output the following:

- The English translation of the input text