Project author: mapmeld

Project description:
Hindi NLP work
Primary language: Jupyter Notebook
Project address: git://github.com/mapmeld/hindi-bert.git
Created: 2020-04-01T02:19:51Z
Project community: https://github.com/mapmeld/hindi-bert

hindi-bert

This is a Hindi language model trained with Google Research’s ELECTRA. I don’t modify the ELECTRA code until we get into finetuning, and only then because the train and test file paths are hardcoded.

The corpus is Hindi text (9 GB of OSCAR / CommonCrawl, ~1GB of Hindi Wikipedia)

Notebooks show finetuning classifiers on review sentiment analysis (3500 x 3 categories), BBC topic classification, and XNLI

Blog post: https://medium.com/@mapmeld/teaching-hindi-to-electra-b11084baab81

It’s available on HuggingFace: https://huggingface.co/monsoon-nlp/hindi-bert

2022 Update: Consider using Google’s MuRIL model for Indian languages: https://huggingface.co/google/muril-large-cased

Corpus

Download: https://drive.google.com/drive/u/1/folders/1WikYHHMI72hjZoCQkLPr45LDV8zm9P7p

The corpus is two files: the Hindi portion of OSCAR / CommonCrawl (~9 GB) and the text of Hindi Wikipedia (~1 GB).

Bonus notes:

  • Adding English wiki text or a parallel corpus could help with cross-lingual tasks and training

Vocabulary

https://drive.google.com/file/d/1-02Um-8ogD4vjn4t-wD2EwCE-GtBjnzh/view?usp=sharing

Bonus notes:

  • Created with HuggingFace Tokenizers; it could be longer or shorter, so review ELECTRA’s vocab_size param (a sketch of building a vocab follows below)
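A minimal sketch of building such a vocabulary with the HuggingFace Tokenizers library, assuming a single plain-text corpus file named hindi_corpus.txt and a vocab size of 30000 (both placeholders; keep vocab_size in sync with ELECTRA’s configuration):

from tokenizers import BertWordPieceTokenizer

# keep case and accents intact so Devanagari combining marks are not stripped
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)

tokenizer.train(
    files=["hindi_corpus.txt"],  # placeholder path; pass all corpus text files
    vocab_size=30000,            # assumption; match ELECTRA's vocab_size param
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)

tokenizer.save_model(".", "hindi")  # writes hindi-vocab.txt, used as vocab.txt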

Pretrain TF Records

build_pretraining_dataset.py splits the corpus into training documents

Set the ELECTRA model size and whether to split the corpus by newlines. This process can take hours on its own.
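A hedged example of running this step from a notebook, assuming the google-research/electra repo is cloned into a directory named electra and the corpus text files live under trainer/corpus (all paths and values are placeholders to adjust):

# --blanks-separate-docs controls the newline/document split mentioned above
!python3 electra/build_pretraining_dataset.py \
  --corpus-dir trainer/corpus \
  --vocab-file trainer/vocab.txt \
  --output-dir trainer/pretrain_tfrecords \
  --max-seq-length 128 \
  --blanks-separate-docs False \
  --no-lower-case \
  --num-processes 4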

https://drive.google.com/drive/u/1/folders/1--wBjSH59HSFOVkYi4X-z5bigLnD32R5

Bonus notes:

  • I am not sure what the corpus newline split means (what is the alternative?) or, given this corpus, which option creates the better training docs

Training

Structure your files as follows, with the data-dir named “trainer” here:

trainer
- vocab.txt
- pretrain_tfrecords
-- (all .tfrecord... files)
- models
-- modelname
--- checkpoint
--- graph.pbtxt
--- model.*

The Colab notebook gives examples of GPU vs. TPU setup.

Hyperparameters (model size, vocab size, training steps) are set in configure_pretraining.py.
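A hedged sketch of launching pretraining from a notebook using the flags documented in google-research/electra; the model name and hparams values are placeholders:

# "trainer" is the data-dir from the layout above; checkpoints land in
# trainer/models/<model-name>
!python3 electra/run_pretraining.py \
  --data-dir trainer \
  --model-name hindi-electra-small \
  --hparams '{"model_size": "small", "vocab_size": 30000}'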

Baby Model: https://drive.google.com/drive/folders/1KPJ_rhji7Q_4qazLOMhiiG21kCFADpfS?usp=sharing

Baby2 Model (more training): https://drive.google.com/drive/folders/1cwQlWryLE4nlke4OixXA7NK8hzlmUR0c?usp=sharing

Using the model with transformers

It’s available on HuggingFace: https://huggingface.co/monsoon-nlp/hindi-bert - sample usage: https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w
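A minimal sketch of loading the published checkpoint with the transformers library (the example sentence is arbitrary):

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/hindi-bert")
model = AutoModel.from_pretrained("monsoon-nlp/hindi-bert")

# encode one Hindi sentence and inspect the contextual embeddings
inputs = tokenizer("यह एक उदाहरण वाक्य है", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)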

Finetuning

Each task (such as XNLI, BBC, Hindi movie reviews) is a hardcoded class.

Where to place your training and test/dev data in the file system (for data-dir = trainer):

trainer
- finetuning_data
-- xnli
--- train.tsv
--- dev.tsv
- models
-- model_name
--- finetuning_tfrecords
--- finetuning_models

If things go wrong or you redesign your data, delete finetuning_tfrecords and finetuning_models so they are regenerated on the next run.

In finetune/task_builder.py, register the task name:

elif task_name == "bbc":
  return classification_tasks.BBC(config, tokenizer)

In finetune/classification/classification_tasks.py, define the task class:

class BBC(ClassificationTask):
  def __init__(self, config: configure_finetuning.FinetuningConfig, tokenizer):
    super(BBC, self).__init__(config, "bbc", tokenizer,
                              ['southasia', 'international', 'learningenglish',
                               'institutional', 'india', 'news', 'pakistan',
                               'multimedia', 'social', 'china', 'entertainment',
                               'science', 'business', 'sport'])

  def get_examples(self, split):
    return self._create_examples(read_tsv(
        os.path.join(self.config.raw_data_dir(self.name), split + ".csv"),
        quotechar="\"",
        max_lines=100 if self.config.debug else None), split)

  def _create_examples(self, lines, split):
    return self._load_glue(lines, split, 1, None, 0, skip_first_line=True)
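With the task registered, finetuning can be launched much like pretraining; this is a hedged sketch using run_finetuning.py’s documented flags, where model_name must match the pretrained model directory under trainer/models:

!python3 electra/run_finetuning.py \
  --data-dir trainer \
  --model-name model_name \
  --hparams '{"model_size": "small", "task_names": ["bbc"]}'

The finetuning_tfrecords and finetuning_models directories shown above are created under trainer/models/model_name during this run.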