Xavier Martinet
Meta
Verified email at meta.com
Title
Cited by
Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
9562 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
9121 · 2023
The llama 3 herd of models
A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ...
arXiv preprint arXiv:2407.21783, 2024
545 · 2024
LLaMA: open and efficient foundation language models. arXiv
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
183 · 2023
Llama: Open and efficient foundation language models. arXiv 2023
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
139 · 2023
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in neural information processing systems 35, 26337-26349, 2022
99 · 2022
Llama 2: open foundation and fine-tuned chat models. arXiv
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
98 · 2023
Llama 2: Open foundation and fine-tuned chat models. arXiv 2023
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288
93
Llama 2: Open foundation and fine-tuned chat models, 2023b
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288, 2023
60 · 2023
Polygames: Improved zero learning
T Cazenave, YC Chen, GW Chen, SY Chen, XD Chiu, J Dehos, M Elsa, ...
ICGA Journal 42 (4), 244-256, 2020
55 · 2020
Llama 2: open foundation and fine-tuned chat models. CoRR abs/2307.09288 (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
51 · 2023
LLaMA: open and efficient foundation language models, 2023 [J]
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
URL https://arxiv.org/abs/2302.13971, 2023
49 · 2023
Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux
arXiv preprint arXiv:2302.13971, 2023
16 · 2023
LLaMA: Open and Efficient Foundation Language Models. ArXiv (Cornell University)
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
15* · 2023
Llama: Open and efficient foundation language models, CoRR abs/2302.13971 (2023). URL: https://doi.org/10.48550/arXiv.2302.13971. doi: 10.48550/arXiv.2302.13971
H Touvron, T Lavril, G Izacard, X Martinet, M Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971
15
Worldsense: A synthetic benchmark for grounded reasoning in large language models
Y Benchekroun, M Dervishi, M Ibrahim, JB Gaya, X Martinet, G Mialon, ...
arXiv preprint arXiv:2311.15930, 2023
6 · 2023
Articles 1–16