Optimizing High-Throughput Inference on Graph Neural Networks at Shared Computing Facilities with the NVIDIA Triton Inference Server
- Location: Deutsche Nationalbibliothek Frankfurt am Main
- Extent: 1 online resource
- Language: English
- Bibliographic citation: Computing and Software for Big Science 8, no. 1 (18 July 2024): 1-14; issue date December 2024
- Creator: Savard, Claire; Manganelli, Nicholas; Holzman, Burt; Gray, Lindsey; Perloff, Alexx; Pedro, Kevin; Stenson, Kevin; Ulmer, Keith
- Contributor: SpringerLink (Online service)
- DOI: 10.1007/s41781-024-00123-2
- URN: urn:nbn:de:101:1-2410080801001.196580929387
- Rights: Open Access; access to the object is unrestricted.
- Last update: 15.08.2025, 7:20 AM CEST
- Data provider: Deutsche Nationalbibliothek