CULTURAL HERITAGE DIGITAL PRESERVATION THROUGH AI-DRIVEN ROBOTICS

Giovanelli, R.; Traviglia, A.
2023-01-01

Abstract

This paper introduces a novel methodology for creating 3D models of archaeological artifacts that reduces the time and effort required of operators. The approach uses a simple vision system mounted on a robotic arm that follows a predetermined path around the object to be reconstructed. The robotic system captures the object from different viewing angles and tags each acquisition with the 3D coordinates of the robot's pose, and the trajectory can be adjusted to accommodate objects of various shapes and sizes. The angular displacement between consecutive acquisitions can also be fine-tuned according to the desired final resolution. This flexibility makes the approach suitable for different object sizes, textures, and levels of detail, from large volumes with low detail to small volumes with high detail. The recorded images and their assigned coordinates are fed into a constrained implementation of the structure-from-motion (SfM) algorithm, which uses the scale-invariant feature transform (SIFT) to detect key points in each image. A priori knowledge of the acquisition coordinates constrains the SIFT-based matching, keeping processing time low while maintaining high accuracy in the final reconstruction. Acquiring images with a robotic system at a pre-defined pace ensures high repeatability and consistency across different 3D reconstructions, removing operator error from the workflow. This makes it possible not only to compare similar objects but also to track structural changes in the same object over time. Overall, the proposed methodology offers a significant improvement over current photogrammetry techniques, reducing the time and effort required to create 3D models while maintaining a high level of accuracy and repeatability.
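
The abstract does not give the trajectory formulation, so the following is a minimal sketch of the acquisition idea under stated assumptions: camera poses placed on a circular path around the object, with the angular step acting as the resolution knob the abstract describes. The function name acquisition_poses and its parameters are illustrative, not from the paper.

```python
import numpy as np

def acquisition_poses(radius_m, step_deg, height_m=0.0):
    """Illustrative camera poses on a circle around the object origin.

    Each pose pairs a 3D position with a rotation matrix whose third
    column (the optical axis) points at the object centre, mimicking
    the paper's idea of tagging every shot with the robot's known pose.
    """
    poses = []
    for angle in np.arange(0.0, 360.0, step_deg):
        theta = np.radians(angle)
        position = np.array([radius_m * np.cos(theta),
                             radius_m * np.sin(theta),
                             height_m])
        # Optical axis: from the camera towards the object at the origin.
        z_axis = -position / np.linalg.norm(position)
        # Build an orthonormal camera frame around the optical axis.
        up = np.array([0.0, 0.0, 1.0])
        x_axis = np.cross(up, z_axis)
        x_axis /= np.linalg.norm(x_axis)
        y_axis = np.cross(z_axis, x_axis)
        rotation = np.stack([x_axis, y_axis, z_axis], axis=1)
        poses.append((position, rotation))
    return poses

# A finer angular step yields more views and more overlap between
# consecutive images: 10 degrees -> 36 views, 5 degrees -> 72 views.
coarse = acquisition_poses(radius_m=0.5, step_deg=10.0)
fine = acquisition_poses(radius_m=0.2, step_deg=5.0, height_m=0.1)
```

Halving the angular step doubles the number of views and the overlap between consecutive images, which mirrors the trade-off the abstract describes between acquisition effort and final resolution.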
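Likewise, the abstract names SIFT key-point detection and a pose-constrained SfM but not their exact formulation. The sketch below, assuming OpenCV's SIFT implementation, shows one plausible way the known robot poses could constrain matching: the relative pose between two acquisitions fixes the epipolar geometry in advance, so candidate matches can be filtered without a blind geometric search. pose_constrained_matches, its arguments, and the pixel tolerance are hypothetical.

```python
import cv2
import numpy as np

def sift_keypoints(image_path):
    """Detect SIFT key points and descriptors in one acquisition."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.SIFT_create().detectAndCompute(gray, None)

def pose_constrained_matches(kp1, des1, kp2, des2, R, t, K, tol_px=2.0):
    """Keep only matches consistent with the known relative pose (R, t).

    With camera intrinsics K, the essential matrix E = [t]_x R is known
    before matching; each candidate match must then lie near its
    epipolar line, which is the constraint the a priori coordinates
    provide to the SfM stage.
    """
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])               # skew-symmetric [t]_x
    F = np.linalg.inv(K).T @ tx @ R @ np.linalg.inv(K)  # fundamental matrix
    good = []
    for m, n in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2):
        if m.distance > 0.75 * n.distance:            # Lowe's ratio test
            continue
        p1 = np.array([*kp1[m.queryIdx].pt, 1.0])
        p2 = np.array([*kp2[m.trainIdx].pt, 1.0])
        line = F @ p1                                 # epipolar line in image 2
        if abs(p2 @ line) / np.hypot(line[0], line[1]) < tol_px:
            good.append(m)
    return good
```

Because the epipolar geometry comes from the robot's pose rather than being estimated from the matches themselves, this pre-filter is one way the known coordinates could keep processing time low, as the abstract claims.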
Files in this record:

File: isprs-archives-XLVIII-M-2-2023-995-2023.pdf
Access: Open access
Type: Post-print document
License: Creative Commons
Size: 933.53 kB
Format: Adobe PDF
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5045156