3D Scene Retrieval from Text

Authors

  • Gireesh Singh Thakurathi, Computer Engineering, Thadomal Shahani Engineering College, Mumbai University
  • Melvin Thomas, Computer Engineering, Thadomal Shahani Engineering College, Mumbai University
  • Haresh Savlani, Computer Engineering, Thadomal Shahani Engineering College, Mumbai University

Keywords:

Text, Scene, NLP, Dependency Parser, Supporter, Dependent, Preposition, Bounding Box, 3D Models, Refinement, POS Tags, Heuristics, Collision

Abstract

Translating abstract ideas or statements into visual scenes is a daunting task. First, the thought must be laid out explicitly and clearly as a written statement, which acts as the foundation for the conversion. A professional then forms a mental image of the statement, and finally another professional places models in a rendering package according to that mental image. Converting even a single piece of text into a corresponding visual scene is therefore challenging. This paper focuses on automating the entire transformation, converting arbitrary descriptive text into a representative scene. The proposed system parses user-written input text, extracts information using Natural Language Processing (NLP), and tags the relevant units. It then associates every object with a 3D model and places the models according to the derived relations and spatial dependencies. Finally, the user can make minor adjustments to the generated scene using Blender's built-in controls.
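
To make the pipeline concrete, below is a minimal Python sketch of the extraction-and-placement idea, assuming spaCy for dependency parsing. The supporter/dependent triple pattern, the PREP_OFFSETS table, and the one-pass placement are illustrative assumptions, not the paper's exact heuristics (the full system also covers model retrieval, collision handling, and refinement).

    # Minimal sketch: extract (dependent, preposition, supporter) triples
    # from text and assign coarse 3D positions. Assumes spaCy with the
    # en_core_web_sm model installed; offsets below are illustrative.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    # Hypothetical (x, y, z) displacement of the dependent object
    # relative to its supporter, keyed by spatial preposition.
    PREP_OFFSETS = {
        "on": (0.0, 0.0, 1.0),
        "under": (0.0, 0.0, -1.0),
        "beside": (1.0, 0.0, 0.0),
        "behind": (0.0, -1.0, 0.0),
    }

    def extract_spatial_triples(text):
        """Return (dependent, preposition, supporter) triples, e.g.
        'a lamp on the table' -> [('lamp', 'on', 'table')]."""
        doc = nlp(text)
        triples = []
        for tok in doc:
            # Pattern: noun --prep--> preposition --pobj--> noun
            if tok.dep_ == "prep" and tok.head.pos_ in ("NOUN", "PROPN"):
                for child in tok.children:
                    if child.dep_ == "pobj":
                        triples.append((tok.head.lemma_, tok.lemma_, child.lemma_))
        return triples

    def place_objects(triples):
        """Place each supporter at the origin unless already positioned,
        then offset its dependent by the preposition's displacement."""
        positions = {}
        for dependent, prep, supporter in triples:
            base = positions.setdefault(supporter, (0.0, 0.0, 0.0))
            dx, dy, dz = PREP_OFFSETS.get(prep, (1.0, 0.0, 0.0))
            positions[dependent] = (base[0] + dx, base[1] + dy, base[2] + dz)
        return positions

    if __name__ == "__main__":
        triples = extract_spatial_triples("A red lamp on the wooden table.")
        print(place_objects(triples))
        # {'table': (0.0, 0.0, 0.0), 'lamp': (0.0, 0.0, 1.0)}

In the full system, each extracted object name would be matched to a 3D model and instantiated through Blender's Python API (bpy) at the computed location, after which collision checks and refinement heuristics would adjust the final layout.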

Published

30.06.2018

How to Cite

[1] Gireesh Singh Thakurathi, Melvin Thomas, and Haresh Savlani, “3D Scene Retrieval from Text”, IJREST, vol. 5, no. 6, Jun. 2018.

Section

Articles