3D Scene Retrieval from Text
Keywords:
Text, scene, NLP, Dependency Parser, Supporter, Dependent, Preposition, Bounding Box, 3D Models, Refinement, POS Tags, Heuristics, Collision
Abstract
Translating abstract ideas or statements into visual scenes is a daunting task. First, the thought must be laid out explicitly and clearly as a written statement, which serves as the foundation for the conversion. A professional then forms a mental image of that statement. Finally, another professional places models according to that mental image in a model-rendering application. Converting even a single piece of text into a corresponding visual element is therefore challenging. This paper focuses on automating the entire transformation: the proposed system converts an arbitrary descriptive text into a representative scene. It parses the user's input text, extracts information using Natural Language Processing (NLP), and tags the relevant units. It then associates every object with a 3D model and places the models according to the derived relations and spatial dependencies. Finally, the user can make minor adjustments to the generated scene using Blender's built-in controls.
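As an illustration of the extraction step described above, the sketch below shows how (dependent, preposition, supporter) triples might be pulled from a scene description. This is a hypothetical toy heuristic, not the authors' implementation: it matches tokens against a fixed list of spatial prepositions, whereas the paper's system uses an NLP dependency parser and POS tags.

```python
# Toy sketch (assumed, not the paper's method): extract spatial relation
# triples of the form (dependent, preposition, supporter) from a sentence
# using a fixed preposition list instead of a real dependency parser.

SPATIAL_PREPOSITIONS = {"on", "under", "beside", "near", "behind", "above", "below", "in"}
ARTICLES = {"a", "an", "the"}

def extract_relations(sentence):
    """Return (dependent, preposition, supporter) triples found in the sentence."""
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    # Drop articles so the words adjacent to a preposition are the nouns.
    words = [t for t in tokens if t not in ARTICLES]
    triples = []
    for i, tok in enumerate(words):
        if tok in SPATIAL_PREPOSITIONS and 0 < i < len(words) - 1:
            triples.append((words[i - 1], tok, words[i + 1]))
    return triples

print(extract_relations("A lamp on the table near a window"))
# → [('lamp', 'on', 'table'), ('table', 'near', 'window')]
```

Each triple identifies a supporter object (e.g. the table) on which a dependent object (e.g. the lamp) is placed; the placement stage would then position the dependent's bounding box relative to the supporter's.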
License
Copyright (c) 2018 Gireesh Singh Thakurathi, Melvin Thomas, Haresh Savlani

This work is licensed under a Creative Commons Attribution 4.0 International License.