Project detail
TA2: Together Anywhere, Together Anytime
Project period: 15 March 2010 - 31 December 2012
Project type: grant
Code: 214793
Agency: Information and Communication Technologies (ICT) 7th Framework programme
Programme: Seventh Framework Programme of the European Community for research, technological development and demonstration activities
TA2 (Together Anywhere, Together Anytime), pronounced "tattoo", aims at defining end-to-end systems for the development and delivery of new, creative forms of interactive, immersive, high quality media experiences for groups of users such as households and families. The overall vision of TA2 can be summarised as "making communications and engagement easier among groups of people separated in space and time".
One of the key components of TA2 is a set of generic and reliable tools for audio, video, and multimodal integration and recognition. This includes the automatic extraction of cues from raw data streams. The current TA2 project stresses low-level "instantaneous" cues; it does not deal with semantics-aware integration of contextual information, which could significantly improve the quality of the cues.
The proposed TA2 project extension focuses on medium-level (context-aware) cues, taking into account not only low-level analysis outputs but also contextual information, e.g. about the activated scenario. The resulting semantic cues will be used by the TA2 system to orchestrate (i.e. frame, crop, and represent) the audio-visual elements of the interaction between people.
The addition of BUT to the consortium will allow the semantic relevance of the metadata extracted by the analysis to be interpreted within the particular contexts described in the project. This will make the subsequent orchestration of the video more effective and efficient, and hence improve the end-user experience. The extension will enable building better applications that help families interact easily and openly through games, through improved semi-automatic production and publication of user-generated content, and through enhanced ambient connectedness between families.
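The context-aware orchestration idea described above can be illustrated with a minimal sketch. This is not TA2 project code; the `Cue`/`orchestrate` names, the scenario dictionary, and the two scenario types are hypothetical, chosen only to show how a scenario context could change which low-level cue (voice activity, face detection) drives the framing decision.

```python
# Hypothetical sketch, not TA2 code: fusing low-level cues with scenario
# context to produce a medium-level orchestration decision (which face to crop).
from dataclasses import dataclass

@dataclass
class Cue:
    person: str
    speaking: bool   # low-level audio cue: voice activity detection
    face_box: tuple  # low-level video cue: face bounding box (x, y, w, h)

def orchestrate(cues, scenario):
    """Choose the face region to frame, guided by the activated scenario.

    In a 'conversation' scenario the current speaker is favoured; in a
    turn-based game scenario the player whose turn it is is favoured,
    regardless of who is speaking.
    """
    if scenario["type"] == "conversation":
        active = [c for c in cues if c.speaking]
    else:  # e.g. a board-game scenario: follow whoever has the turn
        active = [c for c in cues if c.person == scenario.get("turn")]
    target = active[0] if active else cues[0]  # fall back to the first participant
    return {"action": "crop", "person": target.person, "region": target.face_box}

cues = [Cue("alice", speaking=True, face_box=(10, 20, 64, 64)),
        Cue("bob", speaking=False, face_box=(200, 20, 64, 64))]
print(orchestrate(cues, {"type": "conversation"}))
```

The same low-level cues yield different framing decisions under different scenario contexts, which is the essence of the medium-level cues the extension targets.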
Zemčík Pavel, prof. Dr. Ing. (UPGM FIT VUT), co-investigator
2012
- POLÁČEK Ondřej, KLÍMA Martin, SPORKA Adam J., ŽÁK Pavel, HRADIŠ Michal, ZEMČÍK Pavel and PROCHÁZKA Václav. A Comparative Study on Distant Free-Hand Pointing. In: EuroITV '12 Proceedings of the 10th European Conference on Interactive TV and Video. Berlin, Germany: Association for Computing Machinery, 2012, pp. 139-142. ISBN 978-1-4503-1107-6.
- MOTLÍČEK Petr, VALENTE Fabio and SZŐKE Igor. Improving Acoustic Based Keyword Spotting Using LVCSR Lattices. In: Proc. International Conference on Acoustics, Speech, and Signal Processing 2012. Kyoto: IEEE Signal Processing Society, 2012, pp. 4413-4416. ISBN 978-1-4673-0044-5.
- KRÁL Jiří and HRADIŠ Michal. Restricted Boltzman Machines for Image Tag Suggestion. In: Proceedings of the 19th Conference STUDENT EEICT 2012. Brno: Brno University of Technology, 2012, p. 5.
- HRADIŠ Michal, ŘEZNÍČEK Ivo and BEHÚŇ Kamil. Semantic Class Detectors in Video Genre Recognition. In: Proceedings of VISAPP 2012. Rome: SciTePress - Science and Technology Publications, 2012, pp. 640-646. ISBN 978-989-8565-03-7.
- HRADIŠ Michal, EIVAZI Shahram and BEDNAŘÍK Roman. Voice activity detection in video mediated communication from gaze. In: ETRA '12 Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 329-332. ISBN 978-1-4503-1221-9.
- BEDNAŘÍK Roman, VRZÁKOVÁ Hana and HRADIŠ Michal. What you want to do next: A novel approach for intent prediction in gaze-based interaction. In: ETRA '12 Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, pp. 83-90. ISBN 978-1-4503-1221-9.
2011
- HRADIŠ Michal, ŘEZNÍČEK Ivo and BEHÚŇ Kamil. Brno University of Technology at MediaEval 2011 Genre Tagging Task. In: Working Notes Proceedings of the MediaEval 2011 Workshop. Pisa, Italy: CEUR-WS.org, 2011, pp. 1-2. ISSN 1613-0073.
- ŘEZNÍČEK Ivo and ZEMČÍK Pavel. On-line human action detection using space-time interest points. In: Zborník príspevkov prezentovaných na konferencii ITAT, september 2011. Prague: Faculty of Mathematics and Physics, Charles University, 2011, pp. 39-45. ISBN 978-80-89557-01-1.
2010
- HRADIŠ Michal, BERAN Vítězslav, ŘEZNÍČEK Ivo, HEROUT Adam, BAŘINA David, VLČEK Adam and ZEMČÍK Pavel. Brno University of Technology at TRECVid 2010 SIN, CCD. In: 2010 TREC Video Retrieval Evaluation Notebook Papers. Gaithersburg, MD: National Institute of Standards and Technology, 2010, pp. 1-10.
- ŘEZNÍČEK Ivo and BAŘINA David. Classifier creation framework for diverse classification tasks. In: Proceedings of the DT workshop. Žilina: Brno University of Technology, 2010, p. 3. ISBN 978-80-554-0304-5.
- ŽÁK Pavel, BARTOŇ Radek and ZEMČÍK Pavel. Vision based user interface framework. In: Proceedings of the DT workshop. Žilina, 2010, p. 3. ISBN 978-80-554-0304-5.
2010
- Obecný framework pro tvorbu klasifikátorů (General framework for classifier creation), software, 2010
  Authors: Bařina David, Hradiš Michal, Řezníček Ivo, Zemčík Pavel
- Online human action recognition framework, software, 2010
  Authors: Řezníček Ivo, Hradiš Michal, Zemčík Pavel
- Shared Image Preprocessing, software, 2010
  Authors: Žák Pavel, Hradiš Michal, Smrž Pavel, Zemčík Pavel