References
1. Trafton J.G., et al., “ACT-R/E: An Embodied Cognitive Architecture for Human-Robot Interaction”, J. Human-Robot Interaction, 2:1 (2013), 30–54
2. Goertzel B., “From Abstract Agent Models to Real-World AGI Architectures: Bridging the Gap”, Lecture Notes in Computer Science, 10414, eds. Everitt T., Goertzel B., Potapov A., Springer International Publishing, Cham, 2017, 3–12
3. Wu J., et al., “Track to Detect and Segment: An Online Multi-Object Tracker”, 2021 IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021, 12347–12356
4. Likhachev M., Ferguson D., “Planning long dynamically feasible maneuvers for autonomous vehicles”, Int. J. Robotics Research, 28:8 (2009), 933–945
5. Aitygulov E., Kiselev G., Panov A.I., “Task and Spatial Planning by the Cognitive Agent with Human-like Knowledge Representation”, Interactive Collaborative Robotics (ICR 2018), Lecture Notes in Computer Science, 11097, eds. Ronzhin A., Rigoll G., Meshcheryakov R., Springer, 2018, 1–12
6. Sutton R.S., Barto A.G., Reinforcement Learning [Obuchenie s podkrepleniem], 2nd ed., BINOM. Laboratoriya Znanii, Moscow, 2011 (in Russian)
7. Moerland T.M., Broekens J., Jonker C.M., Model-based Reinforcement Learning: A Survey, 2020, 421–429
8. Makarov D.A., Panov A.I., Yakovlev K.S., “Architecture of a multi-level intelligent control system for unmanned aerial vehicles”, Iskusstvennyi Intellekt i Prinyatie Reshenii, 2015, no. 3, 18–33 (in Russian)
9. Yakovlev K., et al., “Combining Safe Interval Path Planning and Constrained Path Following Control: Preliminary Results”, Interactive Collaborative Robotics (ICR 2019), Lecture Notes in Computer Science, 11659, 2019, 310–319
10. Staroverov A., et al., “Real-Time Object Navigation with Deep Neural Networks and Hierarchical Reinforcement Learning”, IEEE Access, 8 (2020), 195608–195621
11. Kiselev G.A., “Intelligent system for behavior planning of a coalition of robotic agents with the STRL architecture”, Informatsionnye Tekhnologii i Vychislitelnye Sistemy, 2020, no. 2, 21–37 (in Russian)
12. Kaelbling L.P., Littman M.L., Cassandra A.R., “Planning and acting in partially observable stochastic domains”, Artificial Intelligence, 101 (1998), 99–134
13. Bacon P.-L., Harb J., Precup D., “The Option-Critic Architecture”, Proc. of the AAAI Conf. on Artificial Intelligence, 31 (2017)
14. Keramati R., et al., Strategic Object Oriented Reinforcement Learning, 2018
15. Watters N., et al., COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration, 2019
16. Hafner D., et al., “Dream to Control: Learning Behaviors by Latent Imagination”, Int. Conf. on Learning Representations, 2020
17. Jamal M., Panov A., “Adaptive Maneuver Planning for Autonomous Vehicles Using Behavior Tree on Apollo Platform”, Artificial Intelligence XXXVIII (SGAI 2021), Lecture Notes in Computer Science, 13101, eds. Bramer M., Ellis R., 2021, 327–340