title
string
paper_id
int64
abstract
string
authors
list
year
float64
arxiv_id
string
acl_id
string
pmc_id
string
pubmed_id
string
doi
string
venue
string
journal
string
mag_id
string
outbound_citations
sequence
inbound_citations
sequence
has_outbound_citations
bool
has_inbound_citations
bool
has_pdf_parse
bool
s2_url
string
has_pdf_body_text
float64
has_pdf_parsed_abstract
float64
has_pdf_parsed_body_text
float64
has_pdf_parsed_bib_entries
float64
has_pdf_parsed_ref_entries
float64
entities
sequence
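The field list above can be read as a tabular schema. Below is a minimal validation sketch in Python; it assumes each record is a JSON-like dict keyed by these field names (the actual serialization of this dump is not shown here), and the sample record is a hypothetical, abbreviated version of the first entry below.

```python
# Expected Python type per field, transcribed from the schema above.
# float64 flag columns (has_pdf_body_text etc.) and year are nullable,
# so None is accepted for every field.
SCHEMA = {
    "title": str, "paper_id": int, "abstract": str, "authors": list,
    "year": float, "arxiv_id": str, "acl_id": str, "pmc_id": str,
    "pubmed_id": str, "doi": str, "venue": str, "journal": str,
    "mag_id": str, "outbound_citations": list, "inbound_citations": list,
    "has_outbound_citations": bool, "has_inbound_citations": bool,
    "has_pdf_parse": bool, "s2_url": str,
    "has_pdf_body_text": float, "has_pdf_parsed_abstract": float,
    "has_pdf_parsed_body_text": float, "has_pdf_parsed_bib_entries": float,
    "has_pdf_parsed_ref_entries": float, "entities": list,
}

def check_record(rec: dict) -> list:
    """Return (field, problem) pairs; an empty list means the record conforms."""
    problems = []
    for field, expected in SCHEMA.items():
        if field not in rec:
            problems.append((field, "missing"))
        elif rec[field] is not None and not isinstance(rec[field], expected):
            problems.append((field, f"expected {expected.__name__}"))
    return problems

# Hypothetical record shaped like the first entry in this dump,
# with the unlisted fields left null.
record = {f: None for f in SCHEMA}
record.update({
    "title": "A Universal Extensible Workflow Dynamic Control Model and Its Application",
    "paper_id": 63995267,
    "year": 2001.0,
    "s2_url": "https://api.semanticscholar.org/CorpusID:63995267",
    "entities": [["machinery design", "APPLICATION"]],
})
print(check_record(record))  # → []
```

The nullable-everywhere choice mirrors the records below, where fields such as `doi` or the `has_pdf_parsed_*` flags are absent (`null`) for papers without a PDF parse.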
A Universal Extensible Workflow Dynamic Control Model and Its Application
63995267
The product development process is first analyzed and classified into five categories. Then a universal, dynamically extensible workflow model for cooperative work is proposed to realize automatic process management. The related data structures and functional components are described in detail. Finally, an example from machinery design is illustrated.
[ { "first": "Hu", "middle": [], "last": "Min", "suffix": "" } ]
2001
Journal of Computer-aided Design & Computer Graphics
2377134098
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63995267
null
null
null
null
null
[ [ "machinery design", "APPLICATION" ], [ "cooperated work", "APPLICATION" ], [ "universal dynamic extensible workflow model", "METHOD" ], [ "Product development process", "DATA" ], [ "automatic process management", "APPLICATION" ], [ "functional component", "DATA" ] ]
Supporting user interface evaluation of AR presentation and interaction techniques with ARToolkit
63996803
Usability oriented design is essential for the creation of efficient, effective and successful real-world AR applications. The creation of highly usable AR interfaces requires detailed knowledge about the usability aspects of the various interaction and information presentation techniques that are used to create them. Currently, the expertise in the AR domain with regards to efficient and effective visual presentation techniques and corresponding interaction techniques is still very limited. To resolve this problem, it is necessary to support designers of AR user interfaces with a knowledge-base that covers the AR specific aspects of various information presentation and interaction techniques. This knowledge can only be gathered by evaluating different techniques in systematic usability tests. To make the systematic evaluation of a variety of AR information presentation and interaction techniques viable, we have created a workflow that supports the fast and easy creation of the necessary test applications. The workflow uses well established tools including Maya for 3D modeling, the i4D graphics system for graphics rendering and ARToolkit for tracking, as well as some new custom developments to integrate them into a coherent workflow. This approach enables the creation of the small-scale AR applications that are required for user tests with minimal effort and thus enables us to systematically compare different approaches to common AR user interface design problems.
[ { "first": "V.", "middle": [], "last": "paelke", "suffix": "" }, { "first": "J.", "middle": [], "last": "Stocklein", "suffix": "" }, { "first": "C.", "middle": [], "last": "Reimann", "suffix": "" }, { "first": "W.", "middle": [], "last": "Rosenbach", "suffix": "" } ]
2003
10.1109/ART.2003.1320424
2003 IEEE International Augmented Reality Toolkit Workshop
2003 IEEE International Augmented Reality Toolkit Workshop
2540098392
[ "10019527", "17649517", "67088944" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:63996803
0
0
0
1
0
[ [ "information presentation", "METHOD" ], [ "systematic usability test", "APPLICATION" ], [ "i4D graphic system", "METHOD" ], [ "track", "APPLICATION" ], [ "real-world AR application", "APPLICATION" ], [ "information presentation technique", "METHOD" ], [ "application", "METHOD" ], [ "coherent workflow", "METHOD" ], [ "Usability oriented design", "METHOD" ], [ "custom", "METHOD" ], [ "systematic evaluation", "EVALUATION" ], [ "AR user interface", "APPLICATION" ], [ "Maya", "METHOD" ], [ "visual presentation technique", "METHOD" ], [ "AR information presentation", "METHOD" ], [ "cale AR application", "APPLICATION" ], [ "AR interface", "APPLICATION" ], [ "AR user interface design problem", "APPLICATION" ], [ "interaction technique", "METHOD" ], [ "ARToolkit", "METHOD" ], [ "3D modeling", "APPLICATION" ], [ "user test", "EVALUATION" ], [ "AR domain", "APPLICATION" ], [ "graphic rendering", "APPLICATION" ] ]
The Blender Game Engine
183612017
[ { "first": "John", "middle": [], "last": "Blain", "suffix": "" } ]
2012
10.1201/b11922-20
The Complete Guide to Blender Graphics
The Complete Guide to Blender Graphics
2478896500
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:183612017
null
null
null
null
null
[ [ "Blender Game Engine", "METHOD" ] ]
Temporal Symbolic Integration Applied to a Multimodal System Using Gestures and Speech
16557616
This paper presents a technical approach for temporal symbol integration, aimed to be generally applicable in unimodal and multimodal user interfaces. It draws its strength from symbolic data representation and an underlying rule-based system, and is embedded in a multiagent system. The core method for temporal integration is motivated by findings from cognitive science research. We discuss its application to a gesture recognition task and to speech-gesture integration in a Virtual Construction scenario. Finally, an outlook on an empirical evaluation is given.
[ { "first": "Timo", "middle": [], "last": "Sowa", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Fröhlich", "suffix": "" }, { "first": "Marc", "middle": [ "Erich" ], "last": "Latoschik", "suffix": "" } ]
1999
10.1007/3-540-46616-9_26
TOWARD A GESTURE-BASED COMMUNICATION IN HUMAN-COMPUTER INTERACTION - PROCEEDINGS OF THE INTERNATIONAL GESTURE WORKSHOP
1591715535
[ "61779829", "18308188", "13630698", "63110727", "206852389", "143170727", "8451911", "210224824", "54134871", "7587824" ]
[ "9682359", "2270858", "10300615", "7296678", "1642452", "55655813", "13239679" ]
true
true
true
https://api.semanticscholar.org/CorpusID:16557616
0
0
0
1
0
[ [ "symbolic data representation", "METHOD" ], [ "temporal symbol integration", "APPLICATION" ], [ "temporal integration", "APPLICATION" ], [ "speech-gesture integration", "METHOD" ], [ "unimodal and multimodal user interface", "APPLICATION" ], [ "cognitive science research", "APPLICATION" ], [ "gesture recognition task", "APPLICATION" ], [ "Virtual Construction scenario", "APPLICATION" ], [ "rule-based system", "METHOD" ], [ "empirical evaluation", "EVALUATION" ], [ "multiagent system", "METHOD" ] ]
Alternative Archaeological Representations within Virtual Worlds
17598580
Traditional VR methods allow the user to tour and view the virtual world from different perspectives. Increasingly, more interactive and adaptive worlds are being generated, potentially allowing the user to interact with and affect objects in the virtual world. We describe and compare four models of operation that allow the publisher to generate views, with the client manipulating and affecting specific objects in the world. We demonstrate these approaches through a problem in archaeological visualization.
[ { "first": "Jonathan", "middle": [ "C." ], "last": "Roberts", "suffix": "" }, { "first": "Nick", "middle": [ "S." ], "last": "Ryan", "suffix": "" } ]
1997
Proc. of the 4 th UK Virtual Reality Specialist Interest Group Conference, Brunel University
2204853233
[ "59678975", "112627099", "132489556", "14499859", "38462632", "60206089" ]
[ "35215058", "17876722", "7862977", "54768964", "55161656", "13838661", "17126127", "964344", "62645120", "108140126", "55494223", "54010011", "62805549", "96442048", "7283399", "59039766" ]
true
true
true
https://api.semanticscholar.org/CorpusID:17598580
0
0
0
1
0
[ [ "Traditional VR method", "METHOD" ], [ "virtual world", "APPLICATION" ], [ "archaeological visualization", "VISUALIZATION" ], [ "adaptive world", "METHOD" ] ]
Softening up Hard Science: reply to Newell and Card
17599211
A source of intellectual overhead periodically encountered by scientists is the call to be "hard," to ensure good science by imposing severe methodological strictures. Newell and Card (1985) undertook to impose such strictures on the psychology of human-computer interaction. Although their discussion contributes to theoretical debate in human-computer interaction by setting a reference point, their specific argument fails. Their program is unmotivated, is severely limited, and suffers from these limitations in principle. A top priority for the psychology of human-computer interaction should be the articulation of an alternative explanatory program, one that takes as its starting point the need to understand the real problems involved in providing better computer tools for people to use.
[ { "first": "John", "middle": [ "M." ], "last": "Carroll", "suffix": "" }, { "first": "Robert", "middle": [ "L." ], "last": "Campbell", "suffix": "" } ]
1986
10.1207/s15327051hci0203_3
Human Computer Interaction
2048963798
[ "143756165", "144391935", "57106575", "20001280", "60452239", "142697067", "32735751", "145306380", "1921535", "59624696", "210584425", "60155059", "7455209", "144679912", "61142342", "207660078", "150834818", "203061063", "16729332", "62537074", "42031607", "145306380", "59690029", "64863235", "145306380", "1387074", "14843553", "209396626", "72470048", "58809327", "53895703", "145306380", "63969316" ]
[ "18122828", "18230774", "53805585", "9124957", "5739739", "16058525", "13863367", "207799928", "591488", "1921535", "3105449", "12189457", "6691728", "53397487", "54625427", "1661749" ]
true
true
true
https://api.semanticscholar.org/CorpusID:17599211
0
0
0
1
0
[ [ "computer tool", "APPLICATION" ], [ "reference point", "DATA" ], [ "human-computer interaction", "APPLICATION" ], [ "theoretical debate", "APPLICATION" ], [ "alternative explanatory program", "METHOD" ], [ "intellectual overhead", "EVALUATION" ], [ "methodological stricture", "METHOD" ] ]
3D Modelling of a Famosa Fortress, Malaysia Based on Comparison of Textual and Visual Data
16981854
This paper presents an attempt to model the “A Famosa Fortress” in Malaysia in 3D. The building was built in 1511 by the Portuguese and went through several architectural developments and changes before being largely destroyed during the British occupation in 1824. The biggest challenge in this research is to determine the original fortress layout, due to the lack of any authoritative documentation pertaining to the fortress. A detailed analysis has been conducted to identify reliable reference sources, which are available in textual and visual form. In this paper, we focus on a comparison of selected textual and visual data to arrive at a verifiable conjectural layout of the fortress. We then pre-visualize the layout as a 3D model. Some samples of the model are presented here; however, there is still room for improvement before it is finalized. The output of this research will be tested for application in tourism and education.
[ { "first": "M.", "middle": [], "last": "Izani", "suffix": "" }, { "first": "A.", "middle": [], "last": "Bridges", "suffix": "" }, { "first": "A.", "middle": [], "last": "Razak", "suffix": "" } ]
2009
10.1109/CGIV.2009.76
2009 Sixth International Conference on Computer Graphics, Imaging and Visualization
2009 Sixth International Conference on Computer Graphics, Imaging and Visualization
2110311329
[ "126939277", "131718262", "113328259", "195010191" ]
[ "18177351", "17220516", "150334592", "146805939", "9965693" ]
true
true
true
https://api.semanticscholar.org/CorpusID:16981854
0
0
0
1
0
[ [ "fortress layout", "METHOD" ], [ "authoritative documentation", "DATA" ], [ "text", "EVALUATION" ], [ "tourism and education", "APPLICATION" ], [ "conjectural layout", "METHOD" ], [ "3D model", "DATA" ], [ "textual and visual data", "DATA" ], [ "Detail analysis", "EVALUATION" ] ]
How well do professional developers test with code coverage visualizations? An empirical study
7231416
Despite years of availability of testing tools, professional software developers still seem to need better support to determine the effectiveness of their tests. Without improvements in this area, inadequate testing of software seems likely to remain a major problem. To address this problem, industry and researchers have proposed systems that visualize "testedness" for end-user and professional developers. Empirical studies of such systems for end-user programmers have begun to show success at helping end users write more effective tests. Encouraged by this research, we examined the effect that code coverage visualizations have on the effectiveness of test cases that professional software developers write. This paper presents the results of an empirical study conducted using code coverage visualizations found in a commercially available programming environment. Our results reveal how this kind of code coverage visualization impacts test effectiveness, and provide insights into the strategies developers use to test code.
[ { "first": "J.", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "S.", "middle": [], "last": "Clarke", "suffix": "" }, { "first": "M.", "middle": [], "last": "Burnett", "suffix": "" }, { "first": "G.", "middle": [], "last": "Rothermel", "suffix": "" } ]
2005
10.1109/VLHCC.2005.44
2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)
2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)
2137440139
[ "16862987", "8475318", "14784420", "5369890", "317131", "437142", "32060976", "53303406", "195861096", "1498913", "12324859", "62196411", "15515787", "9124682", "12740139", "17759588" ]
[ "20620823", "18414684", "12799370", "11782879" ]
true
true
true
https://api.semanticscholar.org/CorpusID:7231416
1
1
1
1
1
[ [ "code coverage visualization", "VISUALIZATION" ], [ "environment", "VISUALIZATION" ], [ "professional software developer", "APPLICATION" ], [ "irical study", "EVALUATION" ], [ "test effectiveness", "EVALUATION" ], [ "empirical study", "EVALUATION" ], [ "inadequate testing", "APPLICATION" ], [ "ness", "DATA" ], [ "test case", "EVALUATION" ], [ "end-user programmer", "APPLICATION" ], [ "professional developer", "APPLICATION" ] ]
Context-Aware Computer Aided Inbetweening
7239327
This paper presents a context-aware computer aided inbetweening (CACAI) technique that interpolates planar strokes to generate inbetween frames from a given set of key frames. The inbetweening is context-aware in the sense that not only the stroke’s shape but also the context (i.e., the neighborhood of a stroke) in which a stroke appears are taken into account for the stroke correspondence and interpolation. Given a pair of successive key frames, the CACAI automatically constructs the stroke correspondence between them by exploiting the context coherence between the corresponding strokes. Meanwhile, the construction algorithm is able to incorporate the user’s interaction with ease and allows the user more effective control over the correspondence process than existing stroke matching techniques. With a one-to-one stroke correspondence, the CACAI interpolates the shape and context between the corresponding strokes for the generation of intermediate frames. In the interpolation sequence, both the shape of individual strokes and the spatial layout between them are well retained such that the feature characteristics and visual appearance of the objects in the key frames can be fully preserved even when complex motions are involved in these objects. We have developed a prototype system to demonstrate the ease of use and effectiveness of the CACAI.
[ { "first": "Wenwu", "middle": [], "last": "Yang", "suffix": "" } ]
2018
10.1109/TVCG.2017.2657511
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Visualization and Computer Graphics
2582053741
[ "15140955", "15191375", "23402262", "9265476", "205572369", "207176899", "18546185", "6454972", "225102", "16721163", "1305216", "36128550", "62310910", "3506948", "14227607", "3337513", "6503275", "852015", "3541556", "42751132", "2979401", "16014595", "53248231", "18246534", "5212035", "189415948", "1152078", "16392144", "6895725" ]
[ "209083173", "46894333", "85527083", "51987597", "3899476", "203640854", "202777808" ]
true
true
true
https://api.semanticscholar.org/CorpusID:7239327
0
0
0
1
0
[ [ "inbetween frame", "DATA" ], [ "stroke matching technique", "METHOD" ], [ "context-aware computer aided inbetweening (CACAI) technique", "METHOD" ], [ "fram", "DATA" ], [ "feature character", "DATA" ], [ "successive key frame", "DATA" ], [ "context", "DATA" ], [ "shape", "DATA" ], [ "context coherence", "DATA" ], [ "key fram", "DATA" ], [ "stroke correspondence", "DATA" ], [ "intermediate frame", "DATA" ], [ "correspondence process", "APPLICATION" ], [ "spatial layout", "VISUALIZATION" ], [ "construction algorithm", "METHOD" ], [ "planar stroke", "METHOD" ], [ "stroke correspondence and interpolation", "APPLICATION" ], [ "CACA", "METHOD" ], [ "prototype system", "EVALUATION" ], [ "visual appearance", "VISUALIZATION" ], [ "CACAI", "METHOD" ], [ "one-to-one stroke correspondence", "METHOD" ], [ "interpolation sequence", "DATA" ], [ "ease of use", "EVALUATION" ], [ "object", "DATA" ] ]
Interactive and adaptive data-driven crowd simulation: User study
28911899
We present an adaptive data-driven algorithm for interactive crowd simulation. Our approach combines realistic trajectory behaviors extracted from videos with synthetic multi-agent algorithms to generate plausible simulations. We use statistical techniques to compute the movement patterns and motion dynamics from noisy 2D trajectories extracted from crowd videos. These learned pedestrian dynamic characteristics are used to generate collision-free trajectories of virtual pedestrians in slightly different environments or situations. The overall approach is robust and can generate perceptually realistic crowd movements at interactive rates in dynamic environments. We also present results from preliminary user studies that evaluate the trajectory behaviors generated by our algorithm.
[ { "first": "Aniket", "middle": [], "last": "Bera", "suffix": "" }, { "first": "Sujeong", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Dinesh", "middle": [], "last": "Manocha", "suffix": "" } ]
2016
10.1109/VR.2016.7504784
VR
2461629729
[ "18178842", "2885497", "7640977", "1803356", "12456074" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:28911899
1
1
1
1
0
[ [ "motion dynamic", "DATA" ], [ "preliminary user study", "EVALUATION" ], [ "dynamic environment", "APPLICATION" ], [ "synthetic multi-agent algorithm", "METHOD" ], [ "video", "DATA" ], [ "crowd movement", "VISUALIZATION" ], [ "interactive rate", "METHOD" ], [ "statistical technique", "METHOD" ], [ "movement pattern", "EVALUATION" ], [ "adaptive data-driven algorithm", "METHOD" ], [ "crowd video", "DATA" ], [ "plausible simulation", "VISUALIZATION" ], [ "pedestrian dynamic characteristic", "METHOD" ], [ "collision-free trajectory", "APPLICATION" ], [ "interactive crowd simulation", "APPLICATION" ], [ "noisy 2D trajectory", "DATA" ], [ "trajectory behavior", "EVALUATION" ], [ "virtual pedestrian", "APPLICATION" ], [ "realistic trajectory behavior", "METHOD" ] ]
Data Pattern for Allocating User Experience Meta-Data to User Experience Research Data
33387025
The vision of user experience is to make the lives of product users as convenient as possible, especially during interaction with a product or a service. An important aspect of perceived convenience is the user experience of a product; the visual design, and especially the interaction design, has a major influence on this perception. In order to achieve this vision, user experience experts carry out different types of tasks. One type of task is to analyze how users carry out tasks and what the users' needs or problems are. Another is to design user experience solutions, and a further typical type of task deals with carrying out usability evaluations in order to find problems in using a software application. In the course of user experience activities, many data are collected, much of it relating to particular user activities. For the user experience area there exist only a few tools, which support typical tasks in different ways. None of these tools supports linking the results of user experience work to user experience meta-data. Why is this a problem? The current tools do not support access to user experience project data with generic search and filter criteria such as "industry", "application area", "use case", etc. This makes access to user experience research data difficult, and the comparison of user experience project data between different projects inefficient. In general, the results of different user experience projects are difficult to reuse. The core idea of the data pattern for allocating user experience meta-data to user experience research data is to associate user experience project data with user experience meta-data, partially automatically and partially manually by the user. The key idea is to reuse project data such as the project sponsor, the application area, the industry, use cases, etc., as user experience meta-data and assign them to user experience research data. The benefits of the data pattern are: reusing the results of user experience research projects; making access to available results more efficient; and supporting direct, more efficient comparison of available results. UX Office is a typical application instance.
[ { "first": "Li", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Xuejiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaowei", "middle": [], "last": "Yuan", "suffix": "" } ]
2009
10.1007/978-3-642-02556-3_76
HCI
1502168790
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:33387025
null
null
null
null
null
[ [ "use", "DATA" ], [ "software application", "APPLICATION" ], [ "project data", "DATA" ], [ "generic search", "METHOD" ], [ "user experience area", "APPLICATION" ], [ "user experience Meta-Data", "DATA" ], [ "user experience project", "APPLICATION" ], [ "user experience work", "APPLICATION" ], [ "data pattern", "METHOD" ], [ "User Experience research data", "DATA" ], [ "user experience research project", "APPLICATION" ], [ "usability evaluation", "APPLICATION" ], [ "user experience project data", "DATA" ], [ "user experience activity", "APPLICATION" ], [ "Direct comparison", "METHOD" ], [ "typical", "APPLICATION" ], [ "UX Office", "APPLICATION" ], [ "user experience research data", "DATA" ], [ "user experience", "APPLICATION" ], [ "visual design", "VISUALIZATION" ], [ "user experience meta-data", "DATA" ], [ "user experience solution", "APPLICATION" ], [ "interaction design", "VISUALIZATION" ], [ "filter criteri", "METHOD" ], [ "user experience expert", "APPLICATION" ] ]
Using graph grammars for data structure manipulation
28161213
The replacement of pointers with graph grammar productions is discussed. Such a replacement provides a substantial improvement in the programming model used, makes better use of current high-resolution screen technology than a strictly text-based language, and provides improved support for parallel processing due to characteristics of the graph grammar formulation used. The background of this project, and the relationship to visual languages, is described. The use of graph grammars in programming and the graph grammar programming languages are described. The editing environment being developed for programming in graph grammars is presented. Compiler development for the system is described.
[ { "first": "J.J.", "middle": [], "last": "Pfeiffer", "suffix": "" } ]
1990
10.1109/WVL.1990.128380
Proceedings of the 1990 IEEE Workshop on Visual Languages
Proceedings of the 1990 IEEE Workshop on Visual Languages
2167888151
[ "12476837", "16130575", "1485521", "62540186" ]
[ "44597046", "46944710", "15187063", "32694505", "26536164", "61608985", "9084577" ]
true
true
true
https://api.semanticscholar.org/CorpusID:28161213
0
0
0
1
0
[ [ "editing environment", "METHOD" ], [ "text-based language", "METHOD" ], [ "parallel processing", "APPLICATION" ], [ "programming model", "METHOD" ], [ "graph grammar programming language", "METHOD" ], [ "graph grammar production", "METHOD" ], [ "visual language", "APPLICATION" ], [ "graph grammar", "METHOD" ], [ "high-resolution screen technology", "METHOD" ], [ "graph grammar formulation", "METHOD" ] ]
A shading model for cloth objects
21021813
A fundamental light reflection model that takes into account the internal structure of cloth to render the peculiar gloss of cloth objects is presented. In developing this model, the microscopic structure of textiles was considered. The model represents fabric features such as fiber's cross-sectional shape or weave. By measuring the reflected light intensity from actual cloth objects of several typical materials, it was verified that the model can express the properties of several kinds of cloth, and the parameters in the model were defined.
[ { "first": "T.", "middle": [], "last": "Yasuda", "suffix": "" }, { "first": "S.", "middle": [], "last": "Yokoi", "suffix": "" }, { "first": "J.", "middle": [], "last": "Toriwaki", "suffix": "" }, { "first": "K.", "middle": [], "last": "Inagaki", "suffix": "" } ]
1992
10.1109/38.163621
IEEE Computer Graphics and Applications
IEEE Computer Graphics and Applications
2107762293
[]
[ "1766133", "7296018", "11765783", "13936863", "7078461", "14619035", "15215963", "17731700", "9829193", "85498272", "53543390", "7233606", "1711180", "17846615", "8046714", "9949034", "18511680", "16126791" ]
false
true
false
https://api.semanticscholar.org/CorpusID:21021813
null
null
null
null
null
[ [ "actual cloth object", "DATA" ], [ "microscopic structure", "DATA" ], [ "cloth object", "DATA" ], [ "weave", "DATA" ], [ "fundamental light reflection model", "METHOD" ], [ "reflected light intensity", "DATA" ], [ "fabric feature", "DATA" ], [ "fiber's cross-sectional shape", "VISUALIZATION" ] ]
A Fresh Look at Vocabulary
63968919
The term vocabulary is used to represent the set of commands about which the user has knowledge and practical experience. In many cases only some of the known commands will actually be used: these are sometimes said to form a working set. No longer is an extensive, or even complete, knowledge of a command set considered the passport to good practice that it once was. On the other hand, it tends to be believed that the vocabulary of some learners will grow over a long period as they become skilled.
[ { "first": "Richard", "middle": [], "last": "Thomas", "suffix": "" } ]
1998
10.1007/978-1-4471-1548-9_5
Long Term Human-Computer Interaction
Long Term Human-Computer Interaction
2481173831
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63968919
null
null
null
null
null
[ [ "practical experience", "EVALUATION" ], [ "vocabulary", "DATA" ], [ "command set", "DATA" ] ]
Top - down and bottom - up menu design
56762308
A sleeve having a central channel for receiving a prong of an electrical connector and preventing use of the connector is provided. A locking stud in the form of a projection extends on a flexible arm into the channel. Upon insertion of the prong in the channel the locking stud engages a hole in the surface of the prong as is commonly provided. In one embodiment there are two such locking studs on opposite sides of the channel to engage opposite sides of the hole in the prong. The tip of the locking stud is beveled to facilitate insertion of the prong and the opposite face of the locking stud is perpendicular to the channel to prevent removal of the prong. A key having two slightly divergent blades with beveled ends is inserted in the channel. The blades slide by the end of the prong deflecting the arms and withdrawing the locking studs from the holes in the prong permitting removal of the prong. The sleeve can be constructed from two identical components. Ribs are provided in the end of the channel where the key is inserted to prevent the insertion of the prong in the wrong end. The channel is preferably larger on the end in which the key is inserted and the key is of a corresponding size so the key will not be inserted in the wrong end of the sleeve when the sleeve is not in use.
[ { "first": "John", "middle": [ "P." ], "last": "Chin", "suffix": "" } ]
1987
International Journal of Human-computer Interaction
156957241
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:56762308
null
null
null
null
null
[ [ "electrical connect", "METHOD" ], [ "divergent blade", "VISUALIZATION" ], [ "locking stud", "METHOD" ], [ "central channel", "METHOD" ], [ "identical component", "DATA" ], [ "the prong", "DATA" ] ]
Interactive Three-Dimensional Visualization of Web3D Media
56768562
Web3D is the general term for techniques of three-dimensional computer graphics on the Internet (Web). The idea of VRML, whose mechanism was established at the beginning of the 1990s, has changed completely: the field is now characterized by a shift toward offering rich media content with realistic imagery by utilizing broad-band networks. Web3D techniques were developed mainly for electronic commerce and similar applications, and are further used as a visualization tool for simulation systems solved on large-scale computers. An example, the Optimum Towing Support System using Web3D, is described.
[ { "first": "Reo", "middle": [], "last": "Yamaguchi", "suffix": "" } ]
2003
10.3154/jvs.23.Supplement1_11
JOURNAL OF THE FLOW VISUALIZATION SOCIETY OF JAPAN
2334995766
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:56768562
null
null
null
null
null
[ [ "three dimensional computer graphic", "METHOD" ], [ "electronic commerce", "APPLICATION" ], [ "simulation system", "APPLICATION" ], [ "Optimum Towing Support System", "APPLICATION" ], [ "realistic image", "DATA" ], [ "VRML", "METHOD" ], [ "rich medium content", "APPLICATION" ], [ "visualization tool", "VISUALIZATION" ], [ "Web3D technique", "METHOD" ], [ "Web3D", "METHOD" ], [ "broad-band network", "METHOD" ] ]
Study of PID neural network control based on phase-shifted full-bridge CPT system
62765231
Since the CPT system is a complex higher-order nonlinear system, it is very difficult to meet the control requirements if the traditional control method is used to control the stability of the system. In this paper, a performance simulation study is performed by introducing the PID neural network controller into the phase-shifted full-bridge CPT system. Compared with the control effects of traditional PID controllers, the PID neural network controllers have better dynamic responses and more robustness under rapid load changes and input step disturbances.
[ { "first": "Lihong", "middle": [], "last": "He", "suffix": "" }, { "first": "Yingying", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Guangyan", "middle": [], "last": "Sun", "suffix": "" } ]
2011
10.1117/12.906147
2011 International Conference on Photonics, 3D-Imaging, and Visualization
2011 International Conference on Photonics, 3D-Imaging, and Visualization
2026473252
[]
[ "109404714" ]
false
true
false
https://api.semanticscholar.org/CorpusID:62765231
null
null
null
null
null
[ [ "performance simulation study", "EVALUATION" ], [ "PID neural network controller", "METHOD" ], [ "Phase-Shifted Full-Bridge CPT system", "METHOD" ], [ "higher-order nonlinear system", "METHOD" ], [ "traditional control method", "METHOD" ], [ "::: response", "EVALUATION" ], [ "input step disturbance", "APPLICATION" ], [ "traditional PID controller", "METHOD" ], [ "CPT system", "METHOD" ] ]
Radiometric Compensation through Inverse Light Transport
11,427,390
Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems they support the presentation of visual content in situations where projection-optimized screens are not available or not desired - as in museums, historic sites, air-plane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes the real-time compensation of captured local and global light modulations possible.
[ { "first": "G.", "middle": [], "last": "Wetzstein", "suffix": "" }, { "first": "O.", "middle": [], "last": "Bimber", "suffix": "" } ]
2,007
10.1109/PG.2007.47
15th Pacific Conference on Computer Graphics and Applications (PG'07)
15th Pacific Conference on Computer Graphics and Applications (PG'07)
[ "6228710", "62371751", "42008445", "21431044", "9911282", "9060150", "2860203", "9758166", "6052315", "206590263", "8118067", "14153390", "3052680", "53223969", "1363510", "11720588", "53249286", "9866492", "10551433", "7811126", "5665790", "36367905", "10970736", "11305744", "72680", "41887657", "9192133" ]
[ "14248340", "6443847", "15505779", "12582060", "12829607", "15543761", "14072166", "5040681", "24694689", "12634198", "15458213", "14710710", "36206920", "2250406", "13572837", "2171156", "1045390", "761473", "8284327", "55728523", "222327", "33639119", "18881058", "253945", "8079055", "43939635", "17224824", "16186500", "15855662", "18181039", "12728646", "15207332", "14001676", "7331550", "835469", "207230620", "53041216", "9276153", "17748692", "153313779", "8120968", "34568739", "18964561", "2176899", "43945556", "187803", "7145007", "67283276", "209517075", "16310243", "202894242", "24473594", "21262780", "14204748" ]
true
true
true
https://api.semanticscholar.org/CorpusID:11427390
1
1
1
1
1
[ [ "real-time compensation", "APPLICATION" ], [ "illumination aspect", "VISUALIZATION" ], [ "projector-camera system", "METHOD" ], [ "efficient implementation", "METHOD" ], [ "visual content", "VISUALIZATION" ], [ "seamless projection", "VISUALIZATION" ], [ "full light transport", "METHOD" ], [ "refraction", "VISUALIZATION" ], [ "air-plane cabin", "APPLICATION" ], [ "stage performance", "APPLICATION" ], [ "shadow", "VISUALIZATION" ], [ "defocus", "VISUALIZATION" ], [ "inverse light transport", "METHOD" ], [ "local and global light modulation", "DATA" ], [ "everyday surface", "DATA" ], [ "projection-optimized screen", "METHOD" ], [ "Radiometric compensation technique", "METHOD" ], [ "interreflection", "VISUALIZATION" ] ]
Filtering and integrating visual information with motion
13,020,297
Visualizing information in user interfaces to complex, large-scale systems is difficult due to visual fragmentation caused by an enormous amount of inter-related data distributed across multiple views. New display dimensions are required to help the user perceptually integrate and filter such spatially distributed and heterogeneous information. Motion holds promise in this regard as a perceptually efficient display dimension. It has long been known to have a strong grouping effect which suggest that it has potential for filtering and brushing techniques. However, there is little known about which properties of motion are most effective. We review the prior literature relating to the use of motion for display and discuss the requirements for how motion can be usefully applied to these problems, especially for visualizations incorporating multiple groups of data objects. We compared three shapes of motions in pairwise combinations: linear, circular and expansion/contraction. Combinations of linear directions were also compared to evaluate how great angular separation needs to be to enforce perceptual distinction. Our results showed that shape differentiation is more effective than directional differences (except for 90°). Of the three shapes studied, circular demands the most attention. Angular separation must be 90° to be equally effective. These results suggest that motion can be usefully applied to both filtering and brushing. They also provide the beginnings of a vocabulary of simple motions that can be applied to information visualization.
[ { "first": "Lyn", "middle": [], "last": "Bartram", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Ware", "suffix": "" } ]
2,001
In Proceedings on Information Visualization
2202808018
[ "18673917", "86666351", "38924299", "59755897", "19855661", "8056977", "195869887", "74528", "45771271", "57166078", "39965109" ]
[ "18562583", "42136283", "15450853", "11613884", "8777347", "8047841", "20359568", "7659682", "7154331", "6788177" ]
true
true
true
https://api.semanticscholar.org/CorpusID:13020297
0
0
0
1
0
[ [ "shape differentiation", "METHOD" ], [ "expansion/contraction", "METHOD" ], [ "linear direction", "METHOD" ], [ "circular", "METHOD" ], [ "shape", "DATA" ], [ "information visualization", "VISUALIZATION" ], [ "visual fragmentation", "VISUALIZATION" ], [ "data object", "DATA" ], [ "perceptual distinction", "EVALUATION" ], [ "linear", "METHOD" ], [ "strong grouping effect", "EVALUATION" ], [ "brushing", "APPLICATION" ], [ "heterogeneous information", "DATA" ], [ "inter-related data", "DATA" ], [ "user interface", "VISUALIZATION" ], [ "Angular separation", "METHOD" ], [ "spatially distributed", "DATA" ], [ "angular separation", "EVALUATION" ], [ "filtering and brushing technique", "METHOD" ], [ "filtering", "APPLICATION" ], [ "perceptually efficient display dimension", "EVALUATION" ], [ "directional difference", "METHOD" ] ]
Little Big Choices: Customization in Online User Experience
51,880,069
Customization can be a decisive factor in improving online user experience. It is a procedure that allows users to get involved with an interactive system to obtain results that better match their needs. These results are achieved through a co-design process. To establish the importance of customization in this context, we developed a design project for online customization of lacrosse equipment for Ativo brand. It was intended for users to create their own lacrosse equipment, with the possibility of adapting them to their tastes and requirements. For the tool to become viable it was necessary to consider several interaction tasks. Screens were designed, first trough 11 wireframes and later through 194 visual layouts. The project was evaluated with usability tests, using a support questionnaire to verify tasks were effectively fulfilled. The result is a tool which allows wide customization of various options related to these products, their implementation on the brand website and improvement of its user experience.
[ { "first": "Marco", "middle": [], "last": "Neves", "suffix": "" }, { "first": "Maria", "middle": [ "A.M." ], "last": "Reis", "suffix": "" } ]
2,018
10.1007/978-3-319-91806-8_53
HCI
2807177824
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:51880069
null
null
null
null
null
[ [ "online user experience", "APPLICATION" ], [ "lacrosse equipment", "APPLICATION" ], [ "visual layout", "VISUALIZATION" ], [ "user experience", "APPLICATION" ], [ "interaction task", "APPLICATION" ], [ "co-design process", "METHOD" ], [ "design project", "METHOD" ], [ "usability test", "EVALUATION" ], [ "brand website", "APPLICATION" ], [ "support questionnaire", "EVALUATION" ], [ "online customization of lacrosse equipment", "APPLICATION" ], [ "interactive system", "METHOD" ] ]
A Study on the Behavior of Using Intelligent Television Among the Elderly in New Urban Areas
51,881,086
Objective: This paper centers on the behavior of using intelligent television among the elderly in new urban areas. The purposes of this study are: (1) to find whether there is a potential tendency for them to use intelligent television; (2) to determine whether there are significant differences in physiological and psychological characteristics among the elderly who live in new urban areas, in small cities, and in big cities; (3) to find out existing problems, their real needs, and their wishes when using intelligent televisions; (4) to provide meaningful references and theoretical foundations for entrepreneurs, markets, and relevant departments in this field.
[ { "first": "Cuiping", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xiaoping", "middle": [], "last": "Hu", "suffix": "" } ]
2,018
10.1007/978-3-319-92034-4_15
HCI
2806491356
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:51881086
null
null
null
null
null
[ [ "theoretical foundation", "DATA" ], [ "physiological characteristic", "DATA" ], [ "psychological characteristic", "EVALUATION" ], [ "intelligent television", "METHOD" ] ]
Study on Display Space Design of Off-line Experience Stores of Traditional Handicraft Derivative Product of ICH Based on Multi-sensory Integration
51,882,080
With the rising status of intangible cultural heritage, the off-line experience store, as the main transmission medium, is the most common way for the public to come into contact with the traditional handicrafts of intangible cultural heritage. It not only continues the emotional experience of the audience, but also plays an essential role for the Intangible Cultural Heritage even after the exhibition.
[ { "first": "Bingmei", "middle": [], "last": "Bie", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rongrong", "middle": [], "last": "Fu", "suffix": "" } ]
2,018
10.1007/978-3-319-91806-8_36
HCI
2806511242
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:51882080
null
null
null
null
null
[ [ "emotional experience", "APPLICATION" ], [ "Intangible Cultural Heritage", "APPLICATION" ], [ "off-line experience store", "METHOD" ] ]
Shape Analysis of Volume Models by Euclidean Distance Transform and Moment Invariants
14,714,637
In this paper, volume models are obtained from closed surface models by an accurate voxelization method which can handle hidden cavities. This kind of 3D binary image is then converted to gray-level images by a fast Euclidean distance transform (EDT). Moment invariants (Mis), which are invariant shape descriptors under similarity transformations, are then computed based on the gray images. Applications in the shape analysis area, such as principal axis determination, skeleton and medial axis extraction, and shape retrieval, can be carried out based on EDT and Mis.
[ { "first": "Dong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Li", "suffix": "" } ]
2,007
10.1109/CADCG.2007.4407924
2007 10th IEEE International Conference on Computer-Aided Design and Computer Graphics
2007 10th IEEE International Conference on Computer-Aided Design and Computer Graphics
1996420990
[ "5739152", "21874346", "11663609", "18235275", "19013689", "4228534", "5438035", "1070719", "34349808", "6150670", "33411968", "46029405", "7464913", "18518136", "40130392" ]
[ "14279866", "8635530", "16861607" ]
true
true
true
https://api.semanticscholar.org/CorpusID:14714637
0
0
0
1
0
[ [ "volume model", "DATA" ], [ "accurate voxelization method", "METHOD" ], [ "Moment invariant (Mis)", "METHOD" ], [ "Euclidean distance transform (EDT)", "METHOD" ], [ "3D binary image", "DATA" ], [ "similarity transformation", "METHOD" ], [ "shape retrieval", "APPLICATION" ], [ "gray-level image", "VISUALIZATION" ], [ "closed surface model", "DATA" ], [ "gray image", "DATA" ], [ "shape analysis area", "APPLICATION" ], [ "invariant shape descriptor", "DATA" ], [ "EDT", "METHOD" ], [ "skeleton and medial axis extraction", "APPLICATION" ], [ "Mis", "METHOD" ], [ "principal axis determination", "APPLICATION" ], [ "hidden cavity", "DATA" ] ]
OpenGL ES and Shader
70,299,344
[ { "first": "JungHyun", "middle": [], "last": "Han", "suffix": "" } ]
2,018
10.1201/9780429443145-6
Introduction to Computer Graphics with OpenGL ES
Introduction to Computer Graphics with OpenGL ES
2904585654
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:70299344
null
null
null
null
null
[ [ "Shader", "METHOD" ], [ "OpenGL ES", "METHOD" ] ]
Designing Interactive Sonification for Live Aquarium Exhibits
1,332,910
In response to the need for more accessible and engaging informal learning environments (ILEs), researchers have studied sonification for use in interpretation of live aquarium exhibits. The present work attempts to introduce more interactivity to the project’s existing sonification work, which is expected to lead to more accessible and interactive learning opportunities for visitors, including children and people with vision impairment. In this interactive sonification environment, visitors can actively experience an exhibit by using tangible objects to mimic the movement of animals. Sonifications corresponding to their movement can be paired with real-time animal-based sonifications produced by the existing system to generate a musical fugue. In the current paper, we describe the system configurations, experiment results for optimal sonification parameters and interaction levels, and implications in terms of embodied interaction and interactive learning.
[ { "first": "Myounghoon", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "Riley", "middle": [ "J." ], "last": "Winton", "suffix": "" }, { "first": "Ashley", "middle": [ "G." ], "last": "Henry", "suffix": "" }, { "first": "Sanghun", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Carrie", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Bruce", "middle": [ "N." ], "last": "Walker", "suffix": "" } ]
2,013
10.1007/978-3-642-39473-7_67
HCI
2103849563
[]
[ "52207142", "14009492", "203743537", "28002630" ]
false
true
false
https://api.semanticscholar.org/CorpusID:1332910
null
null
null
null
null
[ [ "tangible object", "DATA" ], [ "real-time animal-based sonifications", "DATA" ], [ "interactive sonification environment", "VISUALIZATION" ], [ "live aquarium exhibit", "APPLICATION" ], [ "interaction level", "EVALUATION" ], [ "musical fugue", "APPLICATION" ], [ "optimal sonification parameter", "EVALUATION" ], [ "interactive learning", "APPLICATION" ], [ "interactive learning opportunity", "APPLICATION" ], [ "experiment result", "EVALUATION" ], [ "informal learning environments (ILEs)", "METHOD" ] ]
How to Preserve Taiwanese Cultural Food Heritage Through Everyday HCI: A Proposal for Mobile Implementation.
195,877,383
We explore how cultural heritage can be preserved via human-computer interaction applications with a focus on food heritage in Taiwan. The contribution of this paper is its explanation of the existing and potential design space for HCI in the area of human-food interaction and is a step further in preserving the food culture in Taiwan. This paper represents a new step in this direction whereby all people can participate in the preservation prerevision of their own family recipes and the cultural meanings embodied in them by making a record of their daily meals. By highlighting the use of HCI’s irreplaceable role in the relationship features function, recording culture, and celebratory technology [31], we hope to encourage a more comprehensive research agenda within HCI to design technologies pertaining to the preservation of food heritage.
[ { "first": "Kuan-Yi", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yu-Hsuan", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Chung-Ching", "middle": [], "last": "Huang", "suffix": "" } ]
2,019
10.1007/978-3-030-22577-3_11
HCI
2956855358
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:195877383
null
null
null
null
null
[ [ "human-food interaction", "APPLICATION" ], [ "human-computer interaction application", "APPLICATION" ], [ "celebratory technology", "APPLICATION" ], [ "recording culture", "APPLICATION" ], [ "preservation prerevision", "APPLICATION" ], [ "food heritage", "DATA" ], [ "family recipe", "DATA" ], [ "cultural meaning", "DATA" ], [ "daily meal", "DATA" ], [ "relationship feature function", "APPLICATION" ], [ "cultural heritage", "DATA" ] ]
Dynamic analysis of bubble-driven liquid flows using time-resolved particle image velocimetry and proper orthogonal decomposition techniques
28,227,161
AbstractAn experimental study to evaluate dynamic structures of flow motion and turbulence characteristics in bubble-driven water flow in a rectangular tank with a varying flow rate of compressed air is conducted. Liquid flow fields are measured by time-resolved particle image velocimetry (PIV) with fluorescent tracer particles to eliminate diffused reflections, and by an image intensifier to acquire enhanced clean particle images. By proper orthogonal decomposition (POD) analysis, the energy distributions of spatial and temporal modes are acquired. Time-averaged velocity and turbulent kinetic energy distributions are varied with the air flow rates. With increasing Reynolds number, bubble-induced turbulent motion becomes dominant rather than the recirculating flow near the side wall. Detailed spatial structures and the unsteady behavior of dominant dynamic modes associated with turbulent kinetic energy distributions are addressed.Graphical AbstractGraphical Abstract text
[ { "first": "Sang", "middle": [ "Moon" ], "last": "Kim", "suffix": "" }, { "first": "Seung", "middle": [ "Jae" ], "last": "Yi", "suffix": "" }, { "first": "Hyun", "middle": [ "Dong" ], "last": "Kim", "suffix": "" }, { "first": "Jong", "middle": [ "Wook" ], "last": "Kim", "suffix": "" }, { "first": "Kyung", "middle": [ "Chun" ], "last": "Kim", "suffix": "" } ]
2,010
10.1007/s12650-010-0029-y
Journal of Visualization
2002779817
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:28227161
null
null
null
null
null
[ [ "time-resolved particle image velocimetry", "DATA" ], [ "image intensifier", "METHOD" ], [ "dynamic structure", "EVALUATION" ], [ "bubble-driven water flow", "DATA" ], [ "air flow rate", "DATA" ], [ "time-averaged velocity", "DATA" ], [ "experimental study", "EVALUATION" ], [ "proper orthogonal decomposition (POD) analysis", "METHOD" ], [ "clean particle image", "DATA" ], [ "turbulent kinetic energy distribution", "DATA" ], [ "unsteady behavior", "EVALUATION" ], [ "compressed air", "DATA" ], [ "turbulence characteristic", "EVALUATION" ], [ "Liquid flow field", "DATA" ], [ "dynamic mode", "METHOD" ], [ "fluorescent tracer particle", "METHOD" ], [ "bubble-induced turbulent motion", "METHOD" ], [ "spatial structure", "DATA" ], [ "spatial and temporal mode", "DATA" ], [ "rectangular tank", "METHOD" ], [ "energy distribution", "DATA" ], [ "recirculating flow", "DATA" ], [ "flow motion", "DATA" ] ]
Tunnel‐Free Supercover 3D Polygons and Polyhedra
120,840,924
A new discrete 3D line and 3D polygon, called Supercover 3D line and Supercover 3D polygon, are introduced. Analytical definitions are provided. The Supercover 3D polygon is a tunnel free plane segment defined by vertices and edges. An edge is a Supercover 3D line segment. Two different polygons can share a common edge and if they do, the union of both polygons is tunnel free. This definition of discrete polygons has the “most” properties in common with the continuous polygons. It is particularly interesting for modeling of discrete scenes, especially using tunnel-free discrete polyhedra. Algorithms for computing Supercover 3D Lines and Polygons are given and illustrated.
[ { "first": "Eric", "middle": [], "last": "Andres", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Nehlig", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Françon", "suffix": "" } ]
2,008
10.1111/1467-8659.16.3conferenceissue.2
Computer Graphics Forum
1983765135
[]
[ "59130240", "16299485", "17454356", "59674128", "16806701", "6960433", "27688723", "14714532" ]
false
true
false
https://api.semanticscholar.org/CorpusID:120840924
null
null
null
null
null
[ [ "discrete polygon", "METHOD" ], [ "Supercover 3D line and Supercover 3D polygon", "DATA" ], [ "Supercover 3D Lines", "DATA" ], [ "Polygons", "DATA" ], [ "Supercover 3D line segment", "DATA" ], [ "Supercover 3D polygon", "VISUALIZATION" ], [ "discrete 3D line and 3D polygon", "DATA" ], [ "tunnel free", "EVALUATION" ], [ "discrete scene", "APPLICATION" ], [ "tunnel-free discrete polyhedron", "METHOD" ], [ "Analytical definition", "METHOD" ], [ "continuous polygon", "DATA" ], [ "tunnel free plane segment", "DATA" ] ]
Histogram Equalization-A Simple but Efficient Technique for Image Enhancement
55,344,817
This paper demonstrates the significance of histogram processing of an image, particularly histogram equalization (HE). It is one of the widely used image enhancement techniques. It has become a popular technique for contrast enhancement because the method is simple and effective. The basic idea of HE is to re-map the gray levels of an image. Here we propose two different techniques of Histogram Equalization, namely the global HE and the local HE. The Histogram Equalization has been performed in the MATLAB environment. The merits and demerits of both techniques of Histogram Equalization have also been discussed. It is seen after exhaustive experimentation on a number of sample images that the proposed image enhancement techniques can be considered as an improvement over the inbuilt MATLAB function histeq.
[ { "first": "Saurabh", "middle": [], "last": "Chaudhury", "suffix": "" }, { "first": "Ananta", "middle": [], "last": "Kumar Roy", "suffix": "" } ]
2,013
10.5815/ijigsp.2013.10.07
International Journal of Image, Graphics and Signal Processing
2155577471
[ "17097623" ]
[ "21888096" ]
true
true
true
https://api.semanticscholar.org/CorpusID:55344817
1
1
1
1
1
[ [ "contrast enhancement", "APPLICATION" ], [ "histogram equalization (HE)", "METHOD" ], [ "image enhancement technique", "METHOD" ], [ "Histogram Equalization", "METHOD" ], [ "MATLAB function histeq", "METHOD" ], [ "sample image", "DATA" ], [ "gray level", "DATA" ], [ "global HE", "METHOD" ], [ "local HE", "METHOD" ], [ "histogram processing", "METHOD" ], [ "exhaustive experimentation", "EVALUATION" ], [ "image", "DATA" ] ]
An efficient two steps algorithm for wide baseline image matching
9,273,011
A recent study (Int. J. Comput. Vis. 73(3), 263–284, 2007) has shown that none of the detector/descriptor combinations perform well when the camera viewpoint is changed by more than 25–30°. In this paper we introduce an efficient two-step method that significantly increases the number of correct matches between widely separated views of a given 3D scene. First, a few kernel correspondences are identified in the images and then, based on their neighbor information, the geometric distortion that relates the surrounding regions of these seed keypoints is estimated iteratively. Next, based on these estimated parameters combined with a rough segmentation that reduces the searching space of the keypoint descriptors, the neighbor regions around every keypoint are warped accordingly. In our experiments the method has been tested extensively, yielding promising results over a wide range of viewpoints of known 3D model images.
[ { "first": "Cosmin", "middle": [], "last": "Ancuti", "suffix": "" }, { "first": "Codruta", "middle": [ "Orniana" ], "last": "Ancuti", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Bekaert", "suffix": "" } ]
2,009
10.1007/s00371-009-0353-1
The Visual Computer
The Visual Computer
2010122970
[ "17726917", "64820088", "15626261", "270664", "691081", "13979044", "3150198", "1694378", "54124999", "61571454", "18264626", "723210", "130535382", "41908779", "2572455", "1704741", "6794491", "47070926", "46527015", "2964260", "778478", "14308539", "5107897", "1849068" ]
[ "28181303", "38410062", "38069863", "16552607" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9273011
0
0
0
1
0
[ [ "promising result", "EVALUATION" ], [ "neighbor information", "DATA" ], [ "rough segmentation", "METHOD" ], [ "3D model image", "DATA" ], [ "two-step method", "METHOD" ], [ "searching space", "EVALUATION" ], [ "3D scene", "DATA" ], [ "detector/descriptor combination", "METHOD" ], [ "kernel correspondence", "DATA" ], [ "neighbor region", "DATA" ], [ "image", "DATA" ], [ "seed keypoint", "DATA" ], [ "estimated parameter", "DATA" ], [ "geometric distortion", "EVALUATION" ], [ "correct match", "EVALUATION" ], [ "keypoint descriptor", "DATA" ] ]
GameX: a platform for incremental instruction in computer graphics and game design
8,691,049
Recent trends have resulted in an increased focus on game design as a topic for teaching in higher education [Deutsch 2002]. Although many game engines currently exist, few of these were designed with educational goals in mind. We distinguish between industry-oriented engines and instructional game engines designed to teach a range of concepts. The features needed to teach game development to college undergraduates in engineering and the humanities are explored. Specifically, we develop a platform that supports incremental education in game design. GameX, an open source instructional game engine, was developed with this approach in mind and was used to initiate the Game Design Initiative at Cornell University (GDIAC).
[ { "first": "Rama", "middle": [ "C." ], "last": "Hoetzlein", "suffix": "" }, { "first": "David", "middle": [ "I." ], "last": "Schwartz", "suffix": "" } ]
2,005
10.1145/1187358.1187402
SIGGRAPH '05
2068567675
[ "9518498", "2386446", "2755496" ]
[ "13985522", "16934507", "16005555", "35879588", "11561921", "22031560", "7276291", "198986318", "14113316", "44123411" ]
true
true
true
https://api.semanticscholar.org/CorpusID:8691049
0
0
0
1
0
[ [ "GameX", "METHOD" ], [ "incremental education", "METHOD" ], [ "industry-oriented engine", "METHOD" ], [ "game development", "APPLICATION" ], [ "instructional game engine", "METHOD" ], [ "educational goal", "APPLICATION" ], [ "game", "METHOD" ], [ "game design", "APPLICATION" ] ]
The design of naming features in App Inventor 2
14,817,350
Blocks languages, in which programs are constructed by connecting blocks resembling puzzle pieces, are increasingly used to introduce novices to programming. MIT App Inventor 2 has a blocks language for specifying the behavior of mobile apps. Its naming features (involving event and procedure parameters, global and local variables, and names for procedures, components, and component properties) were designed to address problems with names in other blocks languages, including its predecessor, MIT App Inventor Classic. We discuss the design of these features, and evaluate them with respect to cognitive dimensions and fundamental computer science naming concepts.
[ { "first": "Franklyn", "middle": [], "last": "Turbak", "suffix": "" }, { "first": "David", "middle": [], "last": "Wolber", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Medlock-Walton", "suffix": "" } ]
2,014
10.1109/VLHCC.2014.6883034
2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
1985846218
[ "9744698", "13447890", "11750514", "14977519" ]
[ "54823280", "155098670", "212747048", "202714485", "23671317", "15519079", "22250224", "55399814", "53077317", "17175249", "37075993", "207800100", "53081351", "189961193" ]
true
true
true
https://api.semanticscholar.org/CorpusID:14817350
0
0
0
1
0
[ [ "block language", "METHOD" ], [ "MIT App Inventor Classic", "METHOD" ], [ "event and procedure parameter", "DATA" ], [ "cognitive dimension", "EVALUATION" ], [ "mobile apps", "APPLICATION" ], [ "global and local variable", "DATA" ], [ "Blocks language", "METHOD" ], [ "component property", "DATA" ], [ "naming feature", "METHOD" ], [ "computer science naming concept", "APPLICATION" ], [ "puzzle piece", "DATA" ] ]
Physically based animation and rendering of lightning
5,671,330
We present a physically-based method for animating and rendering lightning and other electric arcs. For the simulation, we present the dielectric breakdown model, an elegant formulation of electrical pattern formation. We then extend the model to animate a sustained, 'dancing' electrical arc, by using a simplified Helmholtz equation for propagating electromagnetic waves. For rendering, we use a convolution kernel to produce results competitive with Monte Carlo ray tracing. Lastly, we present user parameters for manipulation of the simulation patterns.
[ { "first": "T.", "middle": [], "last": "Kim", "suffix": "" }, { "first": "M.C.", "middle": [], "last": "Lin", "suffix": "" } ]
2,004
10.1109/PCCGA.2004.1348357
12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.
12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.
1507507089
[ "110197235", "18605480", "31399825", "17975636", "6136326", "120137649", "121395725", "95055822", "16894929", "6491967", "17397363", "55691653" ]
[ "16871235", "32157959", "14567181", "53478538", "8664893", "6368200", "14136665", "25453063", "8813303", "17923224", "6465169", "21288666", "14042967", "8355299" ]
true
true
true
https://api.semanticscholar.org/CorpusID:5671330
1
1
1
1
1
[ [ "simplified Helmholtz equation", "METHOD" ], [ "convolution kernel", "METHOD" ], [ "physically-based method", "METHOD" ], [ "rendering lightning", "DATA" ], [ "electromagnetic wave", "DATA" ], [ "electrical pattern formation", "APPLICATION" ], [ "electrical arc", "DATA" ], [ "electric arc", "DATA" ], [ "dielectric breakdown model", "METHOD" ], [ "rendering", "APPLICATION" ], [ "user parameter", "METHOD" ], [ "Monte Carlo ray tracing", "METHOD" ], [ "simulation pattern", "DATA" ] ]
Immersive Learning in Real VR
213,180,775
[ { "first": "Johanna", "middle": [], "last": "Pirker", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Lesjak", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Kainz", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Dini", "suffix": "" } ]
2,020
10.1007/978-3-030-41816-8_14
Real VR
3011035714
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:213180775
null
null
null
null
null
[ [ "Real VR", "APPLICATION" ], [ "Immersive Learning", "METHOD" ] ]
COMPUTER INTERACTION IN THE USE OF RADIOMETRIC CALIBRATION FOR THE CASSINI SPACE FLIGHT PROBE
135,320,289
In the area of Radiometric Calibration for Space Flight Hardware, there is a need for accurate data manipulation and analysis. At the Jet Propulsion Laboratory in Pasadena, CA, there is an advanced system of calibration and analysis of spacecraft data designed by the Planetary Instruments Group of the Space Instruments Implementation. We examine a problem with the user interface. As data is downloaded in this system, it must be physically examined for "good" and "bad" data points. This paper describes how to make this process more efficient. With the use of a light pen or touch-screen CRT the choice of data points can be quickly and accurately chosen or discarded. The article provides a description of the process of data acquisition as well as contributions toward the design of a more "friendly" data calibration procedure.
[ { "first": "Kenneth", "middle": [ "A." ], "last": "Brown", "suffix": "" } ]
1,996
10.20870/ijvr.1996.2.4.2614
International Journal of Virtual Reality
2757060033,2751306756
[ "54096333", "7436341", "60206713" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:135320289
1
1
1
1
1
[ [ "Radiometric Calibration for Space Flight Hardware", "APPLICATION" ], [ "data point", "DATA" ], [ "accurate data manipulation", "METHOD" ], [ "touch-screen CRT", "METHOD" ], [ "spacecraft data", "DATA" ], [ "user interface", "METHOD" ], [ "\"friendly\" data calibration procedure", "METHOD" ], [ "analysis", "METHOD" ], [ "data acquisition", "METHOD" ], [ "calibration", "METHOD" ], [ "light pen", "METHOD" ] ]
Personal Aesthetics for Soft Biometrics: A Generative Multi-resolution Approach
7,861,646
Are we recognizable by our image preferences? This paper answers affirmatively the question, presenting a soft biometric approach where the preferred images of an individual are used as his personal signature in identification tasks. The approach builds a multi-resolution latent space, formed by multiple Counting Grids, where similar images are mapped nearby. On this space, a set of preferred images of a user produces an ensemble of intensity maps, highlighting in an intuitive way his personal aesthetic preferences. These maps are then used for learning a battery of discriminative classifiers (one for each resolution), which characterizes the user and serves to perform identification. Results are promising: on a dataset of 200 users, and 40K images, using 20 preferred images as biometric template gives 66% of probability of guessing the correct user. This makes the "personal aesthetics" a very hot topic for soft biometrics, while its usage in standard biometric applications seems to be far from being effective, as we show in a simple user study.
[ { "first": "Cristina", "middle": [], "last": "Segalin", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Perina", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Cristani", "suffix": "" } ]
2,014
10.1145/2663204.2663259
Proceedings of the 16th International Conference on Multimodal Interaction
1981034026
[ "63374487", "3198903", "143365379", "5281907", "17892806", "14942207", "3354049", "11563321", "14783622", "11664336", "14814359", "1016649", "28319865", "1367594", "8591208", "4541770", "2715202", "15219853" ]
[ "211475476", "20914125", "208880340", "4718451", "206820788", "17067672" ]
true
true
true
https://api.semanticscholar.org/CorpusID:7861646
0
0
0
1
0
[ [ "aesthetic", "DATA" ], [ "personal aesthetic", "METHOD" ], [ "standard biometric application", "APPLICATION" ], [ "multi-resolution latent space", "VISUALIZATION" ], [ "identification task", "APPLICATION" ], [ "intensity map", "VISUALIZATION" ], [ "map", "DATA" ], [ "image preference", "DATA" ], [ "identification", "APPLICATION" ], [ "biometric template", "METHOD" ], [ "soft biometrics", "APPLICATION" ], [ "soft biometric approach", "METHOD" ], [ "simple user study", "EVALUATION" ], [ "preferred image", "DATA" ], [ "discriminative classifier", "METHOD" ], [ "image", "DATA" ], [ "Counting Grids", "VISUALIZATION" ] ]
Forehead retina system
10,106,308
The goal of our project is to provide a cheap, lightweight, yet fully functional system that provides rich and dynamic 2D environmental information to the blind. The Forehead Retina System (FRS), composed of a small camera and 512 electrodes on the forehead, captures the view in front, extracts outlines from the view, and converts the outlines to tactile sensation by electrical stimulation. Using this device, the users can "see" the surrounding environment with their forehead skin, without using their eyes.
[ { "first": "Hiroyuki", "middle": [], "last": "Kajimoto", "suffix": "" }, { "first": "Yonezo", "middle": [], "last": "Kanno", "suffix": "" }, { "first": "Susumu", "middle": [], "last": "Tachi", "suffix": "" } ]
2,006
10.1145/1179133.1179145
SIGGRAPH '06
2035525635
[ "62688811", "17384427" ]
[ "22791330" ]
true
true
true
https://api.semanticscholar.org/CorpusID:10106308
0
0
0
1
0
[ [ "Forehead Retina System (FRS)", "METHOD" ], [ "forehead skin", "DATA" ], [ "tactile sensation", "VISUALIZATION" ], [ "2D environmental information", "DATA" ], [ "electrical stimulation", "METHOD" ], [ "system", "METHOD" ] ]
Variance minimization light probe sampling
11,663,595
We present a technique for sampling the light probe image using variance minimization. The technique modifies median cut algorithm for light probe sampling [Debevec 2005] so that the variance of each region is minimized. The algorithm is fast, efficient, and easy to implement.
[ { "first": "Kuntee", "middle": [], "last": "Viriyothai", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Debevec", "suffix": "" } ]
2,009
10.1145/1599301.1599393
SIGGRAPH '09
1978223075
[]
[ "18053725", "30490360", "12507362", "4470481", "11343248", "631737", "14308204" ]
false
true
true
https://api.semanticscholar.org/CorpusID:11663595
1
1
1
0
1
[ [ "light probe image", "DATA" ], [ "light probe sampling", "APPLICATION" ], [ "median cut algorithm", "METHOD" ], [ "variance minimization", "METHOD" ] ]
Images and Presentation in Power View
63,224,619
After spending a little time working with Power View, let’s assume you have analyzed your data. In fact, I imagine that you have been able to tease out a few extremely interesting trends and telling facts from your deep dive into the figures—and you have created the tables and charts to prove your point. To finish the job, you now want to add the final touches to the look and feel of your work so that it will come across to your audience as polished and professional.
[ { "first": "Adam", "middle": [], "last": "Aspin", "suffix": "" } ]
2,016
10.1007/978-1-4842-2400-7_7
High Impact Data Visualization in Excel with Power View, 3D Maps, Get & Transform and Power BI
High Impact Data Visualization in Excel with Power View, 3D Maps, Get & Transform and Power BI
2553153148
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63224619
null
null
null
null
null
[ [ "Power View", "METHOD" ] ]
Toward the Practical Use of ROI-JPEG : Assurance of Perceived Quality and Simplification of Algorithms
63,229,780
[ { "first": "オキ", "middle": [ "ディッキ", "A." ], "last": "プリマ", "suffix": "" }, { "first": "田口", "middle": [], "last": "康平", "suffix": "" }, { "first": "大棒", "middle": [], "last": "麻実", "suffix": "" } ]
2,012
The journal of the Institute of Image Electronics Engineers of Japan : visual computing, devices & communications
2588112455
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63229780
null
null
null
null
null
[ [ "ROI-JPEG", "METHOD" ], [ "Algorithm", "METHOD" ] ]
Real-Time Immersive Table Tennis Game for Two Players with Motion Tracking
5,615,773
Presented in this paper is a novel real-time virtual reality game developed to enable two participants to play table tennis immersively with each other’s avatar in a shared virtual environment. It uses a wireless hybrid inertial and ultrasonic tracking system to provide the positions and orientations of both the head (view point) and hand (racket) of each player, as well as two large rear-projection stereoscopic screens to provide a view-dependent 3D display of the game environment. Additionally, a physics-based ball animation model is designed for the game, which includes fast detection of the ball colliding with the table, net and quickly moving rackets. The system is shown to offer some unique features and form a good platform for the development of other immersive games for multiple players.
[ { "first": "Yingzhu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lik-Kwan", "middle": [], "last": "Shark", "suffix": "" }, { "first": "Sarah", "middle": [ "Jane" ], "last": "Hobbs", "suffix": "" }, { "first": "James", "middle": [], "last": "Ingham", "suffix": "" } ]
2,010
10.1109/IV.2010.97
2010 14th International Conference Information Visualisation
2010 14th International Conference Information Visualisation
2111870506
[ "59663322", "2399224", "59627848", "14118599", "2418653", "16875554" ]
[ "17522621", "19367229", "5226949", "11636420", "17810566", "14519680", "17529862", "11373341", "31696781", "67840501", "13512836" ]
true
true
true
https://api.semanticscholar.org/CorpusID:5615773
0
0
0
1
0
[ [ "wireless hybrid inertial", "METHOD" ], [ "environment", "VISUALIZATION" ], [ "immersive game", "APPLICATION" ], [ "rear-projection stereoscopic screen", "METHOD" ], [ "fast detection", "APPLICATION" ], [ "table tennis", "APPLICATION" ], [ "racket", "METHOD" ], [ "ultrasonic tracking system", "METHOD" ], [ "view", "VISUALIZATION" ], [ "virtual environment", "METHOD" ], [ "view-dependent 3D display", "VISUALIZATION" ], [ "real-time virtual reality game", "METHOD" ], [ "physics-based ball animation model", "METHOD" ] ]
Audio-Only Augmented Reality System for Social Interaction
15,886,119
We explore new possibilities for interactive music consumption by proposing an audio-only augmented reality system for social interaction.
[ { "first": "Tom", "middle": [], "last": "Gurion", "suffix": "" }, { "first": "Nori", "middle": [], "last": "Jacoby", "suffix": "" } ]
2,013
10.1007/978-3-642-39473-7_65
HCI
286215783
[ "62525807", "17384158", "9513568", "3022968", "5299088", "6840143", "60868048", "60078313", "469744", "13985675" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:15886119
0
0
0
1
0
[ [ "interactive music consumption", "APPLICATION" ], [ "social interaction", "APPLICATION" ], [ "audio-only augmented reality system", "METHOD" ] ]
Moving Objects in Space: Exploiting Proprioception in Virtual-Environment
59,913,686
[ { "first": "Mark", "middle": [ "R." ], "last": "Mine", "suffix": "" }, { "first": "Frederick", "middle": [ "P." ], "last": "Brooks", "suffix": "" }, { "first": "Carlo", "middle": [ "H." ], "last": "Séquin", "suffix": "" } ]
1,997
SIGGRAPH 1997
153970731
[]
[ "2560694", "44141453" ]
false
true
false
https://api.semanticscholar.org/CorpusID:59913686
null
null
null
null
null
[ [ "Proprioception", "METHOD" ], [ "Virtual-Environment", "APPLICATION" ] ]
Visualization of Finite Elements and Tools for Numerical Analysis
59,918,266
A visualization approach for finite elements including numerical algorithms based on an object oriented environment is presented. Starting from examples of numerical analysis of partial differential equations the requirements and specifications for a toolbox offering highly interactive rendering facilities for continuum mechanical as well as geometrical problems in 2D and 3D are explained. After a short description of object oriented programming our concept of interactive geometric modeling is introduced. Applications include the rendering of isolines, color scaled maps, vector and tensor fields on 2D domains, surfaces of intersections in 3D bodies (bars under stress or containers with fluid flow), particle traces, moved hypersurfaces, and the 2D levels of a function on a 3D finite element domain. Our concept has been implemented in the object oriented programming environment GRAPE at the graphics laboratory of the SFB 256. The appendix contains the definition of the specific classes and a description of all methods.
[ { "first": "Monika", "middle": [], "last": "Geiben", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Rumpf", "suffix": "" } ]
1,992
10.1007/978-3-642-77334-1_1
Advances in Scientific Visualization
Advances in Scientific Visualization
162475756
[]
[ "14271252", "2340410", "2043732", "10797164" ]
false
true
true
https://api.semanticscholar.org/CorpusID:59918266
0
0
0
0
0
[ [ "vector and tensor field", "DATA" ], [ "2D domain", "DATA" ], [ "3D finite element domain", "DATA" ], [ "surface", "DATA" ], [ "object oriented environment", "METHOD" ], [ "rendering of isoline", "APPLICATION" ], [ "object oriented programming environment", "METHOD" ], [ "hypersurface", "APPLICATION" ], [ "partial differential equation", "APPLICATION" ], [ "body (bar", "DATA" ], [ "object oriented programming", "METHOD" ], [ "particle trace", "DATA" ], [ "container", "DATA" ], [ "numerical algorithm", "METHOD" ], [ "numerical analysis", "METHOD" ], [ "fluid flow", "DATA" ], [ "interactive geometric modeling", "METHOD" ], [ "geometrical problem", "APPLICATION" ], [ "interactive rendering facility", "VISUALIZATION" ], [ "continuum mechanical", "APPLICATION" ], [ "color scaled map", "VISUALIZATION" ], [ "finite element", "DATA" ] ]
Methodical complex of training in programming on AutoLISP language
62,020,847
[ { "first": "Elena", "middle": [], "last": "Alshakova", "suffix": "" } ]
2,013
10.12737/471
Geometry & Graphics
2313649060
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:62020847
null
null
null
null
null
[ [ "AutoLISP language", "METHOD" ] ]
Participatory Design of STEM Education AR Experiences for Heterogeneous Student Groups: Exploring Dimensions of Tangibility, Simulation, and Interaction
1,406,356
In this paper, we present the results of a multi-year participatory design process exploring the space of educational AR experiences for STEM education targeted at students of various ages and abilities. Our participants included teachers, students (ages five to fourteen), educational technology experts, game designers, and HCI researchers. The work was informed by state educational curriculum guidelines. The activities included developing a set of design dimensions which guided our ideation process, iteratively designing, building, and evaluating six prototypes with our stakeholders, and collecting our observations regarding the use of AR STEM applications by target students.
[ { "first": "Ben", "middle": [], "last": "Thompson", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Leavy", "suffix": "" }, { "first": "Amelia", "middle": [], "last": "Lambeth", "suffix": "" }, { "first": "David", "middle": [], "last": "Byrd", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Alcaidinho", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Radu", "suffix": "" }, { "first": "Maribeth", "middle": [], "last": "Gandy", "suffix": "" } ]
2,016
10.1109/ISMAR-Adjunct.2016.0038
2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)
2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)
2584358018
[ "16711565", "62100565", "16266305", "3360834", "9404795", "110331058", "16240497", "53304955", "4081371", "34119949", "9558368", "848504", "23462635", "10473359", "18185007" ]
[ "201067723" ]
true
true
true
https://api.semanticscholar.org/CorpusID:1406356
0
0
0
1
0
[ [ "design dimension", "METHOD" ], [ "educational AR experience", "APPLICATION" ], [ "educational technology expert", "APPLICATION" ], [ "HCI researcher", "APPLICATION" ], [ "STEM education", "APPLICATION" ], [ "game designer", "APPLICATION" ], [ "ideation process", "METHOD" ], [ "state educational curriculum guideline", "APPLICATION" ], [ "participatory design process", "METHOD" ], [ "AR STEM application", "APPLICATION" ] ]
Creating the illusion of motion in 2D images
1,409,944
We introduce a novel concept called perceptually meaningful image editing and present techniques for manipulating the apparent depth of objects and creating the illusion of motion in 2D images. Our techniques combine principles of human visual perception with approaches developed by traditional artists. For our depth manipulation technique, the user loads an image, selects an object and specifies whether the object should appear closer or further away. The system automatically determines luminance or color temperature target values for the object and/or background that achieve the desired depth change. Our approach for creating the illusion of motion exploits the differences between our peripheral vision and our foveal vision by introducing spatial imprecision to the image.
[ { "first": "Reynold", "middle": [], "last": "Bailey", "suffix": "" }, { "first": "Cindy", "middle": [], "last": "Grimm", "suffix": "" } ]
2,006
10.1145/1179622.1179752
SIGGRAPH '06
2559143497,2060993164
[ "72482084" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:1409944
1
0
1
1
0
[ [ "visual perception", "METHOD" ], [ "peripheral vision", "VISUALIZATION" ], [ "color temperature target value", "METHOD" ], [ "2D image", "DATA" ], [ "depth manipulation technique", "METHOD" ], [ "foveal vision", "VISUALIZATION" ], [ "spatial imprecision", "DATA" ], [ "image", "METHOD" ], [ "traditional artist", "METHOD" ], [ "apparent depth", "DATA" ], [ "perceptually meaningful image editing", "METHOD" ], [ "depth change", "EVALUATION" ] ]
Experimentally driven visual language design: texture perception experiments for iconographic displays
19,267,125
Visualization researchers of the Exploratory Visualization (Exvis) project are studying the representation of multidimensional databases as two-dimensional arrays of data-driven icons. Each data point in n-dimensional space is converted into one icon in the display; both visual and auditory features of the icon are determined by the data. The display's texture is produced by packing large numbers of small icons together so densely that they lose their individual identities. The premise of the technology is that interesting features in the visual and auditory texture of an iconographic display will point to interesting features in the data. The technology is in the early stages of formal study. The short-term goal is to provide a workstation that will enable a researcher who is neither a programmer nor a trained experimentalist to design, implement, conduct and analyze human factors experiments for studying the iconographic data-display technique. The long-term goal is to use the information gathered from such experiments to provide a powerful data-representation language for scientists to use for visualizing large data sets.
[ { "first": "M.G.", "middle": [], "last": "Williams", "suffix": "" }, { "first": "S.", "middle": [], "last": "Smith", "suffix": "" }, { "first": "G.", "middle": [], "last": "Pecelli", "suffix": "" } ]
1,989
10.1109/WVL.1989.77043
[Proceedings] 1989 IEEE Workshop on Visual Languages
[Proceedings] 1989 IEEE Workshop on Visual Languages
2116455557
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:19267125
null
null
null
null
null
[ [ "multidimensional database", "DATA" ], [ "data point", "DATA" ], [ "auditory texture", "VISUALIZATION" ], [ "data-representation language", "METHOD" ], [ "two-dimensional array", "DATA" ], [ "formal study", "APPLICATION" ], [ "human factor experiment", "APPLICATION" ], [ "display's texture", "VISUALIZATION" ], [ "iconographic display", "VISUALIZATION" ], [ "n-dimensional space", "DATA" ], [ "iconographic data-display technique", "METHOD" ], [ "visual", "DATA" ] ]
Flow visualization over a thick blunt trailing-edge airfoil with base cavity at low Reynolds numbers using PIV technique
10,470,661
In this study, the effect of cutting the end of a thick airfoil and adding a cavity on its flow pattern is studied experimentally using the PIV technique. First, by cutting 30% chord length of the Riso airfoil, a thick blunt trailing-edge airfoil is generated. The velocity field around the original airfoil and the new airfoil is measured by the PIV technique and compared with each other. Then, adding two parallel plates to the end of the new airfoil forms the desired cavity. Continuous measurement of unsteady flow velocity over the Riso airfoil with thick blunt trailing edge and base cavity is the most important innovation of this research. The results show that cutting off the end of the airfoil decreases the wake region behind the airfoil when separation occurs. Moreover, adding a cavity to the end of the thickened airfoil causes an increase in momentum and a further decrease in the wake behind the trailing edge, which leads to a drag reduction in comparison with the thickened airfoil without a cavity. Furthermore, using the cavity decreases the Strouhal number and vortex shedding frequency.
[ { "first": "Gholamhossein", "middle": [], "last": "Taherian", "suffix": "" }, { "first": "Mahdi", "middle": [], "last": "Nili-Ahmadabadi", "suffix": "" }, { "first": "Mohammad", "middle": [ "Hassan" ], "last": "Karimi", "suffix": "" }, { "first": "Mohammad", "middle": [ "Reza" ], "last": "Tavakoli", "suffix": "" } ]
2,017
PMC5610673
10.1007/s12650-016-0405-3
Journal of visualization
Journal of visualization
2554008749
[ "173172347", "115143504", "108007971", "121038303", "112819542", "123109095", "195258353", "120275580", "122630865", "109956368", "121127177", "120691400", "121016257", "110525820", "122222316", "109666172", "122175491" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:10470661
1
1
1
1
1
[ [ "wake region", "EVALUATION" ], [ "Strouhal number", "EVALUATION" ], [ "cavity", "METHOD" ], [ "PIV technique", "METHOD" ], [ "base cavity", "VISUALIZATION" ], [ "parallel plate", "METHOD" ], [ "flow pattern", "DATA" ], [ "velocity field", "DATA" ], [ "unsteady flow velocity", "DATA" ], [ "thickened airfoil", "METHOD" ], [ "vortex shedding frequency", "EVALUATION" ], [ "chord length", "DATA" ], [ "blunt trailing-edge airfoil", "METHOD" ], [ "drag reduction", "EVALUATION" ], [ "blunt trailing edge", "VISUALIZATION" ] ]
Playability Testing of Web-Based Sport Games with Older Children and Teenagers
36,379,113
Playability occupies a central role in videogame design. Heuristics may help for establishing the game concept, but some testing is essential for ensuring a wide acceptance in the target user population. The experience of designing and testing a set of web-based sport videogames is described, focusing on the heuristics employed and the testing approach. The results show that an emphasis on a simple set of game controls and the introduction of humorous elements has obtained a positive response from older children and teenagers.
[ { "first": "Xavier", "middle": [], "last": "Ferre", "suffix": "" }, { "first": "Angélica", "middle": [ "de" ], "last": "Antonio", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Imbert", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Medinilla", "suffix": "" } ]
2,009
10.1007/978-3-642-02583-9_35
HCI
1505056717
[]
[ "19741995" ]
false
true
true
https://api.semanticscholar.org/CorpusID:36379113
0
0
0
0
0
[ [ "target user population", "EVALUATION" ], [ "videogame design", "APPLICATION" ], [ "web-based sport videogames", "APPLICATION" ], [ "positive response", "EVALUATION" ], [ "wide acceptance", "EVALUATION" ], [ "testing approach", "METHOD" ], [ "game concept", "VISUALIZATION" ] ]
Development of tactile and haptic systems for U.S. infantry navigation and communication
18,318,760
In this paper we discuss plans initiated to develop and evaluate multisensory displays (i.e. visual, haptic, tactile) to support dismounted (i.e., not in vehicle) Soldier movement, communication, and targeting. Human factors studies of an array of military operational roles have shown significant demand for focal visual attention that diminishes the capacity for task-sharing and attention allocation, especially in the context of unexpected changes and events. If other sensory modalities can be effectively used in a military environment, the benefit could be significant in increasing survivability, information flow, and mission achievement. We discuss operational task demands and two efforts supported from a 2010 SBIR (Small Business Innovative Research) topic.
[ { "first": "Linda", "middle": [ "R." ], "last": "Elliott", "suffix": "" }, { "first": "Elmar", "middle": [ "T." ], "last": "Schmeisser", "suffix": "" }, { "first": "Elizabeth", "middle": [ "S." ], "last": "Redden", "suffix": "" } ]
2,011
10.1007/978-3-642-21793-7_45
HCI
34052524
[ "18184806", "67406835", "108247756", "115897726", "110799009", "109177075", "109216978", "107864753", "10084381", "8237955", "107486521", "20012738", "107674718" ]
[ "52982492", "53656131", "6224669", "3806456", "9823743", "214807238", "69883698", "15269409", "20747554" ]
true
true
true
https://api.semanticscholar.org/CorpusID:18318760
0
0
0
1
0
[ [ "military operational role", "APPLICATION" ], [ "Human factor study", "EVALUATION" ], [ "multisensory display", "EVALUATION" ], [ "information flow", "EVALUATION" ], [ "task", "APPLICATION" ], [ "military environment", "APPLICATION" ], [ "Soldier movement, communication, and targeting", "APPLICATION" ], [ "SBIR (Small Business Innovative Research)", "APPLICATION" ], [ "focal visual attention", "EVALUATION" ], [ "operational task demand", "APPLICATION" ], [ "attention allocation", "APPLICATION" ], [ "mission achievement", "EVALUATION" ], [ "sensory modality", "METHOD" ] ]
Automa-Persona: A Process to Extract Knowledge Automatic for Improving Personas
34,369,929
During the development of a product, it is necessary for a designer to attend to the special needs of the devices and also of the target users. To help designers attend to users’ needs, a technique called Personas is applied during the project. Usually, the Personas creation process is manual and lengthy, and it is not maintained throughout the project. With this objective in mind, this paper presents a process to automate Personas creation and to address users’ needs through Personas during the whole project.
[ { "first": "Andrey", "middle": [ "Araujo" ], "last": "Masiero", "suffix": "" }, { "first": "Ricardo", "middle": [ "de", "Carvalho" ], "last": "Destro", "suffix": "" }, { "first": "Otavio", "middle": [ "Alberto" ], "last": "Curioni", "suffix": "" }, { "first": "Plinio", "middle": [ "Thomaz", "Aquino" ], "last": "Junior", "suffix": "" } ]
2,013
10.1007/978-3-642-39473-7_13
HCI
141645548
[]
[ "18818670" ]
false
true
false
https://api.semanticscholar.org/CorpusID:34369929
null
null
null
null
null
[ [ "Personas creation process", "METHOD" ], [ "Personas", "METHOD" ] ]
Code Quality Improvement for All: Automated Refactoring for Scratch
197,640,713
Block-based programming has been overwhelmingly successful in revitalizing introductory computing education and in facilitating end-user development. However, poor code quality makes block-based programs hard to understand, modify, and reuse, thus hurting the educational and productivity effectiveness of blocks. There is great potential benefit in empowering programmers in this domain to systematically improve the code quality of their projects. Refactoring, improving code quality while preserving its semantics, has been widely adopted in traditional software development. In this work, we introduce refactoring to Scratch. We define four new Scratch refactorings: Extract Custom Block, Extract Parent Sprite, Extract Constant, and Reduce Variable Scope. To automate the application of these refactorings, we enhance the Scratch programming environment with powerful program analysis and transformation routines. To evaluate the utility of these refactorings, we apply them to remove the code smells detected in a representative dataset of 448 Scratch projects. We also conduct a between-subjects user study with 24 participants to assess how our refactoring tools impact programmers. Our results show that refactoring improves the subjects’ code quality metrics, while our refactoring tools help motivate programmers to improve code quality.
[ { "first": "Peeratham", "middle": [], "last": "Techapalokul", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Tilevich", "suffix": "" } ]
2,019
10.1109/VLHCC.2019.8818950
2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
2973748760
[ "9390203", "24280317", "27393090", "17303577", "2261562", "5856772", "5551166", "206778272", "5933208", "6492882", "12013393", "2337029", "7932573", "9239423", "24914097", "246522", "17181849", "9116234", "37075993" ]
[ "209496998", "207800100", "209496720", "201711236" ]
true
true
true
https://api.semanticscholar.org/CorpusID:197640713
0
0
0
1
0
[ [ "productivity effectiveness", "EVALUATION" ], [ "Extract Custom Block", "METHOD" ], [ "block-based program", "METHOD" ], [ "code quality", "EVALUATION" ], [ "end-user development", "APPLICATION" ], [ "refactoring", "METHOD" ], [ "quality", "EVALUATION" ], [ "representative dataset", "DATA" ], [ "traditional software development", "APPLICATION" ], [ "Extract Constant", "METHOD" ], [ "introductory computing education", "APPLICATION" ], [ "between-subjects user study", "EVALUATION" ], [ "educational", "EVALUATION" ], [ "code quality metric", "EVALUATION" ], [ "Reduce Variable Scope", "METHOD" ], [ "Extract Parent Sprite", "METHOD" ], [ "poor code quality", "DATA" ], [ "Refactoring", "METHOD" ], [ "refactoring tool", "METHOD" ], [ "code smell", "EVALUATION" ], [ "Scratch programming environment", "METHOD" ], [ "transformation routine", "METHOD" ], [ "Block-based programming", "METHOD" ], [ "program analysis", "METHOD" ] ]
SIGGRAPH Asia 2017 Art Gallery
65,030,844
The SIGGRAPH Asia Art Gallery program will be inviting artists from around the world to showcase their innovative and leading-edge digital contributions on Mind-Body-Machine interaction. With advanced technology, machines today are capable of learning, generating thought-like patterns, supporting physical bodies, problem solving, and discovering new solutions, all while functioning either independently or collectively as part of a larger system. What does it mean when we let our minds wander in a virtual reality world? How do we render our body movements and gestures to allow machines to learn our behaviors? What happens when we trust algorithms to make decisions about what we see, hear, and feel? How can we best utilize the relationship between human and machine "minds and bodies" to enhance human capabilities? The Art Gallery, associated artist talks, and further discussions are expected to stimulate a broader conversation among participants of the SIGGRAPH Asia 2017 conference. The exhibition will highlight innovative Digital Art projects prioritizing the expression of an alternative aesthetic, while employing the rich variety of techniques available to designers and artists who use computer mediation as a part of their creative palette. Focusing on projects using hybrid approaches between physical and digital, between natural and artificial, and between real and synthetic, the exhibition will include a variety of innovative work by artists who merge computation with physical objects, while pushing the boundaries of traditional artistic disciplines. Mediated Aesthetics will present a combination of new media technologies, including algorithms, sensors, networking, augmented reality, biotechnology, and other technologies. The key is the aesthetic investigation.
[ { "first": "Nhung", "middle": [], "last": "Walsh", "suffix": "" }, { "first": "Kriengkrai", "middle": [], "last": "Supornsahusrungsi", "suffix": "" } ]
2,017
SIGGRAPH 2017
2914825045,2768447014
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:65030844
null
null
null
null
null
[ [ "sensor", "METHOD" ], [ "traditional artistic discipline", "APPLICATION" ], [ "Mind-Body-Machine interaction", "APPLICATION" ], [ "augmented reality", "APPLICATION" ], [ "physical body", "APPLICATION" ], [ "virtual reality world", "APPLICATION" ], [ "thought-like pattern", "METHOD" ], [ "physical object", "METHOD" ], [ "biotechnology", "APPLICATION" ], [ "hybrid approach", "METHOD" ], [ "physical", "METHOD" ], [ "problem solving", "APPLICATION" ], [ "networking", "METHOD" ], [ "algorithm", "METHOD" ], [ "Digital Art", "METHOD" ], [ "computer mediation", "METHOD" ], [ "aesthetic investigation", "EVALUATION" ], [ "body movement", "DATA" ] ]
A Low Cost Virtual Reality System for Rehabilitation of Upper Limb
7,872,288
The paper describes ongoing research aimed at creating a low cost virtual reality based system for physical rehabilitation of the upper limb. The system is designed to assist in rehabilitation involving various kinds of limb movement, including precise hand movements and movement of the whole extremity. It can be used at the patient’s home as a telerehabilitation device. It was decided to equip the system with a motion tracking device (Razer Hydra) and two alternative display devices: a head mounted display (Sony HMZ-T1) and an LCD display with stereovision glasses (nVidia 3DVision). Custom software was developed to create the virtual reality environment and perform rehabilitation exercises. Three sample rehabilitation games were created to assess the rehabilitation system. In the preliminary research, the usability of the system was assessed by one patient. He was able to use the system for rehabilitation exercises; however, some problems with the usability of the Sony HMZ-T1 were spotted. During the next stages of the research, extended assessment of the system’s usability and of the system’s efficiency is planned.
[ { "first": "Paweł", "middle": [], "last": "Budziszewski", "suffix": "" } ]
2,013
10.1007/978-3-642-39420-1_4
HCI
94030774
[]
[ "12664254", "17893634", "14278600", "18368307" ]
false
true
false
https://api.semanticscholar.org/CorpusID:7872288
null
null
null
null
null
[ [ "motion tracking", "METHOD" ], [ "rehabilitation exercise", "APPLICATION" ], [ "custom software", "METHOD" ], [ "virtual reality environment", "APPLICATION" ], [ "upper limb", "APPLICATION" ], [ "efficiency", "EVALUATION" ], [ "head mounted display", "METHOD" ], [ "physical rehabilitation", "APPLICATION" ], [ "rehabilitation", "APPLICATION" ], [ "low cost virtual reality based system", "METHOD" ], [ "sample rehabilitation game", "EVALUATION" ], [ "precise hand movement", "METHOD" ], [ "Sony HMZ-T1 usability", "METHOD" ], [ "stereovision glass", "METHOD" ], [ "preliminary research", "EVALUATION" ], [ "extended assessment", "EVALUATION" ], [ "LCD display", "METHOD" ], [ "telerehabilitation device", "METHOD" ], [ "limb movement", "METHOD" ], [ "rehabilitation system", "EVALUATION" ] ]
Static visualization of dynamic data flow visual program execution
9,604,606
We propose 'Trace View', a static visualization method for monitoring and debugging the dynamic behavior of programs written in dataflow visual programming languages. Trace View presents a hierarchical structure of the dataflow between nodes that is created over the execution time of the program. The view also serves as an interface that allows the programmer to select a data stream link when data must be examined during debugging. Moreover since visualization grows in size according to the life time of the program, we have developed techniques to scale the view using a multi-focus focus+context view.
[ { "first": "B.", "middle": [], "last": "Shizuki", "suffix": "" }, { "first": "E.", "middle": [], "last": "Shibayama", "suffix": "" }, { "first": "M.", "middle": [], "last": "Toyoda", "suffix": "" } ]
2,002
10.1109/IV.2002.1028853
Proceedings Sixth International Conference on Information Visualisation
Proceedings Sixth International Conference on Information Visualisation
2132117136
[ "2805603", "648772", "60704897", "1368652", "10136346", "207195531", "35162121", "43464074", "16913476", "60699326", "14015377", "6832560", "16696287", "35024873", "35373915", "10453117" ]
[ "9501992" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9604606
0
0
0
1
0
[ [ "Trace View", "VISUALIZATION" ], [ "data stream link", "DATA" ], [ "dataflow visual programming language", "METHOD" ], [ "dataflow", "DATA" ], [ "hierarchical structure", "METHOD" ], [ "static visualization method", "VISUALIZATION" ], [ "multi-focus focus+context view", "VISUALIZATION" ] ]
Importance of binaural cues of depth in low-resolution audio-visual 3D scene reproductions
58,821,873
In spite of their clear audibility, auditory depth cues have been shown to add generally imprecise information to a 3D scene description. We hypothesize that, conversely, this information becomes salient when a scene is reproduced with low visual resolution. For this purpose, a system has been realized by assembling inexpensive audio-visual reproduction technologies together. The system forms a 3D visual scene from two screen images that are polarized orthogonally before reaching the observer, who wears polarized glasses. In parallel, two small loudspeakers are arranged in stereo dipole configuration to create a binaural hot-spot using a cross-talk cancellation solution. Sounds and images are recorded from a real scene using a stereo camera and a pair of microphones, mounted together to capture average anthropometric inter-eye and inter-aural distances. Based on this system, we have measured that the use of binaural instead of monophonic feedback significantly improves the precision of participants who were asked to guess the time-to-passage of a ball rolling down toward them along a rectilinear trajectory. Preliminary results suggest that the binaural rolling sounds coming from the ball approaching the listener were proficiently employed by participants to improve their guess.
[ { "first": "Daniele", "middle": [], "last": "Salvati", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Drioli", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Fontana", "suffix": "" }, { "first": "Gian", "middle": [ "Luca" ], "last": "Foresti", "suffix": "" } ]
2,018
10.1109/sive.2018.8577121
2018 IEEE 4th VR Workshop on Sonic Interactions for Virtual Environments (SIVE)
2018 IEEE 4th VR Workshop on Sonic Interactions for Virtual Environments (SIVE)
2904074059
[ "117099185", "62736621", "17697822", "122673451", "605849", "14137811", "4030810", "61044442", "20242459", "215690557", "9852836", "215132364", "18773606", "35428876", "19034665", "22998238", "11029458", "15616921", "16975607", "14543835", "10072958", "401258", "16686221", "123368698" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:58821873
0
0
0
1
0
[ [ "Sound", "DATA" ], [ "3D visual scene", "VISUALIZATION" ], [ "auditory depth cue", "DATA" ], [ "monophonic feedback", "METHOD" ], [ "rectilinear trajectory", "VISUALIZATION" ], [ "clear audibility", "EVALUATION" ], [ "3D scene description", "APPLICATION" ], [ "imprecise information", "DATA" ], [ "binaural hot-spot", "METHOD" ], [ "binaural", "METHOD" ], [ "stereo camera", "METHOD" ], [ "inter-aural distance", "DATA" ], [ "screen image", "DATA" ], [ "image", "DATA" ], [ "binaural rolling sound", "DATA" ], [ "cross-talk cancellation solution", "METHOD" ], [ "polarized glass", "VISUALIZATION" ], [ "anthropometric inter-eye", "DATA" ], [ "visual resolution", "DATA" ], [ "audio-visual reproduction technology", "METHOD" ], [ "stereo dipole configuration", "METHOD" ] ]
Fast image auto-annotation with discretized feature distance measures
60,599,756
A new model for the image auto-annotation task is presented. The model can be classified as a fast image auto-annotation one. The main idea behind the model is to avoid various problems with feature space clustering. Both the image segmentation and the auto-annotation process do not use any clustering algorithms. The method presented here simulates continuous feature space analysis with very dense discretization. The paper presents the new approach and discusses the results achieved with it.
[ { "first": "Halina", "middle": [], "last": "Kwasnicka", "suffix": "" }, { "first": "Mariusz", "middle": [], "last": "Paradowski", "suffix": "" } ]
2,006
Machine Graphics & Vision International Journal archive
1560557536
[]
[ "666284" ]
false
true
false
https://api.semanticscholar.org/CorpusID:60599756
null
null
null
null
null
[ [ "image auto-annotation task", "APPLICATION" ], [ "image segmentation", "METHOD" ], [ "feature space clustering", "METHOD" ], [ "fast image auto-annotation", "METHOD" ], [ "dense discretization", "VISUALIZATION" ], [ "auto-annotation process", "METHOD" ], [ "clustering algorithm", "METHOD" ], [ "continuous feature space analysis", "APPLICATION" ] ]
Evaluating the Impact of Interface Agents in an Intelligent Tutoring Systems Authoring Tool
11,094,462
[ { "first": "Maria", "middle": [], "last": "Moundridou", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Virvou", "suffix": "" } ]
2,001
ADVANCES IN HUMAN-COMPUTER INTERACTION I: PROCEEDINGS OF THE PANHELLENIC CONFERENCE WITH INTERNATIONAL PARTICIPATION IN HUMAN-COMPUTER INTERACTION – PC-HCI 2001. TYPORAMA PUBLICATIONS, PATRAS GREECE
1584560082
[ "15004892", "7470697", "143196551", "16652542", "17897603", "11927187", "5596607", "58706942", "40411545", "60511983", "7469323", "144718175" ]
[ "17518330", "145268900", "5842537", "8898217", "16115951" ]
true
true
true
https://api.semanticscholar.org/CorpusID:11094462
0
0
0
1
0
[ [ "Intelligent Tutoring Systems Authoring Tool", "APPLICATION" ], [ "Interface Agents", "METHOD" ] ]
Pose Estimation for Augmented Reality: A Hands-On Survey
9,978,124
Augmented reality (AR) allows to seamlessly insert virtual objects in an image sequence. In order to accomplish this goal, it is important that synthetic elements are rendered and aligned in the scene in an accurate and visually acceptable way. The solution of this problem can be related to a pose estimation or, equivalently, a camera localization process. This paper aims at presenting a brief but almost self-contented introduction to the most important approaches dedicated to vision-based camera localization along with a survey of several extension proposed in the recent years. For most of the presented approaches, we also provide links to code of short examples. This should allow readers to easily bridge the gap between theoretical aspects and practical implementations.
[ { "first": "Eric", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Hideaki", "middle": [], "last": "Uchiyama", "suffix": "" }, { "first": "Fabien", "middle": [], "last": "Spindler", "suffix": "" } ]
2,016
10.1109/TVCG.2015.2513408
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Visualization and Computer Graphics
2344474200
[ "3346458", "5720491", "8488231", "9274709", "61166326", "2265717", "469744", "791733", "12239571", "186689463", "14625489", "14777911", "15411852", "10890290", "8931834", "21874346", "18441340", "33020803", "11435411", "10381017", "36225362", "13428328", "7581189", "7561663", "7085375", "14399225", "14501637", "16007874", "15436217", "6512328", "14461661", "14547347", "7110290", "15109323", "207116216", "5746274", "2480009", "972888", "7576794", "15869446", "11630218", "34669177", "11616840", "1694378", "54124999", "11038004", "7227469", "406532", "3345516", "8192877", "2163263", "7278659", "206986664", "13136001", "2578629", "15206697", "6040631", "5288651", "206373609", "15516390", "51768777", "12980089", "207252029", "9986383", "1211102", "10336140", "723210", "130535382", "18882836", "2102663", "2121536", "1143677", "16054007", "916303", "14814716", "41908779", "15850848", "17737459", "206765442", "14873793", "8518617", "5057778", "11830123", "1336659", "886598", "8701488", "5450032", "7806207", "5868724", "8464569", "15826312", "675903", "2599290", "60587248", "1417108", "14315609", "17186165", "4824129", "528077", "206764370", "33519033", "2351253", "206769866", "15022990", "520807", "1453808", "2630612", "6862780", "17803873", "778478", "61448848", "15146949", "1401338", "13260334", "5816992", "12871358", "15033310", "151561", "6396561", "1436159", "109948119", "12249543", "8120452", "997548", "1354186", "6308862", "56381054", "15706709", "6232366", "1251314", "15259011", "1458265", "14796703", "5541141", "14966142", "10983626", "3963441", "1150626", "10025978", "206986719" ]
[ "52022677", "904967", "52952494", "199004759", "197451189", "211011324", "3245180", "52305762", "7739427", "209405048", "199064705", "208032547", "201834176", "56042303", "20653123", "209897822", "3271081", "4345995", "4700055", "52276678", "14534177", "125792322", "44127412", "125453245", "67702822", "209415027", "102492594", "54434498", "129944204", "202789797", "25999079", "55702295", "4707564", "46954766", "214641094", "49413387", "139108153", "1274690", "35581281", "67886192", "52148626", "214775437", "85500436", "4406194", "211684999", "140095727", "204820991", "204978455", "46978662", "53011330", "71147261", "57378948", "208017130", "58006460", "29298889", "21468773", "3565192", "53086295", "210164365", "38874755", "209383155", "49564022", "84840630", "202626342", "211553980", "115947379", "3713040", "25141062", "53778213", "13858991", "211731995", "7997646", "13662784", "12604119", "199540590", "3477195", "206444420", "209495721", "207968203", "203612657", "156053407", "33225819", "80628423", "206987417", "3183009", "51598922", "212647327", "207853339", "207825618", "4325253", "209500788", "201830604", "3488425", "202660862", "2658289", "207324261", "198168971", "204819800", "130762", "53427438", "7255977", "210694543", "28755600", "54445944", "169037760", "160020360", "67770143", "19150683", "11179827", "155107061", "51614220", "51983009", "208194845", "27535833", "202777330", "29723466", "57756396", "202666003", "57753918", "52153565", "53013176", "3208054", "91187736", "9137364", "3828051", "204817071", "1180756", "204793485", "52978055", "38778370" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9978124
1
1
1
1
1
[ [ "camera localization process", "APPLICATION" ], [ "synthetic element", "DATA" ], [ "vision-based camera localization", "APPLICATION" ], [ "virtual object", "DATA" ], [ "practical implementation", "EVALUATION" ], [ "Augmented reality (AR)", "APPLICATION" ], [ "image sequence", "DATA" ], [ "pose estimation", "APPLICATION" ] ]
Geolocation Search with SharePoint Fast Search Feature and A (star) Search Algorithm
196,810,416
This paper presents a review of a geolocation finding mechanism based on SharePoint Fast Search and the A* search algorithm. As a part of the SharePoint Fast Search, the authors compare two algorithms, Euclidean distance and Taxicab geometry distance, in order to find a geolocation based on the shortest path. Throughout the paper, the authors highlight the use of each individual algorithm (Euclidean distance, Taxicab geometry distance, and A* search algorithm) in terms of finding the shortest path.
[ { "first": "H.", "middle": [ "Chathushka", "Dilhan" ], "last": "Hettipathirana", "suffix": "" }, { "first": "Thameera", "middle": [ "Viraj" ], "last": "Ariyapala", "suffix": "" } ]
2,019
10.1007/978-3-030-21817-1_22
HCI
2960301026
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196810416
null
null
null
null
null
[ [ "A* search algorithm", "METHOD" ], [ "SharePoint Fast search", "APPLICATION" ], [ "geolocation finding mechanism", "METHOD" ], [ "Euclidean distance", "METHOD" ], [ "SharePoint fast search", "METHOD" ], [ "Taxicab geometry distance", "METHOD" ] ]
Personal Data Broker: A Solution to Assure Data Privacy in EdTech
196,810,534
Educational technologies (Edtech) collect private and personal data from students. This is a growing trend in both new and already available Edtech. There are different stakeholders in the analysis of the collected students’ data. Teachers use educational analytics to enhance the learning environment, principals use academic analytics for decision making in the leadership of the educational institution, and Edtech providers use students’ data interactions to improve their services and tools. There are some issues in this new context. Edtech have been feeding their analytical algorithms with students’ data, both private and personal, even from minors. This raises a critical problem of data privacy fragility in Edtech. Moreover, this is a sensitive issue that generates fears and angst in the use of educational data analytics in Edtech, such as learning management systems (LMS). Current laws, regulations, policies, principles and good practices are not enough to prevent private data leakage, security breaches, misuses or trading. For instance, data privacy agreements in LMS are deterrent but not an ultimate solution, as they do not act in real time. There is a need for automated real-time law enforcement to avoid the fragility of data privacy. In this work, we take a step further in the automation of data privacy agreements in LMS. We expose which technology and architecture are suitable for data privacy agreement automation, a partial implementation of the design in Moodle and ongoing work.
[ { "first": "Daniel", "middle": [], "last": "Amo", "suffix": "" }, { "first": "David", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Alier", "suffix": "" }, { "first": "Francisco", "middle": [ "José" ], "last": "García-Peñalvo", "suffix": "" }, { "first": "María", "middle": [ "José" ], "last": "Casañ", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Alsina", "suffix": "" } ]
2,019
10.1007/978-3-030-21814-0_1
HCI
2957675340
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:196810534
null
null
null
null
null
[ [ "real-time law enforcement", "APPLICATION" ], [ "educational analytics", "METHOD" ], [ "educational technology (Edtech)", "METHOD" ], [ "data privacy agreement", "METHOD" ], [ "collected students’ data", "DATA" ], [ "analytical algorithm", "METHOD" ], [ "real time", "EVALUATION" ], [ "data privacy agreement automation", "APPLICATION" ], [ "private data leakage", "APPLICATION" ], [ "data privacy fragility", "APPLICATION" ], [ "private and personal data", "DATA" ], [ "student’s data", "DATA" ], [ "security breach", "APPLICATION" ], [ "interaction", "METHOD" ], [ "learning management system (LMS)", "APPLICATION" ], [ "Edtech", "APPLICATION" ], [ "academic analytics", "METHOD" ], [ "data privacy", "DATA" ], [ "educational data analytics", "METHOD" ] ]
Virtual Environment in Construction and Maintenance of Buildings
113,641,198
This paper describes two prototype applications based on Virtual Reality (VR) technology for use in construction and maintenance planning of buildings. The first, applied to construction, is an interactive virtual model designed to present plans three-dimensionally (3D), connecting them to construction planning schedules, resulting in a valuable asset to the monitoring of the development of construction activity. The 4D application considers the time factor showing the 3D geometry of the different steps of the construction activity, according to the plan established for the construction. A second VR model was created in order to help in the maintenance of exterior closures of walls in a building. It allows the visual and interactive transmission of information related to the physical behavior of the elements. To this end, the basic knowledge of material most often used in facades, anomaly surveillance, techniques of rehabilitation, and inspection planning were studied. This information was included in a database that supports the periodic inspection needed in a program of preventive maintenance. This work brings an innovative contribution to the field of construction and maintenance supported by emergent technology
[ { "first": "A.Z.", "middle": [], "last": "Sampaio", "suffix": "" }, { "first": "A.R.", "middle": [], "last": "Gomes", "suffix": "" }, { "first": "A. M.", "middle": [], "last": "Gomes", "suffix": "" }, { "first": "J. P.", "middle": [], "last": "Santos", "suffix": "" }, { "first": "D.", "middle": [], "last": "Rosário", "suffix": "" } ]
2,012
10.20870/ijvr.2012.11.2.2843
International Journal of Virtual Reality
2981763422,2613504719
[ "111639397", "110316201", "204163754", "112321224" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:113641198
1
1
1
1
1
[ [ "exterior closure", "DATA" ], [ "VR model", "METHOD" ], [ "interactive transmission", "METHOD" ], [ "interactive virtual model", "METHOD" ], [ "4D application", "APPLICATION" ], [ "preventive maintenance", "APPLICATION" ], [ "anomaly surveillance", "APPLICATION" ], [ "emergent technology", "METHOD" ], [ "rehabilitation", "METHOD" ], [ "three-dimensional", "DATA" ], [ "construction activity", "APPLICATION" ], [ "Virtual Reality (VR) technology", "METHOD" ], [ "construction and maintenance planning of building", "APPLICATION" ], [ "3D geometry", "DATA" ], [ "construction and maintenance", "APPLICATION" ], [ "construction planning schedule", "DATA" ], [ "inspection planning", "APPLICATION" ], [ "physical behavior", "DATA" ], [ "periodic inspection", "METHOD" ] ]
A case study based approach to knowledge visualization
6,369,168
Case studies are proposed as a research method on knowledge visualization that can deal with the multidisciplinarity, the large variety of research targets and the complex correlations of this type of information visualization utilized for supporting tasks of knowledge management. A suitable case structure is presented that documents the analyzed cases and allows for a comparative analysis of multiple cases. To be able to systematically evaluate and compare the applied visualization techniques a set of evaluation criteria is introduced.
[ { "first": "M.", "middle": [], "last": "Zeiller", "suffix": "" } ]
2,005
10.1109/IV.2005.5
Ninth International Conference on Information Visualisation (IV'05)
Ninth International Conference on Information Visualisation (IV'05)
2109646403
[ "59732190", "5462746", "2281975", "18104312", "108115813", "59216753", "108988822", "60484344", "8802898", "11062713", "2362307", "45965614" ]
[ "17234043", "6344744", "58953629" ]
true
true
true
https://api.semanticscholar.org/CorpusID:6369168
0
0
0
1
0
[ [ "evaluation criterion", "EVALUATION" ], [ "information visualization", "VISUALIZATION" ], [ "knowledge visualization", "VISUALIZATION" ], [ "comparative analysis", "APPLICATION" ], [ "knowledge management", "APPLICATION" ], [ "visualization technique", "VISUALIZATION" ], [ "Case study", "METHOD" ], [ "case structure", "METHOD" ] ]
Managing Complex Augmented Reality Models
14,743,086
Mobile augmented reality requires georeferenced data to present world-registered overlays. To cover a wide area and all artifacts and activities, a database containing this information must be created, stored, maintained, delivered, and finally used by the client application. We present a data model and a family of techniques to address these needs.
[ { "first": "D.", "middle": [], "last": "Schmalstieg", "suffix": "" }, { "first": "G.", "middle": [], "last": "Schall", "suffix": "" }, { "first": "D.", "middle": [], "last": "Wagner", "suffix": "" }, { "first": "I.", "middle": [], "last": "Barakonyi", "suffix": "" }, { "first": "G.", "middle": [], "last": "Reitmayr", "suffix": "" }, { "first": "J.", "middle": [], "last": "Newman", "suffix": "" }, { "first": "F.", "middle": [], "last": "Ledermann", "suffix": "" } ]
2,007
10.1109/MCG.2007.85
IEEE Computer Graphics and Applications
IEEE Computer Graphics and Applications
2104609283
[ "16769759", "16175173", "15169023", "57755", "64380", "13388041", "10252141", "209071408", "3980181", "2234546", "2234398", "15733091", "7766493", "13510240", "15068033", "347626" ]
[ "8064253", "207032898", "2187992", "39125151", "14955462", "61644058", "7472919", "2109861", "3353414", "4677600", "16674462", "53910026", "14433834", "7131717", "16094025", "12111852", "30095441", "11636463", "12949710", "86737963", "1119376", "8509995", "17394073" ]
true
true
true
https://api.semanticscholar.org/CorpusID:14743086
0
0
0
1
0
[ [ "Mobile augmented reality", "APPLICATION" ], [ "world-registered overlay", "VISUALIZATION" ], [ "georeferenced data", "DATA" ], [ "client application", "APPLICATION" ], [ "data model", "METHOD" ] ]
An Implementation Method of Virtual Environment Physical Properties
61,622,679
[ { "first": "Chang", "middle": [ "Hyuck" ], "last": "Im", "suffix": "" }, { "first": "Min-Geun", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "이명원", "suffix": "" } ]
2,007
10.15701/kcgs.2007.13.1.25
Journal of the Korea Computer Graphics Society
Journal of the Korea Computer Graphics Society
2321184335
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:61622679
null
null
null
null
null
[ [ "Implementation Method", "METHOD" ] ]
A self-calibrated photo-geometric depth camera
46,961,796
Compared with geometric stereo vision based on triangulation principle, photometric stereo method has advantages in recovering per-pixel surface details. In this paper, we present a practical 3D imaging system by combining the near-light photometric stereo and the speckle-based stereo matching method. The system is compact in structure and suitable for multi-albedo targets. The parameters (including position and intensity) of the light sources can be self-calibrated. To realize the auto-calibration, we first use the distant lighting model to estimate the initial surface albedo map, and then with the estimated albedo map and the normal vector field fixed, the parameters of the near lighting model are optimized. Next, with the optimized lighting model, we use the near-light photometric stereo method to re-compute the surface normal and fuse it with the coarse depth map from stereo vision to achieve high-quality depth map. Experimental results show that our system can realize high-quality reconstruction in general indoor environments.
[ { "first": "Liang", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Yuhua", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaohu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Chenpeng", "middle": [], "last": "Tong", "suffix": "" }, { "first": "Boxin", "middle": [], "last": "Shi", "suffix": "" } ]
2,018
10.1007/s00371-018-1507-9
The Visual Computer
The Visual Computer
2803111283
[ "14864047", "14098379", "9902463", "11830123", "10879603", "16115675", "17416189", "2402326", "634437", "206769656", "15578614", "2151293", "14312416", "6076276", "13297536", "211728480", "14308539", "34799453" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:46961796
0
0
0
1
0
[ [ "triangulation principle", "METHOD" ], [ "stereo vision", "VISUALIZATION" ], [ "high-quality depth map", "VISUALIZATION" ], [ "geometric stereo vision", "METHOD" ], [ "light source", "DATA" ], [ "Experimental result", "EVALUATION" ], [ "speckle-based stereo matching method", "METHOD" ], [ "near lighting model", "METHOD" ], [ "near-light photometric stereo", "METHOD" ], [ "coarse depth map", "VISUALIZATION" ], [ "optimized lighting model", "METHOD" ], [ "per-pixel surface detail", "DATA" ], [ "3D imaging system", "APPLICATION" ], [ "normal vector field", "DATA" ], [ "surface albedo map", "DATA" ], [ "near-light photometric stereo method", "METHOD" ], [ "indoor environment", "APPLICATION" ], [ "photometric stereo method", "METHOD" ], [ "estimated albedo map", "DATA" ], [ "high-quality reconstruction", "EVALUATION" ], [ "multi-albedo target", "APPLICATION" ], [ "surface normal", "DATA" ], [ "distant lighting model", "METHOD" ], [ "auto-calibration", "METHOD" ] ]
Building and Applying a Human Cognition Model for Visual Analytics
18,777,752
It is well known that visual analytics addresses the difficulty of evaluating and processing large quantities of information. Less often discussed are the increasingly complex analytic and reasoning processes that must be applied in order to accomplish that goal. Success of the visual analytics approach will require us to develop new visualization models that predict how computational processes might facilitate human insight and guide the flow of human reasoning. In this paper, we seek to advance visualization methods by proposing a framework for human ‘higher cognition’ that extends more familiar perceptual models. Based on this approach, we suggest guidelines for the development of visual interfaces that better integrate complementary capabilities of humans and computers. Although many of these recommendations are novel, some can be found in existing visual analytics applications. In the latter case, much of the value of our contribution lies in the deeper rationale that the model provides for those principles. Lastly, we assess these visual analytics guidelines through the evaluation of several visualization examples.
[ { "first": "Tera", "middle": [ "Marie" ], "last": "Green", "suffix": "" }, { "first": "William", "middle": [], "last": "Ribarsky", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Fisher", "suffix": "" } ]
2,009
10.1057/ivs.2008.28
Information Visualization
[ "11258003", "145428925", "141412997", "8325660", "1869081", "8054340", "14683421", "25346537", "15654531", "19237642", "2738870", "1271290", "145291217", "7846513", "197657563", "11774793", "240915", "61206375", "449999", "3249745", "12637249", "17342771", "989354", "11062839", "6863019", "32735751", "37594527" ]
[ "44066757", "17645648", "18770326", "2160325", "34941250", "3004406", "14409400", "15939822", "3500848", "9996594", "16343021", "2536945", "59217", "52194243", "63762512", "52057648", "5740013", "1934174", "12223538", "52914484", "213163682", "5842437", "9417236", "16026991", "599998", "7341582", "2422896", "6734373", "5646011", "17237374", "5755985", "14127587", "11460239", "15222179", "51916561", "13367074", "18900054", "46949549", "9774322", "15839041", "11622826", "171085324", "86392745", "11163867", "1198623" ]
true
true
true
https://api.semanticscholar.org/CorpusID:18777752
1
1
1
1
1
[ [ "visualization method", "METHOD" ], [ "visual analytics guideline", "EVALUATION" ], [ "human ‘higher cognition’", "APPLICATION" ], [ "visual analytics application", "APPLICATION" ], [ "analytic and reasoning process", "METHOD" ], [ "visualization model", "VISUALIZATION" ], [ "deeper rationale", "METHOD" ], [ "visual analytics approach", "METHOD" ], [ "visual analytics", "VISUALIZATION" ], [ "perceptual model", "METHOD" ], [ "visual interface", "VISUALIZATION" ], [ "computational process", "METHOD" ] ]
Eurographics 86 tutorials: August 25–26 1986
61,686,458
[ { "first": "Bob", "middle": [], "last": "Hopgood", "suffix": "" } ]
1,986
10.1016/0097-8493(86)90072-5
Computers & Graphics
2317665848,2914691159,2766270335,2766780471
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:61686458
null
null
null
null
null
[]
ARVIKA-augmented reality for development, production and service
2,018,341
Augmented reality (AR) is a form of human-machine interaction where information is presented in the field of view of an individual. ARVIKA, funded by the German Ministry of Education and Research, develops this technology and applications in the fields of development, production, and service in the automotive and aerospace industries, for power and processing plants and for machine tools and production machinery. Up to now, AR has only been a subject of individual research projects and a small number of application-specific industrial projects on a global scale. The current state of the art and the available appliances do not yet permit a product-oriented application of the technology. However, AR enables a new, innovative form of human-machine interaction that not only places the individual in the center of the industrial workflow, but also offers a high potential for process and quality improvements in production and process workflows. ARVIKA is primarily designed to implement an augmented reality system for mobile use in industrial applications. The report presents the milestones that have been achieved after a project duration of a full three years.
[ { "first": "W.", "middle": [], "last": "Friedrich", "suffix": "" } ]
2,002
10.1109/ISMAR.2002.1115059
Proceedings. International Symposium on Mixed and Augmented Reality
Proceedings. International Symposium on Mixed and Augmented Reality
2139361863
[]
[ "167210863", "167210863", "10975063", "13528059", "3666389", "17875271", "62811373", "18760367", "2311355", "18452561", "171089191", "1987727", "15504076", "5383441", "5383441", "12519167", "18009347", "145942770", "18831274", "17024419", "28435143", "4625011", "54808829", "54108177", "2889397", "49524799", "6510536", "10949513", "1252382", "3126812", "1629179", "26653725", "36363746", "12651073", "173984058", "14098159", "115908190", "12473497", "2824524", "14976540", "25937769", "8244941", "13287017", "18528533", "34581568", "3187912", "45503333", "14728654", "12789857", "30425335", "7638228", "15927045", "106980343", "212628526", "3805855", "3805855", "24022794", "12632414", "210956959", "9118512", "18606470", "53032780", "16035796", "46935064", "46935064", "2135480", "43005929", "14427712", "174820468", "38622332", "12667871", "202728796", "1295097", "11528701", "201599327", "14966142", "40519759", "115646573", "12571138", "173171119", "8881591", "62161958", "2695096", "109858464", "2561119", "15566782", "17137190", "17614223", "1560734", "10166651", "17129223", "59206838", "140060032", "14466778", "12883434", "2654136" ]
false
true
true
https://api.semanticscholar.org/CorpusID:2018341
0
0
0
0
0
[ [ "development, production, and service", "APPLICATION" ], [ "industrial application", "APPLICATION" ], [ "augmented reality system", "APPLICATION" ], [ "industrial project", "APPLICATION" ], [ "machine tool and production machinery", "APPLICATION" ], [ "automotive and aerospace industry", "APPLICATION" ], [ "mobile use", "APPLICATION" ], [ "human-machine interaction", "METHOD" ], [ "production and process workflow", "APPLICATION" ], [ "process and quality improvement", "APPLICATION" ], [ "product-oriented application", "APPLICATION" ], [ "power and processing plant", "APPLICATION" ], [ "industrial workflow", "APPLICATION" ], [ "Augmented reality (AR)", "APPLICATION" ] ]
Immersive singularity-free full-body interactions with reduced marker set
19,431,815
Despite the large success of games grounded on movement-based interactions, the current state of full-body motion capture technologies still prevents the exploitation of precise interactions with complex environments. The first key requirement in the line of work we present here is to ensure a precise spatial correspondence between the user and the avatar. For that purpose, we build upon our past effort in human postural control with a prioritized inverse kinematics (PIK) framework. One of its key advantages is to ease the dynamic- and priority-based combination of multiple conflicting constraints, such as ensuring balance and reaching a goal. However, its reliance on a linearized approximation of the problem makes it vulnerable to the well-known full-extension singularity of the limbs. We address this issue by presenting a new type of 1D analytic constraint that integrates smoothly within the PIK framework under the name of FLEXT constraint (for FLexion-EXTension constraint). We further ease full-body interaction by combining this new constraint with a recently introduced motion constraint that exploits the data-based synergy of full-body reach motions. The combination of both techniques allows immersive full-body interactions with a small set of active optical markers. Copyright © 2010 John Wiley & Sons, Ltd.
[ { "first": "Daniel", "middle": [], "last": "Raunhardt", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Boulic", "suffix": "" } ]
2,011
10.1002/cav.378
Journal of Visualization and Computer Animation
Journal of Visualization and Computer Animation
1989452346
[]
[ "73647694", "57904077", "30903420", "149830289", "9817112", "1441144" ]
false
true
false
https://api.semanticscholar.org/CorpusID:19431815
null
null
null
null
null
[ [ "dynamic- and priority-based combination", "METHOD" ], [ "immersive full-body interaction", "APPLICATION" ], [ "linearized approximation", "METHOD" ], [ "movement-based interaction", "METHOD" ], [ "full-body interaction", "APPLICATION" ], [ "optical marker", "METHOD" ], [ "motion constraint", "METHOD" ], [ "PIK framework", "METHOD" ], [ "full-body motion capture technology", "METHOD" ], [ "constraint", "METHOD" ], [ "human postural control", "APPLICATION" ], [ "full-body reach motion", "METHOD" ], [ "spatial correspondence", "METHOD" ], [ "full extension singularity", "METHOD" ], [ "prioritized inverse kinematics (PIK) framework", "METHOD" ], [ "FLexion-EXTension constraint", "METHOD" ], [ "FLEXT constraint", "METHOD" ], [ "data-based synergy", "METHOD" ], [ "1D analytic constraint", "METHOD" ] ]
Virtual camera planning: A survey
9,658,078
Modelling, animation and rendering have dominated research in computer graphics, yielding increasingly rich and realistic virtual worlds. The complexity, richness and quality of these virtual worlds are viewed through a single medium: the virtual camera. In order to properly convey information, whether related to the characters in a scene, the aesthetics of the composition or the emotional impact of the lighting, particular attention must be given to how the camera is positioned and moved. This paper presents an overview of automated camera planning techniques. After analyzing the requirements with respect to shot properties, we review the solution techniques and present a broad classification of existing approaches. We identify the principal shortcomings of existing techniques and propose a set of objectives for research into automated camera planning.
[ { "first": "Marc", "middle": [], "last": "Christie", "suffix": "" }, { "first": "Rumesh", "middle": [], "last": "Machap", "suffix": "" }, { "first": "Jean-Marie", "middle": [], "last": "Normand", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Olivier", "suffix": "" }, { "first": "Jonathan", "middle": [ "H." ], "last": "Pickering", "suffix": "" } ]
2,005
10.1007/11536482_4
In Proceedings Smart Graphics
2584544059,1718072396
[ "59876674", "2537925", "2564787", "8846857", "14013696", "5760763", "42306166", "7033707", "11974693", "14846006", "417992", "2114752", "8191337", "14360123", "1738577", "17367951", "60899274", "20194983", "60775040", "13250513", "32613224", "13420005", "10092555", "2265929", "36354488", "5532829", "11687612", "2126257", "16951465", "1270148" ]
[ "11633444", "2597775", "7263727", "34031454", "14954395", "14954395", "13593645", "8558992", "9818861", "13528272", "19279439", "16566494", "1828914", "15156967", "18840653", "15167462", "14604084", "17516594", "17054789", "10134064", "10076558", "10432605", "10239125", "2333052", "15215714", "17633963", "18427372", "17014254", "17492151", "15426130", "1167938", "9282567", "1451140", "14317554", "12546491", "6505111", "3941897", "204793180", "10219075", "18019440", "2952308", "12429583", "27329640", "10226568", "59695073", "14598619", "12088779", "49574489", "16589714" ]
true
true
true
https://api.semanticscholar.org/CorpusID:9658078
0
0
0
1
0
[ [ "solution technique", "METHOD" ], [ "computer graphic", "APPLICATION" ], [ "automated camera planning", "APPLICATION" ], [ "Modelling, animation and rendering", "METHOD" ], [ "automated camera planning technique", "METHOD" ], [ "shot property", "DATA" ], [ "emotional impact", "EVALUATION" ], [ "virtual world", "VISUALIZATION" ], [ "broad classification", "METHOD" ], [ "virtual camera", "METHOD" ] ]
CT (city tomography)
11,587,497
CT is a project to reconstruct an existing urban space as a 3D information city on the web. A visitor can browse the city with "building wall browsers" and communicate with other visitors.
[ { "first": "Fumio", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "Akira", "middle": [], "last": "Wakita", "suffix": "" } ]
2,002
10.1145/1242073.1242175
SIGGRAPH '02
256796211,2024957253
[ "15053025" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:11587497
0
0
0
1
0
[ [ "urban space", "DATA" ], [ "building wall browser", "METHOD" ], [ "3D information city", "DATA" ] ]
System for Measuring Teacher–Student Communication in the Classroom Using Smartphone Accelerometer Sensors
20,591,120
The quality of communication between a teacher and students is deeply related to the cultivation of students’ motivation, autonomy, and creativity in university education. It is important to evaluate such communication and improve it to enhance faculty development. In this study, a system for measuring this communication was developed. To implement the system, an application that measures students’ body movements using the acceleration sensor of a smartphone was developed, together with a server-side web system that visualizes the measured data. Using this measurement system, the communication in a seminar of a university laboratory was measured. The results show that the activities of a presenter and audience can be clearly detected from the raw and frequency-analyzed accelerometer data. Moreover, the correlation between the sonograms of the presenter and of the audience members became stronger when they had a constructive discussion. These results suggest that the synchronization between a presenter and the audience is related to their level of rapport.
[ { "first": "Naoyoshi", "middle": [], "last": "Harada", "suffix": "" }, { "first": "Masatoshi", "middle": [], "last": "Kimura", "suffix": "" }, { "first": "Tomohito", "middle": [], "last": "Yamamoto", "suffix": "" }, { "first": "Yoshihiro", "middle": [], "last": "Miyake", "suffix": "" } ]
2,017
10.1007/978-3-319-58077-7_24
HCI
2613229033
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:20591120
null
null
null
null
null
[ [ "acceleration sensor", "METHOD" ], [ "constructive discussion", "APPLICATION" ], [ "measured data", "DATA" ], [ "faculty development", "APPLICATION" ], [ "university education", "APPLICATION" ], [ "server-side web system", "METHOD" ], [ "body movement", "DATA" ], [ "raw and frequency-analyzed accelerometer data", "DATA" ], [ "measurement system", "METHOD" ] ]
Designing Interactions for Guided Inquiry Learning Environments.
36,677,548
[ { "first": "Noel", "middle": [], "last": "Enyedy", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Vahey", "suffix": "" }, { "first": "Bernard", "middle": [ "R." ], "last": "Gifford", "suffix": "" } ]
1,997
HCI
156606387
[]
[ "16774154" ]
false
true
false
https://api.semanticscholar.org/CorpusID:36677548
null
null
null
null
null
[ [ "Guided Inquiry Learning Environments", "APPLICATION" ] ]
Designing for people who do not read easily
1,799,537
[ { "first": "Caroline", "middle": [], "last": "Jarrett", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Grant", "suffix": "" }, { "first": "B.", "middle": [ "L.", "William" ], "last": "Wong", "suffix": "" }, { "first": "Neesha", "middle": [], "last": "Kodagoda", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Summers", "suffix": "" } ]
2,008
10.1145/1531826.1531889
BCS HCI
[]
[ "256617", "8558628", "114237844" ]
false
true
true
https://api.semanticscholar.org/CorpusID:1799537
0
0
0
0
0
[]
Adaptive multimodal fusion
18,349,269
Multimodal interfaces offer their users the possibility of interacting with computers in a transparent, natural way, by means of various modalities. Fusion engines are key components in multimodal systems, responsible for combining information from different sources and extracting a semantic meaning from it. This fusion process allows many modalities to be used effectively at once, thereby enabling natural communication between user and machine. Elderly users, who can have several accessibility issues, can benefit greatly from this kind of interaction. By developing fusion engines that are capable of adapting to the characteristics of these users, it is possible to make multimodal systems cope with the needs of impaired users.
[ { "first": "Pedro", "middle": [], "last": "Feiteira", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Duarte", "suffix": "" } ]
2,011
10.1007/978-3-642-21672-5_41
HCI
185795351
[ "16540884", "62168006", "14721456", "18839217", "2363304", "4122636" ]
[ "1197694" ]
true
true
true
https://api.semanticscholar.org/CorpusID:18349269
1
1
1
1
1
[ [ "natural communication", "APPLICATION" ], [ "semantic meaning", "DATA" ], [ "Fusion engine", "METHOD" ], [ "multimodal system", "METHOD" ], [ "fusion engine", "METHOD" ], [ "Multimodal interface", "METHOD" ], [ "fusion process", "METHOD" ] ]
Re-coloring images for gamuts of lower dimension
1,549,488
Two new techniques for the conversion of color images to gray scale images are discussed. The necessary components for producing visually pleasing gray scale images are identified, and the inadequacies of previous methods are discussed. Several examples of the new techniques are included. The techniques are extended to the problem of recoloring images to preserve visual information for color deficient viewers. Results of a perceptual experiment are discussed, showing the advantages of the new techniques over existing techniques.
[ { "first": "Robert", "middle": [], "last": "Geist", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Rasche", "suffix": "" } ]
2,005
10.1111/j.1467-8659.2005.00867.x
Computer Graphics Forum
2161509693
[ "120819107", "5737802", "56575354", "314226", "6061462", "57047985", "2122471", "670357", "20302790", "579055", "203126912", "18170", "25182959", "1536455", "6459836", "19170136" ]
[ "50779446", "30419008", "2596909", "6057013", "16851405", "8424218", "17636400", "13080139", "12430008", "201809818", "13574506", "14097422", "12032595", "2252631", "38457213", "11201297", "12569053", "3037536", "145817324", "7599705", "8511966", "25411942", "17754061", "1350223", "15555356", "201762415", "6490920", "16356657", "20494381", "157059117", "14843424", "209432145", "19277561", "4835165", "72116", "13260308", "16603782", "149452615", "39317270", "53020002", "15425881", "47367198", "12025944", "7762371", "49036818", "202678678", "18333625", "3415218", "19802124", "13653809", "1947841", "23178", "16599787", "1924651", "43551625", "4529501", "10754725", "23476098", "9388201", "13320305", "16153991", "7462351", "16943993", "202786179", "15458995", "3450997", "52895513", "251656", "15190513", "146118044", "28425866", "428337", "16207978", "15357690", "202689129", "1526656", "574152", "8760275", "15246181", "9876714", "5154056", "18048263", "8139574", "52052820", "10005625", "204917578", "28651878", "59767587", "23886241", "8632899", "16442967", "21109515", "1797006", "4897712", "210694443", "28816117", "16756459", "17458407", "25082366", "12194572", "33672044", "17679764", "32617582", "2229623", "15141822", "6677169", "16338999", "17874552", "8604522", "13951798", "206440492", "6200253", "3133488", "14462724", "46817382", "53758042", "21876213", "203607", "146108606", "51628648", "6195957", "17159595", "37489423", "13344329", "9710636", "7171159", "12649947", "32614920" ]
true
true
true
https://api.semanticscholar.org/CorpusID:1549488
1
1
1
1
1
[ [ "gray scale image", "VISUALIZATION" ], [ "perceptual experiment", "EVALUATION" ], [ "color deficient viewer", "VISUALIZATION" ], [ "visual information", "DATA" ], [ "recoloring image", "APPLICATION" ], [ "color image", "DATA" ] ]
Function Based Flow Modeling and Animation
13,590,082
This paper summarizes a function-based approach to model and animate 2D and 3D flows. We use periodic functions to create cyclical animations that represent 2D and 3D flows. These periodic functions are constructed with an extremely simple algorithm from a set of oriented lines. The speed and orientation of the flow are described directly by the orientation and the lengths of these oriented lines. The resulting cyclical animations are then obtained by sampling the constructed periodic functions. Our approach is independent of dimension, i.e. the same types of periodic functions are used for both 2D and 3D flows. Rendering images for 2D and 3D flows differs slightly: in 2D, function values are mapped directly to color values; in 3D, function values are first mapped to color and opacity, and the volume is then rendered by our volume renderer. Modeled and animated flows are used to improve the visualization of the operation of rolling piston and rotary vane compressors. Copyright © 2001 John Wiley & Sons, Ltd.
[ { "first": "Ergun", "middle": [], "last": "Akleman", "suffix": "" }, { "first": "Zeki", "middle": [], "last": "Melek", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Haberl", "suffix": "" } ]
2,001
10.1002/vis.259
The Journal of Visualization and Computer Animation
2032542772
[ "2820629", "16994965", "12458639", "33474933", "3142996", "12622800", "6803029", "62639007", "62358502", "86351869", "115460107", "53245185", "62657597" ]
[ "1015619", "16084153", "15948920", "16404673" ]
true
true
true
https://api.semanticscholar.org/CorpusID:13590082
0
0
0
1
0
[ [ "piston", "METHOD" ], [ "opacity", "DATA" ], [ "2D and 3D flow", "DATA" ], [ "cyclical animation", "VISUALIZATION" ], [ "2D function", "METHOD" ], [ "Rendering image", "APPLICATION" ], [ "color", "DATA" ], [ "periodic function", "METHOD" ], [ "3D function value", "DATA" ], [ "animated flow", "DATA" ], [ "simple algorithm", "METHOD" ], [ "function-based approach", "METHOD" ], [ "flow", "DATA" ], [ "volume", "DATA" ], [ "oriented line", "DATA" ], [ "color value", "DATA" ], [ "rotary vane compressor", "METHOD" ], [ "volume renderer", "METHOD" ] ]
A Fast Image Inpainting Algorithm by Adaptive Mask
63,820,395
Based on an analysis of the local characteristics of natural images, a fast, non-iterative inpainting algorithm using an adaptive mask is proposed in this paper. First, the direction of the isophotes is estimated through sorting, and the restore mask is then chosen adaptively. Next, the whole damaged area is restored along the route defined by the fast marching method. The experimental results show that this algorithm restores both smooth areas and edge-containing areas better than other fast inpainting algorithms.
[ { "first": "Wang", "middle": [], "last": "Nian", "suffix": "" } ]
2,008
Journal of Image and Graphics
2383760792
[]
[ "17357799", "43477558" ]
false
true
false
https://api.semanticscholar.org/CorpusID:63820395
null
null
null
null
null
[ [ "nature image", "DATA" ], [ "fast inpainting algorithm", "METHOD" ], [ "adaptive mask", "METHOD" ], [ "restore mask", "METHOD" ], [ "non-iterative inpainting algorithm", "METHOD" ], [ "fast marching method", "METHOD" ], [ "smooth area", "VISUALIZATION" ], [ "experimental result", "EVALUATION" ], [ "damaged area", "APPLICATION" ], [ "edge-contained area", "DATA" ] ]
Physics Based Deformation Using Shape Matching in Augmented Reality Environments
11,763,408
The importance of deformation and physics-based deformation methods is continuously increasing in the computer graphics area. However, the user interaction and realism they provide are still insufficient for various applications such as the modeling process and the game industry. In this paper, we propose a physics-based deformation technology for AR (augmented reality) environments to improve the effectiveness of model manipulation. In the proposed method, free-form deformation and the lattice shape matching method are first combined for stable and fast deformation of the polygonal model. Then the dynamics of the lattice shape matching region is applied for the physics-based deformation. Finally, these algorithms are implemented in an AR environment. For various physics-based simulations, the adjustment of material properties such as elasticity and damping ratio is also enabled.
[ { "first": "Han", "middle": [ "Kyun" ], "last": "Choi", "suffix": "" }, { "first": "Hyun", "middle": [ "Soo" ], "last": "Kim", "suffix": "" }, { "first": "Wook", "middle": [ "Je" ], "last": "Park", "suffix": "" }, { "first": "K.H.", "middle": [], "last": "Lee", "suffix": "" } ]
2,008
10.1109/ISUVR.2008.13
2008 International Symposium on Ubiquitous Virtual Reality
2008 International Symposium on Ubiquitous Virtual Reality
2162738700
[ "1567107", "8907128", "6262748", "17635186", "6685153", "14807733", "1303698", "366305", "1756681", "15770967", "3431222", "3401588" ]
[ "17442528", "6824680" ]
true
true
true
https://api.semanticscholar.org/CorpusID:11763408
0
0
0
1
0
[ [ "polygonal model", "DATA" ], [ "matching region", "DATA" ], [ "free form deformation", "METHOD" ], [ "computer graphic area", "APPLICATION" ], [ "AR (augmented reality) environment", "APPLICATION" ], [ "AR environment", "APPLICATION" ], [ "physic based deformation method", "METHOD" ], [ "model manipulation", "METHOD" ], [ "physic based deformation", "APPLICATION" ], [ "game industry", "APPLICATION" ], [ "user interaction", "EVALUATION" ], [ "fast deformation", "METHOD" ], [ "physic based deformation technology", "METHOD" ], [ "physic based simulation", "APPLICATION" ], [ "modeling process", "APPLICATION" ], [ "lattice shape matching method", "METHOD" ] ]
Interaction Forms in Multiplayer Desktop Virtual Reality Games
16,022,081
This paper describes the findings of ethnographical research which elaborates and analyses the interaction forms of a contemporary multiplayer game. The motivation for the research originates from the issue that the lack of intuitive and non-intrusive interaction cues is one of the distinctive features separating desktop virtual reality settings from face-to-face encounters. The analysis of the interaction forms in a multiplayer game session indicates that the participants of collaborative virtual environment can use various forms of non-verbal communication and perceivable actions to reduce communication difficulties. However, players tend to communicate outside the game system and they try to overcome the limitations of the systems by inventing various imaginative ways to communicate, co-ordinate and co-operate. This indicates that there is a need for additional interaction support.
[ { "first": "Tony", "middle": [], "last": "Manninen", "suffix": "" } ]
2,002
In Proceedings of Virtual Reality International Conference
2126278959
[ "17090475", "10691309", "14438504", "16166011", "56492771", "10686927", "1841148", "357043", "117722880", "44943096", "5192737", "154428036", "42819741", "42773184" ]
[ "8973849", "6495381", "14252073", "8869395", "27967679", "6756749", "9416150" ]
true
true
true
https://api.semanticscholar.org/CorpusID:16022081
0
0
0
1
0
[ [ "desktop virtual reality setting", "APPLICATION" ], [ "interaction form", "DATA" ], [ "contemporary multiplayer game", "APPLICATION" ], [ "multiplayer game session", "APPLICATION" ], [ "non-verbal communication", "METHOD" ], [ "communication difficulty", "EVALUATION" ], [ "interaction support", "METHOD" ], [ "non-intrusive interaction cue", "DATA" ], [ "game system", "APPLICATION" ], [ "collaborative virtual environment", "APPLICATION" ], [ "ethnographical research", "METHOD" ] ]
ECOSITE: an application of computer-aided design to the composition of landforms for reclamation
16,314,898
Surface mining, though an efficient method of extracting near-surface coal for the nation's mounting energy needs, requires sound reclamation if the harmful environmental impacts of the method are to be held to a tolerable minimum. Another important requirement is aesthetic quality, a feature which should, but as yet does not, involve professional planners and designers at the early preplanning stage of reclamation. To encourage this needed improvement a multidisciplinary research group at the University of Massachusetts is developing a comprehensive "preplanning-and-design resource package" that includes an interactive graphics program for landform design as an important component. Called ECOSITE, this user-oriented program is the first serious effort to apply the power of interactive graphics and CAD to the design and sculpturing of large-scale topographical compositions for reclamation and other forms of site preparation and improvement. This paper discusses the program from the standpoint of its application, specifications, design, current capabilities and necessary improvements, including the ability to test its own output against relevant criteria.
[ { "first": "Robert", "middle": [], "last": "Mallary", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ferraro", "suffix": "" } ]
1,977
10.1145/563858.563859
SIGGRAPH '77
2091153350
[]
[ "9160909" ]
false
true
true
https://api.semanticscholar.org/CorpusID:16314898
0
0
0
1
0
[ [ "sound reclamation", "METHOD" ], [ "interactive graphic program", "METHOD" ], [ "near-surface coal", "DATA" ], [ "site preparation and improvement", "APPLICATION" ], [ "reclamation", "APPLICATION" ], [ "user-oriented program", "METHOD" ], [ "ECOSITE", "METHOD" ], [ "environmental impact", "EVALUATION" ], [ "mounting energy need", "APPLICATION" ], [ "Surface mining", "METHOD" ], [ "aesthetic quality", "EVALUATION" ], [ "topographical composition", "APPLICATION" ], [ "preplanning-and-design resource package", "METHOD" ], [ "interactive graphic", "METHOD" ], [ "CAD", "METHOD" ], [ "professional planner and designer", "METHOD" ], [ "landform design", "APPLICATION" ] ]
A domain specific visual language for design and coordination of supply networks
2,325,481
We have developed a domain specific visual language (DSVL) and environment to support the modeling of small business-based dynamic supply networks. We describe our approach to the design of the DSVL, challenges faced, the implementation of a prototype environment, and preliminary evaluation.
[ { "first": "J.", "middle": [], "last": "Hosking", "suffix": "" }, { "first": "N.", "middle": [], "last": "Mehandjiev", "suffix": "" }, { "first": "J.", "middle": [], "last": "Grundy", "suffix": "" } ]
2,008
10.1109/VLHCC.2008.4639068
2008 IEEE Symposium on Visual Languages and Human-Centric Computing
2008 IEEE Symposium on Visual Languages and Human-Centric Computing
2103068655
[ "4512216", "11750514", "195350166", "6508498", "62067857", "2991709", "15169592", "11481122", "19077677" ]
[ "2097258", "4672197", "12602322" ]
true
true
true
https://api.semanticscholar.org/CorpusID:2325481
1
1
1
1
0
[ [ "DSVL", "METHOD" ], [ "domain specific visual language (DSVL)", "METHOD" ], [ "prototype environment", "METHOD" ], [ "preliminary evaluation", "EVALUATION" ], [ "small business-based dynamic supply network", "APPLICATION" ] ]
A Constraint Based CAD System Using User Defined Features
63,541,799
A constraint-based generic mechanism is proposed for defining and instancing user-defined features, which are managed by constraint-based library functions to realize a feature-based design paradigm. This mechanism not only speeds up the design process and facilitates the representation of assembly constraints and manufacturing constraints in and among user-defined features in the detail design phase, but also lays the foundation for binding user-defined features with function elements of a product, thus supporting computer-aided conceptual design and establishing an integrated product model based on user-defined features. The mechanism is realized on Lonicera, a self-developed CAD/CAM integrated system, and a case study of aeroengine rotor design is illustrated.
[ { "first": "Duan", "middle": [], "last": "Hai", "suffix": "" } ]
2,001
Journal of Computer-aided Design & Computer Graphics
2359333809
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:63541799
null
null
null
null
null
[ [ "user defined feature", "DATA" ], [ "feature based design paradigm", "METHOD" ], [ "design process", "APPLICATION" ], [ "function element", "DATA" ], [ "aeroengine rotor design", "APPLICATION" ], [ "computer aided conceptual design", "APPLICATION" ], [ "defined feature", "DATA" ], [ "assembly constraint", "DATA" ], [ "constraint based library function", "METHOD" ], [ "manufacturing constraint", "DATA" ], [ "constraint based generic mechanism", "METHOD" ], [ "case study", "EVALUATION" ], [ "CAD/CAM integrated system", "METHOD" ], [ "detail design phase", "APPLICATION" ], [ "integrated product model", "METHOD" ] ]
View-Invariant Human Detection from RGB-D Data of Kinect Using Continuous Hidden Markov Model
42,249,300
In this paper the authors present a method to detect humans from Kinect-captured Gray-Depth (G-D) data using Continuous Hidden Markov Models (C-HMMs). In our proposed approach, we initially generate multiple gray scale images from a single gray scale image/video frame based on their depth connectivity. Thus, we first segment the G image using depth information and then extract the relevant components. These components are further filtered, and features are extracted from the candidate components only. Here a robust feature named local gradients histogram (LGH) is used to detect humans from G-D video. We have evaluated our system against the data set published by LIRIS at ICPR 2012 and on our own data set captured in our lab. We have observed that our proposed method can detect humans in this data set with 94.25% accuracy.
[ { "first": "Sangheeta", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Tanushyam", "middle": [], "last": "Chattopadhyay", "suffix": "" } ]
2,014
10.1007/978-3-319-07230-2_32
HCI
138558725
[]
[ "25385807" ]
false
true
false
https://api.semanticscholar.org/CorpusID:42249300
null
null
null
null
null
[ [ "gray scale image", "DATA" ], [ "Local gradient histogram (LGH)", "VISUALIZATION" ], [ "gray scale image/ video frame", "DATA" ], [ "94.25% accuracy", "EVALUATION" ], [ "depth information", "DATA" ], [ "G-D video", "DATA" ], [ "G image", "DATA" ], [ "Continuous Hidden Markov model (C-HMMs)", "METHOD" ], [ "depth connectivity", "DATA" ], [ "Gray-Depth (G-D)", "METHOD" ], [ "candidate component", "DATA" ] ]
Wavelet-based Multiresolution Volume Rendering in Network
64,251,798
A method of wavelet-based multiresolution volume rendering is presented for accelerating 3D reconstruction and interaction with volume data sets over a network. In the adopted scheme, volume rendering takes place on the client's workstation: the volume data set on the net server is first decomposed into discrete approximation and detail coefficients by wavelet multi-resolution analysis, and these coefficients are then transmitted in order to the client's workstation, where a low-resolution image is first rendered using the approximation coefficients and successively refined using the detail coefficients as they arrive. In this process, a group of 3D Mallat filters is employed to speed up the 3D wavelet decomposition and reconstruction of the volume data set, and a discrete, simplified optical model for wavelet-domain rendering is put forward to satisfy the real-time requirements of volume rendering. The experimental results show that the method is well suited to networked applications that frequently select and interact with images, because high-quality images and/or outlines can be produced from 12.5 percent or much less of the volume data set.
[ { "first": "Zhang", "middle": [], "last": "You-sai", "suffix": "" } ]
2,004
Journal of Image and Graphics
2378596619
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:64251798
null
null
null
null
null
[ [ "wavelet domain rendering", "APPLICATION" ], [ "3D Mallat filter", "METHOD" ], [ "volume data set", "DATA" ], [ "detail coefficient", "DATA" ], [ "volume rendering", "APPLICATION" ], [ "outline", "VISUALIZATION" ], [ "experiment result", "EVALUATION" ], [ "high-quality image", "DATA" ], [ "optical model", "METHOD" ], [ "wavelet-based multiresolution volume rendering", "METHOD" ], [ "3D reconstruction", "APPLICATION" ], [ "low resolution image", "DATA" ], [ "real time request", "APPLICATION" ], [ "3D wavelet decomposition", "APPLICATION" ], [ "wavelet multi-resolution analyzing", "METHOD" ], [ "approximation coefficient", "DATA" ] ]
Digital weaving. 1
206,450,714
Woven cloth is so common these days that many of us take it for granted. But even a moment's examination of an everyday cloth like denim reveals some beautiful patterns. Weavers create cloth on a mechanical device called a loom. I describe the basics of weaving. My motivation is to discover new ways to create attractive visual patterns. Of course, nothing can beat actually going out and creating real, woven fabrics. The goal isn't to replace weaving, but to use the ideas of weaving to create software tools for making patterns.
[ { "first": "A.", "middle": [], "last": "Glassner", "suffix": "" } ]
2,002
10.1109/MCG.2002.1046635
IEEE Computer Graphics and Applications
IEEE Computer Graphics and Applications
2151165352
[]
[ "70125856" ]
false
true
false
https://api.semanticscholar.org/CorpusID:206450714
null
null
null
null
null
[ [ "mechanical device", "METHOD" ], [ "beautiful pattern", "DATA" ], [ "woven fabric", "DATA" ], [ "software tool", "METHOD" ], [ "everyday cloth", "DATA" ], [ "denim", "DATA" ], [ "visual pattern", "VISUALIZATION" ], [ "Woven cloth", "METHOD" ] ]
FEMME:Finite Element Method ModulE for non-linear structural analysis
59,907,292
[ { "first": "G-Jma", "middle": [ "Gerd-Jan" ], "last": "Schreppers", "suffix": "" } ]
1,994
Virtual Reality
146393016
[]
[]
false
false
false
https://api.semanticscholar.org/CorpusID:59907292
null
null
null
null
null
[ [ "FEMME: Finite Element Method ModulE", "METHOD" ], [ "non-linear structural analysis", "APPLICATION" ] ]
Generating virtual environments to allow increased access to the built environment
55,337,593
This paper describes the generation of virtual models of the built environment based on the control network infrastructures currently utilised in intelligent building applications for such things as lighting, heating and access control. The use of control network architectures facilitates the creation of distributed models that closely mirror both the physical and control properties of the environment. The model of the environment is kept local to the installation, which allows the virtual representation of a large building to be decomposed into an interconnecting series of smaller models. This paper describes two methods of interacting with the virtual model: firstly, a two-dimensional representation that can be used as the basis of a portable navigational device; secondly, an augmented reality system called DAMOCLES that overlays additional information on a user's normal field of view. The provision of virtual environments offers new possibilities in the man-machine interface, allowing a user intuitive access to network-based services and control functions.
[ { "first": "G.", "middle": [ "T." ], "last": "Foster", "suffix": "" }, { "first": "D.", "middle": [ "E.", "N." ], "last": "Wenn", "suffix": "" }, { "first": "William", "middle": [], "last": "Harwin", "suffix": "" }, { "first": "F.", "middle": [], "last": "O'Hart", "suffix": "" } ]
1,998
10.20870/ijvr.1998.3.4.2630
International Journal of Virtual Reality
2135506945
[ "57674726", "68152646", "2108280", "6142831", "17783728", "107339618" ]
[ "154363446", "55981789", "15329423", "6169474", "18191107", "84178809", "15075372" ]
true
true
true
https://api.semanticscholar.org/CorpusID:55337593
1
1
1
1
1
[ [ "virtual representation", "VISUALIZATION" ], [ "virtual model", "METHOD" ], [ "control network architecture", "METHOD" ], [ "control property", "DATA" ], [ "intelligent building application", "APPLICATION" ], [ "access control", "APPLICATION" ], [ "control network infrastructure", "METHOD" ], [ "two dimensional representation", "DATA" ], [ "augmented reality", "APPLICATION" ], [ "DAM", "METHOD" ], [ "virtual environment", "APPLICATION" ], [ "man-machine interface", "APPLICATION" ], [ "portable navigational device", "METHOD" ], [ "distributed model", "METHOD" ], [ "lighting, heating", "APPLICATION" ] ]
Qualitative investigation of a propeller slipstream with Background Oriented Schlieren
22,865,866
The Background Oriented Schlieren (BOS) method has been used to qualitatively identify the (variation of) density gradients in the helical structure of a propeller slipstream. The helical structures are identified for two sideslip angles. In contrast to standard BOS correlations between exposures and a reference image, two exposures at a given time interval were cross-correlated. This revealed a more clear description of the propeller slipstream as it determines the variation of the density gradient during this interval. It enhances the visualization of the helical structure of the propeller slipstream. Based on the visualizations of the blade tip vortex trajectories the propeller slipstream contraction can be estimated.
[ { "first": "Eric", "middle": [ "W.", "M." ], "last": "Roosenboom", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Schröder", "suffix": "" } ]
2,009
10.1007/BF03181958
Journal of Visualization
2065774311
[]
[ "9916594", "55571334" ]
false
true
false
https://api.semanticscholar.org/CorpusID:22865866
null
null
null
null
null
[ [ "reference image", "DATA" ], [ "sideslip", "DATA" ], [ "Background Oriented Schlieren (BOS) method", "METHOD" ], [ "standard BOS correlation", "METHOD" ], [ "blade tip vortex trajectory", "DATA" ], [ "density gradient", "DATA" ], [ "propeller slipstream contraction", "EVALUATION" ], [ "helical structure", "DATA" ], [ "propeller slipstream", "DATA" ] ]
Initial Review on ICTS Governance for Software Anti-Aging
55,029,843
For the past 20 years, various studies regarding software aging have been conducted. Software aging is the situation in which the accumulation of errors occurring in an operational software system that has run for a long time may lead to performance degradation and resource depletion, eventually causing the software to crash or hang [1]. David Parnas divided software aging into two categories: 1) the failure of the software to adapt to an environment that is dynamic and 2) the result of the changes itself [2]. Factors that can affect software aging can be classified into several categories: 1) functional, 2) human, 3) product and 4) environment [3]. In general, the factors that affect software aging can be divided into internal and external factors. The main objectives of this paper are to briefly describe the definition of software aging and also ICTS governance. In addition to that, this paper also compiles the software aging factors that have been investigated by previous researchers. The need for future research regarding ICTS governance and software aging is also determined at the end of this paper.
[ { "first": "Mohamad", "middle": [ "Khairudin" ], "last": "Morshidi", "suffix": "" }, { "first": "Rozi", "middle": [ "Nor", "Haizan" ], "last": "Nor", "suffix": "" }, { "first": "Hairulnizam", "middle": [], "last": "Mahdin", "suffix": "" } ]
2,017
10.30630/joiv.1.4-2.84
JOIV : International Journal on Informatics Visualization
JOIV : International Journal on Informatics Visualization
2773416196
[ "15522853", "12466781", "154953839", "10773660", "17601393", "14804446", "41562737" ]
[]
true
false
true
https://api.semanticscholar.org/CorpusID:55029843
1
1
1
1
1
[ [ "software aging factor", "DATA" ], [ "software system", "APPLICATION" ], [ "resource depletion", "EVALUATION" ], [ "performance degradation", "EVALUATION" ], [ "Software aging", "APPLICATION" ], [ "ICTS governance", "APPLICATION" ], [ "software aging", "APPLICATION" ] ]
Real-time FPGA Based Implementation of Color Image Edge Detection
52,264,256
Color image edge detection is a very basic and important step for many applications such as image segmentation, image analysis, facial analysis, object identification/tracking and many others. The main challenge for real-time implementation of color image edge detection is the high volume of data to be processed (3 times as much as for gray images). This paper describes the real-time implementation of Sobel operator based color image edge detection using FPGA. The Sobel operator is chosen for edge detection due to its property of counteracting the noise sensitivity of the simple gradient operator. In order to achieve real-time performance, a parallel architecture is designed, which uses three processing elements to compute edge maps of the R, G, and B color components. The architecture is coded using VHDL, simulated in ModelSim, synthesized using Xilinx ISE 10.1 and implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The complete system works at a 27 MHz clock frequency. The measured performance of our system is 50 fps (frames per second) for standard PAL (720x576) size images and 200 fps for CIF (352x288) size images.
[ { "first": "Sanjay", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Anil", "middle": [ "K" ], "last": "Saini", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Saini", "suffix": "" } ]
2,012
10.5815/ijigsp.2012.12.03
International Journal of Image, Graphics and Signal Processing
2170970313
[ "14006780", "1191099", "62715898", "16251630", "8336485", "23519510", "15388597", "63033869", "13918046", "1580098" ]
[ "204798725", "12799948", "53677302", "62797060", "54443390", "63955209" ]
true
true
true
https://api.semanticscholar.org/CorpusID:52264256
1
1
1
1
1
[ [ "x576) size image", "DATA" ], [ "real-time implementation", "APPLICATION" ], [ "simple gradient operator", "METHOD" ], [ "Xilinx ISE 10.1", "METHOD" ], [ "object identifications/tracking", "APPLICATION" ], [ "image analysis", "APPLICATION" ], [ "image segmentation", "APPLICATION" ], [ "noise sensitivity", "EVALUATION" ], [ "parallel architecture", "METHOD" ], [ "ModelSim", "METHOD" ], [ "edge detection", "APPLICATION" ], [ "real-time performance", "EVALUATION" ], [ "facial analysis", "APPLICATION" ], [ "VHDL", "METHOD" ], [ "Color Image edge detection", "METHOD" ], [ "color image edge detection", "APPLICATION" ], [ "x288) size image", "DATA" ], [ "edge map", "VISUALIZATION" ], [ "Sobel operator", "METHOD" ], [ "B color component", "DATA" ], [ "gray image", "DATA" ], [ "R, G", "DATA" ], [ "FPGA", "METHOD" ] ]
Mandible and skull segmentation in cone beam computed tomography using super-voxels and graph clustering
13,695,397
Cone beam computed tomography (CBCT) is a medical imaging technique employed for diagnosis and treatment of patients with cranio-maxillofacial deformities. CBCT 3D reconstruction and segmentation of bones such as the mandible or maxilla are essential procedures in surgical and orthodontic treatments. However, CBCT image processing may be impaired by features such as low contrast, inhomogeneity, noise and artifacts. Besides, values assigned to voxels are relative Hounsfield units, unlike traditional computed tomography (CT). Such drawbacks render CBCT segmentation a difficult and time-consuming task, usually performed manually with tools designed for medical image processing. We present an interactive two-stage method for the segmentation of CBCT: (i) we first perform an automatic segmentation of bone structures with super-voxels, allowing a compact graph representation of the 3D data; (ii) next, a user-placed seed process guides a graph partitioning algorithm, splitting the extracted bones into mandible and skull. We have evaluated our segmentation method in three different scenarios and compared the results with ground truth data of the mandible and the skull. Results show that our method produces accurate segmentation and is robust to changes in parameters. We also compared our method with two similar segmentation strategies and showed that it produces more accurate segmentation. Finally, we evaluated our method on CT data of patients with deformed or missing bones, and the segmentation was accurate for all data. The segmentation of a typical CBCT takes on average 5 min, which is faster than most techniques currently available.
[ { "first": "Oscar", "middle": [], "last": "Cuadros Linares", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Dirceu", "middle": [], "last": "Raveli", "suffix": "" }, { "first": "João", "middle": [], "last": "Batista Neto", "suffix": "" }, { "first": "Bernd", "middle": [], "last": "Hamann", "suffix": "" } ]
2,018
10.1007/s00371-018-1511-0
The Visual Computer
The Visual Computer
2802997824
[ "15415288", "1806278", "3027134", "9720114", "5923724", "24463315", "15573532", "3346085", "16291827", "10112022", "7679930", "37091655", "59706546", "810806", "53303132", "7133354", "58296097", "23848322", "4382307", "18598502", "11054628", "62477678", "246533", "15395637", "12907926", "35196636", "41192355", "7361694", "1660596", "61624748", "8271216" ]
[ "208176434", "153315115" ]
true
true
true
https://api.semanticscholar.org/CorpusID:13695397
1
1
1
1
1
[ [ "skull", "DATA" ], [ "low contrast", "EVALUATION" ], [ "graph partitioning algorithm", "METHOD" ], [ "medical imaging technique", "METHOD" ], [ "accurate segmentation", "EVALUATION" ], [ "super-voxels", "METHOD" ], [ "Cone beam computed tomography (CBCT)", "METHOD" ], [ "medical image processing", "APPLICATION" ], [ "mandible", "DATA" ], [ "two-stage method", "METHOD" ], [ "voxel", "DATA" ], [ "automatic segmentation", "METHOD" ], [ "Hounsfield unit", "DATA" ], [ "CBCT", "DATA" ], [ "CBCT segmentation", "APPLICATION" ], [ "segmentation method", "METHOD" ], [ "CBCT image processing", "METHOD" ], [ "cranio-maxillofacial deform", "DATA" ], [ "deformed or missing bone", "DATA" ], [ "computed tomography (CT)", "METHOD" ], [ "segmentation of CBCT", "APPLICATION" ], [ "CT data", "DATA" ], [ "segmentation strategy", "METHOD" ], [ "CBCT 3D reconstruction", "METHOD" ], [ "segmentation", "APPLICATION" ], [ "user-placed seed process", "METHOD" ], [ "diagnos", "APPLICATION" ], [ "inhomogene", "DATA" ], [ "ground truth data", "DATA" ], [ "compact graph representation", "VISUALIZATION" ], [ "surgical and orthodontic treatment", "APPLICATION" ], [ "3D data", "DATA" ] ]
Virtual training for Fear of Public Speaking — Design of an audience for immersive virtual environments
23,001,351
Virtual Reality technology offers great possibilities for Cognitive Behavioral Therapy on Fear of Public Speaking: Clients can be exposed to virtual fear-triggering stimuli (exposure) and are able to role-play in virtual environments, training social skills to overcome their fear. This poster deals with the design of a realistic virtual presentation scenario based on an observation of a real audience.
[ { "first": "S.", "middle": [], "last": "Poeschl", "suffix": "" }, { "first": "N.", "middle": [], "last": "Doering", "suffix": "" } ]
2,012
10.1109/VR.2012.6180902
2012 IEEE Virtual Reality Workshops (VRW)
2012 IEEE Virtual Reality Workshops (VRW)
2154726688
[ "147968924", "206155558", "3184651", "18711182", "40320949", "28062076", "17882289" ]
[ "13336717", "204829204", "199488395", "67868793", "52962337" ]
true
true
true
https://api.semanticscholar.org/CorpusID:23001351
0
0
0
1
0
[ [ "virtual presentation scenario", "APPLICATION" ], [ "social skill", "EVALUATION" ], [ "Virtual Reality technology", "METHOD" ], [ "fear-triggering stimulus", "METHOD" ], [ "Cognitive Behavioral Therapy", "APPLICATION" ], [ "Public Speaking", "APPLICATION" ], [ "virtual environment", "APPLICATION" ] ]
Visual tools for generating iconic programming environments
26,206,847
The authors present VAMPIRE, a visual system for rapid generation of iconic programming systems. VAMPIRE uses a graphical class editor to construct a hierarchy describing an iconic language's structure. Each node in the hierarchy represents an abstraction of a group of language elements; each leaf represents an icon which can be placed into a program. Attributes are added to each 'class' in the tree to represent aspects of the language elements, including 'icons' which describe the graphical visualization of the elements and 'rules' which describe the semantics of the element. The semantics of the language are defined using attributed graphical rules. VAMPIRE has been designed so that it can create iconic systems similar to all of the major classes which have appeared to date in the literature.
[ { "first": "D.W.", "middle": [], "last": "McIntyre", "suffix": "" }, { "first": "E.P.", "middle": [], "last": "Glinert", "suffix": "" } ]
1,992
10.1109/WVL.1992.275769
Proceedings IEEE Workshop on Visual Languages
Proceedings IEEE Workshop on Visual Languages
1867925069
[ "58461053", "8107294", "12073180", "1385317" ]
[ "15834104", "18181175", "41965608", "16097473", "19982085", "33362629", "60429014", "11930938", "58812217", "16820349", "14237012", "6060032", "1430664", "14363213", "1368652", "18346492", "14506865", "2025519", "12408607", "58147301", "16054730", "3104655" ]
true
true
true
https://api.semanticscholar.org/CorpusID:26206847
0
0
0
1
0
[ [ "iconic programming system", "APPLICATION" ], [ "language elements,", "VISUALIZATION" ], [ "graphical visualization", "VISUALIZATION" ], [ "attributed graphical rule", "METHOD" ], [ "visual system", "VISUALIZATION" ], [ "graphical class editor", "METHOD" ], [ "language element", "DATA" ], [ "iconic language", "DATA" ], [ "rapid generation", "APPLICATION" ], [ "iconic system", "METHOD" ] ]
Visual Analytics : Scope and Challenges
5,544,353
In today's applications data is produced at unprecedented rates. While the capacity to collect and store new data rapidly grows, the ability to analyze these data volumes increases at much lower rates. This gap leads to new challenges in the analysis process, since analysts, decision makers, engineers, or emergency response teams depend on information hidden in the data. The emerging field of visual analytics focuses on handling these massive, heterogenous, and dynamic volumes of information by integrating human judgement by means of visual representations and interaction techniques in the analysis process. Furthermore, it is the combination of related research areas including visualization, data mining, and statistics that turns visual analytics into a promising field of research. This paper aims at providing an overview of visual analytics, its scope and concepts, addresses the most important research challenges and presents use cases from a wide variety of application scenarios.
[ { "first": "Daniel", "middle": [ "A." ], "last": "Keim", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Mansmann", "suffix": "" }, { "first": "Jörn", "middle": [], "last": "Schneidewind", "suffix": "" }, { "first": "James", "middle": [ "J." ], "last": "Thomas", "suffix": "" }, { "first": "Hartmut", "middle": [], "last": "Ziegler", "suffix": "" } ]
2,008
10.1007/978-3-540-71080-6_6
Visual Data Mining
2097481508
[ "60837336", "195603137", "126007400", "21228071", "37096840", "7039904", "2281975", "4474038", "10792835", "8872116", "59315146", "1068924", "15352946", "11281836", "14441902", "1929458", "310605", "13791826", "20325790", "15008687" ]
[ "212423758", "44066757", "6495970", "207901101", "13058862", "211298507", "14958590", "31308976", "7960978", "10694203", "6725318", "16488751", "15001515", "14323353", "59553332", "12256025", "55247722", "4071247", "69964488", "133477636", "12534801", "16095875", "102338982", "45219204", "16056308", "6743071", "9526346", "10137924", "16019758", "57737772", "255103", "46045344", "52078238", "16996645", "53591080", "8237043", "11195269", "18846592", "18889376", "1823260", "21724342", "12860251", "10000409", "50847749", "14867763", "33008772", "6363800", "10825519", "25644703", "18304397", "2521824", "17835164", "53425645", "8431546", "53067676", "12586171", "25994235", "56463396", "115142772", "195426443", "6512902", "48356202", "2870367", "7851221", "5925170", "67675811", "53308039", "62878972", "120190796", "6067026", "206868952", "6697376", "10017327", "1001595", "9540201", "17142849", "7123996", "1449025", "9672542", "8416674", "199000895", "2795850", "16007490", "59601129", "26243143", "13748831", "54885372", "11695885", "12849937", "56183762", "207906358", "34168339", "13805770", "5687803", "24283618", "118566", "674916", "15201793", "209545842", "13546116", "62555842", "12630904", "71714711", "32290259", "17545482", "14813702", "15438941", "213163682", "15613611", "6187669", "17445977", "15353869", "11513818", "32250102", "7693254", "201125273", "121422", "52895355", "15977346", "15154546", "2453613", "13856869", "55860156", "7415173", "214774433", "62896246", "76652953", "10309523", "1812025", "64884241", "207028988", "754363", "7341582", "40133727", "58005892", "54070051", "9205017", "14787376", "21634269", "27885078", "18701258", "15067721", "15775785", "3414518", "211244567", "3409444", "119214551", "17072795", "198322670", "53087662", "215730218", "88502187", "15564618", "38483927", "196217017", "53772038", "13253606", "6855989", "1074087", "28620180", "53225208", "1176223", "18935246", "152162206", "9215123", "6543110", "11302788", "55341179", 
"54010574", "51871455", "18503040", "8263839", "208243625", "203599184", "18355867", "39979643", "489808", "204937772", "3423224", "4109258", "202680717", "28439867", "15839041", "5916437", "11622826", "173989522", "56293820", "7363312", "59231073", "16590559", "29315306", "12643070", "16558772", "6724459", "2897847", "36658766", "210718229", "51813737", "53474594", "209168689", "6092753", "12810283", "7605500", "201652132", "6072789", "13841872", "17031454", "198980681", "166426493", "208282772", "4837238" ]
true
true
true
https://api.semanticscholar.org/CorpusID:5544353
1
1
1
1
1
[ [ "analysis process", "APPLICATION" ], [ "heterogenous,", "DATA" ], [ "dynamic volume", "DATA" ], [ "emergency response team", "APPLICATION" ], [ "data volume", "DATA" ], [ "interaction technique", "METHOD" ], [ "decision maker", "APPLICATION" ], [ "analysis", "METHOD" ], [ "visual analytics", "VISUALIZATION" ], [ "visual representation", "VISUALIZATION" ], [ "data mining", "APPLICATION" ] ]
“Just-in-Place” Information for Mobile Device Interfaces
24,429,422
This paper addresses the potentials of context sensitivity for making mobile device interfaces less complex and easier to interact with. Based on a semiotic approach to information representation, it is argued that the design of mobile device interfaces can benefit from spatial and temporal indexicality, reducing information complexity and interaction space of the device while focusing on information and functionality relevant here and now. Illustrating this approach, a series of design sketches show the possible redesign of an existing web and wap-based information service.
[ { "first": "Jesper", "middle": [], "last": "Kjeldskov", "suffix": "" } ]
2,002
10.1007/3-540-45756-9_21
Mobile HCI
1599936372
[ "16214640" ]
[ "15478748", "5624201", "11577178", "18528931", "1784922", "18761768", "409745", "26607435", "15283891", "6657481", "14825445", "18093658", "6486618", "27564271", "34087386", "59469242", "56013525", "4862821", "55161165", "15135322", "59374441", "18113567", "6603988", "13964535", "15179815", "12860080", "18146314" ]
true
true
true
https://api.semanticscholar.org/CorpusID:24429422
1
1
1
1
1
[ [ "wap-based information service", "METHOD" ], [ "design sketch", "VISUALIZATION" ], [ "information representation", "APPLICATION" ], [ "spatial and temporal indexicality", "METHOD" ], [ "semiotic approach", "METHOD" ], [ "mobile device interface", "APPLICATION" ], [ "context sensitivity", "METHOD" ] ]
Work Analysis - Perspectives on and Requirements for a Methodology.
5,074,850
[ { "first": "Peter", "middle": [ "H." ], "last": "Carstensen", "suffix": "" }, { "first": "Kjeld", "middle": [], "last": "Schmidt", "suffix": "" } ]
1,993
HCI
70022075
[]
[ "3111590", "27737490", "14113805", "7353591" ]
false
true
false
https://api.semanticscholar.org/CorpusID:5074850
null
null
null
null
null
[ [ "Work Analys", "METHOD" ] ]
Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome
80,460,782
Multiple sclerosis (MS) is a neurological disease with an early course that is characterised by attacks of clinical worsening, separated by variable periods of remission. The ability to predict the risk of attacks in a given time frame can be used to identify patients who are likely to benefit from more proactive treatment. We aim to determine whether deep learning can extract latent MS lesion features that, when combined with user-defined radiological and clinical measurements, can predict conversion to MS (defined with criteria that include new T2 lesions, new T1 gadolinium enhancing lesions and/or new clinical relapse) in patients with early MS symptoms (clinically isolated syndrome), a prodromal stage of MS, more accurately than imaging biomarkers that have been used in clinical studies to evaluate overall disease state, such as lesion volume and brain volume. More specifically, we use convolutional neural networks to extract latent MS lesion patterns that are associated with conversion to def...
[ { "first": "Youngjin", "middle": [], "last": "Yoo", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Tang", "suffix": "" }, { "first": "David", "middle": [ "K.B." ], "last": "Li", "suffix": "" }, { "first": "Luanne", "middle": [ "M." ], "last": "Metz", "suffix": "" }, { "first": "Shannon", "middle": [ "H." ], "last": "Kolind", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Traboulsee", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Tam", "suffix": "" } ]
2,019
10.1080/21681163.2017.1356750
CMBBE: Imaging & Visualization
CMBBE: Imaging & Visualization
2742947264
[]
[ "174804227", "215365348", "117913937" ]
false
true
false
https://api.semanticscholar.org/CorpusID:80460782
null
null
null
null
null
[ [ "T2 lesion", "DATA" ], [ "MS lesion feature", "DATA" ], [ "brain volume", "DATA" ], [ "neurological disease", "APPLICATION" ], [ "lesion volume", "DATA" ], [ "deep learning", "METHOD" ], [ "MS lesion pattern", "DATA" ], [ "imaging biomarkers", "EVALUATION" ], [ "al relapse", "DATA" ], [ "radiological and clinical measurement", "EVALUATION" ], [ "proactive treatment", "METHOD" ], [ "clinical study", "EVALUATION" ], [ "Multiple sclerosis (MS)", "METHOD" ], [ "disease state", "EVALUATION" ], [ "1 gadolini", "DATA" ], [ "convolutional neural network", "METHOD" ] ]