appln_id (int64, 7.41k to 576M) | appln_filing_date (string, length 10) | docdb_family_id (int64, 3.49M to 82.2M) | granted (string, 2 classes) | appln_abstract (string, length 120 to 10k) | appln_abstract_lg (string, 1 class) | appln_title (string, length 4 to 567) | applt_coun (string, length 4 to 289) | invt_coun (string, length 8 to 347) | cpc (string, length 11 to 1.62k) | ipc (sequence) | __index_level_0__ (int64, 25 to 167k) |
---|---|---|---|---|---|---|---|---|---|---|---|
556,591,096 | 2020-12-28 | 77,413,045 | N | A method and apparatus for compositing a face image, relating to the field of artificial intelligence. The face image can be composited by performing attribute information correction on the basis of a real face image, the authenticity of a composited face image is improved, and the compositing efficiency is improved. The method comprises: obtaining first attribute information, the first attribute information being attribute information comprised in a face image to be composited; according to the first attribute information, searching a real face image library for a first face image, the first face image comprising second attribute information, and a repetitive rate of the second attribute information and the first attribute information satisfying a threshold requirement; obtaining attribute difference information according to the first attribute information and the second attribute information, the attribute difference information being used for representing an attribute difference between the first face image and the face image to be composited; performing facial feature extraction on the first face image to obtain first facial feature information of the first face image; and compositing a second face image according to the first facial feature information and the attribute difference information. | en | METHOD AND APPARATUS FOR COMPOSITING FACE IMAGE | 63625819_CN | 80751604_CN,82729745_CN,80746687_CN | G06K 9/62,G06K 9/6268,G06N 3/04,G06N 3/0454,G06N 3/08,G06N 3/084,G06T 3/00,G06T 3/0012,G06T2200/24 | [
"G06T 3/00"
] | 157,332 |
471,985,118 | 2016-04-05 | 55,802,311 | Y | Abstract A surgical buttress (500) for use in a surgical stapling apparatus (10), the surgical buttress (500) comprising a body portion (520) having lateral sides defining pockets. According to an aspect of the present disclosure, a surgical stapling apparatus is provided and includes a cartridge assembly defining a first tissue contacting surface, the cartridge assembly housing a plurality of surgical fasteners therein, the cartridge assembly defining at least one distal attachment point and at least one proximal attachment point; an anvil assembly defining a second tissue contacting surface, the anvil assembly movably secured in relation to cartridge assembly, the anvil assembly defining at least one distal attachment point and at least one proximal attachment point, wherein the at least one proximal attachment point of the anvil assembly is offset an axial distance from the at least one proximal attachment point of the cartridge assembly; and a surgical buttress releasably secured to each of the first tissue contacting surface and the second tissue contacting surface, the surgical buttress including a body portion configured to substantially overlie at least one of the first and second tissue contacting surfaces of either the first length and second length cartridge assembly and anvil assembly. | en | Buttress brachytherapy and integrated staple line markers for margin identification | 12334718_US | 43371518_,43371519_,43542168_ | A61B 17/07292,A61B2017/00004,A61B2017/00893,A61B2017/07271,A61B2090/061,A61B2090/392,A61B2090/3937,A61B2090/3966 | [
"A61B 17/068",
"A61B 17/072"
] | 105,090 |
1,394,452 | 2001-01-16 | 3,819,268 | Y | A system for customising the synthesis and generation of spatial hearing in virtual auditory space (VAS) for individual listeners includes a measuring ruler (12) and a digital processing unit (DPU) (14) to which the ruler (12) is connected. The ruler (12) is used for measuring various physical dimensions of the morphology of a person's external auditory periphery (16). The ruler (12) comprises a base (18) insertable into an ear canal (20) of the person's ear. A robotic measuring arm (22) extends from the base (18) and has a measuring tip (24) to enable the ruler (12) to record the Cartesian (x.y.z) coordinates of the tip (24) relative to the base (18). By suitable manipulation of the robotic arm (22), the tip (24) can be positioned at desired locations on the person's external auditory periphery (16) to enable the morphological measurements of the person's external auditory periphery (16) to be effected. The ruler (12) is electrically connected via a lead (26) to the DPU (14) which is programmed to compute either head related transfer function (HRTF) filter coefficients or HRTF spectral weights. Functional relationships may thus be used to generate, by means of the DPU (28), HRTFs at any location for any individual listener given an encoding of the individual listener's morphological measurements. | en | THE GENERATION OF CUSTOMISED THREE DIMENSIONAL SOUND EFFECTS FOR INDIVIDUALS | 12622902_AU,13584364_AU,13584363_CN,12622901_AU,12622900_AU,5556106_AU | 13584364_AU,13584363_CN,12622900_AU,12622902_AU,12622901_AU | A61B 5/1077,A61B 5/7232,G01B 21/04,G01B 21/20,H04S 1/00,H04S2420/01 | [
"H04S 1/00",
"G01B 21/20",
"G01B 21/04",
"A61B 5/107"
] | 2,643 |
328,425,013 | 2010-04-24 | 43,011,239 | Y | A state change of a person is captured with timing closer to consciousness of the person. A frequency slope waveform of a time series signal of a pulse wave of a body of a person detected by an air pack is obtained by a frequency slope time-series analyzing and computing means 612 and a frequency fluctuation time-series waveform is obtained by a frequency fluctuation time-series analyzing and computing means 613. By comparing changes of two waveforms with each other in a waveform determining means 614, a state change of a person can be captured with timing closer to consciousness of the person. A sleep-onset point can be specified clearly based upon the case where a fluctuation waveform steep gradient portion indicating a steep gradient change emerges in the frequency fluctuation waveform, and amplitude and a base line position of the frequency fluctuation waveform or the frequency slope waveform are in predetermined ranges. Further, when a mean slope line of the fluctuation waveform steep gradient portion is approximately parallel to a mean slope line of the slope waveform steep gradient portion in the frequency slope waveform before emergence of the fluctuation waveform steep gradient portion, a sleepiness waveform is determined, so that a sleepiness state leading to sleep onset can be detected. | en | DEVICE AND COMPUTER PROGRAM FOR ANALYZING THE STATE OF A LIVING BODY | 216236_JP | 49265289_JP,49228546_JP,49251185_JP,49550895_JP | A61B 5/024,A61B 5/11,A61B 5/165,A61B 5/18,A61B 5/369,A61B 5/4035,A61B 5/4809,A61B 5/6891,A61B 5/6892,A61B 5/6893,A61B 5/7239,A61B2562/0247,B60N 2/002,B60N2002/981,G08B 21/06 | [
"A61B 5/0476",
"A61B 5/0245",
"A61B 5/024",
"A61B 5/00",
"A61B 5/18",
"A61B 5/11",
"G08B 21/06",
"A61B 5/16",
"B60N 2/90",
"B60N 2/00"
] | 64,606 |
16,729,048 | 1988-06-17 | 22,077,431 | Y | Apparatus (101-112) for encoding speech uses an improved code excited linear predictive (CELP) encoder (102, 103, 104, 106, 107) using a recursive computational unit. In response to a target excitation vector that models a present frame of speech, the computational unit utilizes a finite impulse response linear predictive coding (LPC) filter and an overlapping codebook to determine a candidate excitation vector from the codebook that matches the target excitation vector after searching the entire codebook for the best match. For each candidate excitation vector accessed from the overlapping codebook, only one sample of the accessed vector and one sample of the previously accessed vector must have arithmetic operations performed on them to evaluate the new vector rather than all of the samples as is normal for CELP methods. For increased performance, a stochastically excited linear predictive (SELP) encoder (105, 107) is used in series with adaptive CELP encoder. The SELP encoder is responsive to the difference between the target excitation vector and the best matched candidate excitation vector to search its own overlapping codebook in a recursive manner to determine a candidate excitation vector that provides the best match. Both of the best matched candidate vectors are used in speech synthesis. | en | Code excited linear predictive vocoder and method of operation. | 8688_US | 2728050_US,2646377_US,2646376_US | G10L 19/12,G10L 25/06,G10L2019/0002,G10L2019/0004,G10L2019/0013 | [
"G10L 19/04",
"G10L 19/12",
"G10L 19/00"
] | 17,693 |
378,676,359 | 2009-05-28 | 41,380,582 | N | Disclosed herein are tricyclic fused dihydropyrazolo[3,4,5-de]isoquinolinone and dihydropyrrolo[4,3,2-de]isoquinolinone compounds of formula I, wherein the substituents are as defined with in the specification processes for their preparation, compositions comprising said compounds and uses thereof. Said compounds are useful as serotonin typ-3 (5-HT3) receptor modulators and as such are useful in the treatment of anxiety disorders, social phobias, vertigo, obsessive-compulsive disorders, panic disorders, post-traumatic stress disorders, bulimia nervosa, drug withdrawal effects, alcohol dependency, pain, sleep related central apneas, chronic fatigue syndrome, Parkinson's Disease Psychosis, schizophrenia, cognitive decline and defects in schizophrenia, Parkinson's Disease, Huntington's Chorea, pre-senile dementias, Alzheimer's Disease, obesity, substance abuse disorders, dementia associated with neurodegenerative disease, cognition deficits, fibromyalgia syndrome, rosacea, cardiovascular disorders mediated by serotonin, chemotherapy induced nausea and vomiting (CINV), post-operative induced nausea and vomiting (PONV), radiation induced nausea and vomiting (RINV), gastrointestinal disorders, gastroesophageal reflux disease (GERD), Burkitt's lymphoma, bronchial asthma, pruritus, migraine, and epilepsy. | en | 5-HT3 RECEPTOR MODULATORS, METHODS OF MAKING, AND USE THEREOF | 13670675_ | 44165101_ | A61K 31/437,A61K 31/439,A61K 31/55,A61P 1/00,A61P 1/08,A61P 3/04,A61P 9/00,A61P 11/06,A61P 17/04,A61P 25/00,A61P 25/04,A61P 25/06,A61P 25/08,A61P 25/14,A61P 25/16,A61P 25/18,A61P 25/22,A61P 25/28,A61P 25/30,A61P 25/32,A61P 35/00,A61P 43/00,C07D 451/14,C07D 453/02,C07D 471/06,C07D 487/04,C07D 487/06,C07D 519/00 | [
"C07D 471/06",
"A61K 31/55",
"C07D 487/06",
"A61K 31/437",
"A61K 31/4162"
] | 77,136 |
15,645,609 | 2003-11-12 | 32,313,060 | Y | The invention relates to a voice processing system comprising at least one extractor as a device for lexical allocation of the language which is to be processed and at least one connector as a device for combining the lexically allocated language with an utterance. According to the invention, the extractor allocates concepts to the language to be processed (concepts can, for example, be objects, events, characteristics (categories of concepts) wherein a variable is allocated associated terms, features and/or more complex structures such that the corresponding term is filled with life by means of said structures such as concepts, features and/or more complex structures and can thus be understood. Unlike the usual statistical methods for voice processing, the system described here does not analyze the probability of tonal sequences (spoken language) or character strains (written language) occurring. Said system extracts and processes the understandable meaning of voice messages. All core procedures and knowledge bases of the system operate independently of language. In order to process the input of a specific national language, solely the respective language-specific lexicon is added. The invention also enables the meaning of syntactically/grammatically false voice instructions to be reconstructed. | en | VOICE PROCESSING SYSTEM, METHOD FOR ALLOCATING ACOUSTIC AND/OR WRITTEN CHARACTER STRINGS TO WORDS OR LEXICAL ENTRIES | 13138230_DE | 13138230_DE | G06F 40/35,G10L 15/1815,G10L 15/1822,G10L 15/22,H04M 3/493,H04M 3/4936,H04M 3/527,H04M2201/40,H04M2203/2061 | [
"G10L 15/22",
"H04M 3/527",
"H04M 3/493"
] | 12,137 |
492,425,828 | 2017-09-01 | 61,562,112 | Y | PROBLEM TO BE SOLVED: To estimate the actuating force of a locomotorium relating to body motion in daily life of a subject.SOLUTION: A locomotorium actuating force estimation system includes: a body parameter information input part 10 for receiving body parameter information of a body region of a subject; a body parameter information allocation part 20 for allocating the body parameter information of the body region of the subject to a corresponding body region of a virtual human body model; a moment of inertia calculation part 30 for calculating the moment of inertia of the body region of the virtual human body model; a body motion measurement part 40 for measuring body motion in daily life of the subject; a locomotorium actuating force calculation part 50 for calculating joint torque of a body region relating to body motion in a daily life of the virtual human body model on the basis of a calculation result of the moment of inertia of the body region of the virtual human body model, and a measurement result of the body motion in the daily life of the subject; and a locomotorium actuating force output part 60 for estimating the joint torque as actuating force of the locomotorium relating to body motion in the daily life of the subject and outputting the actuating force.SELECTED DRAWING: Figure 1 | en | LOCOMOTORIUM ACTUATING FORCE ESTIMATION SYSTEM | 60868696_,65292268_ | 59640742_,59734142_,59653968_,60522713_,58924251_ | A61B 5/22 | [
"A61B 5/22",
"A61B 5/11"
] | 116,506 |
420,202,354 | 2014-03-18 | 51,165,979 | Y | Specification covers new algorithms, methods, and systems for artificial intelligence, soft computing, and deep learning/recognition, e.g., image recognition (e.g., for action, gesture, emotion, expression, biometrics, fingerprint, facial, OCR (text), background, relationship, position, pattern, and object), Big Data analytics, machine learning, training schemes, crowd-sourcing (experts), feature space, clustering, classification, SVM, similarity measures, modified Boltzmann Machines, optimization, search engine, ranking, question-answering system, soft (fuzzy or unsharp) boundaries/impreciseness/ambiguities/fuzziness in language, Natural Language Processing (NLP), Computing-with-Words (CWW), parsing, machine translation, sound and speech recognition, video search and analysis (e.g. tracking), image annotation, geometrical abstraction, image correction, semantic web, context analysis, data reliability, Z-number, Z-Web, Z-factor, rules engine, control system, autonomous vehicle, self-diagnosis and self-repair robots, system diagnosis, medical diagnosis, biomedicine, data mining, event prediction, financial forecasting, economics, risk assessment, e-mail management, database management, indexing and join operation, memory management, data compression, event-centric social network, Image Ad Network. | en | Method and system for feature detection | 12246909_US,11700314_US,5748091_US,45219659_US | 5748091_US,11700314_US,12246909_US | A61B 5/163,A61B 5/165,A61B 5/4803,A61B 5/7221,A61B 5/7267,G06K 9/627,G06N 7/005,G16H 50/70 | [
"G06K 9/62",
"G06N 7/00"
] | 87,544 |
440,697,254 | 2015-02-02 | 47,114,625 | Y | Specification covers new algorithms, methods, and systems for artificial intelligence, soft computing, and deep learning/recognition, e.g., image recognition (e.g., for action, gesture, emotion, expression, biometrics, fingerprint, facial, OCR (text), background, relationship, position, pattern, and object), large number of images (“Big Data”) analytics, machine learning, training schemes, crowd-sourcing (using experts or humans), feature space, clustering, classification, similarity measures, optimization, search engine, ranking, question-answering system, soft (fuzzy or unsharp) boundaries/impreciseness/ambiguities/fuzziness in language, Natural Language Processing (NLP), Computing-with-Words (CWW), parsing, machine translation, sound and speech recognition, video search and analysis (e.g. tracking), image annotation, geometrical abstraction, image correction, semantic web, context analysis, data reliability (e.g., using Z-number (e.g., “About 45 minutes; Very sure”)), rules engine, control system, autonomous vehicle, self-diagnosis and self-repair robots, system diagnosis, medical diagnosis, biomedicine, data mining, event prediction, financial forecasting, economics, risk assessment, e-mail management, database management, indexing and join operation, memory management, and data compression. | en | Method and system for analyzing or resolving ambiguities in image recognition for gesture, emotion, or expression recognition for a human | 12246909_US,45219659_US | 12246909_US | G05B 13/0275,G06F 16/90344,G06F 40/253,G06F 40/40,G06K 9/6267,G06N 5/02,G06N 5/046,G06N 5/047,G06N 7/005,G06N 7/02,G06N 7/023,G06N 20/00,G06T 7/12,G06V 10/42,G06V 10/443 | [
"G06F 17/27"
] | 93,707 |
38,318,897 | 2003-12-17 | 32,592,948 | Y | <P>PROBLEM TO BE SOLVED: To enable a viewer to easily discriminate a target and pay his/her attention to the target by automatically selecting a parameter of visual characteristic of the target to the background. <P>SOLUTION: The average characteristic of an area 100 is calculated, and the covariance of the area 100 is determined. This covariance is inverted to generate an inverse covariance, a target characteristic is evaluated to determine the difference between the average characteristic and the target characteristic. The difference is transposed to generate a transposed matrix of difference, and conspicuity is determined from the product obtained by multiplying the transposed matrix of difference by the inverse covariance multiplied by the difference. This conspicuity is compared with one acceptance reference. When the conspicuity does not satisfy the acceptance reference, the target characteristic is adjusted, and the difference determination step, the difference transposition step, the conspicuity determination step and the conspicuity comparison step are repeated. When the notability satisfies the acceptance reference, the target characteristic is outputted. According to this, an object 130 is visually emphasized from other attention diffusing matters 120. <P>COPYRIGHT: (C)2004,JPO&NCIPI | en | METHOD FOR AUTOMATICALLY CHOOSING VISUAL CHARACTERISTIC TO HIGHLIGHT TARGET AGAINST BACKGROUND | 12657575_ | 32312815_ | G06T 7/00,G06T 7/11,G06T 7/90,G06T 11/60,G06T2207/30176,G06V 10/22,G06V 10/56,G06V 30/10,G06V 30/1444,G06V 30/18105 | [
"G06T 7/40",
"G06T 5/00",
"G06K 9/00",
"G06T 7/00",
"G06F 3/00",
"G06F 17/00",
"G06F 3/14",
"G06F 17/30",
"G06T 11/60"
] | 25,333 |
545,362,489 | 2020-11-05 | 74,351,461 | N | The invention discloses a colonoscope image sequence intestinal polyp area tracking initialization decision-making system which is characterized by comprising an intestinal polyp area information acquisition module, an intestinal polyp area judgment module, a target association area judgment module, a target intestinal polyp area neutrosophy modeling module and a target intestinal polyp area tracking initialization decision-making module, neutral set modeling is carried out on a target intestinal polyp area, the cross entropy of the neuronal measurement of the target intestinal polyp area andthe ideal neuronal measurement is calculated, tracking initial judgment is carried out on the target intestinal polyp area according to the principle that the smaller the cross entropy is, the more likely the target intestinal polyp area is the real intestinal polyp area, and the judgment result indicates that tracking needs to be carried out, and thus adding into the intestinal polyp region set which is being tracked for processing. According to the system, the technical problem that video target tracking is started by mistake due to polyp region detection and segmentation uncertainty when avideo target tracking and segmentation algorithm is introduced into intestinal polyp sequence detection can be solved. | en | Colonoscope image sequence intestinal polyp area tracking initialization decision system | 60018862_ | 59927718_,69386887_,58571004_,75303606_,66116078_,61104666_,61188540_,60521688_ | G06T 7/215,G06T 7/246,G06T2207/10016,G06T2207/10068,G06T2207/30032 | [
"G06T 7/246",
"G06T 7/215"
] | 149,778 |
530,102,684 | 2019-12-27 | 70,318,707 | Y | The invention discloses a network rumor detection method based on a multi-modal relationship. The network rumor detection method comprises the steps of obtaining a to-be-detected image and a related text published on a network platform; extracting visual feature vectors containing different types of objects in the image through a pre-training factor R-CNN model; after the text is preprocessed, performing semantic vector extraction through a GRU; capturing importance degrees of the visual feature vector and the semantic vector through an attention mechanism, and realizing cross-modal association between the image and the text so as to update the visual feature vector and the semantic vector; moreover, for the visual feature vector and the semantic vector, the relationship of internal dynamic information is modeled through an attention mechanism, so that the visual feature vector and the semantic vector are updated; and connecting the visual feature vector and the semantic vector obtained by updating the two parts together, and obtaining the probability that the information to be detected is the rumor and the real category through a binary classifier. The method can automatically judge whether the to-be-detected information belongs to the network rumors, and has relatively high detection accuracy. | en | Network rumor detection method based on multi-modal relationship | 63876211_,72590970_ | 75367170_,65360503_,75034358_,60887996_ | G06K 9/6284,G06K 9/629,G06N 3/0454,G06N 3/08,G06Q 50/01,G06V 10/464 | [
"G06N 3/08",
"G06K 9/46",
"G06K 9/62",
"G06N 3/04",
"G06Q 50/00",
"G06F 40/30"
] | 139,496 |
47,179,806 | 1992-01-21 | 24,613,496 | N | A peptide of the formula: NH2- alpha - beta gamma delta epsilon eta THETA kappa lambda mu pi - rho -CO2H, wherein alpha is an amino acid having an R value of 4.5 +/- 1 unit; beta is an amino acid having an R value of 3.9 +/- 1 unit; gamma is an amino acid having an R value of -3.8 plus 1 unit; delta is an amino acid having an R value of -1.8 +/- 1 unit; epsilon is an amino acid having an R value of -4.2 +/- 1 unit; eta is an amino acid having an R value of -1.3 +/- 1 unit; THETA is an amino acid having an R value of 0.9 +/- 1 unit; kappa is an amino acid having an R value of 0.8 +/- 1 unit; lambda is an amino acid having an R value of 0.8 +/-1 unit, mu is an amino acid having an R value of -1.3 +/- 1 unit; pi is an amino acid having an R value of 3.9 +/- 1 unit; and rho is from 1 to about 18 amino acid residues having an average R value of at least about 0 +/- 1 unit; and wherein both alpha and the last residue of rho comprise hydrophilic end groups; or alpha is from 1 to about 18 amino acid residues having an average R value of at least about 0 +/- 1 unit; beta , gamma , delta , epsilon , eta , THETA , kappa , lambda , mu and pi are as defined above and rho is an amino acid having an R value of -1.3 +/- 1 unit; and wherein both the first residue of alpha and rho comprise hydrophilic end groups. | en | RECEPTOR BLOCKING PEPTIDES OF FIBROBLAST GROWTH FACTOR RECEPTOR | 37602223_US | 37625281_US | A61K 38/00,C07K 14/503,C07K 16/22,Y02A 50/30 | [
"C07K 14/50",
"A61K 38/00",
"C07K 16/22"
] | 33,609 |
57,581,268 | 2006-09-27 | 40,716,976 | Y | A system for identifying keywords in search results includes a plurality of neurons connected as a neural network, the neurons being associated with words and documents. An activity regulator regulates a minimum and/or maximum number of neurons of the neural network that are excited at any given time. Means for displaying the neurons to a user and identifying the neurons that correspond to keywords can be provided. Means for changing positions of the neurons relative to each other based on input from the user can be provided. The change in position of one neuron changes the keywords. The input from the user can be dragging a neuron on a display device, or changing a relevance of two neurons relative to each other. The neural network can be excited by a query that comprises words selected by a user. The neural network can be a bidirectional network. The user can inhibit neurons of the neural network by indicating irrelevance of a document. The neural network can be excited by a query that identifies a document considered relevant by a user. The neural network can also include neurons that represent groups of words. The neural network can be excited by a query that identifies a plurality of documents considered relevant by a user, and can output keywords associated with the plurality of documents. | en | Use of neural networks for keyword generation | 5228426_US | 5228427_RU | G06F 16/338,G06F 16/345,G06N 3/10,Y10S 707/99936 | [
"G06E 1/00"
] | 56,378 |
339,768,039 | 2009-05-26 | 40,175,031 | Y | A system for identifying keywords in search results includes a plurality of neurons connected as a neural network, the neurons being associated with words and documents. An activity regulator regulates a minimum and/or maximum number of neurons of the neural network that are excited at any given time. Means for displaying the neurons to a user and identifying the neurons that correspond to keywords can be provided. Means for changing positions of the neurons relative to each other based on input from the user can be provided. The change in position of one neuron changes the keywords. The input from the user can be dragging a neuron on a display device, or changing a relevance of two neurons relative to each other. The neural network can be excited by a query that comprises words selected by a user. The neural network can be a bidirectional network. The user can inhibit neurons of the neural network by indicating irrelevance of a document. The neural network can be excited by a query that identifies a document considered relevant by a user. The neural network can also include neurons that represent groups of words. The neural network can be excited by a query that identifies a plurality of documents considered relevant by a user, and can output keywords associated with the plurality of documents. | en | Use of neural networks for keyword generation | 11793241_US,5228427_RU | 5228427_RU | G06F 16/9535,Y10S 707/99933,Y10S 707/99935,Y10S 707/99936 | [
"G06E 1/00"
] | 71,116 |
48,634,350 | 1980-11-06 | 26,899,439 | Y | A multi-channel instrumentation system is provided wherein signals from selected electrode pairs are sequentially connected to the isolated input of a common signal amplifying path. The electrode pair selecting means is included in a portable head box which may be placed adjacent to the patient to be tested. Included in the head box are facilities for testing the impedance of any selected one of the patient electrodes and providing a visual indication of the magnitude of such impedance. The head box also includes facilities for selecting a group of patient electrodes which are connected together and used as an average or reference potential for one input of the common signal path. Facilities are also provided for calibrating the common signal path by applying a d.c. calibration signal to the input thereof by means isolated from system ground. An improved circuit arrangement is provided for limiting the current which can be drawn from the patient ground electrode to a small value to provide for the safety of the patient in the event of malfunctioning of the electrode selecting means. Facilities are provided in the common signal path for removing the d.c. components of the electrode pair signal voltages, and for selectively varying the gain of the common signal path on an individual channel basis. | en | Electroencephalograph | 5264065_US | 6838896_US,6838897_US | A61B 5/276,A61B 5/30,A61B 5/369 | [
"A61B 5/276"
] | 37,519 |
520,601,544 | 2018-08-02 | 63,659,944 | N | A diabetic retina image classification method and system based on deep learning, belonging to the technical field of artificial intelligence. The method comprises: acquiring an eye fundus image; importing the same eye fundus image into a micro-hemangioma lesion recognition model, a bleeding lesion recognition model and an exudative lesion recognition model for recognition; and extracting lesion feature information according to a recognition result, and then using a trained SVM classifier to classify the extracted lesion feature information to obtain a classification result, wherein the micro-hemangioma lesion recognition model is obtained by means of extracting a micro-hemangioma lesion candidate region from the eye fundus image and then inputting same into a CNN model for training; and the bleeding lesion recognition model and the exudative lesion recognition model are respectively obtained by means of marking a bleeding lesion region and an exudative lesion region in the eye fundus image and then inputting same into an FCN model for training. The requirement for a network model description capability is reduced, so that a model is easy to train, and lesion focus regions can be located and sketched with regard to different lesions, thereby assisting a doctor in carrying out clinical screening. | en | DIABETIC RETINA IMAGE CLASSIFICATION METHOD AND SYSTEM BASED ON DEEP LEARNING | 73997019_CN | 74100440_CN,73491893_CN,73481986_CN,74121388_CN,74136942_CN | A61B 5/14532,A61B 5/7267,G06K 9/6256,G06K 9/6269,G06K 9/6273,G06K 9/6293,G06K 9/6296,G06N 3/0454,G06N 3/08,G06N 20/10,G06T 5/002,G06T 5/20,G06T 7/0012,G06T 7/0014,G06T2207/20081,G06T2207/20084,G06T2207/20182,G06T2207/30041,G06T2207/30096,G06V 10/454,G06V 10/811,G06V 10/82,G06V 40/197,G06V2201/03,G16H 30/40,G16H 50/20,G16H 50/70 | [
"G06K 9/62"
] | 132,883 |
472,199,772 | 2016-04-14 | 53,911,981 | N | An eye-controlled apparatus, an eye-controlled method and an eye-controlled system. The eye-controlled apparatus comprises: a fixation point acquisition unit (1), a human eye action detection unit (2) and a control signal generation unit (3). The fixation point acquisition unit (1) is configured to acquire position information about a fixation point of human eyes on a device to be operated (5). The human eye action detection unit is configured to detect whether the human eyes take a pre-set action, and control, when detecting the pre-set action of the human eyes, the fixation point acquisition unit (1) to send current position information about the fixation point of the human eyes on the device to be operated (5) to the control signal generation unit (3). The control signal generation unit (3) is configured to generate, according to a pre-stored position control correspondence table corresponding to the device to be operated (5), a control signal corresponding to the current position information, and send the control signal to the device to be operated (5) so as to control the device to be operated (5) to execute a corresponding operation. The eye-controlled apparatus, the eye-controlled method and the eye-controlled system can effectively control the device to be operated (5) using human eyes. | en | EYE-CONTROLLED APPARATUS, EYE-CONTROLLED METHOD AND EYE-CONTROLLED SYSTEM | 63835158_CN | 64124306_CN,67317491_CN | G06F 3/013,G06F 3/14,G09G2320/0261,G09G2354/00 | [
"G06F 3/01"
] | 105,165 |
551,557,901 | 2019-12-26 | 70,298,690 | N | Provided is a missing target search method based on a reinforcement learning algorithm. The method comprises the following steps: step S1, data pre-processing: comprising the discretization of time and space, the discretization of a target movement trajectory, and the standardization of the search difficulty in different times and spaces; step S2, construction of a reinforcement learning training environment: constructing a reinforcement learning training environment, wherein training environment information includes expected search costs, at different search moments, of objects starting from different positions at different times and the probability of same transferring to different positions at different search moments; step S3, offline training of a time-space search model: defining states and behaviors, and performing adaptive optimization on the model; and step S4, online time-space search decision making: iteratively determining, on the basis of the trained time-space search model in step S3, a time-space search sequence by using a greedy policy, and executing a time-space search. According to the present invention, the search cost for finding the position of a target at a target moment is effectively reduced, and a target search task is completed under the constraints of the search cost. | en | MISSING TARGET SEARCH METHOD BASED ON REINFORCEMENT LEARNING ALGORITHM | 68063478_CN | 80764128_CN,64497098_CN,79115484_CN,81247619_CN | G01S 19/19,G06F 16/9537,G06N 20/00 | [
"G01S 19/19"
] | 153,919 |
563,094,457 | 2021-09-14 | 78,974,238 | N | The invention discloses a progressive viewpoint extraction method and system for hot topics. The method comprises the steps: providing priori knowledge; constructing a seed event structure diagram based on priori knowledge, wherein the diagram comprises viewpoint information nodes and edges representing the relationship between viewpoint elements; carrying out training and prediction of a viewpoint extraction model through combining the event structure diagram and data of the current stage, and giving a prediction result of the data of the current stage after the training is finished; removing viewpoints already existing in the event structure diagram, and submitting new viewpoints to experts for confirmation; screening confirmation results returned by the experts, removing unqualified viewpoints, and adding qualified viewpoints into the event structure diagram; and returning to the viewpoint extraction step again, and repeating the steps until the viewpoint extraction model achieves convergence. According to the method, historical information is used for new text viewpoint extraction under the same topic, the influence of an unbalanced hot topic data set on the neural model can be effectively relieved, and high-quality viewpoint information can be obtained under a small amount of labeled data. | en | Progressive viewpoint extraction method and system for hot topics | 59023411_ | 84463586_,65905835_,61304521_,64943760_,65153217_,58359993_ | G06F 16/335,G06F 16/36,G06F 40/205,G06F 40/284,G06N 3/0445,G06N 3/08 | [
"G06N 3/04",
"G06N 3/08",
"G06F 40/205",
"G06F 16/36",
"G06F 16/335",
"G06F 40/284"
] | 161,408 |
561,842,023 | 2020-07-28 | 72,647,534 | N | The present application relates to the field of artificial intelligence. Disclosed are an intention identification method, apparatus and device (300) based on an attention mechanism, and a storage medium, which are used for improving the accuracy of multi-modal intention identification for information needing to be inferred. The method comprises: acquiring text intention features of text information and image intention features of image information (101); respectively calculating text attention values and image attention values (102); respectively obtaining a text weighted feature matrix and an image weighted feature matrix according to the text attention values and the text intention features, and the image attention values and the image intention features (103); generating attention fusion intention features and gating mechanism fusion intention features according to the text intention features, the image intention features, the text weighted feature matrix, the image weighted feature matrix and a preset gating mechanism (104); combining the attention fusion intention features and the gating mechanism fusion intention features to obtain a target intention feature (105); and carrying out intention classification on the target intention feature to obtain a corresponding target intention (106). | en | INTENTION IDENTIFICATION METHOD, APPARATUS AND DEVICE BASED ON ATTENTION MECHANISM, AND STORAGE MEDIUM | 79007958_CN | 80722366_CN,63956070_CN | G06F 16/35,G06K 9/6267,G06K 9/629,G06N 3/0454,G06N 3/08 | [
"G06N 3/08",
"G06N 3/04",
"G06F 16/35",
"G06K 9/62"
] | 160,614 |
16,718,833 | 1988-11-25 | 17,857,614 | Y | For use in recognizing an input pattern consisting of feature vectors positioned at a first, ,,,, an i-th, ..., and an I-th pattern time instant along a pattern time axis, a connected word recognition system comprises first through N-th neural networks B(1) to B(N) which are assigned to reference words identified by a first,..., an n-th, ..., and an N-th word identifier and are arranged along a signal time axis divisible into a first, ..., a j-th, ..., and a J-th signal time instant. The n-th word identifier n corresponds to consecutive ones of the pattern time instants by a first function n(i). The time axes are related to each other by a second function j(i). Among various loci (n(i), j(i)) in a space defined by the word identifiers and the time axes, an optimum locus (n/< ANd >(i), j/< ANd >(i)) is determined to maximize a summation of output signals of the respective neural networks when the consecutive pattern time instants are varied between the first and the I-th pattern time instants for each word identifier. The input pattern is recognized as a concatenation of word identifiers used as optimum first functions n/< ANd >(i) in the optimum locus. Preferably, the optimum locus is determined by a dynamic programming algorithm. If possible, use of a finite-state automaton is more preferred. | en | Connected word recognition system including neural networks arranged along a signal time axis. | 11871_JP | 2684414_JP | G06K 9/6296,G06N 3/063,G10L 15/16 | [
"G10L 15/08",
"G10L 15/12",
"G06F 15/18",
"G06N 99/00",
"G10L 15/10",
"G06N 3/063",
"G06N 3/00",
"G10L 11/00",
"G10L 15/16",
"G10L 15/18"
] | 17,659 |
547,434,401 | 2019-11-14 | 69,427,340 | N | A neural network-based text encoding method, an apparatus, a device, and a storage medium, relating to the field of neural networks. The method comprises: an encoder converts training text into a text sequence (101), and increases, according to left and right adjacent entropies of a target word, weights of associated words associated with the target word (103); an encoding modifier monitors, according to the weights of the associated words, a target associated word having the weight higher than a preset weight, and monitors the target word associated with the target associated word (104); update an encoding quality determining condition according to a first hidden state and a second hidden state of a decoder (105); if the encoding quality of an encoding result meets the encoding quality determining condition, the decoder decodes a target language sequence (107-1); and if not, adjusting vector representations of source sentences, repeatedly executing the operations above until the encoding quality meets the encoding quality determining condition, and then decoding the target language sequence (107-2). According to the method, vector representations of a source language sequence are continuously improved to be expressed to a target terminal, so that the effect of a translation model is improved. | en | NEURAL NETWORK-BASED TEXT ENCODING METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM | 63942312_CN | 80675057_CN,63750138_CN,69638817_CN,81407733_CN | G06N 3/04 | [
"G06F 40/126"
] | 151,352 |
514,381,348 | 2018-02-27 | 62,122,467 | N | An emotion identification method, a device, and a storage medium. The method comprises the following steps: generating a large quantity of neutral questions, comparison questions, and relevant questions, and constructing a test question bank (S10); generating a test questionnaire according to the test question bank (S20); segmenting a video of a test subject answering the test questionnaire, and obtaining a video segment for each question answered by the test subject (S30); extracting an expression feature vector from each video segment, and using the same as the feature vector of the corresponding question (S40); calculating a center point of a feature vector of the neutral questions, a center point of a feature vector of the comparison questions, a first distance between the feature vector of each relevant question and the center point of the feature vector of the neutral questions, and a second distance between the feature vector of each relevant question and the center point of the feature vector of the comparison questions (S50); and if the first distance is greater than the second distance, determining that the test subject concealed genuine emotions, and if the first distance is less than the second distance, determining that the emotions expressed by the test subject are genuine (S60). | en | EMOTION IDENTIFICATION METHOD, DEVICE, AND A STORAGE MEDIUM | 71868230_CN | 71643896_CN,71870857_CN | G06K 9/6215,G06K 9/6223,G06K 9/623,G06K 9/6232,G06V 20/46,G06V 40/176 | [
"G06K 9/00"
] | 128,734 |
47,153,888 | 1991-02-12 | 23,902,523 | N | Novel methods of retroviral-mediated gene transfer for the in vivo incorporation and stable expression of eukaryotic or prokaryotic foreign genes in tissues of living animals is described. More specifically, methods of incorporating foreign genes into mitotically active cells are disclosed. The constitutive and stable expression of E.coli beta -galactosidase gene under the promoter control of the Moloney murine leukemia virus long terminal repeat is employed as a particularly preferred embodiment, by way of example, establishes the model upon which the incorporation of a foreign gene into ba mitotically-active living eukaryotic tissue is based. Use of the described methods in therauptic treatments for genetic deseases, such as those muscular degenerative diseases, is also presented. In muscle tissue, the described processes result in genetically-altered satellite cells which proliferate daughter myoblasts which preferentially fuse to form a single undamaged muscle fiber replacing damaged muscle tissue in a treated animal. The retroviral vector, by way of example, includes a dystrophin gene construct for use in treating muscular dystrophy. The present invention also comprises an experimental model utilizable in the study of the physiological regulation of skeletal muscle gene in intact animals. | en | SATELLITE CELL PROLIFERATION IN ADULT SKELETAL MUSCLE | 37545186_US | 37599627_US,37599625_US,37599626_US,37599628_US | A61K 48/00,C12N 5/0658,C12N 15/86,C12N2740/13043 | [
"C12N 15/867",
"C12N 5/077",
"A61K 48/00"
] | 33,441 |
504,438,647 | 2017-11-13 | 60,221,293 | N | Provided in the present invention are an OCTS-CAR dual targeting chimeric antigen receptor based on OCTS technology, an encoding gene, an OCTS-CAR-T recombinant expression vector and a construction method and the use thereof. The OCTS-CAR dual targeting chimeric antigen receptor comprises a CD8 leader membrane receptor signal peptide, a dual antigen binding region, a CD8 Hinge chimeric receptor hinge, a CD8 Transmembrane chimeric receptor transmembrane region, a CD28 chimeric receptor costimulatory factor, a CD134 chimeric receptor costimulatory factor and a TCR chimeric receptor T cell activation domain sequentially connected in series, wherein the dual antigen binding region comprises a heavy chain VH and a light chain VL of the two single chain antibodies connected in a certain manner, the hinge Inner-Linker within the antibody and the hinge Inter-Linker between the single chain antibodies, and the two single chain antibodies refer to the combination of any two of the BCMA single chain antibody, CD319 single chain antibody, CD38 single chain antibody, PDL1 single chain antibody and CD123 single chain antibody. In addition, also provided are the gene encoding the OCTS-CAR dual targeting chimeric antigen receptor, the recombinant expression vector and the construction method and use thereof. | en | OCTS-CAR DUAL TARGETING CHIMERIC ANTIGEN RECEPTOR, ENCODING GENE, RECOMBINANT EXPRESSION VECTOR AND CONSTRUCTION AND USE THEREOF | 67871182_CN | 69640621_CN,67884530_CN,69041222_CN,68080033_CN,68566732_CN | A61K 35/17,C07K 14/7051,C07K 16/2866,C07K 16/2896,C07K 16/30,C07K2317/622,C07K2319/00,C07K2319/02,C07K2319/03,C07K2319/33,C12N 5/0636,C12N 15/86,C12N2510/00,C12N2740/15043 | [
"C12N 15/62",
"A61K 35/17",
"C07K 19/00",
"C12N 5/10",
"C12N 15/867",
"A61P 35/00"
] | 123,404 |
45,664,370 | 2002-06-19 | 26,971,157 | N | The mdx mouse is a model of Duchenne muscular dystrophy. The present invention describes that mdx mice exhibited clinically relevant cardiac phenotypes. A non-invasive method of recording electrocardiograms (ECGs) was used to a study mdx mice (n=15) and control mice (n=15). The mdx mice had significant tachycardia, consistent with observations in patients with muscular dystrophy. Heart-rate was nearly 15% faster in mdx mice than control mice (P<0.01). ECGs revealed significant shortening of the rate-corrected QT interval duration (QTc) in mdx mice compared to control mice (P?0.05). PR interval duration were shorter at baseline in mdx compared to control mice (P?0.05). The muscarinic antagonist atropine significantly increased heart-rate and decreased PR interval duration in C57 mice. Paradoxically, atropine significantly decreased heart-rate and increased PR interval duration in all mdx mice. Pharmacological autonomic blockade and baroreflex sensitivity testing demonstrated an imbalance in autonomic nervous system modulation of heart-rate, with decreased parasympathetic activity and increased sympathetic activity in mdx mice. These electrocardiographic findings in dystrophin-deficient mice provide new bases for diagnosing, understanding, and treating patients with Duchenne muscular dystrophy. | en | METHOD OF EARLY DETECTION OF DUCHENNE MUSCULAR DYSTROPHY AND OTHER NEUROMUSCULAR DISEASE | 12591860_US,37088614_US | 12591860_US | A61B 5/4035,A61K 49/0004 | [
"A61K 49/00"
] | 30,898 |
545,725,440 | 2020-10-14 | 71,216,095 | N | The invention discloses a respiratory tract infection pathogen nucleic acid combined detection kit. The invention develops a primer and probe combination for detecting various respiratory tract infection pathogens such as novel corona virus, influenza A virus, influenza B virus, respiratory syncytial virus, human parainfluenza virus, adenovirus, mycoplasma pneumoniae and chlamydia pneumoniae by combining a multiple fluorescent quantitative PCR technology and a diversion hybridization gene chip technology. The nucleotide sequences of the primer and probe combination are shown in SEQ ID NO: 1-36in sequence. The respiratory tract infection pathogen nucleic acid combined detection kit is constructed. Synchronous joint detection of eight respiratory tract infection pathogens can be realized, the detection accuracy is good, the specificity is strong, the sensitivity is high, the repeatability is good, false negative and false positive are low, the detection time is short, the cost is low, apatient can be comprehensively detected, the pathogens can be accurately positioned, timely treatment is carried out or corresponding isolation measures are carried out, and the kit has important significance for effectively controlling respiratory tract infection to prevent related infectious infection outbreak. | en | Respiratory tract infection pathogen nucleic acid combined detection kit | 75414038_,66608515_,59800897_ | 64963648_,59409310_,65998935_,65625281_ | C12Q 1/6837,C12Q 1/689,C12Q 1/701,C12Q2600/16,C12Q2600/166,Y02A 50/30 | [
"C12Q 1/04",
"C12Q 1/6837",
"C12Q 1/689",
"C12Q 1/70",
"C12N 15/11"
] | 149,845 |
352,307,504 | 2011-05-24 | 45,004,621 | Y | Among other things, we describe a reality alternative to our physical reality, named the Expandaverse, that includes multiple digital realities that may be continuously created, broadcast, accessed, and used interactively. In what we call an Alternate Reality Teleportal Machine (ARTPM), some elements of the digital reality(ies) can be implemented using and providing functions that include: devices, architectures, processing, sensors, translation, speech recognition, remote controls, subsidiary devices usage, virtual Teleportals on alternate devices, presence, shared planetary life spaces, constructed digital realities, reality replacements, filtered views, data retrieval in constructed views, alternate realities machine(s), multiple identities, directories, controlled boundaries, life space metrics, boundaries switching, property protection, publishing/broadcasting, digital events, events location/joining, revenues, utility(ies), infrastructure, services, devices management, business systems, applications, consistent customizable user interface, active knowledge, optimizations, alerts, reporting, dashboards, switching to “best”, marketing and sales systems, improvement systems, user chosen goals, user management, governances, digital freedom from dictatorships, photography, and entertainment. | en | Reality alternate | 11322843_US | 11322843_US | G06Q 10/067,G06Q 10/10,G06Q 30/02,G06Q 30/0601,G06Q 40/12 | [
"G06Q 20/00",
"G06F 21/00",
"G06Q 30/06",
"H04N 7/18",
"G06Q 10/10",
"G06Q 10/00",
"G06F 21/24",
"G06Q 10/06",
"G06Q 30/02",
"H04N 5/225",
"G06Q 40/00",
"G10L 21/00",
"H04N 7/14",
"G09G 5/02",
"G06F 15/16",
"G09G 5/00",
"G06F 3/01"
] | 72,738 |
45,625,690 | 2001-07-28 | 24,520,319 | N | A novel psychophysical visual test based on the visual response of the eye to alternating chromatic complementary colors or achromatic grays of varying saturation, luminance and/or contrast is proposed for the early detection of glaucoma, and other diseases. In one embodiment, although the luminance level remains constant, the visual stimulus alternates between two complementary or counter phase colors, preferably against a gray background at about 40 time a sec, for example, between blue and yellow. When the colors are alternated in this manner, the visual stimulus appears white or gray to an observer, instead of either blue or yellow. When the colors are alternated in this manner, the visual stimulus appears white or gray to an observer, instead of either blue or yellow. As the saturation is reduced, however, the alternating colors appear grayer, and then eventually cannot be perceived. Persons suffering from glaucoma and other diseases, however, find it more difficult than normal people to distinguish the visual stimulus as the saturation and/or luminance is reduced. The patient views color monitor (130) at a predetermined distance for the stimulus to subtend a predetermined angle. An operator may be seated at or near computer (110) for controlling the test parameters via a keyboard (150). | en | VISUAL TEST | 15401775_US | 36973594_,37035759_ | A61B 3/005,A61B 3/024,A61B 3/06,A61B 3/066,A61B 5/16 | [
"A61B 3/024",
"A61B 3/06",
"A61B 5/16"
] | 30,249 |
375,832,914 | 2010-10-08 | 44,195,035 | N | The present invention discloses a system and method for a morphological solution to the macroscopic problem of n-entropy (i.e. loss of control/information) of the prevailing global anarchy by super-augmenting a persona to manifest a pan-environment super-cyborg for global governance. Through a Christocratic Necked Service Oriented Architecture (CNSOA) model, the method of said system, categorizes the world people into two spaces, Bridespace and Christocratic-space. Each member or citizen of Bridespace and consenting Christocratic space is incorporated with Bridal Wedding Garments, namely holy goods & services & Necktie imitating Personal-Extender that includes a data processing device connected to a global network. Each member's Persona and proximity Meatspace are augmented by recasting the metaphoric environment of the data processor (network-is-the-supercomputer) as a Necktie Personal-Extender/Environment-Integrator. The ultimate objective of this invention is to provide a viable regulatory system for global governance to bring justice, peace and wealth for rightful people. With the Necktie Personal-Extender/Environment-Integrator that extends man into both space and time (eternal life) to solve all his problems, it is asserted that a union with the divine is achievable for all of mankind. | en | Necktie Personal-Extender/Environment-Integrator and Method for Super-Augmenting a Persona to manifest a Pan-Environment Super-Cyborg for Global Governance | 12216708_IN,12216707_IN | 12216708_IN | G06Q 10/00 | [
"G05B 19/00"
] | 76,218 |
53,226,071 | 1996-11-19 | 6,440,625 | Y | A method and apparatus for processing a sequence of words in a speech signal for speech recognition. The method includes the steps of sampling, at recurrent instants, said speech signal for generating a series of test signals. Signal-by-signal matching and scoring is generated between the test signals and a series of reference signals, where each of the series of reference signals forms one of a plurality of vocabulary words arranged as a vocabulary tree. The vocabulary tree includes a root and a plurality of tree branches wherein any tree branch has a predetermined number of reference signals and is assigned to a speech element and any vocabulary word is assigned to a particular branch junction or branch end. Acoustic recombination determines both continuations of branches and the most probable partial hypotheses within a word because of the use of a vocabulary built up as a tree with branches having reference signals. At least one complete word for a particular test signal is determined, and, separately, for each completed word, there is: I) a word result formed including a word score and an aggregate score, said aggregate score derived from said word score and from a language model value assigned to a combination of said completed word and a uniform-length string of prior completed words. | en | Method and apparatus for recognizing spoken words in a speech signal by organizing the vocabulary in the form of a tree | 5229768_US | 6172045_DE,8625911_DE | G10L 15/08,G10L 15/187 | [
"G10L 15/08",
"G10L 15/187",
"G10L 15/12"
] | 45,589 |
540,118,113 | 2020-06-11 | 72,986,516 | N | The invention provides a scale-adaptive hyperspectral image classification method and system, and the method comprises the steps of obtaining a target hyperspectral image, carrying out the principal component analysis of the target hyperspectral image, and obtaining a principal component image; obtaining the category number of surface coverage in the target hyperspectral image, and setting the category number as an interval lower bound; calculating an interval upper bound according to the category number and the main component image, and forming a scale pool by using the interval upper bound and the interval lower bound; and calculating a scale discrimination index according to the scale pool to obtain a plurality of scale discrimination index values, performing superpixel segmentation onthe target hyperspectral image according to the maximum scale in the scale discrimination index values, and classifying the target hyperspectral image according to the spatial distribution informationafter superpixel segmentation. According to the invention, the scene self-adaptive scale selection process is realized, manual parameter adjustment is not needed, the scale suitable for scene characteristics can be selected, and the hyperspectral image classification efficiency and scene robustness are improved. | en | Scale-adaptive hyperspectral image classification method and scale-adaptive hyperspectral image classification system | 59070503_ | 60825321_,78763795_,59148168_ | G06K 9/6247,G06K 9/6268,G06T 3/4053,G06T 7/13,G06T 7/41,G06T2207/10036,G06V 10/44,G06V 20/13,G06V 20/194 | [
"G06T 7/41",
"G06K 9/62",
"G06T 7/13",
"G06K 9/46",
"G06K 9/00",
"G06T 3/40"
] | 146,316 |
51,283,613 | 1990-03-06 | 15,075,876 | Y | A neural circuit device modeled on vital cells includes a plurality of first signal lines to which signals to be computed are transferred, a plurality of amplifiers serving as the bodies of the vital cells, a plurality of second signal lines arranged to intersect with the first signal lines in correspondence to the respective amplifiers, and a plurality of coupling elements provided in crossings between the first and second signal lines. Each of the coupling elements is adapted to couple an associated first signal line with an associated second signal line in a specific degree of coupling, and includes a first capacitive element storing information of the degree of coupling in the form of charges, and a circuit which converts a signal potential of the associated first signal line along the coupling degree information stored in the first capacitive element to transfer the converted potential to the associated second signal line without feeding current between an operation power source or a ground potential and the associated second signal line. This conversion/transfer circuit includes a circuit which causes potential change on the second signal line through charge pumping function or charging function. The amplifiers as the neuron bodies include a capacitive coupling element at their inputs. | en | Semiconductor neural circuit device having capacitive coupling and operating method thereof | 5227267_JP | 5471703_JP,5292970_JP | G06N 3/063,G06N 3/0635 | [
"G06F 15/18",
"G06N 3/06",
"G06N 99/00",
"G06N 3/063"
] | 41,725 |
317,745,444 | 2007-11-05 | 39,364,454 | N | To provide a brain-image diagnosis supporting method or the like. The method is a statistical evaluation method excluding the subjective judgment of an examiner, and enables image diagnosis. The method can present stable judgment criteria with respect to data on brain images imaged by a predetermined method in order to discriminate difficult diseases to diagnose. The method is also effective with respect to relationships which can not be always explained with a simple linear relationship, for example, the relationship between data on brain images imaged by a predetermined method and a disease which is a variable. By applying a predetermined nonlinear multivariate analysis method to data on brain images of a plurality of examinees imaged by a predetermined method and by classifying the data, image diagnosis support using a computer performed with respect to the data on brain images is performed. For example, SOM method is applied as a predetermined nonlinear multivariate analysis method. Data on brain images of a plurality of examinees imaged by SPECT or the like are handled as input data vectors x, which are presented to neurons on a two-dimensional lattice array in the SOM method so as to perform image diagnosis support based on the two-dimensional SOM after a predetermined training length. | en | BRAIN-IMAGE DIAGNOSIS SUPPORTING METHOD, PROGRAM, AND RECORDING MEDIUM | 6181463_JP,7789578_JP,11454813_JP,11454815_JP,5401412_JP,11039934_JP,11454814_JP | 6181463_JP,11454814_JP,11454815_JP,11454813_JP | A61B 5/024,A61B 5/055,A61B 5/14542,A61B 5/4082,A61B 5/4088,A61B 5/7267,A61B 6/037,A61B 6/501,A61B 6/507,G06K 9/6234,G06K 9/6251,G06K 9/6269,G06T 7/0012,G06T2207/10104,G06T2207/10108,G06T2207/30016,G06V 10/7715,G16H 50/70 | [
"G06K 9/00"
] | 62,846 |
51,138,273 | 1999-12-22 | 8,235,791 | Y | The neural semiconductor chip first includes: a global register and control logic circuit block, a R/W memory block and a plurality of neurons fed by buses transporting data such as the input vector data, set-up parameters, etc., and signals such as the feed back and control signals. The R/W memory block, typically a RAM, is common to all neurons to avoid circuit duplication, increasing thereby the number of neurons integrated in the chip. The R/W memory stores the prototype components. Each neuron comprises a computation block, a register block, an evaluation block and a daisy chain block to chain the neurons. All these blocks (except the computation block) have a symmetric structure and are designed so that each neuron may operate in a dual manner, i.e. either as a single neuron (single mode) or as two independent neurons (dual mode). Each neuron generates local signals. The neural chip further includes an OR circuit which performs an OR function for all corresponding local signals to generate global signals that are merged in an on-chip common communication bus shared by all neurons of the chip. The R/W memory block, the neurons and the OR circuit form an artificial neural network having high flexibility due to this dual mode feature which allows to mix single and dual neurons in the ANN. | en | Neural chip architecture and neural networks incorporated therein | 5230241_US | 6122354_FR,8308151_FR,6122352_FR | G06K 9/6276,G06N 3/063 | [
"G06N 3/063",
"G06K 9/62"
] | 41,357 |
516,668,425 | 2018-11-07 | 67,258,170 | Y | According to one embodiment of the present invention, the present invention relates to a system for providing an experience of drug addiction and a method thereof. The system for providing an experience of drug addiction comprises: at least one user terminal configured to provide experience request data associated with drug addiction; and a server configured to provide user experience data associated with drugs, based on the experience request data, to the user terminal, wherein the server comprises: a memory having a drug addiction experience program recorded thereon; and a processor configured to execute the program. The processor is configured to, by execution of the program, execute a big data analysis with respect to a correlation between a drug and a user by collecting drug information including a type of a drug and an addiction period and user information including an age and physical features of the user, derive a drug addiction pattern through machine learning using results of the big data analysis, calculate drug addiction state information by matching the experience request data with the drug addiction pattern, and provide, to a user terminal, user experience data which expresses the drug addiction state information using at least one among images, texts, measurements, and graphs. | en | SYSTEM AND METHOD FOR PROVIDING EXPERIENCE OF DRUG ADDICTION | 68349801_KR | 72983955_ | G06Q 50/10,G06Q 50/26 | [
"G06Q 50/10",
"G06Q 50/26"
] | 129,912 |
575,095,945 | 2021-12-22 | 75,450,400 | N | A magnetoencephalography source localization method and device based on Tucker decomposition and a ripple time window, the device comprising: a magnetoencephalography sensor, which is used for acquiring a first magnetoencephalography signal of a user; a ripple detection unit, which is used for detecting, by means of a root mean square method, a ripple time window in the first magnetoencephalography signal as a time window for source localization, so as to obtain the first magnetoencephalography signal in the ripple time window as a second magnetoencephalography signal; a higher-order orthogonal iteration-based Tucker decomposition unit, which is used for performing Tucker decomposition on an original tensor of the second magnetoencephalography signal to calculate an estimated value of the original tensor; and a source localization unit, which is used for calculating a covariance matrix of the estimated value, and calculating, by means of an LCMV inverse problem solving method in a beamforming method, a source location corresponding to the second magnetoencephalography signal. The present invention eliminates the effect of noise signals, reduces calculation complexity, ensures consistency in each instance of calculation results, and improves the accuracy of epileptogenic region localization. | en | MAGNETOENCEPHALOGRAPHY SOURCE LOCALIZATION METHOD AND DEVICE BASED ON TUCKER DECOMPOSITION AND RIPPLE TIME WINDOW | 67300867_CN | 85710801_CN,84589682_CN | A61B 5/245,A61B 5/4094,A61B 5/7203,A61B 5/7225,A61B 5/7235 | [
"A61B 5/245"
] | 167,220 |
476,293,686 | 2015-07-13 | 55,078,478 | Y | Provided are a program and the like capable of analyzing the effects of promotion sites on transitions of psychological states of users on the basis of comment text transmitted from the users. This program recognizes psychological state transitions brought about by site addresses by causing a computer to function as: a psychological state determination means for outputting psychological states indicated by psychological keywords in comment text per poster; a score storage means for storing scores on the respective psychological states per promotional site address; and a score update means for instructing the score storage means to increase the score of a psychological state indicated by reference comment text containing a site address when a change in psychological state is found between the reference comment text and one or more previous occurrences of comment text generated in the past and/or to increase the score of the psychological state indicated by one or more later occurrences of comment text generated after the reference comment text when a change in psychological state is found between the reference comment text and the one or more later occurrences of comment text. The program enables the identification of the psychological state transition caused by the promotional site address. | en | Program, device, and method for analyzing effect of promotion site on transition of psychological state of user | 60563565_ | 73894609_,63119105_,74018322_ | G06F 16/00,G06Q 50/10 | [
"G06Q 50/00",
"G06F 16/9536"
] | 107,439 |
536,232,224 | 2018-12-21 | 68,836,575 | N | The invention belongs to the means of multimedia message memorability measurement; it consists in: using the '10-20 system' and reference electrodes installed at the user's earlobes or mastoids, electrodes will be installed at the points C3, C4, T3, T4, P3 and F4 (identified according to the 10-20% international electrode arrangement scheme) on the user's head; the user's brain activity index will be registered during multimedia message presentation to the user; based on the registration results, values will be determined for: leads entropy for the C3 electrode (E1); leads' first coherence (K1), that is equal to leads' coherence for T3 and T4 electrodes within the range 30-45 Hz; leads' second coherence (K2), that is equal to leads' coherence for C4 and T4 electrodes within the range 4-8 Hz; leads' third coherence (K3), that is equal to leads' coherence for P3 and F4 electrodes within the range 8-13 Hz; the memorability weighting factor will be specified for E1 (KE1), and first, second and third memorability weighting factors will be specified for K1, K2 and K3 (KR1, KR2 and KR3, respectively); and the multimedia message's memorability (Rec) will be defined via calculation of the sum of the products KE1*E1, KR1*K1, KR2*K2 and KR3*K3 for E1, K1, K2 and K3, calculated for the multimedia message. | en | MEMORABILITY MEASUREMENT METHOD FOR MULTIMEDIA MESSAGES | 77476037_RU | 76914652_RU,76456162_RU,76456758_RU,77144670_RU | A61B 5/0077,A61B 5/1103,A61B 5/16,A61B 5/163,A61B 5/25,A61B 5/256,A61B 5/271,A61B 5/291,A61B 5/372,A61B 5/378,A61B 5/4064,A61B 5/6803,A61B 5/6816,A61B 5/682,A61B 5/7264,A61B2503/12 | [
"A61B 5/0484",
"G06F 3/01"
] | 143,559 |
562,553,987 | 2021-06-08 | 78,835,755 | N | The present invention relates to a method of diagnosing and further determining the course of the disease in an individual with adult Still's disease. The method includes the following steps: (a) providing a blood sample and peripheral blood mononuclear cells; (b) detecting the expression level of the protein biomarker IL-18 in the blood sample; (c) detecting the expression of long-chain non-coding RNA biomarkers MIAT or THRIL in the peripheral blood mononuclear cells; (d) calculating the A-score by using the predicted function obtained by multiple regression analysis with the expression levels of IL-18, MIAT and THRIL detected in steps (b) and (c); and (e) comparing the A-score calculated in step (d) with a critical value, obtained by the receiver operating characteristic curve method, corresponding to the area under the ROC curve reaching a maximum; an A-score above the critical value indicates that the individual suffers from adult Still's disease. The sequence of step (b) and step (c) can be reversed, which does not affect the judgment of the embodiment of the present invention and the final result. | en | Methods of diagnosing and typing adult Still's disease | 83751085_ | 84061432_,83756703_ | C12Q 1/6883,C12Q2600/158,C12Q2600/178,G01N 33/6869,G01N2800/102,G01N2800/202,G16H 10/40,G16H 50/30 | [
"C12Q 1/6883",
"G16H 50/30",
"G01N 33/68",
"G16H 10/40"
] | 161,137 |
554,064,878 | 2020-09-30 | 71,868,702 | N | The present application relates to the field of natural language processing in artificial intelligence. Disclosed is a knowledge graph question-answer method based on deep learning technology. The method comprises: receiving a question statement of a user; performing entity recognition on the question statement by using an LSTM+CRF-based sequence labeling model to obtain entity information from the question statement; using an attribute recognition model to perform attribute recognition on the question statement to obtain attribute information from the question statement; performing attribute expansion and attribute standardization on the attribute information to obtain a corresponding standard attribute in a knowledge graph; and generating a structured query for the knowledge graph according to the entity information and the standard attribute, querying the knowledge graph for an answer, and returning the found answer to the user. Models and knowledge graph information can be stored in a blockchain. According to the present application, semantic information of an entity layer, a phrase layer and a question layer is sufficiently integrated into models by means of multiple Bi-LSTM layers and an Attention operation, thereby improving the effect of the models and the question-answer accuracy. | en | KNOWLEDGE GRAPH QUESTION-ANSWER METHOD AND APPARATUS BASED ON DEEP LEARNING TECHNOLOGY, AND DEVICE | 63743437_CN | 76504095_CN | G06F 16/3329,G06F 16/367,G06F 40/211,G06F 40/295,G06F 40/35,G06N 3/0454,G06N 3/08 | [
"G06F 16/332"
] | 155,464 |
510,208,690 | 2018-12-29 | 65,872,223 | N | The invention discloses a crowd counting method and system based on a spatial perception attention refinement framework. The method comprises the following steps: generating feature maps, initial crowd density maps and density grade information of all training images by utilizing a convolutional neural network; inputting the information into an iterative spatial perception refining module, carrying out dynamic localization on the crowd density map in a spatial perception mode by utilizing a spatial regression mapping network, generating a corrected local density map, and carrying out a dynamic localization updating strategy for next iteration by combining a long-short-term memory module; encoding the density level information to generate a density level graph, integrating the local density graph with the density level graph to serve as local and global information of each round of iteration, and inputting the local and global information into a local refining network; the local refinement network adjusts the density distribution of the input area, performs reverse space regression mapping on the input area, and updates and generates a crowd density map in a residual error learning mode; and iteratively carrying out the training process for multiple times to obtain a refined crowd density map. | en | A crowd counting method and system based on a spatial perception attention refinement framework | 58743861_ | 60810304_,64928555_,61664680_,66900462_ | G06K 9/6256,G06K 9/6268,G06V 20/53 | [
"G06K 9/62",
"G06K 9/00"
] | 126,489 |
573,271,398 | 2021-09-22 | 74,398,615 | N | Disclosed are an adaptive fully decoupled autopilot controlled using a radial basis function (RBF) neural network and a control method thereof. A system comprises: a required overload receiving module for receiving required overload information transmitted by a guidance system in real time; an aircraft parameter measurement module for acquiring flight parameters of an aircraft in real time; and a decoupling control module for acquiring an available steering instruction. A steering instruction for decoupling control is obtained according to the required overload information and the flight parameters of the aircraft. Next, a transit steering instruction is obtained according to the steering instruction for decoupling control and the flight parameters of the aircraft. Finally, an available steering instruction is obtained according to the transit steering instruction and the flight parameters of the aircraft, thereby controlling steering operations of a steering engine. When the decoupling control module is used to perform decoupling calculation, a state feedback matrix and a feed-forward compensation matrix associated therewith are obtained in real time by means of a radial basis function neural network model and a current state of the aircraft, thereby further improving control performance. | en | AUTOPILOT EMPLOYING RADIAL BASIS FUNCTION NEURAL NETWORK AND DECOUPLING CONTROL METHOD THEREOF | 63565149_CN | 76872654_CN,64523553_CN,63685449_CN,67877015_CN,85253134_CN,76996475_CN,76755068_CN | G05D 1/0808,G05D 1/101,Y02T 90/00 | [
"G05D 1/10",
"G05D 1/08",
"G05B 13/04"
] | 166,293 |
573,921,308 | 2021-11-19 | 74,995,778 | N | Disclosed in embodiments of the present application are a model structure, a model training method, and an image enhancement method and device, applicable to the computer vision field in the field of artificial intelligence. The model structure comprises: a selecting module, a plurality of first neural network layers, a segmentation module, a transformer module, a recombination module, and a plurality of second neural network layers; the model breaks through the limitation that the transformer module can only be used for processing a natural language task, and can be applied to an underlying visual task; the model has a plurality of first/second neural network layers, and the different first/second neural network layers correspond to different image enhancement tasks, so that the model can be used for processing different image enhancement tasks after being trained, and compared with the existing model for processing underlying visual tasks, which is mostly based on CNN (CNN, as an excellent feature extractor, has good performance in high-layer visual tasks, but is difficult to pay attention to global information when the underlying visual task is processed), the model can pay attention to global information by means of the transformer module, thereby improving an image enhancement effect. | en | MODEL STRUCTURE, MODEL TRAINING METHOD, AND IMAGE ENHANCEMENT METHOD AND DEVICE | 63625819_CN | 78487359_CN,63962232_CN,79201357_CN,79174382_CN | G06K 9/6268,G06K 9/6277,G06N 3/0454,G06N 3/0472,G06N 3/08 | [
"G06K 9/62",
"G06N 3/08",
"G06N 3/04"
] | 166,702 |
17,175,610 | 1996-04-23 | 23,941,695 | N | A multi-detector head nuclear camera system automatically switchable (and optimized) to perform either SPECT imaging or PET imaging. The camera system employs, in one embodiment, multi-detector configuration having dual head scintillation detectors but can be implemented with more than two detector heads. The detectors contain switchable triggering circuitry so that coincidence detection for PET imaging and non-coincidence detection for SPECT imaging is available. Using a variable integration technique with programmable integration interval, the event detection and acquisition circuitry of the camera system is switchable to detect events of different energy distribution and count rate which are optimized for PET and SPECT imaging. The system also includes dual integrators on each scintillation detector channel for collecting more than one event per detector at a time for PET or SPECT mode. In PET or SPECT mode, the system also employs variable PMT cluster sizing having smaller cluster sizes for PET imaging and relatively larger cluster sizes for SPECT. In PET or SPECT mode, the system also employs variable centroid shape and zonal triggering. Utilizing the above programmable settings, the camera system can be automatically configured to operate in either SPECT or PET imaging modes. <IMAGE> | en | Multi-head nuclear medicine camera for dual SPECT and PET imaging | 823176_US | 823299_US,3473285_US,823180_US,3589404_US,823181_US | A61B 6/037,G01T 1/1642,G01T 1/2985 | [
"G01T 1/161",
"G01T 1/29",
"G01T 1/164"
] | 19,197 |
39,709,388 | 1998-09-28 | 21,769,713 | Y | PURPOSE: Benzyl(idene)-lactam derivatives (formula 1) are useful as psychotherapeutic agents. CONSTITUTION: Formula (1) shows a compound wherein R1 is one of five different formulas; R2 is hydrogen, (C1-C4)alkyl, phenyl or naphthyl, wherein phenyl or naphthyl may optionally be substituted with one or more substituents; R3 is (CH2)mB, wherein m is zero, one, two or three and B is hydrogen, phenyl, naphthyl or a 5- or 6-membered heteroaryl group containing from one to four hetero atoms in the ring and wherein each of the foregoing aryl and heteroaryl groups may optionally be substituted with one or more substituents; Z is R4R5, wherein R4 and R5 are independently selected from hydrogen, (C1-C6)alkyl and trifluoromethyl; or Z may be one of the aryl or heteroaryl groups; X is hydrogen, chloro, fluoro, bromo, iodo, cyano, (C1-C6)alkyl, hydroxy, trifluoromethyl, (C1-C6)alkoxy, -SOg(C1-C6)alkyl, wherein g is zero, one or two, CO2R6 or CONR7R8; each of R6, R8 is selected, independently, from the radicals set forth in the definition of R2; or R7 and R8, together with the nitrogen to which they are attached, form a 5- to 7-membered ring; or a pharmaceutically acceptable salt thereof. These compounds are selective (ant)agonists of 5-HT1A (serotonin 1A) and/or 5-HT1D (serotonin 1D) receptors. | en | BENZYLIDENE-LACTAM DERIVATIVES, THEIR PREPARATION AND THEIR USE AS SELECTIVE ANTAGONISTS OF 5-HT1A AND/OR 5-HT1D RECEPTORS | 32609736_US | 71327518_US | A61P 1/00,A61P 3/04,A61P 3/14,A61P 9/00,A61P 9/12,A61P 25/00,A61P 25/08,A61P 25/16,A61P 25/22,A61P 25/24,A61P 43/00,C07D 207/26,C07D 207/38,C07D 209/34,C07D 211/76,C07D 211/86 | [
"A61K 31/403",
"A61K 31/40",
"A61P 25/00",
"A61K 31/497",
"C07D 211/86",
"C07D 403/00",
"A61K 31/496",
"A61K 31/4015",
"A61K 31/397",
"C07D 207/26",
"C07D 401/10",
"A61K 31/47",
"A61P 1/00",
"A61K 31/495",
"C07D 215/22",
"C07D 211/76",
"A61K 31/445",
"A61K 31/404",
"A61P 9/00",
"A61K 31/55",
"C07D 207/38",
"A61P 9/12",
"A61P 3/04",
"A61K 31/472",
"C07D 223/10",
"C07D 205/08",
"C07D 207/36",
"A61P 43/00",
"C07D 217/24",
"C07D 403/10",
"A61K 31/395",
"C07D 209/34"
] | 26,129 |
17,226,936 | 1997-07-03 | 16,068,875 | Y | The invention provides a signal light outputting apparatus for use in optical communication by wavelength multiplex transmission and an optical transmission system with the same wherein control of powers and/or wavelengths of signal lights to be transmitted can be performed with certainty and the signal lights can be transmitted and received accurately while making transmission characteristics of the signal lights equal to each other. The signal light outputting apparatus includes a plurality of signal light outputting units (15-1 to 15-n) each including a signal light source (11-1 to 11-n) and an optical amplifier (14-1 to 14-n), an optical combiner (16) for combining signal lights outputted from the signal light outputting units (15-1 to 15-n), a signal light power detector (18) for extracting part of signal light combined by the optical combiner (16) to detect powers of the signal light for individual wavelengths corresponding to the signal light wavelengths, and a signal light output control element (19) for controlling signal light outputs of the corresponding optical amplifiers (14-1 to 14-n) for amplification of the signal lights of the wavelengths in response to the powers of the signal lights of the individual wavelengths detected by the signal light power detector (18). <IMAGE> | en | Optical signal transmitting apparatus and optical transmission system having such an optical signal transmitting apparatus | 4315_JP | 3665746_JP,3676930_JP | H04B 10/506,H04B 10/564,H04J 14/0221 | [
"H01S 3/10",
"H04J 1/00",
"H04B 10/296",
"H04J 14/00",
"H01S 3/067",
"H04B 10/07",
"H01S 3/06",
"H04J 14/02"
] | 19,520 |
547,370,158 | 2021-03-23 | 73,527,527 | N | The present disclosure discloses a joint training method and apparatus for models, a device and a storage medium, and relates to the technical field of artificial intelligence. A specific implementation includes: training a first-party model to be trained using a first sample quantity of first-party training samples to obtain first-party feature gradient information; acquiring second-party feature gradient information and second sample quantity information from a second party, where the second-party feature gradient information is obtained by training, by the second party, a second-party model to be trained using a second sample quantity of second-party training samples; and determining model joint gradient information according to the first-party feature gradient information, the second-party feature gradient information, first sample quantity information and the second sample quantity information, and updating the first-party model and the second-party model according to the model joint gradient information. The embodiments of the present disclosure may determine, according to the gradients and sample quantities of training parties, the model joint gradient information suitable for updating the models of both parties, thereby greatly improving the efficiency of joint training of models. | en | JOINT TRAINING METHOD AND APPARATUS FOR MODELS, DEVICE AND STORAGE MEDIUM | 74584442_CN | 79331146_CN,79354352_CN,79367016_CN | G06F 21/602,G06K 9/6256,G06N 20/00,G06N 20/20,H04L 9/008,H04L 9/0618 | [
"H04L 9/00",
"G06N 20/00"
] | 151,214 |
575,103,512 | 2021-01-15 | 75,509,682 | N | A method for establishing automatic sleep staging. The method comprises: acquiring a plurality of groups of PSG signals and artificial sleep mark information of the PSG signals (110); performing pre-analysis to decompose original time sequences in the PSG signals into a group of similar intrinsic mode functions (120); combining the similar intrinsic mode functions to obtain m groups of time sequence sets (121); performing multiscale entropy analysis to calculate entropy values of the m groups of time sequence sets by using n sampling scales, so as to obtain an entropy matrix with m × n elements (130); establishing a correlation coefficient matrix between consciousness levels and the elements of the entropy matrix, and finding a sampling scale and a filtering scale that correspond to the maximum positive correlation element or the maximum negative correlation element in the correlation coefficient matrix (140), wherein the sampling scale is a coarse-grained scale; and according to the sampling scale and the filtering scale of the maximum positive correlation element or the maximum negative correlation element, calculating entropy values of a person to be tested on the sampling scale and the filtering scale, and determining a sleep state of said person according to the entropy values (150). | en | METHOD FOR ESTABLISHING AUTOMATIC SLEEP STAGING AND APPLICATION THEREOF | 85709443_CN | 80634853_CN | A61B 5/4815,A61B 5/4863 | [
"A61B 5/00",
"A61B 5/372"
] | 167,232 |
17,184,721 | 1996-04-01 | 23,687,421 | N | An image processor based system and method for recognizing predefined types of coating density imperfections in a web, specifically continuous type or streak imperfections. Continuous type imperfections are recognized in a continuous web moved at a certain rate through an imaging region illuminated by a stripe of substantially constant illumination. A time-delay integrating CCD camera is focused on the illuminated imaging region. The TDI CCD camera comprises an array of N rows of M light sensitive CCD elements each imaged on a fixed discrete pixel-related image area of the illuminated imaging region. The charge levels accumulated in the CCD elements of each row are shifted to the succeeding row of CCD elements and summed with the charge levels therein at a line shift clock frequency that ensures that an asynchronous relationship exists with respect to the incremental movement of the web. During the clock cycle of the N rows, the corresponding pixel areas of the illuminated web shift asynchronously or creep through the discrete pixel-related image areas. The accumulated pixel charge values derived from the pixel-related image areas of the illuminated region of said moving web emphasize imaging of longitudinal streak imperfections in the web due to the asynchronous movement of the web. <IMAGE> | en | Coating density analyser and method using non-synchronous TDI camera | 5817_US | 3606337_US,3606336_US | G01N 21/8422,G01N 21/89,G01N 21/8903,G01N2021/0168,G01N2021/8427,G01N2021/8838,G01N2021/8887,G01N2021/936,G01N2201/102,H04N 5/372,H04N 5/37206 | [
"G01N 21/892",
"G01N 21/95",
"G01N 21/89",
"G01N 21/01",
"H04N 7/18",
"H04N 5/372",
"H04N 1/00",
"G01N 21/93"
] | 19,229 |
493,789,594 | 2017-10-31 | 62,021,742 | N | An automated method for personal health management of a user having a personal user profile, the method including repeatedly measuring a plurality of health parameters by a mobile electronic device, the health parameters including a heart rate variability, a blood pressure, a motion activity, and a weight of the user, calculating a base line dataset by the computing device for each one of the plurality of health parameters for a predetermined period of time, comparing recently measured health parameters from the step of repeatedly measuring with the base line dataset, providing on a display that is operatively connected to the computing device a first raw feedback on a health performance of the user based on the step of comparing, repeatedly prompting the user to answer contextual questions, the contextual questions related to at least one of physical, mental, emotional, and behavioral status of the user, and generating a second detailed feedback based on the personal user profile, the baseline data set, the measured health parameters, and a given context, to determine values for a plurality of health segments, and providing the second detailed feedback to the display of the user, based on a request by the user that includes the given context and data of the plurality of health segments. | en | Multilevel Intelligent Interactive Mobile Health System for Behavioral Physiology Self-Regulation in Real-Time | 56817054_CH | 56811529_CH | G16H 10/20,G16H 10/60,G16H 40/63,G16H 40/67,G16H 50/20,G16H 50/30 | [
"G16H 50/30",
"G16H 40/67",
"G16H 10/20",
"G16H 50/20"
] | 117,265 |
472,579,356 | 2016-07-04 | 57,473,781 | Y | The invention provides a new word recognition immune genetic method based on a word formatting rate fitness function and belongs to the field of application of natural language information processing. The new word recognition immune genetic method comprises the following steps of: firstly, extracting a common morpheme according to characteristics of network new words and taking the common morpheme and a single word as demonstrative antibodies in an immune genetic method; designing a suitable fitness function by utilizing a word formatting rate and adding adjustment parameters into the fitness function so as to optimize a final experiment result; and finally, processing candidate words, identified by the immune genetic method, by news corpus, so as to obtain a final network new word. Compared with the prior art, aiming at the characteristics of the network new word, the fitness function suitable for identifying the network new word is designed; factors including the length of the network new word, occupied ratios of a single word and a word string and the like are sufficiently considered, and frequency number information in the word string is added into the design of the fitness function, so that the accuracy, recalling rate and F value of the finding of the network new word are improved. | en | New word recognition immune genetic method based on word formatting rate fitness function | 58644572_ | 62499100_,61154161_,58940570_,69389625_ | G06F 40/279,G06N 3/126 | [
"G06F 17/27",
"G06N 3/12"
] | 105,415 |
51,685,168 | 2000-04-03 | 27,760,013 | Y | A knowledge acquisition and retrieval apparatus and method that emulate the human brain and comprise at least one first memory segment, and a distinct second memory segment, wherein elements of the at least one first memory segment are reciprocally associated to elements of the second memory segment, and vice-versa. The at least one first memory segment comprises categorized data from the physical world, known as representational data, while the second memory segment contains abstract or conceptual data, otherwise known as consciousness data. Physical data comprises auditory data, language data, visual data, motion data, and sensory data, and each element of the at least one first memory segment is identified as auditory data, language data, visual data, motion data, or sensory data. By reciprocally associating the physical (representational) and conceptual (consciousness) data, a hierarchical structure is created that allows information retrieval by traversing the reciprocal associations. Varying retrieval algorithms traverse the hierarchical structure differently to generate specified system outputs. Retrieval algorithms are implemented to represent human information retrieval functions commonly known as reduction, imaging, deduction, recognition, recall, categorization, and reasoning. | en | Knowledge acquisition and retrieval apparatus and method | 8718126_US | 8718127_US | G06N 5/022,H04L 41/0213,H04L 41/024,H04L 41/046,Y10S 707/99931,Y10S 707/99932,Y10S 707/99942,Y10S 707/99943 | [
"H04L 12/24",
"G06N 5/02"
] | 42,586 |
476,936,625 | 2016-05-12 | 54,441,272 | N | A robot, comprising: a mobile terminal, a head (110), trunk (120), left arm bracket (140), right arm bracket (130), left leg bracket (160), right leg bracket (150) and electronic control unit (180). The trunk (120) comprises a first structural member (128). A first servo (124) connects to the head (110) by means of a first servo horn (1242). A second servo (122) connects to the right leg bracket (130) by means of a second servo horn (1222). A third servo (126) connects to the left arm bracket (140) by means of a third servo horn (1262). The side wall of the first structural member (128) furthest from the head (110) connects to the right leg bracket (150) by means of a sixth servo horn (1520), while the side wall of the first structural member (128) furthest from the head (110) connects to the left leg bracket (160) by means of an eighth servo horn. The electronic control unit (180) is provided within a recess of the first structural member (128), with the electronic control unit (180) being electrically connected to the servos within the robot. The various parts of the robot are controlled by the mobile terminal via the electronic control unit (180). Also provided is a robot control method, which solves the problem of unvarying movements and poor emotional interaction of existing robots. | en | ROBOT AND CONTROL METHOD THEREFOR | 67612007_CN | 67902418_CN,69044052_CN,68303763_CN | B25J 11/00,B25J 13/00 | [
"B25J 11/00",
"B25J 13/00"
] | 107,803 |
4,741,545 | 2000-06-02 | 22,478,267 | N | A system and method for reprogramming a device using programming data that is transmitted over a broadcast network. In one embodiment, a smart toy works cooperatively with an interactive television system to provide an easy-to-use means for reprogramming the toy. The interactive television system has a broadcast station that transmits a carousel of data modules over a unidirectional broadcast link to a group of receiving stations. A radio frequency (RF) transceiver in the receiving station and a corresponding transceiver in the toy provide a bidirectional communications link over which the data modules are transmitted from the receiving station to the toy. The data modules (e.g., data files or application code) are used to reprogram the smart toy. Particular ones of the data modules are selected, either manually or by filtering them according to user preferences, and the toy is reprogrammed with the selected modules. The user preferences can be explicitly entered or they can be constructed by the system according to the use of the toy. The toy can serve as an input device for uploading user preferences or other data to the receiving station or broadcast station. The receiving station can transmit signals to the toy, which can then provide notifications or cues to a user. | en | NETWORKING SMART TOYS | 12613475_US | 13002512_US | A63H2200/00,H04N 7/163,H04N 21/41265,H04N 21/42204,H04N 21/4349,H04N 21/4532,H04N 21/454,H04N 21/47,H04N 21/4751,H04N 21/4882,H04N 21/8166,H04N 21/8186 | [
"H04N 21/475",
"H04N 21/434",
"H04N 21/454",
"H04N 21/45",
"H04N 21/41",
"H04N 21/488",
"H04N 21/81",
"H04N 7/16",
"H04N 5/445",
"H04N 5/44"
] | 7,036 |
459,388,312 | 2016-02-11 | 56,615,158 | N | The inventors surprisingly found that neural stimulation caused the synthesis and degradation of proteins into peptides which were then secreted into the cell media within minutes of stimulation by a novel neural-specific and membrane bound proteasome (neuronal membrane proteasome or NMP) that is transmembrane in nature. These secreted, activity-induced, proteasomal peptides (SNAPPs) range in size from about 500 Daltons to about 3000 Daltons. Surprisingly, none of the peptides appear to be those previously known to have any neuronal function. Moreover, these SNAPPs have stimulatory activity and are heretofore a new class of signaling molecules. Moreover, the NMP appears to play a highly significant role in aspects of neuronal signaling known to be critical for neuronal function. The inventors have gone on to develop all tools to study this novel mechanism, including protocols and practice for generation and purification of SNAPPs as well as a new and specific inhibitor of the NMP allowing for selective control of this process in the nervous system. The present invention provides methods of making and using these SNAPPs for both laboratory and clinical purposes, the screening for molecules which modulate NMP function in vivo and in vitro, and methods for diagnosis of NMP related diseases. | en | A NOVEL NERVOUS SYSTEM-SPECIFIC TRANSMEMBRANE PROTEASOME COMPLEX THAT MODULATES NEURONAL SIGNALING THROUGH EXTRACELLULAR SIGNALING VIA BRAIN ACTIVITY PEPTIDES | 54367031_US | 54260167_US,54380722_US | A61K 31/165,A61K 31/69,A61K 38/05,A61K 51/00,G01N 30/72,G01N 33/5023,G01N 33/5058,G01N 33/6896,G01N2030/8831,G01N2333/47,G01N2800/2821 | [
"G01N 30/72",
"G01N 27/62",
"G01N 33/60",
"G01N 33/68"
] | 102,958 |
534,578,303 | 2019-11-11 | 66,363,710 | N | An electroencephalogram signal-based auditory attention state arousal level recognition method and apparatus, and a storage medium. Said method comprises: acquiring required electroencephalogram signals; on the basis of the acquired electroencephalogram signals and a first-level main component filter constructed in the training process, performing first-level feature extraction based on ensemble empirical mode decomposition and main component filtering; on the basis of a first-level feature extraction signal and a second-level main component filter constructed in the training process, performing second-level feature extraction based on ensemble empirical mode decomposition and main component filtering; on the basis of a second-level feature extraction signal, performing variance statistics-based feature vector calculation on the extraction feature signals; and on the basis of a feature vector calculation result and a machine learning classifier constructed in the training process, extracting an electroencephalogram signal-based auditory attention state arousal level in a test process. The invention realizes arousal level recognition of an auditory attention state, and facilitates the improvement of the accuracy and effectiveness of the auditory attention state arousal level recognition. | en | AUDITORY ATTENTION STATE AROUSAL LEVEL RECOGNITION METHOD AND APPARATUS, AND STORAGE MEDIUM | 73849893_CN | 76228627_CN,66820760_CN,76997917_CN,77384258_CN | G06K 9/00,G06K 9/62 | [
"G06K 9/00"
] | 142,471 |
329,962,515 | 2009-06-30 | 42,183,772 | N | Foreign language learning tools for children of the present invention comprise: plural groups of word cards wherein a word of a language to be studied is displayed on one surface, a picture according to the meaning of the word is displayed on the other surface, a colored portion is formed on at least one surface, the color of the colored portion is varied by parts of speech of the displayed word, and one group of the word cards are constituted by parts of speech; and a story book which comprises a plurality of pages having card substitution portions for substituting word cards for expressing the picture in the language, wherein pictures that show examples of situations and colored figures of different colors corresponding to the color of word cards at the lower part of the picture are arranged according to the sentence form of a language to be studied. According to the present invention, children can find that studying a foreign language is interesting, easy and familiar, easily recognize the sentence form of a foreign language due to the easy and automatic completion of a foreign language sentence, and easily acquire the concept of the parts of speech of a foreign language. In addition, the variety and applicability of expressing a foreign language sentence by children can be improved. | en | FOREIGN LANGUAGE LEARNING TOOLS FOR CHILDREN | 42162400_KR,41604561_KR | 42162400_KR | B42D 1/007,B42D 3/12,B42D 15/042,G09B 1/16,G09B 19/06,G09B 19/08 | [
"G09B 19/08",
"B42D 1/00",
"G09B 5/02"
] | 65,577 |
15,886,401 | 2001-10-16 | 18,794,264 | Y | The present invention discloses an optical measurement apparatus for living body provided with light transmitting means for transmitting light to a plurality of positions in areas to be examined within a living body; light detecting means for detecting light which has been transmitted by the light transmitting means and has passed through the living body at the plurality of positions in the examination area; load applying means for applying to the living body a load to which the living body responds; means for calculating signals representing intensity change of the transillumination detected by the light detecting means at the measuring points determined based on the positional relation between each of light transmitting means and light detecting means during both a period in which a load is applied to the living body and a period in which a load is not applied; means for calculating time variations of at least one signal among two signals calculated above at the measuring points determined based on the positional relation between each of light transmitting means and light detecting means; and display control means for displaying graphs of the above-calculated signals in correspondence with the positional relation between each light transmitting means and light detecting means. <IMAGE> | en | ORGANISM OPTICAL MEASUREMENT INSTRUMENT | 248947_JP | 1173971_JP,1173972_JP | A61B 5/0048,A61B 5/14553,A61B 5/4094,A61B2017/00057,A61B2562/0233,A61B2562/046 | [
"A61B 17/00",
"A61B 5/00",
"A61B 10/00"
] | 13,820 |
24,307,795 | 2005-09-22 | 36,090,399 | N | Electrical stimulation is applied to the inferior colliculus or colliculi (IC), in order to diminish tinnitus by revising auditory pathway neuronal activity. This intervention diminishes tinnitus and treats other neurological and otological disorders. The locations and methods of electrode placement and anchoring and the structure of the electrodes are an advance over prior treatments. Other treatment locations in the nearby region of the IC, including the superior colliculi (SC) and Peri-aqueductal gray (PAG), provide treatments for other disorders and symptoms such as partial hearing loss and pain. The IC is a unique choice for the treatment of tinnitus and other disorders because an electrode placed in that region enables minimal invasiveness. The anchoring location also uniquely minimizes invasiveness by providing the option of residing in the meninges instead of the brain tissue. The shape of the electrode and its anchoring process uniquely match the brain's anatomy in order to provide greater specificity in diagnosis and treatment. Stimulation of areas near the IC, particularly the superior colliculus and peri-aqueductal gray, can be used to treat various neurological disorders. Customized feedback from the implantable system enables the creation of customized treatment programs. | en | METHOD AND APPARATUS FOR TREATMENT OF TINNITUS AND OTHER NEUROLOGICAL DISORDERS BY BRAIN STIMULATION IN THE INFERIOR COLLICULI AND/OR IN ADJACENT AREAS | 29720463_IL | 29720463_IL | A61N 1/0534,A61N 1/0541,A61N 1/36082,A61N 1/361 | [
"A61N 1/00"
] | 23,878 |
531,108,966 | 2019-12-27 | 54,699,618 | Y | To provide a BCI for performing processes of a plurality of steps for facilitating the specification of the selection and a change of a direct standard perception examination, one or more selections, motions and/or states which are desired by a user.SOLUTION: There are disclosed a method, a system, a device and a non-temporary computer readable medium using a brain computer interface (BCI). A user is allowed to select a solution of a multi-limb formula, provided with control for an electric wheelchair, and can play a game by the BCI. This invention includes a non-corrected standard examination which brings about a result that, in a perception evaluation examination, a format arises which is the same as, or similar to that in the case without using the BCI. The accuracy of a BCI examination is improved by using three processes of an intentional determination of the selection of a user's solution, the monitoring of user's brain activities for the determination of the selected solution, and the verification of the selected solution with respect to each of examination questions. The selected solution monitors and verifies the user's brain activities following a holding/release process for determining an intention of the user whether or not to start a state change.SELECTED DRAWING: Figure 1 | en | BRAIN COMPUTER INTERFACE FOR FACILITATING SPECIFICATION OF DIRECT SELECTION AND STATE CHANGE OF SOLUTION OF MULTI-LIMB SELECTION FORMULA | 75727283_ | 76708305_,75963029_,75761404_ | A61B 5/316,A61B 5/374,A61B 5/378,A61B 5/4088,A61B 5/7221,A61B 5/7264,A61B 5/7267,A61B 5/7282,A61B 5/742,A61B 5/7435,A61F 2/72,A61F 4/00,G06F 3/015,G09B 7/06,G16H 50/70,Y02A 90/10 | [
"A61B 5/38",
"G09B 19/00",
"A61B 5/372",
"G09B 7/02"
] | 140,255 |
560,087,691 | 2020-09-17 | 71,800,232 | N | An unmanned vehicle lane changing decision-making method based on adversarial imitation learning, and a system for implementing the method. The method comprises: first, describing an unmanned vehicle lane changing decision-making task as a partially observable Markov decision-making process; then, using an adversarial imitation learning method to perform training from examples provided by professional driving teaching to obtain an unmanned vehicle lane changing decision-making model; and in an unmanned driving process of a vehicle, obtaining a vehicle lane changing decision-making result by means of the unmanned vehicle lane changing decision-making model by taking currently obtained vehicle environmental information as an input parameter of the unmanned vehicle lane changing decision-making model. According to the method, a lane changing strategy is learned by means of adversarial imitation learning from examples provided by professional driving teaching; a task reward function is not required to be manually designed, and direct mapping from a vehicle state to a vehicle lane changing decision can be directly established, thereby effectively improving the correctness, robustness and adaptivity of the lane changing decision of the unmanned vehicle under a dynamic traffic flow condition. | en | UNMANNED VEHICLE LANE CHANGING DECISION-MAKING METHOD AND SYSTEM BASED ON ADVERSARIAL IMITATION LEARNING | 82987113_CN | 18791191_CN,19334801_CN | B60W 30/12,B60W 30/18163,B60W 50/00 | [
"G05D 1/02",
"B60W 50/00"
] | 159,433 |
511,439,461 | 2018-12-04 | 66,112,762 | Y | The invention belongs to the field of cognitive neuroscience, particularly relates to a question and answer method and system based on a brain-like semantic hierarchical memory reasoning model, and aims to solve the problem of small sample learning of natural language understanding tasks such as text generation and automatic question and answer. The method comprises the steps of acquiring and inputting a question text and an answer text; Performing time sequence pooling on the text to obtain a word vector matrix; Pooling the space and time of each word vector in the word vector matrix to obtain a binary word representation set of which each bit is 0 or 1 corresponding to the word vector; Performing brain-like learning on the text and the word set to obtain an optimized model; And independently inputting the question text, performing word reduction based on the cell prediction state in the model, obtaining an answer text, and outputting the answer text. According to the method, a semantic hierarchical time sequence memory model is combined, the model is constructed based on a learning mode of small sample data and knowledge reasoning, the requirement for the number of samples is low, a large number of parameters do not need to be adjusted, and the expandability of the model is improved. | en | Question and answer method and system based on brain-like semantic hierarchical memory reasoning model | 62210826_ | 65563411_,70565218_,67302569_ | G06N 5/04 | [
"G06F 16/36",
"G06F 16/332",
"G06N 5/04",
"G06F 16/33"
] | 127,057 |
407,580,576 | 2012-02-28 | 46,480,746 | N | A 3D face recognition method based on intermediate frequency information in a geometric image. The steps are as follows: (1) preprocessing a library set model of a 3D face and a test model, including 3D face area cutting, smoothing processing and point cloud thinning, and at last cutting the point sets near the mouth and retaining the upper half of the face; (2) mapping the upper half of the face to a 2D grid through grid parameters, and performing linear interpolation on the 3D coordinates of the grid top to acquire the 3D coordinate attributes of each pixel point and generate a geometric image of a 3D face model; (3) performing multi-scale filtering on the geometric image with a multi-scale Haar wavelet filter to extract a horizontal intermediate frequency information image, a vertical intermediate frequency information image and an diagonal intermediate frequency information image as invariable facial expression features of the face; (4) calculating the similarity between the test model and the library set model with a wavelet domain structuring similarity algorithm; and (5) judging that the library set model and the test model with the maximum similarity belong to the same body according to the similarity between the test model and each library set model of the 3D face library set. | en | 3D FACE RECOGNITION METHOD BASED ON INTERMEDIATE FREQUENCY INFORMATION IN GEOMETRIC IMAGE | 19458263_CN,43700488_CN,17940462_CN | 43700488_CN,19458263_CN | G06T 7/00,G06T 17/00,G06V 20/64,G06V 20/653,G06V 40/16,G06V 40/168 | [
"G06K 9/00"
] | 80,703 |
551,881,990 | 2021-06-09 | 74,999,256 | N | The present application discloses a road test method and apparatus for an autonomous driving vehicle, a device and a storage medium, relating to the field of artificial intelligence and, in particular, to autonomous driving and intelligent transportation technologies. The specific implementation is: analyzing, based on evaluation baselines corresponding to autonomous driving scenarios in a process during which a vehicle is travelling along a test route, information of test parameters corresponding to the autonomous driving scenarios to determine a first test result of the vehicle; generating a second test result of the vehicle according to experience evaluation data; determining a road test result of the vehicle according to the first test result and the second test result of the vehicle, the road test result including not only an objective test result based on objective test data, but also a quantitative test result based on subjective evaluation data, such that objectivity, rigorousness and preciseness of a road test are guaranteed, and an evaluation result on travelling safety, comfort and intelligence and travelling efficiency of a vehicle is reflected from a subjective perspective of a passenger, thereby improving an accuracy of a road test result of an autonomous driving vehicle. | en | ROAD TEST METHOD AND APPARATUS FOR AUTONOMOUS DRIVING VEHICLE, DEVICE AND STORAGE MEDIUM | 74608609_CN | 81629025_CN | B60W 40/02,B60W 50/04,B60W 50/045,B60W 60/0011,B60W2050/0005,B60W2552/00,B60W2554/4049,B60W2554/406,B60W2554/80,B60W2556/10,G01M 17/007,G01M 17/06,G08G 1/0145,G08G 1/07,G08G 1/096716,G08G 1/096741,G08G 1/096775,G08G 1/096783,G08G 1/096791,H04W 4/44 | [
"G06F 30/15",
"G01M 17/007"
] | 154,160 |
25,507,654 | 2003-12-25 | 33,509,175 | N | From a biological signal detected by a nondestructive sensor, a breath signal, a heart beat signal, and a biological signal intensity are detected. A signal extracted from these or a parameter calculated from this signal is used as an index value so as to provide a sleep stage judgment method and a judgment device. The method and the device are used for judging the sleep stage of an examinee who is sleeping. The device includes a nondestructive sensor arranged on a bed for detecting a biological signal, detection means for detecting heart beat, breath, a biological intensity signal, and the like from the output of the nondestructive sensor, and index value calculation means for evaluating the autonomic nerve from a power spectrum density obtained by subjecting the R-R interval signal detected from the heart beat signal to the Fourier transform and calculating the index value of the sleep stage. A plurality of parameters are used as index signals, and a threshold value according to the sleep stage is calculated from data of a predetermined time for each of the index signals, thereby judging the sleep stage. Similarly, a sleep stage judgment method and judgment device using at least one of the parameters calculated from the nondestructive sensor signal as an index value are also disclosed. | en | SLEEP STAGE JUDGMENT METHOD AND JUDGMENT DEVICE | 24236067_JP,30561269_JP | 24236067_JP | A61B 5/0205,A61B 5/16,A61B 5/4035,A61B 5/4812,A61B 5/7253 | [
"A61B 5/0205",
"A61B 5/16"
] | 24,385 |
4,385,359 | 1991-07-29 | 26,066,139 | Y | 3-Arylindole or 3-arylindazole derivatives having formula (see formula I) wherein Ar is optionally substituted phenyl or a heteroaromatic group; R1-R4 are independently selected from hydrogen, halogen, alkyl, alkoxy, hydroxy, nitro, alkylthio, alkylsulphonyl, alkyl- or dialkylamino, cyano, trifluoromethyl, or trifluoromethylthio; the dotted lines designate optional bonds; X is N or a group CR6 wherein R6 is hydrogen, halogen, trifluoromethyl or alkyl, or X is CH2; Y is N or CH, or Y is C; R5 is hydrogen, cycloalkyl, cycloalkylmethyl, alkyl or alkenyl, optionally substituted with one or two hydroxy groups, or R5 is a group taken from structures 1a and 1b: (see formula 1a, 1b) wherein n is an integer from 2-6; W is O or S; U is N or CH; Z is -(CH2)m-, m being 2 or 3, -CH=CH-, 1,2-phenylene or -COCH2- or -CSCH2-; V is O, S, CH2, or NR7; U1 is O, S, CH2 or NR8; and V1 is NR9R10, OR11, SR12 or CR13R14R15, and R7-R15 are independently hydrogen, alkyl, alkenyl, cycloalkyl or cycloalkylalkyl; may be prepared by methods known per se. The compounds are selective centrally acting 5-HT2-antagonists in the brain and are useful in treatment of anxiety, aggression, depression, sleep disturbances, migraine, negative symptoms of schizophrenia, drug-induced Parkinsonism and Parkinson's disease. | en | 3-ARYLINDOLE AND 3-ARYLINDAZOLE DERIVATIVES | 8507643_DK | 13364342_DK,13010247_DK | A61P 25/00,A61P 25/04,A61P 25/18,A61P 25/20,A61P 25/24,A61P 25/26,C07D 209/04,C07D 209/08,C07D 209/10,C07D 231/56,C07D 401/04,C07D 401/14,C07D 413/14 | [
"A61P 25/20",
"A61P 25/26",
"C07D 405/04",
"C07D 401/04",
"C07D 401/14",
"A61P 25/24",
"A61K 31/4433",
"C07D 417/14",
"C07D 231/56",
"C07D 209/04",
"C07D 409/14",
"C07D 403/12",
"C07D 209/08",
"C07D 417/12",
"C07D 209/10",
"C07D 413/14",
"C07D 401/12",
"C07D 409/12",
"A61P 25/04",
"A61K 31/443",
"C07D 405/14",
"A61P 25/18",
"C07D 413/12",
"C07D 409/04",
"C07D 405/12",
"A61K 31/40",
"A61K 31/4427",
"A61K 31/445",
"A61P 25/00",
"A61K 31/454",
"A61K 31/496",
"A61K 31/495"
] | 4,263 |
47,207,129 | 1979-11-09 | 4,388,323 | Y | A liquid crystal mixture intended for electrooptical displays having no polarizers, consisting of: a nematic liquid crystal as the host phase; an optically active additive to give the host phase a cholesteric structure; and at least one pleochroic anthraquinone dye as the guest phase dissolved in the host phase, said anthraquinone dye having the formula: <IMAGE> (I) wherein R2, R3 and R4 are the same or different substituents and are selected from the group consisting of hydrogen, hydroxyl, amino, and short-chain N-monoalkylamino substituents; Y represents halogen, alkyl or alkoxy groups each with 1-16 carbon atoms, amino, alkylamino groups with 1-16 carbon atoms in the alkyl portion, nitro, or hydroxyl group; X represents halogen, amino or the group R5, wherein R5 is an alkyl group with 1-18 carbon atoms, an alkyl chain with 1-18 carbon atoms that is interrupted with one or two oxy groups or a group having the formula: -(CH2)p-(O-C2H4)r-O-R6 wherein p is an integer from 1 to 6, r is either zero or an integer from 1 to 6, and R6 is an alkyl group with 1-6 carbon atoms; the symbol A represents a 5- or 6-membered aromatic, alicyclic or heterocyclic ring which may be substituted with oxo- and/or imino groups, or optionally, hydroxy and/or amino groups and indices n and m can be 0, 1 or 2. | en | Liquid crystal mixture | 5279026_CH | 5519017_CH,5576785_CH,5299964_CH,6077082_CH | C09B 3/78,C09B 5/24,C09K 19/603 | [
"G02F 1/13",
"C09K 19/60",
"C09B 5/24",
"C09B 3/78"
] | 33,788 |
17,068,703 | 1993-11-04 | 25,527,734 | N | An electronic book (100) comprising a plurality of leaves (120), each leaf (120) comprising pages of printed material (126) bound at one edge to form a spine (140) with electrical circuits (123) formed in each leaf (120). A common electronic circuit (160) such as a speech generator and/or controller (168) cooperates with the electrical circuits (123) on each of the various pages, connected to the electrical circuits (123) in the leaves (120) through conductive paths (101) through the spine (140) of the book (100). The electrical circuits (123) in the leaves (120) include electrical elements such as switches (127), (129), and sensory output devices (125) (e.g., thermochromic devices, light emitting diodes, thermo-olfactory devices, electrochromic devices, and the like). The electrical elements (127), (129) are associated with particular portions of the printed material (126) so that the particular portions can be selectively highlighted or emphasized (e.g., designated by actuation of a visual or olfactory device and/or text read). A particularly advantageous switch structure formed integral to the page employing standard printing techniques, the bonding of LEDs directly into the printed circuit (112), and various advantageous methods of construction of the book (100) are also disclosed. | en | ELECTRONIC BOOK. | 3380987_US | 3380987_US | B42C 9/00,B42D 1/004,B42D 1/006,B42D 1/009,B42D 3/123,G09B 5/00,G09B 5/062,G09B 5/065,Y10S 345/901 | [
"G09B 5/06"
] | 18,576 |
529,499,134 | 2019-10-09 | 70,387,934 | N | Brain Functional Connectivity Correlation Value Adjustment Method, Brain Functional Connectivity Correlation Value Adjustment System, Brain Activity Classifier Harmonization Method, Brain Activity Classifier Harmonization System, and Brain Activity Biomarker System. A harmonization system for a brain activity classifier harmonizing brain measurement data obtained at a plurality of sites to realize a discrimination process based on brain functional imaging: obtains data, for a plurality of traveling subjects as common objects of measurements at each of the plurality of measurement sites, resulting from measurements of brain activities of a predetermined plurality of brain regions of each of the traveling subjects; calculates, for each of the traveling subjects, prescribed elements of a brain functional connectivity matrix representing the temporal correlation of brain activities of a set of the plurality of brain regions; using a generalized linear mixed model, calculates measurement bias data 3108 for each element of the brain functional connectivity matrix, as a fixed effect at each measurement site with respect to an average of the corresponding element across the plurality of measurement sites and across the plurality of traveling subjects; and thereby executes a harmonizing process. | en | BRAIN FUNCTIONAL CONNECTIVITY CORRELATION VALUE ADJUSTMENT METHOD, BRAIN FUNCTIONAL CONNECTIVITY CORRELATION VALUE ADJUSTMENT SYSTEM, BRAIN ACTIVITY CLASSIFIER HARMONIZATION METHOD, BRAIN ACTIVITY CLASSIFIER HARMONIZATION SYSTEM, AND BRAIN ACTIVITY BIOMARKER SYSTEM | 51238673_JP | 77727655_JP,81739456_JP,81644461_JP,81723248_JP | A61B 5/055,A61B 5/7203,A61B 5/7267,G01R 33/4806,G01R 33/5608,G01R 33/58,G16H 20/70,G16H 30/40,G16H 50/20,G16H 50/30 | [
"A61B 5/055"
] | 139,184 |
531,585,592 | 2019-10-31 | 70,731,523 | N | Provided are an optical coherence tomography device, imaging method, and imaging program that have position accuracy that is not easily influenced by operation conditions of a light source and the like. An optical coherence tomography device (100) comprises: a branching and merging device (103) that branches light emitted from a wavelength-sweeping laser light source (101) into object light and reference light; a balance-type light-reception device (108) that generates information about variation in the intensity ratio between interference light R(5) and R(6) produced by reference light and object light that has passed through a transparent substrate (106) having a structure with varying thickness formed on the surface thereof, been emitted onto an object (200) of measurement, and been scattered by the object (200) of measurement; and a control unit (110) that acquires structure data for the depth direction of the object (200) of measurement on the basis of the information about the variation in the interference light R(5), R(6) intensity ratio. Using the position of the structure on the transparent substrate (106) as a reference, the control unit (110) connects a plurality of items of depth-direction structure data acquired while moving the emission position of the object light R(1). | en | OPTICAL COHERENCE TOMOGRAPHY DEVICE, IMAGING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING IMAGING PROGRAM | 67886625_JP | 67883034_JP | A61B 3/102,A61B 3/12,A61B 5/0066,A61B 5/1172,G01B 9/02004,G01B 9/02043,G01B 9/02044,G01B 9/02084,G01B 9/02091,G01N 21/47,G01N 21/4795,G01N2021/1787 | [
"A61B 5/1172",
"G01N 21/17"
] | 140,561 |
523,486,045 | 2018-07-03 | 64,612,037 | N | A handwritten character model training method, a handwritten character recognition method, an apparatus, a device, and a medium. The handwritten character model training method comprises: acquiring a standard Chinese character training sample, labeling the standard Chinese character training sample and obtaining the real result of each sample, performing model training according to the real result of each sample, and using a batch gradient descent-based time-dependent reverse propagation algorithm to update network parameters of a bidirectional long short-term memory neural network, so as to acquire a standard Chinese character recognition model; acquiring and using a non-standard Chinese character training sample, to train and acquire an adjusted handwritten Chinese character recognition model; acquiring and using a Chinese character sample to be tested to obtain an erroneous character training sample; and using the erroneous character training sample to update the network parameters of the handwritten Chinese character recognition model, to acquire a target handwritten Chinese character recognition model. With the handwritten character model training method, a target handwritten Chinese character recognition model having a high handwritten character recognition rate can be obtained. | en | HANDWRITTEN CHARACTER MODEL TRAINING METHOD, HANDWRITTEN CHARACTER RECOGNITION METHOD, APPARATUS, DEVICE, AND MEDIUM | 57359376_CN | 19455066_CN,36877453_CN | G06K 9/6256,G06N 3/049,G06V 30/2455 | [
"G06K 9/68"
] | 134,912 |
45,464,944 | 1987-01-16 | 11,645,868 | Y | Disclosed herein is an optical information reproducing apparatus having a phase difference tracking system which is free from an influence of a tangential phase difference of output signals from a photodetector. The photodetector comprises first through fourth photocells which are respectively disposed at first through fourth quadrants defined by a first straight line parallel to an information track image and a second straight line perpendicular to the first straight line. Output signals of the first and second photocells are delayed for a specified time by respective first and second delaying circuits. Output signals of the first delaying circuit and the third photocell are added by a first adder, and output signals of the second delaying circuit and the fourth photocell are added by a second adder. A phase comparator produces a tracking error signal from a phase difference between output signals of the first and second adders. Alternatively, a first phase comparator produces a phase difference between the output signals of the first and second photocells, and/or a second comparator produces a phase difference between the output signals of the third and fourth photocells. At least one of output signals of the first and second comparator is used for producing a tracking error signal. | en | Tracking error correction apparatus utilizing phase difference information derived from four photocells | 5210917_JP | 5278518_JP,5278519_JP,5278516_JP,5278517_JP | G11B 7/0901,G11B 7/0909,G11B 7/094,G11B 7/095 | [
"G11B 7/095",
"G11B 7/09"
] | 29,183 |
519,025,044 | 2019-05-20 | 67,821,386 | N | The invention discloses an electric energy quality mixed disturbance analysis method based on deep learning. The power quality mixed disturbance classification method can effectively classify the power quality mixed disturbance, and has higher robustness. The method comprises the steps of firstly adjusting the attenuation rate of a Gaussian window function to change the time resolution and the frequency resolution of S transformation so as to embody the characteristics of different types of electric quality signals better and improve the distinguishing strength of the different types of electric quality signals; then constructing a deep learning model based on the LSTM, and classifying the electric quality signals; in the constructed deep learning model, preprocessing the data by adopting a multi-layer perceptron, carrying out preliminary analysis and feature extraction on the data, and then, carrying out semantic segmentation on the electric quality signal by utilizing an LSTM artificial neural network with stronger analysis capability on sequence data, and finally, performing supervised classification on the semantic segmentation of the LSTM by adopting a pooling layer and a multi-layer perceptron, thereby facilitating the improvement of the classification capability of the model. | en | Electric energy quality mixed disturbance analysis method based on deep learning | 58644572_ | 64959265_ | G06N 3/0454,G06Q 10/06395,G06Q 50/06 | [
"G06Q 50/06",
"G06Q 10/06",
"G06N 3/04"
] | 131,432 |
489,949,147 | 2016-12-22 | 57,547,453 | N | A fatigue driving early warning method and a cloud server (4). The fatigue driving early warning method comprises the following steps: receiving vehicle driving state information sent by an intelligent vehicle-mounted unit (2), the vehicle driving state information comprising continuous running time of a vehicle, a vehicle speed, and engine operation state information (10); determining a fatigue driving state level of a driver according to the continuous running time of the vehicle, the vehicle speed, and the engine operation state information (20); and sending an instruction corresponding to the fatigue driving state level of the driver to an intelligent mobile terminal (3) or the intelligent vehicle-mounted unit (2) (30). According to the fatigue driving early warning method and the cloud server (4), it is unnecessary to collect and identify the state or sensual consciousness of the driver in a complex manner; instead, the fatigue driving state level of the driver can be determined only according to vehicle detection information such as the continuous running time of the vehicle, the vehicle speed, and the engine operation state. The identification and judgment operation is easy, the controllability is higher, the hardware cost is relatively reduced, and the system framework is simplified. | en | FATIGUE DRIVING EARLY WARNING METHOD AND CLOUD SERVER | 67627559_CN | 67867866_CN,69065391_CN,66820309_CN,69635830_CN | B60K 28/02,G08B 21/06 | [
"G08B 21/06",
"B60K 28/02"
] | 115,425 |
17,176,031 | 1996-05-13 | 23,823,212 | Y | Magnetic resonance is excited (50) in first and second species dipoles of a subject in a temporally constant magnetic field. The resonance is refocused (52) to generate a spin echo (54) centered at a time when the first and second species resonance signals are in-phase. Gradient echoes (64, 68) are generated, centered at a time (2n+1) pi / delta omega before and after the spin echo, where delta omega is a difference between the first and second species resonance frequencies. In this manner, the first and second species signals are 180 DEG out-of-phase in the gradient echoes. The resonance is refocused (82) one or more times to generate additional spin and gradient echoes with different phase encodings (78). The sequence is repeated with yet more phase encodings, and magnetic resonance signals from the spin echo and the two gradient echoes are reconstructed (86) into a spin echo image (s0) and a pair of gradient echo images (s+1, s-1). A phase map is generated (90) from the spin and gradient echo images. One of the gradient echo images is corrected (116) with the phase map. The phase corrected gradient image is additively combined (118) with the spin echo image to generate a first species image (112) and is subtractively combined (120) to generate a second species image (114). <IMAGE> | en | A magnetic resonance imaging method and apparatus | 152759_US | 3590222_US,3590223_US,3590224_US | G01R 33/4828 | [
"G01R 33/54",
"G01R 33/48",
"A61B 5/055"
] | 19,204 |
525,508,329 | 2019-11-12 | 68,253,470 | N | The invention provides a three-dimensional reconstruction and semantic segmentation method for an indoor scene of an Android mobile terminal. The method comprises the steps of completing the parallel computing of an ICP part, a voxel model fusion part and a ray tracing algorithm part in three-dimensional reconstruction through a Renderscript framework of Android; adopting an IMU device of a mobile terminal for collecting the angular velocity and the acceleration to obtain initial camera attitude change estimation between two frames, and achieving the quick three-dimensional reconstruction. A lightweight two-dimensional indoor scene semantic segmentation CNN network is designed, and a semantic segmentation result is mapped to a three-dimensional voxel space according to a camera attitude calculated in three-dimensional reconstruction, so that voxel-level semantic segmentation of a three-dimensional reconstruction model of a mobile terminal is completed. Since only the two-dimensional CNN is used, the parameters of the whole network model are greatly reduced. The semantic segmentation deep learning framework can be conveniently placed on common Android equipment, so that a whole set of Android mobile terminal three-dimensional reconstruction and semantic segmentation solution is realized. | en | Three-dimensional reconstruction and semantic segmentation method for indoor scene of Android mobile terminal | 58240906_,58632991_ | 58499456_,60621895_,70740791_,62206762_,60049052_ | G06T 17/00,G06V 10/267 | [
"G06K 9/34",
"G06T 17/00"
] | 136,596 |
575,551,715 | 2021-03-30 | 75,698,053 | N | Provided are a conversation intent recognition model training method, apparatus, device, and medium, relating to the technical field of semantic parsing. First conversation sample data is inputted into a search model constructed on the basis of elastic search (ES) to determine augmented sample data (S20); the first conversation sample data and the augmented sample data are inputted to an initial intent recognition model, and augmented intent recognition is performed on the first conversation sample data and the augmented sample data to obtain a first sample distribution and a second sample distribution (S30); distribution loss values are determined according to the first sample distribution and the second sample distribution, and a total loss value of the initial intent recognition model is determined according to the distribution loss values (S40); if the total loss value does not reach a preset convergence condition, then a first initial parameter of an initial intent recognition model is updated and iterated until the total loss value reaches the preset convergence condition, at which time the initial intention recognition model after convergence is recorded as a conversation intent recognition model (S50); thus the recognition accuracy of the intent recognition model is improved. | en | CONVERSATION INTENT RECOGNITION MODEL TRAINING METHOD, APPARATUS, COMPUTER DEVICE, AND MEDIUM | 63942312_CN | 67641874_CN,67304325_CN,69638817_CN,83050528_CN | G06K 9/6256,G06N 3/0445,G06N 3/0454,G06N 3/08 | [
"G06F 40/35",
"G06F 40/20",
"G06F 40/30"
] | 167,402 |
551,561,098 | 2020-08-12 | 76,129,711 | N | Disclosed is a visual and tactile integration assessment method using virtual reality, in which a visual and tactile integration assessment apparatus: supports a first device to provide a full tactile stimulus or an approximate tactile stimulus to a user; supports a process, in which a virtual object comes into full contact or approximate contact with a virtual body image-processed by a third device, to be provided, as a virtual visual stimulus, to the user through a second device; acquires, as first input data, space-time coordinates corresponding to a first contact point at which a certain actual object is in full contact or approximate contact with the user's actual body; acquires, as second input data, space-time coordinates corresponding to a second contact point at which the virtual object is in full contact or approximate contact with the virtual body; acquires, as first output data, space-time coordinates corresponding to the user's response to the full tactile stimulus or the approximate tactile stimulus; acquires, as second output data, space-time coordinates corresponding to the user's response to the virtual visual stimulus; determines, by referring to the acquired data, whether the user's visual and tactile integration is normal; and stores a result of the determination. | en | VISUAL AND TACTILE INTEGRATION ASSESSMENT METHOD AND APPARATUS USING VIRTUAL REALITY | 64751533_KR | 68130841_KR,64314915_KR,81455612_KR | A61B 5/00,A61B 5/16,A61B 5/162,A61B 5/4005,A61B 5/4076,A61B 5/4827,A61B 5/4884 | [
"A61B 5/16",
"A61B 5/00"
] | 153,937 |
4,817,410 | 2002-04-08 | 27,403,267 | N | Devices and methods of use thereof for determining the presence and concentration of chemicals in a cell, tissue, organ or organism involve semiderivative voltammetric measurements and chronoamperometric measurements of chemicals, e.g. neurotransmitters, precursors, and metabolites. Methods of diagnosing and/or treating a subject with abnormal levels of neurotransmitters include those having or at risk of developing epilepsy, diseases of the basal ganglia, athetoid, dystonic diseases, neoplasms, Parkinson's disease, brain injuries, spinal cord injuries, and cancer. Microvoltammetry methods may be performed in vitro, in vivo, or in situ to differentiate white matter from grey matter, diagnose brain tumors, for cancer diagnosis and treatment, and to locate a tumor's position. Broderick probes are used to determine the concentration of the material, e.g. dopamine, norepinephrine, and serotonin, in the brains of patients having epilepsy. In some embodiments of the invention, regions of the brain to be resected or to be targeted for pharmaceutical therapy are identified using Broderick probes. The invention further provides methods of measuring the neurotoxicity of a material by comparing Broderick probe microvoltammograms of a neural tissue in the presence and absence of the material. | en | DIAGNOSIS AND TREATMENT OF NEURAL DISEASE AND INJURY USING MICROVOLTAMMETRY | 6311079_US,15737027_US | 16769771_US,16369524_US | A61B 5/14542,A61B 5/1473,A61B 5/4094 | [
"A61B 5/00"
] | 8,063 |
566,027,209 | 2021-07-30 | 80,003,854 | N | Systems and methods are disclosed for detecting both the occurrence and type of driver distraction experienced by a driver, determined through evaluation of sensed vehicle conditions or activities, such as steering, braking, lane changing, etc. as detected by one or more sensors on or associated with the vehicle. That detection of the occurrence and type of driver distraction may then be used to initiate another action, including initiating an audible or visual alert to the driver, taking measures to interfere with or stop operation of the device that is causing the distraction, log the occurrence and type of distraction that occurred, and report the occurrence and type of distraction to an outside monitoring computer (such as one associated with a guardian of the driver, an insurer of the vehicle, a law enforcement authority, or the like). To allow such detection, the system and method set forth herein employ machine learning methods to first train a classifier to classify certain driver behaviors (as evidenced by sensed vehicle movements and conditions) as a distraction or non-distraction event, and if a distraction event is detected then to further classify a type of such distraction event, and then to apply the trained classifier to classify the driver's ongoing driving activity. | en | SYSTEM AND METHOD FOR DRIVER DISTRACTION DETECTION AND CLASSIFICATION | 12373409_US | 81870409_US,77857001_US,83610490_US,83527433_US | B60W 40/02,B60W 40/09,B60W 50/14,B60W2050/143,B60W2050/146,B60W2540/229,B60W2540/30,B60W2552/00,B60W2556/10,B60W2756/10,G06K 9/00536,G06K 9/6267,G06N 20/00,G06N 20/10,G06V 20/597 | [
"G06K 9/62",
"B60W 50/14",
"B60W 40/09",
"B60W 40/02",
"G06K 9/00",
"G06N 20/00"
] | 162,761 |
415,194,378 | 2013-02-13 | 50,100,788 | N | The present invention enables people to propose campaigns for crowd-source funding intended to label some public claim as bogus or to label some public claim that's being widely denied as true. A fee and several inputs are required to submit the proposed campaign application for vetting, review, editing and approval. Approved campaigns are published on a website so people can find them and contribute to them. A cash bounty is offered as a reward to any challenger who can refute (falsify) the campaign message. A fee and several inputs are required to submit a challenge for evaluation and decision. Challenges are processed sequentially until one is sustained (upheld as proving the campaign position is wrong), or until the campaign duration expires. When a challenge is sustained, the bounty is paid to the challenger, other pending challenges are closed and their fees are returned, and the campaign ends as 'falsified' ('unsuccessful'). When the campaign duration expires without a sustained challenge, the campaign ends as 'successful' and a monetary reward is paid to the campaign creator and the remainder of the unspent bounty is returned to the sponsors in proportion to their contributions to the campaign fund. Sponsors can provide comments and evidence to augment the campaign materials. | en | System and Method for Promoting Truth in Public Discourse | 47190278_US | 8274819_US | G06Q 10/10,G06Q 50/182 | [
"G06Q 50/18",
"G06Q 20/38"
] | 84,872 |
273,164 | 2005-09-23 | 35,414,976 | N | In order to establish a method to measure the cognitive capacity to reconstruct from incomplete data original data explainable by correspondence to brain function, to apply these results to conduct research on brain function and in the future to select and determine the suitability of training appropriate to each individual, and to contribute to early detection, etc. of diseases related to cognitive function such as Alzheimer type dementia, [a configuration] is made such that: a degraded image, which is an image for which specified processing has been conducted on an original image having a significant photographic object and the data for recognizing said photographic object has been degraded, is displayed to a subject; the fact that said photographic object has been recognized is received from the subject; the recognition time, which is the time required for the subject to recognize said photographic object, is calculated based on these received results; and the cognitive capacity score, which is an index that digitizes the cognitive capacity of the subject by specified computations, is calculated from this recognition time and the challenge level data, which is data related to the challenge level of said degraded image in conjunction with the recognition of the photographic object. | en | Cognitive capacity measurement device and method | 56538_JP | 450763_JP | A61B 5/16,A61B 5/162,A61B 5/4088 | [
"A61B 5/16"
] | 1,541 |
53,887,672 | 1992-04-10 | 27,511,223 | Y | Disclosed is an apparatus for examining the field of vision, having a scanning device, which is provided with beam deflecting and beam imaging elements, via which the illumination light beam from an illumination light source is guided onto the region of the fundus oculi to be imaged and, if need be, the light reflected from the fundus oculi is guided to a detector device, from the time-sequential output signal of which an evaluation and synchronization unit generates an image of the scanned section by points, and having a control unit, which controls the intensity of the illumination light beam scanning the fundus oculi in such a manner that marks, respectively patterns, are projected onto a predeterminable region of the fundus oculi with predeterminable brightness, which the person under examination perceives, respectively does not perceive in the event of defects in the field of vision. The present invention is distinguished by the fact that in order to set a specific value of brightness of the marks, respectively of the patterns, the control unit switches the illumination light beam within the time span, during which the illumination light beam illuminates a scanning point, from a first intensity value to at least a second intensity value for a specific fraction of this time span. | en | Apparatus for examining the field of vision | 10281280_DE | 6216155_DE | A61B 3/024,A61B 3/1225 | [
"A61B 3/024",
"A61B 3/12"
] | 47,422 |
49,305,928 | 1988-12-16 | 27,303,671 | Y | An optical reader including a device for effecting discrimination between a dither-matrix reading area and a non-dither-matrix reading area on a subject copy, an irradiating device for irradiating local segments of the subject copy with light beams having different intensities, a reading device for obtaining image data representative of the presence or absence of an achromatic tone in each local segment based on an amount of the light beam reflected by each local segment, a control device for operating the irradiating and reading devices to irradiate the local segments in a predetermined discriminating zone of the subject copy with the light beams having different intensities at different times, to obtain first and second image data, a device for comparing the first and second image data of each local segment in the discrimination zone, to determine whether the first and second image data agree with each other or not for each local segment, and a device for determining that the discrimination zone is the dither-matrix reading area, if a degree of disagreement of the first and second image data exceeds a predetermined reference value, and determining that the discrimination zone is the non-dither-matrix reading area, if the degree of disagreement does not exceed the reference value. | en | Optical reader having apparatus for discrimination between dither-matrix and non-dither-matrix reading areas, and/or means for determining light emitter drive power values by using reference reflector surface | 5243852_JP | 7198867_JP,7352495_JP,7198874_JP,7198869_JP,7198871_JP,5756764_JP,7352494_JP,7198870_JP,5886896_JP | H04N 1/40056,H04N 1/40062 | [
"H04N 1/40"
] | 38,542 |
4,197,363 | 1984-01-25 | 8,191,740 | Y | An averaging method which eliminates periodic strays and which is usable in analog-digital computers performing an averaging procedure. Such a procedure which processes a set of signals picked up from an examined object elicits an averaged evoked response due to its capability of reducing internal noise and periodic strays originated in an environment comprising a mains supply network proportionally to the square root of N sweeps of the procedure, and in the case where the periodic strays and external stimuli are synchronous, the averaged periodic strays are not sufficiently reduced and distort the averaged evoked response making its correct interpretation difficult. In order to overcome the latter problem, the averaging method eliminates the periodic strays from the set of signals during the averaging procedure to a greater extent, i.e. proportionally to the number of N sweeps of the averaging procedure resulting in their at least N-fold reduction thanks to introducing permanent desynchronization between the frequency ? of the external stimuli and the frequency ? of the periodic strays and thanks to introducing an odd number K of half-periods of the frequency ?, which number K determines the value of the period T according to the formula: T = ? t K, where K is a natural and odd integer. | en | AVERAGING METHOD FOR PERIODIC STRAYS ELIMINATION AND THE COUNTING CIRCUIT FOR EVOKED RESPONSES MEASURING SET-UP FOR APPLYING THE METHOD | 16107076_ | 13233863_,16107073_ | A61B 5/377,H04B 15/005 | [
"H04B 15/00",
"A61B 5/0484"
] | 3,544 |
513,303,518 | 2019-05-22 | 62,528,246 | N | The invention relates to a system and a method for automatically generating an appealing visual based on an original visual captured by a vehicle mounted camera. A semantic image content and its arrangement in the original visual is computed; an optimization process is performed that improves an appeal of the original visual by making it more similar to a set of predetermined traits. The optimization process includes adding information to the original visual to generate an enhanced visual by adapting content from further visuals. The further visuals can be captured by other sensors or created based on information from other sensors or from a database of visuals. The optimization process comprises iteratively adjusting a geometric parameter set of the enhanced visual to generate a certain perspective or morphing to improve an arrangement of semantics in the enhanced visual. Finally, the optimized parameter set is applied to the enhanced visual. Post-processing may be conducted after applying the optimized parameter set using a set of templates to generate a final visual. The final visual is output for immediate or later use. The set of templates is defined by a set of preferences for the resulting enhanced visual. User feedback on the final visual can be utilized to improve criteria of appeal. | en | METHOD AND SYSTEM FOR AUTOMATICALLY GENERATING AN APPEALING VISUAL BASED ON AN ORIGINAL VISUAL CAPTURED BY THE VEHICLE MOUNTED CAMERA | 81135_DE | 72083608_DE,72116499_DE,72080204_DE | G06T 5/00,G06T 5/006,G06T 5/007,G06T 11/00,G06T 11/60,G06V 30/274,H04N 5/2253 | [
"G06T 11/00",
"G06T 5/00"
] | 128,137 |
442,616,445 | 2014-11-12 | 51,409,400 | Y | PCT/JP2014/079937 Provided is a biological information generation method, a biological information generation program, and a biological information generation apparatus, which are capable of easily acquiring information on a biological condition by using a human's response to a smell. A biological information generation method for generating graphic information for diagnosing a biological condition of a subject by using an olfactory stimulant includes: under a situation in which fragrant molecules are classified according to a height of a polarity and a charged state, a step of classifying the respective fragrant components capable of constituting olfactory stimulants into N types of classifications; a step of associating the fragrant components of each classification with an action thereof; a step of determining an unpleasure degree for olfactory stimulation given to the subject with respect to the N types of the olfactory stimulants; a step of assigning a correlation coefficient of the classification according to a main component in the N types of the olfactory stimulants and the actions associated with the corresponding classifications; and a step of generating graphic information for illustrating the biological condition of the subject with respect to each of the classifications. | en | Biological information generation method, biological information generation program, and biological information generation apparatus | 50107986_JP | 50469819_ | A61B 5/4011,A61B 5/7264,A61B 5/7275,A61B 5/742,A61B 5/7475,G16H 50/20 | [
"A61B 5/00"
] | 94,739 |
336,148,012 | 2010-06-17 | 30,117,624 | N | This invention provides a method for isolating and identifying proteins participating in protein-protein interactions in a complex mixture. The method uses a chemically reactive supporting matrix to isolate proteins that in turn non-covalently bind other proteins. The supporting matrix is isolated, and the non-covalently bound proteins are subsequently released for analysis. Because the proteins are accessible to chemical manipulation at both the binding and release steps, identification of the non-covalently bound proteins yields information on specific classes of interacting proteins, such as calcium-dependent or substrate-dependent protein interactions. This permits selection of a subpopulation of proteins from a complex mixture on the basis of specified interaction criteria. The method has the advantage of screening the entire proteome simultaneously, unlike two-hybrid systems or phage display methods which can only detect proteins binding to a single bait protein at a time. The method is applicable to the study of protein-protein interactions in biopsy and autopsy specimens, to the study of protein-protein interactions in the presence of signaling molecules, pharmacological agents or toxins, and for comparison of diseased and normal tissues or cancerous and untransformed cells. | en | PCK ACTIVATION AS A MEANS FOR ENHANCING sAPPa SECRETION AND IMPROVING COGNITION USING BRYOSTATIN TYPE COMPOUNDS | 11838273_US,5885402_US,7111608_US | 11838273_US,5885402_US | A61K 31/00,A61K 31/335,A61K 31/35,A61K 31/365,A61K 31/7048,A61P 25/00,A61P 25/16,A61P 25/28,A61P 35/00,A61P 43/00 | [
"A61P 25/16"
] | 69,030 |
553,244,618 | 2020-10-16 | 70,117,991 | N | A method, system and device for formulating and implementing a personalized paced breathing exercise prescription. The method comprises: defining a personalized paced breathing exercise prescription (210); guiding a testee to be subjected to a cardiopulmonary resonance test: when the testee is in a resting state, guiding the testee to breathe freely and carry out guided breathing respectively, and calculating cardiopulmonary resonance indices according to synchronously measured electrocardio signals and breathing signals (220); formulating a personalized paced breathing exercise prescription according to the cardiopulmonary resonance test result of the testee: selecting a guided breathing frequency for the optimal cardiopulmonary resonance indices of the testee as a breathing frequency parameter of a breathing exercise, and taking values corresponding to the other parameters in the personalized paced breathing exercise prescription as default values (230); and according to the personalized paced breathing exercise prescription, guiding the testee to carry out breathing exercises (240). The cardiopulmonary resonance state of a testee is evaluated by taking cardiopulmonary resonance indices as key indexes, and thereby a personalized paced breathing exercise prescription is formulated. | en | METHOD, SYSTEM AND DEVICE FOR FORMULATING AND IMPLEMENTING PERSONALIZED PACED BREATHING EXERCISE PRESCRIPTION | 73998543_CN | 73385419_CN | A61B 5/0205,A61B 5/08,A61B 5/11,A61B 5/1116,A61B 5/318,A61B 5/48 | [
"A61B 5/00",
"A61B 5/11",
"A61B 5/08",
"A61B 5/0205"
] | 154,989 |
556,590,114 | 2020-09-28 | 71,648,822 | N | The present application relates to the field of artificial intelligence. Disclosed are a semantic understanding model training method and apparatus, a computer device, and a storage medium. The method comprises: obtaining a total word sequence corresponding to a training text from a training set; randomly selecting a preset number of continuous word vectors from the total word sequence, replacing the continuous word vectors with a mask sequence to obtain an input word sequence, and taking the preset number of continuous word vectors as a test output word sequence; inputting the input word sequence into an encoder-attention-decoder model for training to obtain a prediction output word sequence; according to a difference between the prediction output word sequence and the test output word sequence, adjusting model parameters of the encoder-attention-decoder model to reduce the difference; and returning to the step of inputting the input word sequence into an encoder-attention-decoder model for training to obtain a prediction output word sequence, continuing training, and stopping training until a preset training stop condition is satisfied, so as to obtain a semantic understanding model. The present application improves the understanding accuracy of a computer to the natural language. | en | SEMANTIC UNDERSTANDING MODEL TRAINING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM | 63942312_CN | 77580007_CN,63604368_CN,67315619_CN | G06F 16/3344,G06K 9/6256,G06N 3/0454 | [
"G06F 16/33",
"G06F 40/30",
"G06F 40/289"
] | 157,320 |
48,780,834 | 2002-08-16 | 34,810,945 | Y | Visual methods and systems are described for detecting alertness and vigilance of persons under conditions of fatigue, lack of sleep, and exposure to mind altering substances such as alcohol and drugs. In particular, the invention can have particular applications for truck drivers, bus drivers, train operators, pilots and watercraft controllers and stationary heavy equipment operators, and students and employees during either daytime or nighttime conditions. The invention robustly tracks a person's head and facial features with a single on-board camera with a fully automatic system that can initialize automatically, can reinitialize when it needs to, and provides outputs in real time. The system can classify rotation in all viewing directions, detect eye/mouth occlusion, detect eye blinking, and recover the 3D (three-dimensional) gaze of the eyes. In addition, the system is able to track both through occlusion like eye blinking and also through occlusion like rotation. Outputs can be visual and sound alarms to the driver directly. Additional outputs can slow down the vehicle and/or cause the vehicle to come to a full stop. Further outputs can send data on driver, operator, student and employee vigilance to remote locales as needed for alarms and initiating other actions. | en | Algorithm for monitoring head/eye motion for driver alertness with one camera | 5619295_US | 6956354_US,6956355_US,6956353_US | B60K 28/066,G08B 21/06 | [
"B60K 28/06",
"G08B 21/06"
] | 37,757 |
495,181,861 | 2017-01-16 | 58,329,531 | N | A self-driven and adaptive-gait wearable knee-joint walking assistance device, comprising: a left-foot power output component (1); a right-foot power output component (2); a left knee joint assistance execution component (3); a right knee joint assistance execution component (4); and a drive power transmission device. The device is based on the lever principle and a slider-crank mechanism, and utilizes a manner of operation combining an ipsilateral parallel drive input with a contralateral cross drive input. Human bodyweight is used as a driving force during a walking process, so as to enable the left-foot power output component (1) to provide an extending tensile force for the left knee joint assistance execution component (3), and provide a bending tensile force for the right knee joint assistance execution component (4); the right-foot power output component (2) provides an extending tensile force for the right knee joint assistance execution component (4), and provides a bending tensile force for the left knee joint assistance execution component (3), such that the device generates twisting moments for assisting the left and the right knee joints to bend and extend according to a rhythm of a gait cycle during the walking process, thus achieving the purpose of walking assistance. | en | SELF-DRIVEN AND ADAPTIVE-GAIT WEARABLE KNEE-JOINT WALKING ASSISTANCE DEVICE | 63565149_CN | 64525973_CN,63898207_CN | A61H 1/00,A61H 1/024,A61H 3/00,A61H2003/005,A61H2003/007,A61H2201/0157,A61H2201/0165,A61H2201/1276,A61H2201/1284,A61H2201/14,A61H2201/1642,A61H2201/165,A61H2201/1652,A61H2205/102 | [
"A61H 3/00"
] | 118,188 |
476,531,103 | 2015-10-30 | 58,276,213 | N | The present invention provides a method for identifying images of brain function. In the beginning, choosing one of the brain data collected by multichannel scalp EEG/MEG, and using a mode decomposition method to obtain a plurality of intrinsic mode functions for each brain data, transforming the intrinsic mode functions (IMFs) in the same frequency scale into a plurality of source IMFs across the cerebral cortex by a source reconstruction algorithm, and classifying each source IMF in the same frequency scale into a plurality of frequency regions corresponding to the different brain sites. Then, repeatedly choosing a source IMF, and obtaining an amplitude envelope line through each absolute value of the source IMF. Further to obtain a plurality of source first-layer amplitude IMFs decomposed from the function of the amplitude envelope line by the mode decomposition method. Until obtaining the source first-layer amplitude IMFs from each source IMF, classifying each source first-layer amplitude IMF in the same amplitude frequency scale into a plurality of amplitude frequency regions corresponding to the different brain sites. In the end, a brain amplitude modulation spectrum is provided for analyzing the relationship between each frequency region and each amplitude frequency region. | en | Method for Identifying Images of Brain Function and System Thereof | 49985815_TW | 54725212_TW,54725014_TW,54675770_TW | A61B 5/0042,A61B 5/245,A61B 5/316,A61B 5/369,A61B 5/374,A61B 5/4064,A61B 5/7235,A61B 5/7246,A61B 5/7253,A61B2576/026,G16H 30/40 | [
"A61B 5/00",
"A61B 5/0476",
"A61B 5/04"
] | 107,607 |
24,004,935 | 2000-05-18 | 23,996,885 | N | An audiological screening and testing method and apparatus for newborns and infants employing audiological screening via statistical phase analysis of otoacoustic emissions in response to acoustic stimuli comprising: generating one or more stimuli with acoustic transmitters in each ear canal of the infant or newborn, collecting any transient evoked and distortion product otoacoustic emissions generated by the cochlea in each ear canal in response to the stimulus with microphones for generating a frequency mixed product electronic signal, inputting the frequency mixed product electronic signal from the microphone means and the stimulus frequencies into a computer processor, amplifying the frequency mixed product electronic signal with an input amplifier, computer analyzing the frequencies of a measured acoustic signal by means of a frequency and phase analyzer to separate the different frequencies and phases from one another, computer statistically evaluating the different acoustic signal components separately by means of binomial statistics to determine whether the measured signal contains stimulus elicited components for each frequency on a defined level of significance, and displaying if the otoacoustic signal response is or is not statistically significant on a computer display. | en | AUDIOLOGICAL SCREENING METHOD AND APPARATUS | 13006147_DE,29562600_DE,23300977_DE | 13006147_DE,29562600_DE,23300977_DE | A61B 5/121,A61B2560/0406,A61B2560/0443 | [
"A61B 5/12"
] | 23,213 |
15,912,025 | 2002-07-26 | 19,188,451 | N | An apparatus (10) for evaluating arteriosclerosis of a living subject, comprising a pulse-wave detecting device (12, 54) which detects a pulse wave from each of a first portion (14) and a second portion (38) of the subject, each of the respective pulse waves detected from the first and second portions containing an incident-wave component; an augmentation-index determining means (88) for determining, based on the pulse wave detected from the first portion by the pulse-wave detecting device, a first augmentation index indicative of a degree of augmentation of an amplitude of the pulse wave detected from the first portion, from an amplitude of the incident-wave component of the pulse wave detected from the first portion, and determining, based on the pulse wave detected from the second portion by the pulse-wave detecting device, a second augmentation index indicative of a degree of augmentation of an amplitude of the pulse wave detected from the second portion, from an amplitude of the incident-wave component of the pulse wave detected from the second portion; and an arteriosclerosis evaluating means (92) for evaluating the arteriosclerosis of the subject, based on a comparison of the first and second augmentation indexes determined by the augmentation-index determining means. <IMAGE> | en | Arterisclerosis evaluating apparatus | 779631_JP | 1224873_JP | A61B 5/02,A61B 5/02007,A61B 5/021,A61B 5/0285 | [
"A61B 5/0285",
"A61B 5/0245",
"A61B 5/02"
] | 14,008 |
491,835,900 | 2017-06-30 | 59,655,831 | Y | Provided are a diagnosis assisting device, an imaging processing method in the diagnosis assisting device, and a non-transitory storage medium having stored therein a program that facilitate a grasp of a difference in a diseased area to perform a highly precise diagnosis assistance. According to an image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, a reference image corresponding to a known first picked-up image relating to lesions is registered in a database, and when a diagnosis assistance is performed by comparing a query image corresponding to an unknown second picked-up image relating to lesions with the reference image in the database, a reference image is created from the reference image by geometric transformation, or a query image is created from the query image by geometric transformation. Selected drawing: FIG. 1 | en | Diagnosis assisting device, image processing method in diagnosis assisting device, and non-transitory storage medium having stored therein program | 5243020_JP | 46810959_ | A61B 5/0077,A61B 5/444,A61B 5/7267,G06T 7/0014,G06T 7/11,G06T 7/337,G06T 7/62,G06T2207/10024,G06T2207/20081,G06T2207/20084,G06T2207/30088,G06T2207/30096,G06V 10/40 | [
"G06T 7/60",
"G06T 3/00",
"A61B 5/00",
"A61B 8/08"
] | 116,122 |
46,537,480 | 1998-04-08 | 8,228,198 | Y | The invention relates to a fast imaging method based on gradient recalled echoes of nuclear spins whose excitation and echo formation are not contained in the same sequence. The method has an increased susceptibility to variations in the time constant T2* of the free induction decay of the MR signal and is used in, for example, functional MR imaging studies that are based on temporary changes in T2* which are caused by local changes in magnetic susceptibility e.g. local changes in brain oxygenation state of a human or animal body. In order to reduce the susceptibility of the image quality to motion navigator gradients are generated in each sequence so as to measure a navigator MR signal. From the measured navigator signals a phase correction is determined and the MR signals measured are corrected by means of this phase correction. The invention is based on the insight that the image quality is dependent on phase errors in successive MR signals and that motion of the body makes a substantial contribution to these phase errors. Furthermore, the motion-related phase error of the navigator MR signal and the phase error of the MR signal are correlated. Therefore, the correction of phase errors of the measured MR signals can be determined from the phases of navigator MR signals measured. | en | Shifted echo MR method and device | 5229768_US | 5790404_NL,5539603_NL | G01R 33/4806,G01R 33/50,G01R 33/56554 | [
"A61B 5/055",
"G01R 33/54",
"G01R 33/50",
"G01R 33/561",
"G01R 33/48"
] | 32,619 |
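
The records above pair pipe-delimited fields with an ipc column rendered as a list of classification codes. As a minimal, purely illustrative sketch of how one such row might be filtered by IPC subclass, the snippet below assumes a record already parsed into a Python dict whose keys mirror the columns of this dump; the sample values are abridged from the last row above, and the helper name has_ipc_subclass is hypothetical, not part of the dataset.

```python
# Minimal sketch (assumption: rows have already been parsed into dicts whose
# keys mirror the columns of this dump, e.g. "appln_id", "appln_title", "ipc").
# The sample record is abridged from a row above; the helper is hypothetical.
sample_record = {
    "appln_id": 46537480,
    "appln_title": "Shifted echo MR method and device",
    "ipc": ["A61B 5/055", "G01R 33/54", "G01R 33/50", "G01R 33/561", "G01R 33/48"],
}


def has_ipc_subclass(record: dict, prefix: str) -> bool:
    """Return True if any IPC code of the record starts with the given prefix.

    IPC strings in this dump look like "G01R 33/48"; prefixes such as "G01R"
    or "G01R 33" select progressively narrower technology groups.
    """
    return any(code.startswith(prefix) for code in record.get("ipc", []))


print(has_ipc_subclass(sample_record, "G01R 33"))  # True
print(has_ipc_subclass(sample_record, "C07D"))     # False
```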