Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
125 | 1 | 126 | null | 1 | 770 | I want to learn about NoSQL and when it is better to use SQL or NoSQL. I know that this depends on the case, but I'm asking for good documentation on NoSQL and some explanation of when it is better to use SQL or NoSQL (use cases, etc.). Also, your opinions on NoSQL databases, and any recommendations for learning about this topic, are welcome.
| How to learn noSQL databases and how to know when SQL or noSQL is better | CC BY-SA 3.0 | null | 2014-05-17T21:52:34.563 | 2021-03-15T21:14:08.800 | 2014-05-18T16:10:50.820 | 84 | null | [
"nosql"
] |
126 | 2 | null | 125 | 2 | null | Please have a look at my answer here:
[Motivations for using relational database / ORM or document database / ODM](https://stackoverflow.com/questions/13528216/motivations-for-using-relational-database-orm-or-document-database-odm/13599767#13599767)
Short version:
- Use NoSQL when data size and the number of transactions per second force it, which typically happens above a few tens of TB and millions of transactions per second (db in memory, running on a cluster), or at hundreds of TB and thousands of transactions per second (traditional db on disk; transactions per second is highly dependent on the usage pattern). Traditional SQL scales up to that point just fine.
- NoSQL is well suited for some problems (data has a natural sharding, schema is flexible, eventual consistency is ok). You can use it for those even if scaling doesn't force you to.
- Developer familiarity with tools and ops ease of deployment are major factors; don't overlook them. A solution may be technically better, but you may have a hard time using it, so make sure you need it and budget for the learning curve.
As to how to learn it: fire up a MongoDB image on AWS, or DynamoDB, and have fun!
- MongoDB on AWS tutorial
- DynamoDB tutorial
| null | CC BY-SA 3.0 | null | 2014-05-17T23:53:42.700 | 2016-04-11T22:08:14.620 | 2017-05-23T12:38:53.587 | -1 | 26 | null |
128 | 1 | 296 | null | 62 | 31257 | [Latent Dirichlet Allocation (LDA)](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) and [Hierarchical Dirichlet Process (HDP)](http://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process) are both topic modeling processes. The major difference is LDA requires the specification of the number of topics, and HDP doesn't. Why is that so? And what are the differences, pros, and cons of both topic modelling methods?
| Latent Dirichlet Allocation vs Hierarchical Dirichlet Process | CC BY-SA 3.0 | null | 2014-05-18T06:10:52.543 | 2021-02-04T09:10:56.807 | 2014-05-20T13:45:59.373 | 84 | 122 | [
"nlp",
"topic-model",
"lda"
] |
129 | 1 | 166 | null | 10 | 1581 | [This question](https://stackoverflow.com/questions/879432/what-is-the-difference-between-a-generative-and-discriminative-algorithm) asks about generative vs. discriminative algorithms, but can someone give an example of the difference between these forms when applied to Natural Language Processing? How are generative and discriminative models used in NLP?
| What is generative and discriminative model? How are they used in Natural Language Processing? | CC BY-SA 3.0 | null | 2014-05-18T06:17:37.587 | 2014-05-19T11:13:48.067 | 2017-05-23T12:38:53.587 | -1 | 122 | [
"nlp",
"language-model"
] |
130 | 1 | 132 | null | 70 | 39376 | From wikipedia:
>
dimensionality reduction or dimension reduction is the process of
reducing the number of random variables under consideration, and
can be divided into feature selection and feature extraction.
What is the difference between feature selection and feature extraction?
What is an example of dimensionality reduction in a Natural Language Processing task?
| What is dimensionality reduction? What is the difference between feature selection and extraction? | CC BY-SA 4.0 | null | 2014-05-18T06:26:15.673 | 2021-02-07T20:32:33.683 | 2021-02-07T20:32:33.683 | 29169 | 122 | [
"feature-selection",
"feature-extraction",
"dimensionality-reduction"
] |
131 | 2 | null | 115 | 9 | null | Look it up on:
- Google Scholar link
- Citeseer link
If you get a single exact title match then you have probably found the right article, and can fill in the rest of the info from there. Both give you download links and bibtex-style output. What you would likely want to do, though, to get perfect metadata is download and parse the pdf (if any) and look for a DOI-style identifier.
Please be nice and rate-limit your requests if you do this.
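For that last step, here is a hedged, minimal sketch (my own, not tied to either service's API) of scanning already-extracted PDF text for DOI-like identifiers with a regular expression; the pattern is only an approximation and the `pdf_text` argument is assumed to hold the extracted text:
```
import re

# Approximate DOI pattern: "10." + registrant code + "/" + suffix.
# Real DOIs can be messier; treat matches only as candidates to verify.
DOI_RE = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def find_doi_candidates(pdf_text):
    # Return unique DOI-like strings found in already-extracted PDF text.
    return sorted(set(DOI_RE.findall(pdf_text)))

print(find_doi_candidates("See doi:10.1145/1327452.1327492 for details."))
```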
| null | CC BY-SA 3.0 | null | 2014-05-18T06:54:08.560 | 2014-05-18T06:54:08.560 | null | null | 26 | null |
132 | 2 | null | 130 | 57 | null | Simply put:
- feature selection: you select a subset of the original feature set; while
- feature extraction: you build a new set of features from the original feature set.
Examples of feature extraction: extraction of contours in images, extraction of digrams from a text, extraction of phonemes from a recording of spoken text, etc.
Feature extraction involves a transformation of the features, which often is not reversible because some information is lost in the process of dimensionality reduction.
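As a hedged illustration of this distinction (a minimal scikit-learn sketch of my own, not something the definitions above prescribe): univariate selection keeps a subset of the original columns, while PCA builds brand-new columns out of them.
```
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)            # 4 original features

# Feature selection: keep the 2 original columns that score best.
selected = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Feature extraction: build 2 new features as combinations of all 4.
extracted = PCA(n_components=2).fit_transform(X)

print(selected.shape, extracted.shape)       # both (150, 2), with different meanings
```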
| null | CC BY-SA 3.0 | null | 2014-05-18T07:53:58.203 | 2014-05-18T07:53:58.203 | null | null | 172 | null |
133 | 2 | null | 103 | 11 | null |
- I think a number of clustering algorithms that normally use a metric, do not actually rely on the metric properties (other than commutativity, but I think you'd have that here). For example, DBSCAN uses epsilon-neighborhoods around a point; there is nothing in there that specifically says the triangle inequality matters. So you can probably use DBSCAN, although you may have to do some kind of nonstandard spatial index to do efficient lookups in your case. Your version of epsilon-neighborhood will likely be sim > 1/epsilon rather than the other way around. Same story with k-means and related algorithms.
- Can you construct a metric from your similarity? One possibility: dist(ei, ej) = min( sim(ei, ek) + sim(ek, ej) ) for all k ... Alternately, can you provide an upper bound such that sim(ei, ej) < sim(ei, ek) + sim(ek, ej) + d, for all k and some positive constant d? Intuitively, large sim values mean closer together: is 1/sim metric-like? What about 1/(sim + constant)? What about min( 1/sim(ei, ek) + 1/sim(ek, ej) ) for all k? (that last is guaranteed to be a metric, btw). See the sketch after this list for one such conversion in practice.
- An alternate construction of a metric is to do an embedding. As a first step, you can try to map your points ei -> xi, such that xi minimize sum( abs( sim(ei, ej) - f( dist(xi, xj) ) ), for some suitable function f and metric dist. The function f converts distance in the embedding to a similarity-like value; you'd have to experiment a bit, but 1/dist or exp^-dist are good starting points. You'd also have to experiment on the best dimension for xi. From there, you can use conventional clustering on xi. The idea here is that you can almost (in a best fit sense) convert your distances in the embedding to similarity values, so they would cluster correctly.
- On the use of predefined parameters, all algorithms have some tuning. DBSCAN can find the number of clusters, but you still need to give it some parameters. In general, tuning requires multiple runs of the algorithm with different values for the tunable parameters, together with some function that evaluates goodness-of-clustering (either calculated separately, provided by the clustering algorithm itself, or just eyeballed :) If the character of your data doesn't change, you can tune once and then use those fixed parameters; if it changes then you have to tune for each run. You can find that out by tuning for each run and then comparing how well the parameters from one run work on another, compared to the parameters specifically tuned for that.
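A minimal sketch of the first two points combined, assuming you already have a symmetric matrix `sim` of pairwise similarities (larger means closer): convert similarities to distances, here simply 1/sim as one of the candidate constructions above, and feed them to DBSCAN with a precomputed metric.
```
import numpy as np
from sklearn.cluster import DBSCAN

# Assumed input: symmetric pairwise similarities (larger = more similar).
sim = np.array([[9.0, 8.0, 0.5],
                [8.0, 9.0, 0.4],
                [0.5, 0.4, 9.0]])

dist = 1.0 / sim                    # one possible similarity-to-distance mapping
np.fill_diagonal(dist, 0.0)         # each point is at distance 0 from itself

labels = DBSCAN(eps=0.3, min_samples=2, metric='precomputed').fit_predict(dist)
print(labels)                       # e.g. [0 0 -1]: first two cluster, third is noise
```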
| null | CC BY-SA 3.0 | null | 2014-05-18T09:09:47.780 | 2014-05-18T09:17:15.557 | 2014-05-18T09:17:15.557 | 26 | 26 | null |
134 | 1 | 246 | null | 11 | 703 | In our company, we have a MongoDB database containing a lot of unstructured data, on which we need to run map-reduce algorithms to generate reports and other analyses. We have two approaches to select from for implementing the required analyses:
- One approach is to extract the data from MongoDB to a Hadoop cluster and do the analysis completely in the Hadoop platform. However, this requires considerable investment in preparing the platform (software and hardware) and in educating the team to work with Hadoop and write map-reduce tasks for it.
- Another approach is to just put our effort into designing the map-reduce algorithms and run them on MongoDB's map-reduce functionality. This way, we can create an initial prototype of the final system that can generate the reports. I know that MongoDB's map-reduce functionality is much slower compared to Hadoop, but currently the data is not big enough for this to be a bottleneck, at least not for the next six months.
The question is: using the second approach and writing the algorithms for MongoDB, can they later be ported to Hadoop with little modification and algorithm redesign? MongoDB just supports JavaScript, but programming language differences are easy to handle. However, are there any fundamental differences in the map-reduce models of MongoDB and Hadoop that may force us to redesign the algorithms substantially for porting to Hadoop?
| Can map-reduce algorithms written for MongoDB be ported to Hadoop later? | CC BY-SA 3.0 | null | 2014-05-18T12:03:21.650 | 2014-06-10T02:42:02.050 | null | null | 227 | [
"scalability",
"apache-hadoop",
"map-reduce",
"mongodb"
] |
135 | 2 | null | 41 | 9 | null | Considering other criteria, I think that in some cases using Python may be much superior to R for Big Data. I know about the widespread use of R in data science educational materials and the good data analysis libraries available for it, but sometimes it just depends on the team.
In my experience, for people already familiar with programming, using Python provides much more flexibility and a bigger productivity boost compared to a language like R, which is not as well designed and powerful as Python as a programming language. As evidence, in a data mining course at my university, the best final project was written in Python, although the others had access to R's rich data analysis libraries. That is, sometimes the overall productivity (considering learning materials, documentation, etc.) of Python may be better than that of R, even in the absence of special-purpose data analysis libraries for Python. Also, there are some good articles explaining the fast pace of Python in data science: [Python Displacing R](http://readwrite.com/2013/11/25/python-displacing-r-as-the-programming-language-for-data-science) and [Rich Scientific Data Structures in Python](http://wesmckinney.com/blog/?p=77), which may soon fill the gap of libraries available for R.
Another important reason for not using R is that when working with real-world Big Data problems, contrary to purely academic problems, there is much need for other tools and techniques, like data parsing, cleaning, visualization, web scraping, and a lot of others that are much easier using a general-purpose programming language. This may be why the default language used in many Hadoop courses (including Udacity's [online course](https://www.udacity.com/course/ud617)) is Python.
Edit:
Recently DARPA has also invested $3 million to help fund Python's data processing and visualization capabilities for big data jobs, which is clearly a sign of Python's future in Big Data. ([details](http://www.computerworld.com/s/article/9236558/Python_gets_a_big_data_boost_from_DARPA))
| null | CC BY-SA 3.0 | null | 2014-05-18T12:30:06.853 | 2014-05-19T08:13:05.037 | 2014-05-19T08:13:05.037 | 227 | 227 | null |
136 | 5 | null | null | 0 | null | MapReduce is a framework for processing parallelizable problems across huge datasets using a large number of computers (nodes), collectively referred to as a cluster (if all nodes are on the same local network and use similar hardware) or a grid (if the nodes are shared across geographically and administratively distributed systems, and use more heterogeneous hardware). Computational processing can occur on data stored either in a file system (unstructured) or in a database (structured). MapReduce can take advantage of locality of data, processing it on or near the storage assets in order to reduce the distance over which it must be transmitted.
| null | CC BY-SA 4.0 | null | 2014-05-18T12:55:39.657 | 2021-02-10T14:59:02.167 | 2021-02-10T14:59:02.167 | 111611 | 227 | null |
137 | 4 | null | null | 0 | null | MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. | null | CC BY-SA 3.0 | null | 2014-05-18T12:55:39.657 | 2014-05-20T13:53:39.727 | 2014-05-20T13:53:39.727 | 227 | 227 | null |
138 | 1 | 251 | null | 10 | 223 | Any small database processing can be easily tackled by Python/Perl/... scripts that use libraries and/or even utilities from the language itself. However, when it comes to performance, people tend to reach for C/C++/low-level languages. The possibility of tailoring the code to the needs seems to be what makes these languages so appealing for BigData -- be it concerning memory management, parallelism, disk access, or even low-level optimizations (via assembly constructs at C/C++ level).
Of course such set of benefits would not come without a cost: writing the code, and sometimes even reinventing the wheel, can be quite expensive/tiresome. Although there are lots of libraries available, people are inclined to write the code by themselves whenever they need to grant performance. What disables performance assertions from using libraries while processing large databases?
For example, consider an enterprise that continuously crawls webpages and parses the collected data. For each sliding window, different data mining algorithms are run on the extracted data. Why would the developers ditch available libraries/frameworks (be it for crawling, text processing, or data mining)? Using what is already implemented would not only ease the burden of coding the whole process, but would also save a lot of time.
In a single shot:
- what makes writing the code by oneself a guarantee of performance?
- why is it risky to rely on a frameworks/libraries when you must assure high performance?
| Why is it hard to grant efficiency while using libraries? | CC BY-SA 3.0 | null | 2014-05-18T14:02:51.350 | 2014-06-10T05:57:13.897 | 2014-05-29T15:01:17.840 | 84 | 84 | [
"bigdata",
"efficiency",
"performance"
] |
139 | 2 | null | 76 | 3 | null | I'd suggest [Apache Kafka](http://kafka.apache.org) as message store and any stream processing solution of your choice like [Apache Camel](https://camel.apache.org) or [Twitter Storm](https://github.com/apache/incubator-storm)
| null | CC BY-SA 3.0 | null | 2014-05-18T14:04:37.870 | 2014-05-18T14:04:37.870 | null | null | 118 | null |
140 | 2 | null | 107 | 4 | null | I've read very good [article](http://www.michael-noll.com/blog/2013/01/18/implementing-real-time-trending-topics-in-storm/) recently that suggests using [Twitter storm](https://github.com/nathanmarz/storm) for a task that looks pretty similar to yours.
| null | CC BY-SA 3.0 | null | 2014-05-18T14:30:10.553 | 2014-05-18T14:30:10.553 | null | null | 118 | null |
141 | 5 | null | null | 0 | null | null | CC BY-SA 3.0 | null | 2014-05-18T14:36:16.350 | 2014-05-18T14:36:16.350 | 2014-05-18T14:36:16.350 | -1 | -1 | null |
|
142 | 4 | null | null | 0 | null | Efficiency, in algorithmic processing, is usually associated to resource usage. The metrics to evaluate the efficiency of a process are commonly account for execution time, memory/disk or storage requirements, network usage and power consumption. | null | CC BY-SA 3.0 | null | 2014-05-18T14:36:16.350 | 2014-05-20T13:51:49.240 | 2014-05-20T13:51:49.240 | 84 | 84 | null |
143 | 1 | 165 | null | 10 | 1015 | As we all know, there are data indexing techniques used by well-known indexing apps, like Lucene (for Java) or Lucene.NET (for .NET), MurmurHash, B+Tree, etc. For a NoSQL / object-oriented database (which I am trying to write/play around with a little in C#), which technique do you suggest?
I read about MurmurHash-2, and comments, especially about v3, say Murmur is very fast. Lucene.NET also gets good comments. But what about their memory footprints in general? Is there any efficient solution with a smaller footprint (and preferably faster) than Lucene or Murmur? Or should I write a special index structure to get the best results?
If I try to write my own, is there any accepted scale for a good index, something like 1% or 5% of the data node? Any useful hint will be appreciated.
| What is the most efficient data indexing technique | CC BY-SA 3.0 | null | 2014-05-18T14:37:20.477 | 2014-05-19T12:05:13.513 | 2014-05-19T12:05:13.513 | 229 | 229 | [
"nosql",
"efficiency",
"indexing",
"data-indexing-techniques",
".net"
] |
144 | 5 | null | null | 0 | null | [Cluster analysis](http://en.wikipedia.org/wiki/Cluster_analysis) is the task of grouping objects into subsets (called clusters) so that observations in the same cluster are similar in some sense, while observations in different clusters are dissimilar.
In [machine-learning](/questions/tagged/machine-learning) and [data-mining](/questions/tagged/data-mining), clustering is a method of unsupervised learning used to discover hidden structure in unlabeled data, and is commonly used in exploratory data analysis. Popular algorithms include [k-means](/questions/tagged/k-means), expectation maximization (EM), spectral clustering, correlation clustering and hierarchical clustering.
Related topics: [classification](/questions/tagged/classification), pattern-recognition, knowledge discovery, taxonomy. Not to be confused with cluster computing.
| null | CC BY-SA 3.0 | null | 2014-05-18T14:58:34.853 | 2018-01-01T18:56:53.570 | 2018-01-01T18:56:53.570 | 29575 | 29575 | null |
145 | 4 | null | null | 0 | null | Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval etc. | null | CC BY-SA 3.0 | null | 2014-05-18T14:58:34.853 | 2014-05-20T13:53:26.907 | 2014-05-20T13:53:26.907 | 118 | 118 | null |
146 | 5 | null | null | 0 | null | Natural language processing (NLP) is a subfield of artificial intelligence that involves transforming or extracting useful information from natural language data. Methods include machine-learning and rule-based approaches. It is often regarded as the engineering arm of Computational Linguistics.
NLP tasks
- Text pre-processing
- Coreference resolution
- Dependency parsing
- Document summarization
- Named entity recognition (NER) named-entity-recognition
- Information extraction (IE) information-retrieval
- Language modeling
- Part-of-speech (POS) tagging
- Morphological analysis and wordform generation
- Phrase-structure (constituency) parsing
- Machine translation (MT) machine-translation
- Question answering (QA)
- Sentiment analysis sentiment-analysis
- Semantic parsing
- Text categorization
- Textual entailment detection
- Topic modeling topic-model
- Word Sense Disambiguation (WSD)
Beginner books on Natural Language Processing
- Foundations of Statistical Natural Language Processing
- Speech and Language Processing
- NLTK Book
| null | CC BY-SA 3.0 | null | 2014-05-18T15:01:24.080 | 2018-04-10T01:03:28.253 | 2018-04-10T01:03:28.253 | 21163 | 29575 | null |
147 | 4 | null | null | 0 | null | Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input, and others involve natural language generation. | null | CC BY-SA 3.0 | null | 2014-05-18T15:01:24.080 | 2014-05-20T13:52:59.427 | 2014-05-20T13:52:59.427 | 118 | 118 | null |
148 | 5 | null | null | 0 | null | In computer science and technology, data is the most important part.
To work with data in an efficient manner, to store it on media and reuse it, we use some techniques generally named indexing.
This tag covers these techniques, their efficiency, and anything about using stored data with indexing
| null | CC BY-SA 3.0 | null | 2014-05-18T15:08:08.913 | 2014-05-20T13:53:31.717 | 2014-05-20T13:53:31.717 | 229 | 229 | null |
149 | 4 | null | null | 0 | null | In computer science and technology, data is the most important part.
To work with data in an efficient manner, to store it on media and reuse it, we use some techniques generally named indexing.
This tag covers these techniques, their efficiency, and anything about using stored data with indexing | null | CC BY-SA 3.0 | null | 2014-05-18T15:08:08.913 | 2014-05-20T13:52:19.333 | 2014-05-20T13:52:19.333 | 229 | 229 | null |
151 | 5 | null | null | 0 | null | Indexing is almost the most important aspect of working with data: it enables efficient, proper storage and retrieval of data from media.
Different indexing algorithms and structures can be found in different programming languages.
As an example, in the Java language the Apache Foundation's Lucene is very popular.
Over the years, very efficient, fast, and scalable indexing algorithms have been brought to the computing world, such as the MurmurHash algorithm (used by some NoSQL databases), the B+Tree algorithm (used by versions of the Windows OS itself), or Lucene (very popular in web technologies) and its variations.
Problem-specific indexing can also be found for chemical or medical data representations in the digital world
| null | CC BY-SA 3.0 | null | 2014-05-18T15:34:16.437 | 2014-05-20T13:48:12.163 | 2014-05-20T13:48:12.163 | 229 | 229 | null |
152 | 4 | null | null | 0 | null | Indexing is almost the most important aspect of working with data: it enables efficient, proper storage and retrieval of data from media | null | CC BY-SA 3.0 | null | 2014-05-18T15:34:16.437 | 2014-05-20T13:53:17.567 | 2014-05-20T13:53:17.567 | 229 | 229 | null |
153 | 2 | null | 81 | 3 | null | The answers presented so far are very nice, but I was also expecting an emphasis on a particular difference between parallel and distributed processing: the code executed. Considering parallel processes, the code executed is the same, regardless of the level of parallelism (instruction, data, task). You write a single code, and it will be executed by different threads/processors, e.g., while computing matrices products, or generating permutations.
On the other hand, distributed computing involves the execution of different algorithms/programs at the same time on different processors (from one or more machines). Such computations are later merged into intermediate/final results by using the available means of data communication/synchronization (shared memory, network). Further, distributed computing is very appealing for BigData processing, as it allows for exploiting disk parallelism (usually the bottleneck for large databases).
Finally, for the level of parallelism, it may be taken rather as a constraint on the synchronization. For example, in GPGPU, which is single-instruction multiple-data (SIMD), the parallelism occurs by having different inputs for a single instruction, each pair (data_i, instruction) being executed by a different thread. Such is the restraint that, in case of divergent branches, it is necessary to discard lots of unnecessary computations, until the threads reconverge. For CPU threads, though, they commonly diverge; yet, one may use synchronization structures to grant concurrent execution of specific sections of the code.
| null | CC BY-SA 3.0 | null | 2014-05-18T17:38:01.383 | 2014-05-18T17:38:01.383 | null | null | 84 | null |
154 | 2 | null | 125 | 3 | null | Check [Martin Fowler's Personal website](http://www.martinfowler.com). He writes good and specially answers one of your questions in his book: "NoSQL Distilled"
| null | CC BY-SA 4.0 | null | 2014-05-18T17:53:37.750 | 2021-03-15T21:14:08.800 | 2021-03-15T21:14:08.800 | 29169 | 229 | null |
155 | 1 | 158 | null | 201 | 32456 | One of the common problems in data science is gathering data from various sources in a somehow cleaned (semi-structured) format and combining metrics from various sources for making a higher-level analysis. Looking at other people's efforts, especially other questions on this site, it appears that many people in this field are doing somewhat repetitive work. For example, analyzing tweets, Facebook posts, Wikipedia articles, etc. is a part of a lot of big data problems.
Some of these data sets are accessible using public APIs provided by the provider site, but usually some valuable information or metrics are missing from these APIs, and everyone has to do the same analyses again and again. For example, although clustering users may depend on different use cases and selection of features, having a base clustering of Twitter/Facebook users would be useful in many Big Data applications, yet it is neither provided by the API nor available publicly in independent data sets.
Is there any index or publicly available data set hosting site containing valuable data sets that can be reused in solving other big data problems? I mean something like GitHub (or a group of sites/public datasets or at least a comprehensive listing) for the data science. If not, what are the reasons for not having such a platform for data science? The commercial value of data, need to frequently update data sets, ...? Can we not have an open-source model for sharing data sets devised for data scientists?
| Publicly Available Datasets | CC BY-SA 3.0 | null | 2014-05-18T18:45:38.957 | 2022-07-01T05:57:50.363 | 2016-12-05T22:33:53.380 | 26596 | 227 | [
"open-source",
"dataset"
] |
156 | 2 | null | 155 | 38 | null | [Freebase](https://www.freebase.com) is a free, community-driven database that spans many interesting topics and contains about 2.5 billion facts in machine-readable format. It also has a good API for performing data queries.
[Here](http://www.datapure.co/open-data-sets) is another compiled list of open data sets
| null | CC BY-SA 4.0 | null | 2014-05-18T19:19:44.240 | 2021-07-08T20:42:25.690 | 2021-07-08T20:42:25.690 | 120060 | 118 | null |
157 | 2 | null | 41 | 12 | null | R is great for "big data"! However, you need a workflow since R is limited (with some simplification) by the amount of RAM in the operating system. The approach I take is to interact with a relational database (see the `RSQLite` package for creating and interacting with a SQLite database), run SQL-style queries to understand the structure of the data, and then extract particular subsets of the data for computationally-intensive statistical analysis.
This is just one approach, however: there are packages that allow you to interact with other databases (e.g., Monet) or run analyses in R with fewer memory limitations (e.g., see `pbdR`).
| null | CC BY-SA 3.0 | null | 2014-05-18T19:22:05.160 | 2014-05-18T19:22:05.160 | null | null | 36 | null |
158 | 2 | null | 155 | 111 | null | There is, in fact, a very reasonable list of publicly-available datasets, supported by different enterprises/sources.
Some of them are below:
- Public Datasets on Amazon WebServices;
- Frequent Itemset Mining Implementation Repository;
- UCI Machine Learning Repository;
- KDnuggets -- a big list of lots of public repositories.
Now, two considerations on your question. The first one regards the policies of database sharing. From personal experience, there are some databases that can't be made publicly available, either because they involve privacy constraints (as with some social network information) or because they concern government information (like health system databases).
Another point concerns the usage/application of the dataset. Although some bases can be reprocessed to suit the needs of the application, it would be great to have some nice organization of the datasets by purpose. The taxonomy should involve social graph analysis, itemset mining, classification, and whatever other research areas there may be.
| null | CC BY-SA 3.0 | null | 2014-05-18T19:29:53.530 | 2016-11-29T04:04:46.463 | 2016-11-29T04:04:46.463 | 26596 | 84 | null |
159 | 1 | 160 | null | 6 | 558 | I have seen a lot of courses in Data Science emerging in the last 2 years. Even big universities like Stanford and Columbia offer an MS specifically in Data Science. But as far as I can see, it looks like data science is just a mix of computer science and statistics techniques.
So I always think about this: is it just a trend, and 10 years from now, will someone still mention Data Science as an entire field, or just as a subject/topic inside CS or stats?
What do you think?
| Is Data Science just a trend or is a long term concept? | CC BY-SA 3.0 | null | 2014-05-18T19:46:44.653 | 2014-05-18T21:05:28.990 | null | null | 199 | [
"bigdata",
"machine-learning",
"databases",
"statistics",
"education"
] |
160 | 2 | null | 159 | 12 | null | The one thing that you can say for sure is: Nobody can say this for sure. And it might indeed be opinion-based to some extent. The introduction of terms like "Big Data" that some people consider as "hypes" or "buzzwords" don't make it easier to flesh out an appropriate answer here. But I'll try.
In general, interdisciplinary fields often seem to have the problem of not being taken seriously by either of the fields they span. However, the more research is invested in a particular field, the greater the urge to split that field into several sub-topics. And these sub-topics sooner or later have to be re-combined in new ways, in order to prevent overspecialization, and to increase and broaden the applicability of the techniques that are developed by the (over?)specialized experts in the different fields.
And I consider "Data Science" as such an approach to combine the expertise and findings from different fields. You described it as
>
...a mix of computer science and statistics techniques
And indeed, several questions here aim at the differentiation between data science and statistics. But a pure statistician will most likely not be able to set up a Hadoop cluster and show the results of his analysis in an interactive HTML5 dashboard. And someone who can implement a nice HTML5 dashboard might not be so familiar with the mathematical background of a Chi-Squared-Test.
It is reasonable to assume that giving students enough knowledge to apply the most important techniques from the different fields that are covered by data science will lead to new applications of these techniques, and be beneficial - also for the "purists" in these fields. The combination of these techniques is not straightforward in many cases, and can justify its own branch of research.
You also asked whether in 10 years, data science will be considered as "just a topic inside computer science". Again: Nobody can say for sure. But I wonder at which point people stopped asking the question whether "Computer Science" will one day only be considered only as a mix of (or a subject of) Electrical Engineering and Mathematics...
| null | CC BY-SA 3.0 | null | 2014-05-18T21:05:28.990 | 2014-05-18T21:05:28.990 | null | null | 156 | null |
161 | 2 | null | 138 | 7 | null | I don't think that everyone reaches for C/C++ when performance is an issue.
The advantage to writing low-level code is using fewer CPU cycles, or sometimes, less memory. But I'd note that higher-level languages can call down to lower-level languages, and do, to get some of this value. Python and JVM languages can do this.
The data scientist using, for example, scikit-learn on her desktop is already calling heavily optimized native routines to do the number crunching. There is no point in writing new code for speed.
In the distributed "big data" context, you are more typically bottlenecked on data movement: network transfer and I/O. Native code does not help. What helps is not writing the same code to run faster, but writing smarter code.
Higher-level languages are going to let you implement more sophisticated distributed algorithms in a given amount of developer time than C/C++. At scale, the smarter algorithm with better data movement will beat dumb native code.
It's also usually true that developer time, and bugs, cost loads more than new hardware. A year of a senior developer's time might be $200K fully loaded; over a year that also rents hundreds of servers worth of computation time. It may just not make sense in most cases to bother optimizing over throwing more hardware at it.
I don't understand the follow up about "grant" and "disable" and "assert"?
| null | CC BY-SA 3.0 | null | 2014-05-18T21:25:29.040 | 2014-05-18T21:25:29.040 | null | null | 21 | null |
162 | 2 | null | 155 | 44 | null | There are many openly available data sets, one many people often overlook is [data.gov](http://www.data.gov/). As mentioned previously Freebase is great, so are all the examples posted by @Rubens
| null | CC BY-SA 3.0 | null | 2014-05-18T22:16:19.300 | 2014-05-18T22:16:19.300 | null | null | 59 | null |
163 | 2 | null | 138 | 4 | null | As we all know, in the digital world there are many ways to do the same work / get the expected results.
And the responsibilities / risks which come from the code are on the developers' shoulders.
This is a small but, I guess, very useful example from the .NET world.
Many .NET developers use the built-in BinaryReader - BinaryWriter for their data serialization, for performance / to get control over the process.
This is the C# source code of one of the overloaded Write methods of the Framework's built-in BinaryWriter class:
```
// Writes a boolean to this stream. A single byte is written to the stream
// with the value 0 representing false or the value 1 representing true.
//
public virtual void Write(bool value)
{
//_buffer is a byte array declared in the ctor / init code of the class
_buffer[0] = (byte) (value ? 1 : 0);
//OutStream is the stream instance into which BinaryWriter writes the value(s).
OutStream.WriteByte(_buffer[0]);
}
```
As you see, this method could be written without the extra assignment to the _buffer variable:
```
public virtual void Write(bool value)
{
OutStream.WriteByte((byte) (value ? 1 : 0));
}
```
Without the assignment we could gain a few milliseconds. These few milliseconds can be accepted as "almost nothing", but what if there are many thousands of writes (i.e. in a server process)?
Let's suppose that "few" is 2 (milliseconds) and there are only 2,000 instances.
This means 4 more seconds of processing time, and returning 4 seconds later.
If we continue with the subject of .NET, and if you check the source code of the BCL - the .NET Base Class Library - on MSDN, you can see a lot of performance losses coming from developer decisions.
At any point in the BCL source it's normal to see that the developer decided to use while() or foreach() loops where a faster for() loop could have been used in the code.
These small gains add up to the total performance.
And if we return to the BinaryWriter.Write() method:
Actually, the extra assignment to _buffer is not a developer fault; it is exactly a decision to "stay safe"!
Suppose that we decide not to use _buffer and implement the second method. If we try to send many thousands of bytes over a wire (i.e. upload / download BLOB or CLOB data) with the second method, it can commonly fail because of a lost connection, since we try to send all the data without any checking and controlling mechanism. When the connection is lost, neither the server nor the client knows whether the sent data was completed or not.
If the developer decides to "stay safe", then normally it means performance costs, depending on the implemented "stay safe" mechanism(s).
But if the developer decides to "get risky, gain performance", this is not a fault either, although there are discussions about "risky" coding.
And as a small note: commercial library developers always try to stay safe, because they can't know where their code will be used.
| null | CC BY-SA 3.0 | null | 2014-05-18T23:21:07.220 | 2014-05-18T23:29:39.227 | 2014-05-18T23:29:39.227 | 229 | 229 | null |
164 | 2 | null | 14 | 11 | null | What @Clayton posted seems about right to me, for those terms, and for "data mining" being one tool of the data scientist. However, I haven't really used the term "data collection," and it doesn't strike me as synonymous with "data mining."
My own answer to your question: no, the terms aren't the same. Definitions may be loose in this field, but I haven't seen those terms used interchangeably. In my work, we sometimes use them to differentiate between goals, or methodologies. For us, [data-science](/questions/tagged/data-science) is more about testing a hypothesis, and typically the data have been collected just for that purpose. [data-mining](/questions/tagged/data-mining) is more about sifting through existing data, looking for structure, and perhaps generating hypotheses. Data mining can start with a hypothesis, but it's often very weak or general, and can be difficult to resolve with confidence. (Dig long enough and you'll find something, though it may turn out to be pyrite.)
However, we also have used "data science" as a wider term, to include "data mining." We also talk about "data modeling," which for us is about finding a model for a system of interest, based on data as well as other knowledge and objectives. Sometimes that means trying to find the math that explains the real system, and sometimes it means finding a predictive model that is good enough for a purpose.
| null | CC BY-SA 3.0 | null | 2014-05-19T00:09:17.900 | 2014-05-19T00:09:17.900 | null | null | 208 | null |
165 | 2 | null | 143 | 10 | null | I think you mixed up some things in your question. Lucene (I know nothing about Lucene.NET, but I suppose it is the same) is a library used to analyze documents, split them into tokens, and store them in order to be able to query and retrieve them later. Lucene has a pretty old but effective model: it uses inverted indexes to find and retrieve documents. Without going into further details, all documents are split into tokens (terms), and for each term a data structure is maintained which stores all the documents that contain the given term. The data structure could be a BTree or a hash table, and in the latest major revisions you can even plug in your own data structures.
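To make the inverted-index idea concrete, here is a minimal sketch in Python; it only illustrates "for each term, keep the set of documents containing it" and is in no way how Lucene is actually implemented.
```
from collections import defaultdict

docs = {1: "nosql databases scale horizontally",
        2: "sql databases use a fixed schema",
        3: "lucene builds an inverted index"}

# term -> ids of the documents that contain that term
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():   # trivial tokenizer
        index[term].add(doc_id)

print(index["databases"])               # {1, 2}
```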
A BTree (see [Wikipedia page](http://en.wikipedia.org/wiki/B-tree) for further details), is a kind of a tree data structure, which is appropriate for working with big chunks of data and is often used for storing tree-like ordered structures on disk. For in-memory other trees performs better.
MurmurHash (see the [Wikipedia page](http://en.wikipedia.org/wiki/MurmurHash) for further details) is a family of hash functions used in hash tables. The implementation of the hash table is not important; it could be a standard chained implementation or a more advanced open-addressing scheme. The idea is that hash tables allow one to quickly retrieve a key from an unordered set of keys, and can answer questions like: is this key part of this set of keys? What is the value associated with this key?
Now back to your main problem. You have one library (Lucene) and two data structures, and both data structures are used in Lucene. So you see that it is not possible to answer your question in these terms, since they are not comparable.
However, regarding you footprint and performance part of the question. First of all you have to know which kind of operations you need to implement.
Do you only need to get a value for a key, or do you need to find all elements in a range? In other words, do you need order or not? If you do, then a tree can help. If you do not, then a hash table, which is faster, could be used instead.
Do you have a lot of data which does not fit in memory? If yes, then a disk-based solution would help (like a BTree). If your data fits in memory, then use the fastest in-memory solution and use disk only as storage (with a different, much simpler structure).
| null | CC BY-SA 3.0 | null | 2014-05-19T08:46:47.683 | 2014-05-19T08:46:47.683 | null | null | 108 | null |
166 | 2 | null | 129 | 8 | null | Let's say you are predicting the topic of a document given its words.
A generative model describes how likely each topic is, and how likely words are given the topic. This is how it says documents are actually "generated" by the world -- a topic arises according to some distribution, words arise because of the topic, you have a document. Classifying documents of words W into topic T is a matter of maximizing the joint likelihood: P(T,W) = P(W|T)P(T)
A discriminative model operates by only describing how likely a topic is given the words. It says nothing about how likely the words or topic are by themselves. The task is to model P(T|W) directly and find the T that maximizes this. These approaches do not care about P(T) or P(W) directly.
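A hedged, minimal sketch of that contrast with scikit-learn (the particular models and toy data are my choice of illustration): multinomial naive Bayes learns P(W|T) and P(T) and classifies via the joint, so it is generative; logistic regression models P(T|W) directly, so it is discriminative.
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

docs   = ["the game ended in overtime", "stocks fell on weak earnings",
          "the striker scored twice", "the central bank raised rates"]
topics = ["sports", "finance", "sports", "finance"]

X = CountVectorizer().fit_transform(docs)        # word counts W

# Generative: estimates P(W|T) and P(T), classifies via the joint P(T, W).
gen = MultinomialNB().fit(X, topics)

# Discriminative: models P(T|W) directly, says nothing about P(W).
disc = LogisticRegression(max_iter=1000).fit(X, topics)

print(gen.predict(X[:1]), disc.predict(X[:1]))   # both label the first document
```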
| null | CC BY-SA 3.0 | null | 2014-05-19T11:13:48.067 | 2014-05-19T11:13:48.067 | null | null | 21 | null |
167 | 5 | null | null | 0 | null | .NET is a very popular object-oriented programming language family. This family includes members such as C# (pronounced CSharp), VB.NET, F# (pronounced FSharp), J# (pronounced JSharp) and many more. The .NET family offers programming with little effort combined with the well-known high speed of compiled languages such as C and C++.
This tag aims to group Data Science questions and answers in which users want to run their processes on the .NET programming language family.
| null | CC BY-SA 3.0 | null | 2014-05-19T12:17:45.960 | 2014-05-20T13:52:50.373 | 2014-05-20T13:52:50.373 | 229 | 229 | null |
168 | 4 | null | null | 0 | null | .NET is a very popular object-oriented programming language family which includes members such as C# (pronounced CSharp), VB.NET, F# (pronounced FSharp), J# (pronounced JSharp) and many more. The .NET family offers programming with little effort combined with the well-known high speed of compiled languages such as C and C++.
This tag aims to group Data Science questions and answers in which users want to run their processes on the .NET programming language family. | null | CC BY-SA 3.0 | null | 2014-05-19T12:17:45.960 | 2014-05-20T13:50:32.440 | 2014-05-20T13:50:32.440 | 229 | 229 | null |
169 | 1 | 170 | null | 15 | 5505 | Assume a set of loosely structured data (e.g. Web tables/Linked Open Data), composed of many data sources. There is no common schema followed by the data and each source can use synonym attributes to describe the values (e.g. "nationality" vs "bornIn").
My goal is to find some "important" attributes that somehow "define" the entities that they describe. So, when I find the same value for such an attribute, I will know that the two descriptions are most likely about the same entity (e.g. the same person).
For example, the attribute "lastName" is more discriminative than the attribute "nationality".
How could I (statistically) find such attributes that are more important than others?
A naive solution would be to take the average IDF of the values of each attribute and make this the "importance" factor of the attribute. A similar approach would be to count how many distinct values appear for each attribute.
I have seen the term feature, or attribute selection in machine learning, but I don't want to discard the remaining attributes, I just want to put higher weights to the most important ones.
| How to specify important attributes? | CC BY-SA 3.0 | null | 2014-05-19T15:55:24.983 | 2021-03-11T20:12:24.030 | 2015-05-18T13:30:46.940 | 113 | 113 | [
"machine-learning",
"statistics",
"feature-selection"
] |
170 | 2 | null | 169 | 16 | null | A possible solution is to calculate the [information gain](http://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain) associated to each attribute:
$$I_{E}(f) = - \sum\limits_{i = 1}^m f_i \log_2 f_i$$
Initially you have the whole dataset, and you compute the information gain of each item. The item with the best information gain is the one you should use to partition the dataset (considering the item's values). Then, perform the same computations for each item (except the ones already selected), and always choose the one which best describes/differentiates the entries in your dataset.
There are implementations available for such computations. [Decision trees](http://en.wikipedia.org/wiki/Decision_tree_learning) usually base their feature selection on the features with best information gain. You may use the resulting tree structure to find these important items.
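A minimal sketch of the scoring step, assuming each record is a simple attribute-to-value mapping; it only computes the entropy of each attribute's value distribution (the formula above), whereas a full decision-tree information gain would additionally measure the entropy remaining after partitioning:
```
import math
from collections import Counter

def attribute_entropy(values):
    # Entropy of a value distribution: -sum(f_i * log2(f_i)).
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

records = [{"lastName": "Smith", "nationality": "US"},
           {"lastName": "Garcia", "nationality": "US"},
           {"lastName": "Tanaka", "nationality": "UK"}]

for attr in ("lastName", "nationality"):
    score = attribute_entropy([r[attr] for r in records])
    print(attr, round(score, 3))
# lastName scores higher here (all values distinct) than nationality.
```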
| null | CC BY-SA 4.0 | null | 2014-05-19T18:08:32.327 | 2021-03-11T20:12:24.030 | 2021-03-11T20:12:24.030 | 29169 | 84 | null |
171 | 2 | null | 35 | 5 | null | Two things you might find useful:
- meta-learning to speed up the search for the right model and the optimal parameters.
Meta-learning consists in applying machine learning tools to the problem of finding the right machine learning tool/parameters for the problem at hand. See for instance this paper for a practical example;
- gpucomputing to speed up the algorithm on larger datasets. For instance, OpenCV can use GPUs, which are very effective at processing images/videos and can bring 10x to 100x speedups with respect to CPUs. As your computer most probably has a gpucomputing-able GPU, you could gain lots of time using it.
| null | CC BY-SA 3.0 | null | 2014-05-19T19:44:48.500 | 2014-05-19T19:44:48.500 | null | null | 172 | null |
172 | 1 | null | null | 27 | 19451 | I have a modeling and scoring program that makes heavy use of the `DataFrame.isin` function of pandas, searching through lists of facebook "like" records of individual users for each of a few thousand specific pages. This is the most time-consuming part of the program, more so than the modeling or scoring pieces, simply because it only runs on one core while the rest runs on a few dozen simultaneously.
Though I know I could manually break up the dataframe into chunks and run the operation in parallel, is there any straightforward way to do that automatically? In other words, is there any kind of package out there that will recognize I'm running an easily-delegated operation and automatically distribute it? Perhaps that's asking for too much, but I've been surprised enough in the past by what's already available in Python, so I figure it's worth asking.
Any other suggestions about how this might be accomplished (even if not by some magic unicorn package!) would also be appreciated. Mainly, just trying to find a way to shave off 15-20 minutes per run without spending an equal amount of time coding the solution.
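To illustrate the manual chunking mentioned above (not a magic package, just a hedged sketch using the standard library's multiprocessing; the column name and page ids are invented):
```
import numpy as np
import pandas as pd
from multiprocessing import Pool

pages = {"page_%d" % i for i in range(1000)}     # hypothetical pages of interest

def chunk_isin(chunk):
    # Runs in a worker process on one slice of the frame.
    return chunk["likes"].isin(pages)

if __name__ == "__main__":
    df = pd.DataFrame({"likes": ["page_%d" % i for i in range(100000)]})
    chunks = np.array_split(df, 8)               # one chunk per worker
    with Pool(8) as pool:
        result = pd.concat(pool.map(chunk_isin, chunks))
    print(result.sum())
```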
| Is there a straightforward way to run pandas.DataFrame.isin in parallel? | CC BY-SA 3.0 | null | 2014-05-19T23:59:58.070 | 2020-08-02T12:40:19.397 | 2014-05-20T04:47:25.207 | 84 | 250 | [
"performance",
"python",
"pandas",
"parallel"
] |
173 | 2 | null | 35 | 4 | null | Guessing it's likely you've seen this [YouTube demo](http://www.youtube.com/watch?v=1GhNXHCQGsM) and the related [Google Tech Talk](http://www.youtube.com/watch?v=lmG_FjG4Dy8), which is related to these papers:
- P-N Learning: Bootstrapping Binary Classifiers by Structural Constraints
- Tracking-Learning-Detection
And this set of code on GitHub for [OpenTLD](https://github.com/zk00006/OpenTLD/). If you check the "read me" on GitHub here, you'll notice that [author's email (Zdenek Kalal)](https://github.com/zk00006/OpenTLD/blob/master/README) is listed, so it might be worth sending him an email about your questions, or even inviting him to reply to this question too.
| null | CC BY-SA 3.0 | null | 2014-05-20T03:56:43.147 | 2014-05-20T03:56:43.147 | null | null | 158 | null |
174 | 2 | null | 116 | 5 | null | Another suggestion is to test [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression). As an added bonus, the weights (coefficients) of the model will give you an idea of which sites are age-discriminant.
Sklearn offers the [sklearn.linear_model.LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) package that is designed to handle sparse data as well.
As mentioned in the comments, in the present case, with more input variables than samples, you need to regularize the model (with [sklearn.linear_model.LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) use the `penalty='l1'` argument).
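A hedged sketch of that setup with invented data (a sparse user-by-site matrix and a binary age label); the point is only to show the sparse input, the `penalty='l1'` regularization, and reading off the nonzero coefficients:
```
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = sparse_random(200, 500, density=0.02, random_state=rng, format='csr')  # users x sites
y = rng.randint(0, 2, size=200)                  # e.g. 0 = under 30, 1 = 30 and over

clf = LogisticRegression(penalty='l1', solver='liblinear', C=1.0)
clf.fit(X, y)

nonzero = np.flatnonzero(clf.coef_[0])           # sites the sparse model kept
print(len(nonzero), "candidate age-discriminant sites, e.g. columns", nonzero[:5])
```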
| null | CC BY-SA 3.0 | null | 2014-05-20T09:24:30.697 | 2014-05-20T19:36:16.283 | 2014-05-20T19:36:16.283 | 172 | 172 | null |
175 | 1 | null | null | 3 | 62 | I have data coming from a source system that is pipe-delimited. Pipe was selected over comma since it was believed that no pipes appeared in any field, while it was known that commas do occur. After ingesting this data into Hive, however, it has been discovered that, rarely, a field does in fact contain a pipe character.
Due to a constraint we are unable to regenerate from source to escape the delimiter or change delimiters in the usual way. However we have the metadata used to create the Hive table. Could we use knowledge of the fields around the problem field to reprocess the file on our side to escape it or to change the file delimiter prior to reloading the data into Hive?
| Can metadata be used to adapt parsing for an unescaped in field use of the delimiter? | CC BY-SA 3.0 | null | 2014-05-20T22:14:02.927 | 2014-05-21T07:19:32.297 | null | null | 249 | [
"metadata",
"parsing"
] |
176 | 1 | 190 | null | 12 | 4786 | I am seeking for a library/tool to visualize how social network changes when new nodes/edges are added to it.
One of the existing solutions is [SoNIA: Social Network Image Animator](http://www.stanford.edu/group/sonia/). It lets you make movies like [this one](https://www.youtube.com/watch?v=yGSNCED6mDc).
SoNIA's documentation says that it's broken at the moment, and besides this I would prefer a JavaScript-based solution instead. So, my question is: are you familiar with any tools, or are you able to point me to some libraries, which would make this task as easy as possible?
Right after posting this question I'll dig into [sigma.js](http://sigmajs.org/), so please consider this library covered.
In general, my input data would be something like this:
```
time_elapsed; node1; node2
1; A; B
2; A; C
3; B; C
```
So, here we have three points in time (1, 2, 3), three nodes (A, B, C), and three edges, which represent a triadic closure between the three considered nodes.
Moreover, every node will have two attributes (age and gender), so I would like to be able to change the shape/colour of the nodes.
Also, after adding a new node, it would be perfect to have some ForceAtlas2 or similar algorithm to adjust the layout of the graph.
| How to animate growth of a social network? | CC BY-SA 3.0 | null | 2014-05-21T05:29:36.787 | 2014-05-22T12:14:49.727 | 2014-05-21T05:51:58.330 | 173 | 173 | [
"social-network-analysis",
"time-series",
"javascript",
"visualization"
] |
177 | 5 | null | null | 0 | null | [MongoDB](https://www.mongodb.com) is a [widely used](https://www.mongodb.com/who-uses-mongodb), [general-purpose](https://www.mongodb.com/use-cases), [document-oriented NoSQL database](https://en.wikipedia.org/wiki/Document-oriented_database) with features including high-availability replication and auto-sharding for horizontal scaling.
The MongoDB Community Edition database server and tools are [open-sourced](https://github.com/mongodb/mongo) and available under [Server Side Public License](https://www.mongodb.com/licensing/server-side-public-license) (all versions released after October 16, 2018) or [AGPL v3.0 license](http://www.gnu.org/licenses/agpl-3.0.html) (versions released prior to October 16, 2018). Commercial Licenses are also available from MongoDB, Inc.
MongoDB has strong support for dynamic querying and aggregating data including [MapReduce](https://docs.mongodb.com/manual/core/map-reduce/) and an [Aggregation Framework](https://docs.mongodb.com/manual/aggregation/). MongoDB uses [BSON](http://bsonspec.org) (Binary JSON) format for storage purposes and the [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/) for communication between client drivers and the MongoDB server. Officially supported [Drivers and Client Libraries](https://docs.mongodb.com/ecosystem/drivers/) are available for most popular programming languages, and there are also [Community Supported Drivers](https://docs.mongodb.com/ecosystem/drivers/community-supported-drivers/) which offer alternative implementations and support for further programming languages.
The latest MongoDB server releases can be [installed via common packaging systems](https://docs.mongodb.com/manual/installation/) or downloaded as binary archives from [mongodb.com](https://www.mongodb.com/download-center).
The current production release series is [MongoDB 4.0](https://docs.mongodb.com/manual/release-notes/4.0/). It is generally recommended to stay current with the latest minor release of a production release series (e.g. 4.0.x) to take advantage of bug fixes and backward-compatible improvements. For more information on the versioning scheme used by the server, see [MongoDB Version Numbers](https://docs.mongodb.com/manual/release-notes/#mongodb-version-numbers).
## FAQ
The [mongodb-user discussion forum](https://groups.google.com/group/mongodb-user) is very busy and almost every question on earth has been asked on there, so try searching the archives.
Tip: searching Google with `" site:groups.google.com/group/mongodb-user"` added to your search terms generally works better than using the Google Groups search bar.
For help with data modeling (schema design), check out the [Data Modeling Considerations for MongoDB Applications](https://docs.mongodb.com/manual/data-modeling/) documentation page or search the [mongodb-user discussion forum](https://groups.google.com/group/mongodb-user) archives. For information on MongoDB Security, view the [Security section](https://docs.mongodb.com/manual/security/) of the MongoDB Manual which includes a [MongoDB Security Checklist](https://docs.mongodb.com/manual/administration/security-checklist/).
MongoDB, Inc. (the company behind MongoDB) provides archives of many [presentations](https://www.mongodb.com/presentations) from their [events](https://www.mongodb.com/events) such as conferences and [webinars](https://www.mongodb.com/webinars). They also develop a number of related tools and services including [MongoDB Cloud Manager](https://cloud.mongodb.com/), [MongoDB Ops Manager](https://www.mongodb.com/products/ops-manager), [MongoDB Atlas](https://www.mongodb.com/atlas), and [MongoDB Compass](https://www.mongodb.com/products/compass).
## Useful links
- Frequently Asked Questions
- Install MongoDB Server
- Client Drivers and Libraries
- Community Supported Drivers
- MongoDB manual
- MongoDB tutorials
- MongoDB University courses
- MongoDB YouTube Channel
- MongoDB JIRA - bug/feature tracking for MongoDB server and drivers
- MongoDB Atlas - hosted MongoDB as a Service
- MongoDB Cloud Manager - freemium monitoring, automation, and backup
- MongoDB Ops Manager
- Server Side Public License (SSPL) FAQ
- BSON specification
- MongoDB Wikipedia Article
- Presentations
- MongoDB White Papers
- Mongoose - MongoDB object modeling for Node.js
- IRC: freenode.net/#mongodb
### Initial Release:
`February 11, 2009`
### Stable Release:
`6.2.0` (Feb 7, 2019)
[https://docs.mongodb.com/manual/release-notes/4.0](https://docs.mongodb.com/manual/release-notes/4.0)
### Books
- MongoDB in Action, 2nd Edition: Covers MongoDB version 3.0
- MongoDB Cookbook - Second Edition
- The Definitive Guide to MongoDB, Third Edition: A complete guide to dealing with Big Data using MongoDB
- Practical MongoDB: Architecting, Developing, and Administering MongoDB
- MongoDB Basics
- MongoDB the Definitive Guide: 2nd Edition
- MongoDB Applied Design Patterns: Practical Use Cases with the Leading NoSQL Database
- Scaling MongoDB: Sharding, Cluster Setup, and Administration
- The Little MongoDB Book - a free book introducing MongoDB (updated for MongoDB 2.6)
| null | CC BY-SA 4.0 | null | 2014-05-21T07:05:44.780 | 2019-04-08T13:45:37.843 | 2019-04-08T13:45:37.843 | 201 | 201 | null |
178 | 4 | null | null | 0 | null | MongoDB is a scalable, high-performance, open source, document-oriented NoSQL database. It supports a large number of languages and application development platforms. Questions about server administration can be asked on http://dba.stackexchange.com. | null | CC BY-SA 4.0 | null | 2014-05-21T07:05:44.780 | 2019-04-08T17:28:18.497 | 2019-04-08T17:28:18.497 | 201 | 201 | null |
179 | 2 | null | 176 | 6 | null | My first guess is to [visualize social network in Tableau](https://www.google.com/search?q=visualize%20social%20network%20in%20tableau).
And particularly: [building network graphs in Tableau](http://www.clearlyandsimply.com/clearly_and_simply/2012/12/build-network-graphs-in-tableau.html).
What you need is to add a time dimension to the "Pages" section to be able to see the network change dynamics.
This is a screenshot from the link above.
![enter image description here](https://i.stack.imgur.com/Quq8G.png)
| null | CC BY-SA 3.0 | null | 2014-05-21T07:09:20.093 | 2014-05-21T07:18:48.453 | 2014-05-21T07:18:48.453 | 97 | 97 | null |
180 | 5 | null | null | 0 | null | Neo4j is an open-source, transactional, high-performance native graph database.
Neo4j stores its data as a graph: Nodes are connected through Relationships, both with arbitrary properties. Neo4j features a graph-centric declarative query language called [Cypher](http://neo4j.com/docs/stable/cypher-refcard). Its drivers support many [programming languages](http://neo4j.com/developer/language-guides). More details in the Online Training and the [Neo4j Manual](http://neo4j.com/docs).
For questions that are not suitable for the StackOverflow Q&A Format, please reach out on [Slack](https://neo4j-users-slack-invite.herokuapp.com/). Neo4j also maintains a public [Trello Board](https://trello.com/b/2zFtvDnV/public-idea-board) for feature ideas and future development directions.
### Useful Links
- Official Site
- GitHub Repo
- Twitter
- YouTube
- Awesome Neo4j
| null | CC BY-SA 4.0 | null | 2014-05-21T07:10:45.827 | 2019-04-08T13:45:45.567 | 2019-04-08T13:45:45.567 | 201 | 201 | null |
181 | 4 | null | null | 0 | null | Neo4j is an open-source graph database (GDB) well suited to connected data. Please mention your exact version of Neo4j when asking questions. You can use it for recommendation engines, fraud detection, graph-based search, network ops/security, and many other use cases. The database is accessed via official drivers in Java, JavaScript, Python and .NET, or community-contributed drivers in PHP, Ruby, R, Golang, Elixir, Swift and more. | null | CC BY-SA 4.0 | null | 2014-05-21T07:10:45.827 | 2019-04-08T13:45:18.490 | 2019-04-08T13:45:18.490 | 201 | 201 | null |
182 | 2 | null | 175 | 4 | null | So, as a result, some of your rows will end up with one or more extra columns. That's easy to detect, but it is harder to infer where the error was: which two columns are actually one? Which delimiter is not really a delimiter?
In some cases, you can use the metadata, because it helps you know when an interpretation of the columns can't be right. For example, if just the one column can have a text value, and all the others must be numeric, it's unambiguous where the error is. Any additional columns created by this error occur right after the text column.
If they're all text, this doesn't work of course.
You might be able to leverage more than the metadata's column type. For example you may know that some fields are from an enumerated set of values, and use that to determine when a column assignment is wrong.
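As a rough illustration of that idea, here is a minimal Python sketch; the expected column count, the index of the text column, and the sample rows are all assumptions made up for the example:

```python
import csv

EXPECTED_COLS = 4   # known from the metadata (assumed here)
TEXT_COL = 1        # hypothetical: the only column allowed to contain free text

raw = [
    "1,plain text,3.5,7",
    "2,text, with a stray comma,4.1,9",   # splits into one column too many
]

def repair(row):
    """If a row is too wide, assume the extra delimiters came from the text column
    and glue that column back together with the pieces that spilled over."""
    extra = len(row) - EXPECTED_COLS
    if extra <= 0:
        return row
    merged_text = ",".join(row[TEXT_COL:TEXT_COL + extra + 1])
    return row[:TEXT_COL] + [merged_text] + row[TEXT_COL + extra + 1:]

rows = [repair(r) for r in csv.reader(raw)]
print(rows)   # the second row is back to 4 columns
```

The same pattern extends to enumerated fields: reject any column assignment whose cells fall outside the allowed set of values.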
| null | CC BY-SA 3.0 | null | 2014-05-21T07:19:32.297 | 2014-05-21T07:19:32.297 | null | null | 21 | null |
183 | 2 | null | 176 | 12 | null |
### Fancy animations are cool
I was very impressed when I saw [this animation](http://youtu.be/T7Ncmq6scck) of the [Discourse](http://www.discourse.org) git repository. They used [Gource](https://code.google.com/p/gource/), which is built specifically for visualizing version control repositories. But it may give you ideas about how to represent the dynamics of growth.
### You can create animations with matplotlib
[This stackoverflow answer](https://stackoverflow.com/a/13571425/1083707) seems to point at a python/networkx/matplotlib solution.
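As a hedged sketch of what such a solution could look like (the timestamped edge list is invented for illustration, and `spring_layout(..., seed=...)` assumes a reasonably recent networkx):

```python
import matplotlib.pyplot as plt
import networkx as nx
from matplotlib.animation import FuncAnimation

# hypothetical data: (node_u, node_v, time step at which the edge appeared)
timed_edges = [(0, 1, 1), (0, 2, 2), (1, 3, 3), (2, 3, 4), (3, 4, 5)]

# lay out the final graph once so nodes do not jump between frames
full = nx.Graph()
full.add_edges_from((u, v) for u, v, _ in timed_edges)
pos = nx.spring_layout(full, seed=42)

fig, ax = plt.subplots()
G = nx.Graph()

def update(frame):
    ax.clear()
    G.add_edges_from((u, v) for u, v, t in timed_edges if t == frame)
    nx.draw(G, pos, ax=ax, with_labels=True, node_color="lightblue")
    ax.set_title("step %d" % frame)

anim = FuncAnimation(fig, update, frames=range(1, 6), interval=700, repeat=False)
plt.show()
```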
### But D3.js provides interaction
If you're looking for a web-based solution then d3.js is excellent. See [this](http://bl.ocks.org/mbostock/4062045), [this](http://mbostock.github.io/d3/talk/20111116/force-collapsible.html) and [this](http://bl.ocks.org/mbostock/929623) for example.
See also [this stackoverflow question](https://stackoverflow.com/questions/7416659/interactive-graph-visualisation), the accepted answer points to D3.js again.
## Conclusion
I would be drawn towards the python/networkx options for network analysis (possibly to add attributes to your raw data file for example). Then, for visualisation and dissemination D3.js is perfect. You might be surprised how easy it can be to write d3.js once you get into it. I believe [it even works within an ipython notebook!](https://www.youtube.com/watch?v=8UtoIR2IEkI)
| null | CC BY-SA 3.0 | null | 2014-05-21T10:53:26.870 | 2014-05-21T15:39:57.380 | 2020-06-16T11:08:43.077 | -1 | 262 | null |
184 | 1 | 185 | null | 9 | 520 | The details of the Google Prediction API are on this [page](https://developers.google.com/prediction/), but I am not able to find any details about the prediction algorithms running behind the API.
So far I have gathered that they let you provide your preprocessing steps in PMML format.
| Google prediction API: What training/prediction methods Google Prediction API employs? | CC BY-SA 3.0 | null | 2014-05-21T11:22:34.657 | 2014-06-10T04:47:13.040 | null | null | 200 | [
"tools"
] |
185 | 2 | null | 184 | 6 | null | If you take a look over the specifications of PMML which you can find [here](http://www.dmg.org/v3-0/GeneralStructure.html) you can see on the left menu what options you have (like ModelTree, NaiveBayes, Neural Nets and so on).
| null | CC BY-SA 3.0 | null | 2014-05-21T14:14:38.797 | 2014-05-21T14:14:38.797 | null | null | 108 | null |
186 | 1 | 187 | null | 9 | 345 | I'm learning [Support Vector Machines](http://en.wikipedia.org/wiki/Support_vector_machine), and I'm unable to understand how a class label is chosen for a data point in a binary classifier. Is it chosen by consensus with respect to the classification in each dimension of the separating hyperplane?
| Using SVM as a binary classifier, is the label for a data point chosen by consensus? | CC BY-SA 3.0 | null | 2014-05-21T15:12:18.980 | 2014-05-21T15:39:54.830 | 2014-05-21T15:26:02.533 | 84 | 133 | [
"svm",
"classification",
"binary"
] |
187 | 2 | null | 186 | 9 | null | The term consensus, as far as I'm concerned, is used rather for cases when you have more than one source of metric/measure/choice from which to make a decision. And, in order to choose a possible result, you perform some average evaluation/consensus over the values available.
This is not the case for SVM. The algorithm is based on a [quadratic optimization](http://upload.wikimedia.org/wikipedia/commons/2/2a/Svm_max_sep_hyperplane_with_margin.png) that maximizes the margin, i.e. the distance from the separating hyperplane to the closest points of the two different classes.
![Hyperplane separating two different classes](https://i.stack.imgur.com/CCO7Z.png)
So, the only "consensus" here is the resulting hyperplane, computed from the closest points (the support vectors) of each class. In other words, a class is assigned to each point by calculating its signed distance to the derived hyperplane: if the distance is positive, the point belongs to one class; otherwise, it belongs to the other.
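A small scikit-learn sketch of that idea, with made-up toy points; `decision_function` returns the signed value whose sign decides the class:

```python
import numpy as np
from sklearn.svm import SVC

# toy, linearly separable data
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

new_point = np.array([[2.5, 2.5]])
score = clf.decision_function(new_point)  # signed (scaled) distance to the hyperplane
label = clf.predict(new_point)            # the class is simply the side of the hyperplane
print(score, label)
```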
| null | CC BY-SA 3.0 | null | 2014-05-21T15:39:54.830 | 2014-05-21T15:39:54.830 | null | null | 84 | null |
188 | 2 | null | 134 | 5 | null | You can use MapReduce algorithms in Hadoop without programming them in Java. It is called streaming and works like Linux piping. If you can port your functions to read from standard input and write to standard output, it should work nicely. [Here](http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/) is an example blog post which shows how to use MapReduce functions written in Python in Hadoop.
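As a rough sketch (not taken from the linked post), a streaming word-count mapper and reducer in Python could look like the following; they would then be submitted with the Hadoop streaming jar using options such as `-mapper`, `-reducer`, `-input`, `-output` and `-file`:

```python
#!/usr/bin/env python
# mapper.py -- reads raw lines from stdin, emits "word<TAB>1" on stdout
import sys

for line in sys.stdin:
    for word in line.split():
        sys.stdout.write("%s\t1\n" % word)
```

```python
#!/usr/bin/env python
# reducer.py -- input arrives sorted by key, so counts can be accumulated per word
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            sys.stdout.write("%s\t%d\n" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    sys.stdout.write("%s\t%d\n" % (current_word, count))
```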
| null | CC BY-SA 3.0 | null | 2014-05-21T16:13:25.590 | 2014-05-21T16:13:25.590 | null | null | 82 | null |
189 | 1 | null | null | 4 | 983 | I'm currently using [General Algebraic Modeling System](http://en.wikipedia.org/wiki/General_Algebraic_Modeling_System) (GAMS), and more specifically CPLEX within GAMS, to solve a very large mixed integer programming problem. This allows me to parallelize the process over 4 cores (although I have more, CPLEX utilizes a maximum of 4 cores), and it finds an optimal solution in a relatively short amount of time.
Is there an open source mixed integer programming tool that I could use as an alternative to GAMS and CPLEX? It must be comparable in speed or faster for me to consider it. I have a preference for R based solutions, but I'm open to suggestions of all kinds, and other users may be interested in different solutions.
| Open source solver for large mixed integer programming task? | CC BY-SA 3.0 | null | 2014-05-21T19:41:19.857 | 2014-06-17T16:18:02.437 | 2014-06-17T16:18:02.437 | 84 | 151 | [
"r",
"open-source",
"parallel",
"optimization"
] |
190 | 2 | null | 176 | 7 | null | It turned out that this task was quite easy to accomplish using [vis.js](http://visjs.org/). [This](http://visjs.org/examples/graph/20_navigation.html) was the best example code which I have found.
The example of what I have built upon this is [here](http://laboratoriumdanych.pl/jak-powstaje-siec/) (scroll to the bottom of the post). This graph represents the growth of a subnetwork of Facebook friends. Green dots are females, blue ones are males. The darker the colour, the older the user. By clicking "Dodaj węzły" ("Add nodes") you can add more nodes and edges to the graph.
Anyway, I am still interested in other ways to accomplish this task, so I won't accept any answer as for now.
Thanks for your contributions!
| null | CC BY-SA 3.0 | null | 2014-05-22T12:14:49.727 | 2014-05-22T12:14:49.727 | null | null | 173 | null |
191 | 1 | 194 | null | 8 | 1166 | Can someone explain to me how to classify data like MNIST with an MLBP neural network if I use more than one output (e.g. 8)? If I just use one output I can easily classify the data, but if I use more than one, which output should I choose?
| Multi layer back propagation Neural network for classification | CC BY-SA 3.0 | null | 2014-05-22T13:36:24.120 | 2014-06-10T08:38:27.093 | null | null | 273 | [
"neural-network"
] |
192 | 1 | null | null | 8 | 522 | The most popular use case seems to be recommender systems of different kinds (such as recommending shopping items, users in social networks, etc.).
But what are other typical data science applications that may be used in different verticals?
For example: customer churn prediction with machine learning, evaluating customer lifetime value, sales forecasting.
| What are the most popular data science application use cases for consumer web companies | CC BY-SA 3.0 | null | 2014-05-22T15:15:41.133 | 2014-05-23T06:03:40.577 | null | null | 88 | [
"usecase",
"consumerweb"
] |
193 | 2 | null | 192 | 4 | null | It depends, of course, on the focus of the company: commerce, service, etc. In addition to the use cases you suggested, some other use cases would be:
- Funnel analysis: Analyzing the way in which consumers use a website and complete a sale may include data science techniques, especially if the company operates at a large scale.
- Advertising: Companies that place ads use a lot of machine learning techniques to analyze and predict which ads would be most effective or most remunerative given the demographics of the users who would view them.
| null | CC BY-SA 3.0 | null | 2014-05-22T15:43:57.160 | 2014-05-22T15:43:57.160 | null | null | 178 | null |
194 | 2 | null | 191 | 5 | null | Suppose that you need to classify something in K classes, where K > 2. In this case the setup I use most often is one-hot encoding. You will have K output columns, and in the training set you set all values to 0, except the one corresponding to the category index, which is set to 1. Thus, for each training set instance, all outputs have values 0 or 1 and they sum to 1.
This looks like a probability, which suggests a technique often used to tie together outputs that are modeled as probabilities: the softmax function, more details [on Wikipedia](http://en.wikipedia.org/wiki/Softmax_activation_function). It puts constraints on the output values (it is basically a generalization of the logistic function) so that they are positive and sum to 1, i.e. they can be treated as probabilities.
Finally, with or without softmax you can use the output as a discriminant function to select the proper category.
A final thought: avoid encoding your variables in a connected way, for example using the binary representation of the category index. That would induce an arbitrary, artificial relationship between some outputs. One-hot encoding has the advantage of being neutral to how labels are indexed.
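A small numpy sketch of both ideas (the class count and labels are made up for illustration):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Each row gets a single 1 at the position of its category index."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

def softmax(z):
    """Map raw network outputs to positive values that sum to 1 per row."""
    e = np.exp(z - z.max(axis=1, keepdims=True))   # subtract the max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

labels = np.array([3, 0, 7])                # e.g. digit classes of three MNIST images
targets = one_hot(labels, 10)               # training targets, one row per instance

raw_outputs = np.random.randn(3, 10)        # stand-in for the network's last layer
probs = softmax(raw_outputs)
predicted = probs.argmax(axis=1)            # the discriminant: pick the most probable class
```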
| null | CC BY-SA 3.0 | null | 2014-05-22T19:20:14.130 | 2014-05-22T19:20:14.130 | null | null | 108 | null |
195 | 2 | null | 192 | 5 | null | Satisfaction is a huge one that I run into a lot. Huge referring to importance/difficulty/complexity.
The bottom line is that for very large services (search engines, Facebook, LinkedIn, etc.) your users are simply a collection of log lines. You have little ability to solicit feedback from them (not necessarily a hard and fast rule). So you have to infer their positive or negative feedback most of the time.
This means finding ways, even outside of predictive modelling, to truly tell, from a collection of log lines, whether or not someone actually liked something they experienced. This simple act is even more fundamental (in my biased opinion) than a/b testing since you're talking about metrics you will eventually track on a test scorecard.
Once you have a handle on good SAT metrics then you can start making predictive models and experimenting. But even deciding what piece of log instrumentation can tell you about SAT is non-trivial (and often changes).
| null | CC BY-SA 3.0 | null | 2014-05-22T20:48:55.297 | 2014-05-23T06:03:40.577 | 2014-05-23T06:03:40.577 | 92 | 92 | null |
196 | 1 | 197 | null | 13 | 7379 | So we have potential for a machine learning application that fits fairly neatly into the traditional problem domain solved by classifiers, i.e., we have a set of attributes describing an item and a "bucket" that they end up in. However, rather than create models of probabilities like in Naive Bayes or similar classifiers, we want our output to be a set of roughly human-readable rules that can be reviewed and modified by an end user.
Association rule learning looks like the family of algorithms that solves this type of problem, but these algorithms seem to focus on identifying common combinations of features and don't include the concept of a final bucket that those features might point to. For example, our data set looks something like this:
```
Item A { 4-door, small, steel } => { sedan }
Item B { 2-door, big, steel } => { truck }
Item C { 2-door, small, steel } => { coupe }
```
I just want the rules that say "if it's big and a 2-door, it's a truck," not the rules that say "if it's a 4-door it's also small."
One workaround I can think of is to simply use association rule learning algorithms and ignore the rules that don't involve an end bucket, but that seems a bit hacky. Have I missed some family of algorithms out there? Or perhaps I'm approaching the problem incorrectly to begin with?
| Algorithm for generating classification rules | CC BY-SA 3.0 | null | 2014-05-22T21:47:26.980 | 2020-08-06T11:04:09.857 | 2014-05-23T03:27:20.630 | 84 | 275 | [
"machine-learning",
"classification"
] |
197 | 2 | null | 196 | 10 | null | C4.5, made by Quinlan, is able to produce rules for prediction. Check this [Wikipedia](http://en.wikipedia.org/wiki/C4.5_algorithm) page. I know that in [Weka](http://www.cs.waikato.ac.nz/~ml/weka/) its name is J48. I am not sure which implementations exist in R or Python. Anyway, from this kind of decision tree you should be able to infer rules for prediction.
Later edit
Also, you might be interested in algorithms for directly inferring rules for classification. RIPPER is one; again, it received a different name in Weka, JRip. See the original paper for RIPPER: [Fast Effective Rule Induction, W.W. Cohen 1995](http://www.google.ro/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCYQFjAA&url=http://www.cs.utsa.edu/~bylander/cs6243/cohen95ripper.pdf&ei=-XJ-U-7pGoqtyAOej4Ag&usg=AFQjCNFqLnuJWi3gGXVCrugmv3NTRhHHLA&bvm=bv.67229260,d.bGQ&cad=rja)
| null | CC BY-SA 3.0 | null | 2014-05-22T21:54:05.660 | 2014-05-22T21:59:22.117 | 2014-05-22T21:59:22.117 | 108 | 108 | null |
198 | 2 | null | 192 | 5 | null | Also, there seem to be a very comprehensive list of data science use cases by function and by vertical on Kaggle - ["Data Science Use Cases"](http://www.kaggle.com/wiki/DataScienceUseCases)
| null | CC BY-SA 3.0 | null | 2014-05-23T03:05:57.990 | 2014-05-23T03:05:57.990 | null | null | 88 | null |
199 | 1 | 202 | null | 25 | 33962 | LDA has two hyperparameters; tuning them changes the induced topics.
What does the alpha and beta hyperparameters contribute to LDA?
How does the topic change if one or the other hyperparameters increase or decrease?
Why are they hyperparameters and not just parameters?
| What does the alpha and beta hyperparameters contribute to in Latent Dirichlet allocation? | CC BY-SA 3.0 | null | 2014-05-23T06:25:50.480 | 2022-05-20T16:20:33.417 | null | null | 122 | [
"topic-model",
"lda",
"parameter"
] |
200 | 2 | null | 134 | 4 | null | You can also create a MongoDB-Hadoop [connection](http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-hadoop/).
| null | CC BY-SA 3.0 | null | 2014-05-23T08:34:38.900 | 2014-05-23T08:34:38.900 | null | null | 278 | null |
201 | 2 | null | 155 | 76 | null | Update:
Kaggle.com, a home of modern data science & machine learning enthusiasts :), opened [its own repository of data sets](https://www.kaggle.com/datasets).
---
In addition to the listed sources.
Some social network data sets:
- Stanford University large network dataset collection (SNAP)
- A huge twitter dataset that includes followers + large collection of twitter datasets here
- LastFM data set
There are plenty of sources listed at Stats SE:
- Locating freely available data samples
- Data APIs/feeds available as packages in R
- Free data set for very high dimensional classification
| null | CC BY-SA 3.0 | null | 2014-05-23T09:09:44.490 | 2016-01-21T08:44:01.610 | 2017-04-13T12:44:20.183 | -1 | 97 | null |
202 | 2 | null | 199 | 22 | null | The Dirichlet distribution is a multivariate distribution over a vector $x$ of size $K$, with density of the form $\frac{1}{B(\alpha)} \cdot \prod\limits_{i} x_i^{\alpha_i - 1}$, where $\alpha$ is the parameter vector of size $K$, and $\sum x_i = 1$.
Now the LDA uses some constructs like:
- a document can have multiple topics (because of this multiplicity, we need the Dirichlet distribution); and there is a Dirichlet distribution which models this relation
- words can also belong to multiple topics, when you consider them outside of a document; so here we need another Dirichlet to model this
The previous two are distributions which you do not really observe in the data; this is why they are called latent, or hidden.
Now, in Bayesian inference you use Bayes' rule to infer the posterior probability. For simplicity, let's say you have data $x$ and a model for this data governed by some parameters $\theta$. In order to infer values for these parameters, in full Bayesian inference you infer their posterior probability using Bayes' rule: $$p(\theta|x) = \frac{p(x|\theta)p(\theta|\alpha)}{p(x|\alpha)} \iff \text{posterior probability} = \frac{\text{likelihood}\times \text{prior probability}}{\text{marginal likelihood}}$$ Note that an $\alpha$ appears here. This encodes your initial belief about this distribution, and it is the parameter of the prior distribution. Usually the prior is chosen to be conjugate (so the posterior has the same family of distribution as the prior), and often to encode some knowledge if you have it, or to have maximum entropy if you know nothing.
The parameters of the prior are called hyperparameters. So, in LDA, both topic distributions (over documents and over words) have corresponding priors, which are usually denoted alpha and beta, and, because they are the parameters of the prior distributions, they are called hyperparameters.
Now about choosing priors. If you plot some Dirichlet distributions you will note that if the individual parameters $\alpha_k$ all have the same value, the pdf is symmetric on the simplex defined by the $x$ values, and its minimum or maximum lies at the center.
If all the $\alpha_k$ have values lower than 1, the maxima are found at the corners:
![](https://i.stack.imgur.com/5khZE.png)
whereas if all the $\alpha_k$ are equal and greater than 1, the maximum is found at the center:
![](https://i.stack.imgur.com/qrfm7.png)
It is easy to see that if the values of $\alpha_k$ are not equal, the symmetry is broken and the maximum is pulled towards the components with larger values.
Additionally, note that prior parameter values near 1 produce smooth, fairly flat pdfs. So if you are highly confident that something is distributed in a way you know, use values far from 1; if you do not have that kind of knowledge, values near 1 encode this lack of knowledge. It is easy to see from the formula of the distribution itself why 1 plays such a special role.
Another way to understand this is to see that the prior encodes prior knowledge. At the same time you can think of the prior as encoding data seen before: this data was not seen by the algorithm itself, it was seen by you, you learned something from it, and you model the prior according to what you learned. The prior parameters (hyperparameters) also encode how big that previously seen, more or less imaginary, data set was, because the sum of the $\alpha_k$ can be interpreted as its size. The bigger that prior data set, the bigger the confidence, the bigger the values of $\alpha_k$ you can choose, and the sharper the pdf around its maximum, which also means fewer doubts.
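A quick numpy sketch of this effect; the exact draws will vary, and the comments only describe what to expect:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5

sparse = rng.dirichlet([0.1] * K, size=3)   # alpha << 1: mass concentrates in a few components
smooth = rng.dirichlet([10.0] * K, size=3)  # alpha >> 1: draws stay close to uniform (1/K each)

print(np.round(sparse, 2))   # rows dominated by one or two large entries
print(np.round(smooth, 2))   # rows with all entries near 0.2
```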
Hope it helped.
| null | CC BY-SA 4.0 | null | 2014-05-23T13:47:54.603 | 2022-05-20T16:20:33.417 | 2022-05-20T16:20:33.417 | -1 | 108 | null |
203 | 2 | null | 189 | 2 | null | Never done stuff on that scale, but as no-one else has jumped in yet have you seen these two papers that discuss non-commercial solutions? Symphony and COIN-OR seem to be the dominant suggestions.
Linderoth, Jeffrey T., and Andrea Lodi. "MILP software." Wiley encyclopedia of operations research and management science (2010). [PDF version](http://homepages.cae.wisc.edu/~Linderot/papers/Linderoth-Lodi-10.pdf)
Linderoth, Jeffrey T., and Ted K. Ralphs. "Noncommercial software for mixed-integer linear programming." Integer programming: theory and practice 3 (2005): 253-303. [Compares performance](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147.5872)
| null | CC BY-SA 3.0 | null | 2014-05-23T14:28:41.563 | 2014-05-23T14:28:41.563 | null | null | 265 | null |
204 | 2 | null | 116 | 7 | null | I recently did a similar project in Python (predicting opinions using FB like data), and had good results with the following basic process:
- Read in the training set (n = N) by iterating over comma-delimited like records line-by-line and use a counter to identify the most popular pages
- For each of the K most popular pages (I used about 5000, but you can play around with different values), use pandas.DataFrame.isin to test whether each individual in the training set likes each page, then make a N x K dataframe of the results (I'll call it xdata_train)
- Create a series (I'll call it ydata_train) containing all of the outcome variables (in my case opinions, in yours age) with the same index as xdata_train
- Set up a random forest classifier through scikit-learn to predict ydata_train based on xdata_train
- Use scikit-learn's cross-validation testing to tweak parameters and refine accuracy (tweaking number of popular pages, number of trees, min leaf size, etc.)
- Output random forest classifier and list of most popular pages with pickle (or keep in memory if you are doing everything at once)
- Load in the rest of your data, load the list of popular pages (if necessary), and repeat step 2 to produce xdata_new
- Load the random forest classifier (if necessary) and use it to predict values for the xdata_new data
- Output the predicted scores to a new CSV or other output format of your choosing
In your case, you'd need to swap out the classifier for a regressor (so see [sklearn.ensemble.RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html)) but otherwise the same process should work without much trouble.
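A condensed, hedged sketch of steps 2-5 with a regressor; the `likes`, `ages` and `top_pages` stand-ins below are invented, and the imports assume a current scikit-learn:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# made-up stand-ins for the real inputs
likes = [{"page_a", "page_b"}, {"page_b"}, {"page_a", "page_c"}, {"page_c"}]  # liked pages per user
ages = [23, 31, 27, 45]                      # known ages for the training users
top_pages = ["page_a", "page_b", "page_c"]   # the K most popular pages

# N x K indicator frame: does user i like page p?
xdata_train = pd.DataFrame({page: [page in user_likes for user_likes in likes]
                            for page in top_pages})
ydata_train = pd.Series(ages, index=xdata_train.index)

model = RandomForestRegressor(n_estimators=200, n_jobs=-1)   # n_jobs=-1: use all cores
print(cross_val_score(model, xdata_train, ydata_train, cv=2).mean())
model.fit(xdata_train, ydata_train)
```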
Also, you should be aware of the most amazing feature of random forests in Python: instant parallelization! Those of us who started out doing this in R and then moved over are always amazed, especially when you get to work on a machine with a few dozen cores.
Finally, note that this would be a perfect application for network analysis if you have the data on friends as well as the individuals themselves. If you can analyze the ages of a user's friends, the age of the user will almost certainly be within a year or two of the median among his or her friends, particularly if the users are young enough to have built their friend networks while still in school (since most will be classmates). That prediction would likely trump any you would get from modeling---this is a textbook example of a problem where the right data > the right model every time.
Good luck!
| null | CC BY-SA 4.0 | null | 2014-05-23T19:28:01.903 | 2020-08-20T18:51:34.423 | 2020-08-20T18:51:34.423 | 98307 | 250 | null |
205 | 1 | 208 | null | 12 | 1771 | Working on what could often be called "medium data" projects, I've been able to parallelize my code (mostly for modeling and prediction in Python) on a single system across anywhere from 4 to 32 cores. Now I'm looking at scaling up to clusters on EC2 (probably with StarCluster/IPython, but open to other suggestions as well), and have been puzzled by how to reconcile distributing work across cores on an instance vs. instances on a cluster.
Is it even practical to parallelize across instances as well as across cores on each instance? If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?
Bandwidth and RAM are non-trivial concerns in my projects, but it's easy to spot when those are the bottlenecks and readjust. It's much harder, I'd imagine, to benchmark the right mix of cores to instances without repeated testing, and my projects vary too much for any single test to apply to all circumstances. Thanks in advance, and if I've just failed to google this one properly, feel free to point me to the right answer somewhere else!
| Instances vs. cores when using EC2 | CC BY-SA 3.0 | null | 2014-05-23T19:45:54.283 | 2017-02-19T09:12:49.270 | null | null | 250 | [
"parallel",
"clustering",
"aws"
] |
206 | 2 | null | 205 | 6 | null | All things being equal (cost, CPU performance, etc.), you could choose the smallest instance that can hold all of your dataset in memory and scale out. That way
- you make sure not to induce unnecessary latencies due to network communications, and
- you tend to maximize the overall available memory bandwidth for your processes.
Assuming you are running some sort of [cross-validation scheme](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) to optimize some [meta parameter](http://en.wikipedia.org/wiki/Meta-optimization) of your model, assign each core a value to test and choose as many instances as needed to cover the whole parameter space in as few rounds as you see fit.
If your data does not fit in the memory of one system, of course you'll need to distribute across instances. Then it is a matter of balancing memory latency (better with many instances) with network latency (better with fewer instances) but given the nature of EC2 I'd bet you'll often prefer to work with few fat instances.
| null | CC BY-SA 3.0 | null | 2014-05-23T21:01:18.630 | 2014-05-23T21:01:18.630 | null | null | 172 | null |
207 | 2 | null | 205 | 11 | null | A general rule of thumb is to not distribute until you have to. It's usually more efficient to have N servers of a certain capacity than 2N servers of half that capacity. More of the data access will be local, and therefore fast in memory versus slow across the network.
At a certain point, scaling up one machine becomes uneconomical because the cost of additional resource scales more than linearly. However this point is still amazingly high.
On Amazon in particular, though, the economics of each instance type can vary a lot if you are using spot market instances. The default pricing more or less means that the same amount of resources costs about the same regardless of the instance type, but spot prices can vary a lot: large instances can be cheaper than small ones, or N small instances can be much cheaper than one large machine with equivalent resources.
One massive consideration here is that the computation paradigm can change quite a lot when you move from one machine to multiple machines. The tradeoffs that the communication overhead induce may force you to, for example, adopt a data-parallel paradigm to scale. That means a different choice of tools and algorithm. For example, SGD looks quite different in-memory and in Python than on MapReduce. So you would have to consider this before parallelizing.
You may choose to distribute work across a cluster, even if a single node and non-distributed paradigms work for you, for reliability. If a single node fails, you lose all of the computation; a distributed computation can potentially recover and complete just the part of the computation that was lost.
| null | CC BY-SA 3.0 | null | 2014-05-24T10:36:58.987 | 2014-05-24T10:36:58.987 | null | null | 21 | null |
208 | 2 | null | 205 | 11 | null | When using IPython, you very nearly don't have to worry about it (at the expense of some loss of efficiency/greater communication overhead). The parallel IPython plugin in StarCluster will by default start one engine per physical core on each node (I believe this is configurable but not sure where). You just run whatever you want across all engines by using the DirectView api (map_sync, apply_sync, ...) or the %px magic commands. If you are already using IPython in parallel on one machine, using it on a cluster is no different.
Addressing some of your specific questions:
"how to reconcile distributing work across cores on an instance vs. instances on a cluster" - You get one engine per core (at least); work is automatically distributed across all cores and across all instances.
"Is it even practical to parallelize across instances as well as across cores on each instance?" - Yes :) If the code you are running is embarrassingly parallel (exact same algo on multiple data sets) then you can mostly ignore where a particular engine is running. If the core requires a lot of communication between engines, then of course you need to structure it so that engines primarily communicate with other engines on the same physical machine; but that kind of problem is not ideally suited for IPython, I think.
"If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?" - Use the largest c3 instances for compute-bound, and the smallest for memory-bandwidth-bound problems; for message-passing-bound problems, also use the largest instances but try to partition the problem so that each partition runs on one physical machine and most message passing is within the same partition. Problems which would run significantly slower on N quadruple c3 instances than on 2N double c3 are rare (an artificial example may be running multiple simple filters on a large number of images, where you go through all images for each filter rather than all filters for the same image). Using largest instances is a good rule of thumb.
| null | CC BY-SA 3.0 | null | 2014-05-24T11:18:26.497 | 2017-02-19T09:12:49.270 | 2017-02-19T09:12:49.270 | 26 | 26 | null |
209 | 1 | 212 | null | 10 | 1421 | A recommendation system keeps a log of what recommendations have been made to a particular user and whether that user accepts the recommendation. It's like
```
user_id item_id result
1 4 1
1 7 -1
5 19 1
5 80 1
```
where 1 means the user accepted the recommendation while -1 means the user did not respond to the recommendation.
Question: If I am going to make recommendations to a bunch of users based on the kind of log described above, and I want to maximize MAP@3 scores, how should I deal with the implicit data (1 or -1)?
My idea is to treat 1 and -1 as ratings, and predict the rating using factorization machines-type algorithms. But this does not seem right, given the asymmetry of the implicit data (-1 does not mean the user does not like the recommendation).
Edit 1
Let us think about it in the context of a matrix factorization approach. If we treat -1 and 1 as ratings, there will be some problems. For example, user 1 likes movie A, which scores high in one factor (e.g. having glorious background music) in the latent factor space. The system recommends movie B, which also scores high in "glorious background music", but for some reason user 1 is too busy to look into the recommendation, and we record a -1 for movie B. If we treat 1 and -1 equally, then the system might be discouraged from recommending movies with glorious BGM to user 1, even though user 1 still loves movies with glorious BGM. I think this situation is to be avoided.
| How should one deal with implicit data in recommendation | CC BY-SA 3.0 | null | 2014-05-25T13:57:52.657 | 2014-05-27T10:58:00.620 | 2014-05-26T05:12:32.653 | 71 | 71 | [
"recommender-system"
] |
210 | 2 | null | 199 | 13 | null | Assuming symmetric Dirichlet distributions (for simplicity), a low alpha value places more weight on having each document composed of only a few dominant topics (whereas a high value will return many more relatively dominant topics). Similarly, a low beta value places more weight on having each topic composed of only a few dominant words.
| null | CC BY-SA 3.0 | null | 2014-05-26T04:07:32.390 | 2014-05-26T04:07:32.390 | null | null | 283 | null |
211 | 1 | 213 | null | 9 | 4593 | I'm new to this community and hopefully my question will fit in well here.
As part of my undergraduate data analytics course I have chosen to do my project on human activity recognition using smartphone data sets. As far as I'm aware, this topic relates to machine learning and support vector machines. I'm not very familiar with these technologies yet, so I will need some help.
I have decided to follow [this project idea](http://www.inf.ed.ac.uk/teaching/courses/dme/2014/datasets.html) (first project on the top)
The project goal is determine what activity a person is engaging in (e.g., WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) from data recorded by a smartphone (Samsung Galaxy S II) on the subject's waist. Using its embedded accelerometer and gyroscope, the data includes 3-axial linear acceleration and 3-axial angular velocity at a constant rate of 50Hz.
The whole data set is given in one folder with some description and feature labels. The data is divided into 'test' and 'train' files, in which the data is represented in this format:
```
2.5717778e-001 -2.3285230e-002 -1.4653762e-002 -9.3840400e-001 -9.2009078e-001 -6.6768331e-001 -9.5250112e-001 -9.2524867e-001 -6.7430222e-001 -8.9408755e-001 -5.5457721e-001 -4.6622295e-001 7.1720847e-001 6.3550240e-001 7.8949666e-001 -8.7776423e-001 -9.9776606e-001 -9.9841381e-001 -9.3434525e-001 -9.7566897e-001 -9.4982365e-001 -8.3047780e-001 -1.6808416e-001 -3.7899553e-001 2.4621698e-001 5.2120364e-001 -4.8779311e-001 4.8228047e-001 -4.5462113e-002 2.1195505e-001 -1.3489443e-001 1.3085848e-001 -1.4176313e-002 -1.0597085e-001 7.3544013e-002 -1.7151642e-001 4.0062978e-002 7.6988933e-002 -4.9054573e-001 -7.0900265e-001
```
And that's only a very small sample of what the file contains.
I don't really know what this data represents or how it can be interpreted. Also, what tools will I need to use for analyzing, classifying and clustering the data?
Is there any way I can put this data into Excel with the labels included and, for example, use R or Python to extract sample data and work on it?
Any hints/tips would be much appreciated.
| Human activity recognition using smartphone data set problem | CC BY-SA 4.0 | null | 2014-05-27T10:41:33.220 | 2020-08-17T03:25:03.437 | 2020-08-16T21:51:47.670 | 98307 | 295 | [
"bigdata",
"machine-learning",
"databases",
"clustering",
"data-mining"
] |
212 | 2 | null | 209 | 7 | null | Your system isn't trained only on items that were recommended, right? If it is, you have a big feedback loop here. You want to learn from all clicks/views, I hope.
You suggest that not-looking at an item is a negative signal. I strongly suggest you do not treat it that way. Not interacting with something is almost always best treated as no information. If you have an explicit signal that indicates a dislike, like a down vote (or, maybe watched 10 seconds of a video and stopped), maybe that's valid.
I would not construe this input as rating-like data. (Although in your case, you may get away with it.) Instead think of them as weights, which is exactly the treatment in the Hu Koren Volinsky paper on ALS that @Trey mentions in a comment. This lets you record relative strength of positive/negative interactions.
Finally, I would note that this paper, while very likely to be what you're looking for, does not provide for negative weights. It is simple to extend in this way. If you get that far, I can point you to the easy extension, which already exists in two implementations that I know of, in [Spark](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/ALS.scala) and [Oryx](https://github.com/cloudera/oryx/blob/master/als-common/src/main/java/com/cloudera/oryx/als/common/factorizer/als/AlternatingLeastSquares.java).
| null | CC BY-SA 3.0 | null | 2014-05-27T10:58:00.620 | 2014-05-27T10:58:00.620 | null | null | 21 | null |
213 | 2 | null | 211 | 8 | null | The data set definitions are on the page here:
[Attribute Information at the bottom](http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones#)
Alternatively, inside the ZIP folder you can find the file named activity_labels, which has your column headings in it. Make sure you read the README carefully; it has some good info in it. You can easily bring a `.csv` file into R using the `read.csv` command.
For example, if you name your file `samsungdata`, you can open R and run this command:
```
data <- read.csv("directory/where/file/is/located/samsungdata.csv", header = TRUE)
```
Or if you are already inside of the working directory in R you can just run the following
```
data <- read.csv("samsungdata.csv", header = TRUE)
```
Where the name `data` can be changed to whatever you want to call your data set.
| null | CC BY-SA 3.0 | null | 2014-05-27T12:07:45.920 | 2014-05-27T12:07:45.920 | null | null | 59 | null |
214 | 2 | null | 155 | 37 | null | The following links are available
- Public Data Sets
- Google Public Data Sets
- Amazon Web Services
- Finding Data on the Internet
| null | CC BY-SA 3.0 | null | 2014-05-27T16:05:02.883 | 2017-12-17T19:30:24.190 | 2017-12-17T19:30:24.190 | 28175 | 295 | null |
215 | 1 | null | null | 16 | 1055 | I'm building a workflow for creating machine learning models (in my case, using Python's `pandas` and `sklearn` packages) from data pulled from a very large database (here, Vertica by way of SQL and `pyodbc`), and a critical step in that process involves imputing missing values of the predictors. This is straightforward within a single analytics or stats platform---be it Python, R, Stata, etc.---but I'm curious where best to locate this step in a multi-platform workflow.
It's simple enough to do this in Python, either with the [sklearn.preprocessing.Imputer](http://scikit-learn.org/stable/modules/preprocessing.html#imputation-of-missing-values) class, using the [pandas.DataFrame.fillna](http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.fillna.html) method, or by hand (depending upon the complexity of the imputation method used). But since I'm going to be using this for dozens or hundreds of columns across hundreds of millions of records, I wonder if there's a more efficient way to do this directly through SQL ahead of time. Aside from the potential efficiencies of doing this in a distributed platform like Vertica, this would have the added benefit of allowing us to create an automated pipeline for building "complete" versions of tables, so we don't need to fill in a new set of missing values from scratch every time we want to run a model.
I haven't been able to find much guidance about this, but I imagine that we could:
- create a table of substitute values (e.g., mean/median/mode, either overall or by group) for each incomplete column
- join the substitute value table with the original table to assign a substitute value for each row and incomplete column
- use a series of case statements to take the original value if available and the substitute value otherwise
Is this a reasonable thing to do in Vertica/SQL, or is there a good reason not to bother and just handle it in Python instead? And if the latter, is there a strong case for doing this in pandas rather than sklearn or vice-versa? Thanks!
| Where in the workflow should we deal with missing data? | CC BY-SA 3.0 | null | 2014-05-27T21:07:48.973 | 2014-06-11T11:32:13.433 | null | null | 250 | [
"machine-learning",
"python",
"pandas",
"scikit-learn"
] |
216 | 2 | null | 215 | 15 | null | My strong opinion regarding automated tasks like imputation (and here I would also include scaling, centering, feature selection, etc.) is to avoid doing such things without carefully inspecting your data.
Of course, after deciding what kind of imputation to apply it can be automated (under the assumption that the new data has the same shape/problems).
So, before anything, make a wise decision. I have often wasted time trying to automate these things and destroyed my data in the process. I will give you some examples:
- a marketplace encoded as N/A, which I missed and considered to be North/America
- numbers like -999.0, because the data producer could not find a better replacement for missing data
- numbers like 0 for blood pressure or body temperature, instead of missing data (it is hard to imagine a living human with 0 blood pressure)
- multiple placeholders for missing data, due to the fact that the data was collected from various sources
After that, you need to understand what kind of imputation would best preserve the information in your data for a given task. This is often much harder to get right than it seems.
After all those things, my advice is to delay the imputation task to an upper layer of the workflow, where you have tools to reproduce it on new data and to check whether the assumptions still hold for the new data (if possible).
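For example, in pandas the imputation values can be learned once on the inspected training data and then reapplied verbatim to any later batch (column names and values below are made up):

```python
import pandas as pd

train = pd.DataFrame({"age": [25, None, 40], "income": [50000, 62000, None]})

# decide on the imputation after inspecting the data, then freeze the values
fill_values = train.median(numeric_only=True)

# reproduce exactly the same imputation on new data later
new_batch = pd.DataFrame({"age": [None, 31], "income": [48000, None]})
new_batch_filled = new_batch.fillna(fill_values)
print(new_batch_filled)
```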
| null | CC BY-SA 3.0 | null | 2014-05-28T07:08:05.393 | 2014-05-28T07:08:05.393 | null | null | 108 | null |
217 | 2 | null | 211 | 5 | null | It looks like this (or a very similar data set) is used for Coursera courses. Cleaning this dataset is a task for [Getting and Cleaning Data](https://www.coursera.org/course/getdata), but it is also used as a case study for [Exploratory Data Analysis](https://class.coursera.org/exdata-002). Video from this case study is available among the videos for week 4 of the EDA courseware. It might help you get started with this data.
| null | CC BY-SA 3.0 | null | 2014-05-28T09:43:54.197 | 2014-05-28T09:43:54.197 | null | null | 82 | null |
218 | 1 | null | null | 3 | 1714 | So, I have a dataset with 39,949 variables and 180 rows. The dataset is successfully saved in a DataFrame, but when I try to compute cov() it results in an error.
Here is the code:
```
import pandas as pd
cov_data=pd.DataFrame(dataset).cov()
```
Here is the error
```
File "/home/syahdeini/Desktop/FP/pca_2.py", line 44, in find_eagen
cov_data=pd.DataFrame(data_mat).cov()
File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 3716, in cov
baseCov = np.cov(mat.T)
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1766, in cov
return (dot(X, X.T.conj()) / fact).squeeze()
ValueError: array is too big.
```
| built-in cov in pandas DataFrame results ValueError array is too big | CC BY-SA 3.0 | null | 2014-05-29T13:08:09.060 | 2014-05-29T14:31:51.913 | null | null | 273 | [
"python",
"pandas"
] |
219 | 2 | null | 218 | 2 | null | Since you have 39,949 variables, the covariance matrix would have about 1.6 billion elements (39,949 * 39,949 = 1,595,922,601). That is likely why you are getting that error.
| null | CC BY-SA 3.0 | null | 2014-05-29T13:51:48.820 | 2014-05-29T13:51:48.820 | null | null | 178 | null |
220 | 2 | null | 218 | 6 | null | Christopher is right about the size of the array. To be simplistic about it, if this translates to 1.6B floats at 8 bytes per 64-bit float (4 bytes for 32-bit), then you're trying to create an array of roughly 13 GB. Even if you have the RAM for that, I'd imagine that it's probably going to overload something else along the way.
(Maybe not, but generally speaking, any operations that are that computationally intensive should first raise the question of whether you are doing the right calculation in the first place. And if you do need to do something of that magnitude, you should then try to break it down into more manageable chunks that can be run in parallel or distributed across machines.)
But given that you are describing a very, very wide dataset (~40k columns x 180 rows), I wonder whether you really want to take the covariance matrix of the transposed dataset (so 180x180 = 32,400 covariances)? That would be a far more tractable problem, and it's easier to see how it might be useful.
In any case, you're probably far better off calculating each pairwise covariance (or at least, the vector of cov(x_i,x_k) for all x_k != x_i) at the point where you'll actually use it, rather than calculating a giant matrix initially then referring back to it later. Memory issues aside, it'll make your life much easier if you start running things in parallel, and will help ensure you don't waste resources on unnecessary calculations.
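To make that concrete, here is a small pandas sketch; the random frame is just a stand-in for the data in the question (shrink the column count if memory is tight):

```python
import numpy as np
import pandas as pd

# stand-in for the data in the question: 180 rows x 39,949 columns
df = pd.DataFrame(np.random.randn(180, 39949))

# covariance between the 180 observations instead of the ~40k variables: a 180 x 180 matrix
obs_cov = df.T.cov()

# or a single pairwise covariance, computed only when it is actually needed
one_pair = df.iloc[:, 0].cov(df.iloc[:, 1])
print(obs_cov.shape, one_pair)
```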
| null | CC BY-SA 3.0 | null | 2014-05-29T14:13:00.317 | 2014-05-29T14:31:51.913 | 2014-05-29T14:31:51.913 | 250 | 250 | null |
221 | 2 | null | 196 | 8 | null | It's actually even simpler than that, from what you describe---you're just looking for a basic classification tree algorithm (so no need for slightly more complex variants like C4.5 which are optimized for prediction accuracy). The canonical text is [this](https://rads.stackoverflow.com/amzn/click/com/0412048418).
This is readily implemented in [R](http://cran.r-project.org/web/packages/tree/tree.pdf) and [Python](http://scikit-learn.org/stable/modules/tree.html).
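For instance, a hedged scikit-learn sketch using the toy data from the question; `export_text` requires a reasonably recent scikit-learn, and the one-hot columns are hand-built here:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# the question's items, one-hot encoded by hand
X = pd.DataFrame(
    [{"4-door": 1, "2-door": 0, "small": 1, "big": 0, "steel": 1},
     {"4-door": 0, "2-door": 1, "small": 0, "big": 1, "steel": 1},
     {"4-door": 0, "2-door": 1, "small": 1, "big": 0, "steel": 1}]
)
y = ["sedan", "truck", "coupe"]

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
# prints nested "if feature <= 0.5" rules ending in a class,
# which an end user can read and hand-edit
```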
| null | CC BY-SA 4.0 | null | 2014-05-29T14:30:21.357 | 2020-08-06T11:03:53.690 | 2020-08-06T11:03:53.690 | 98307 | 250 | null |
222 | 2 | null | 155 | 22 | null | [Enigma](http://enigma.io) is a repository of public available datasets. Its free plan offers public data search, with 10k API calls per month. Not all public databases are listed, but the list is enough for common cases.
I used it for academic research and it saved me a lot of time.
---
Another interesting source of data is the [@unitedstates project](http://theunitedstates.io/), containing data and tools to collect them, about the United States (members of Congress, geographic shapes…).
| null | CC BY-SA 3.0 | null | 2014-05-29T19:02:13.210 | 2014-05-30T21:05:52.147 | 2014-05-30T21:05:52.147 | 43 | 43 | null |
223 | 1 | null | null | 23 | 3690 | Having a lot of text documents (in natural language, unstructured), what are the possible ways of annotating them with some semantic meta-data? For example, consider a short document:
```
I saw the company's manager last day.
```
To be able to extract information from it, it must be annotated with additional data to be less ambiguous. The process of finding such meta-data is not in question, so assume it is done manually. The question is how to store these data in a way that further analysis on it can be done more conveniently/efficiently?
A possible approach is to use XML tags (see below), but it seems too verbose, and maybe there are better approaches/guidelines for storing such meta-data on text documents.
```
<Person name="John">I</Person> saw the <Organization name="ACME">company</Organization>'s
manager <Time value="2014-5-29">last day</Time>.
```
| How to annotate text documents with meta-data? | CC BY-SA 3.0 | null | 2014-05-29T20:11:16.327 | 2020-07-02T13:04:00.943 | null | null | 227 | [
"nlp",
"metadata",
"data-cleaning",
"text-mining"
] |
224 | 1 | null | null | 4 | 342 | The output of my word alignment file looks as such:
```
I wish to say with regard to the initiative of the Portuguese Presidency that we support the spirit and the political intention behind it . In bezug auf die Initiative der portugiesischen Präsidentschaft möchte ich zum Ausdruck bringen , daß wir den Geist und die politische Absicht , die dahinter stehen , unterstützen . 0-0 5-1 5-2 2-3 8-4 7-5 11-6 12-7 1-8 0-9 9-10 3-11 10-12 13-13 13-14 14-15 16-16 17-17 18-18 16-19 20-20 21-21 19-22 19-23 22-24 22-25 23-26 15-27 24-28
It may not be an ideal initiative in terms of its structure but we accept Mr President-in-Office , that it is rooted in idealism and for that reason we are inclined to support it . Von der Struktur her ist es vielleicht keine ideale Initiative , aber , Herr amtierender Ratspräsident , wir akzeptieren , daß sie auf Idealismus fußt , und sind deshalb geneigt , sie mitzutragen . 0-0 11-2 8-3 0-4 3-5 1-6 2-7 5-8 6-9 12-11 17-12 15-13 16-14 16-15 17-16 13-17 14-18 17-19 18-20 19-21 21-22 23-23 21-24 26-25 24-26 29-27 27-28 30-29 31-30 33-31 32-32 34-33
```
How can I produce the phrase tables that are used by MOSES from this output?
| How to get phrase tables from word alignments? | CC BY-SA 3.0 | null | 2014-05-31T14:28:42.317 | 2018-05-06T22:11:50.250 | 2018-05-06T22:11:50.250 | 25180 | 122 | [
"machine-translation"
] |
226 | 2 | null | 155 | 16 | null | I've found this link in Data Science Central with a list of free datasets: [Big data sets available for free](http://www.datasciencecentral.com/profiles/blogs/big-data-sets-available-for-free)
| null | CC BY-SA 3.0 | null | 2014-05-31T19:59:15.563 | 2014-05-31T19:59:15.563 | null | null | 290 | null |
227 | 1 | 242 | null | 12 | 7178 | Can someone kindly tell me about the trade-offs involved when choosing between Storm and MapReduce in Hadoop Cluster for data processing? Of course, aside from the obvious one, that Hadoop (processing via MapReduce in a Hadoop Cluster) is a batch processing system, and Storm is a real-time processing system.
I have worked a bit with the Hadoop ecosystem, but I haven't worked with Storm. After looking through a lot of presentations and articles, I still haven't been able to find a satisfactory and comprehensive answer.
Note: the term trade-off here is not meant to compare two similar things; it is meant to capture the consequences of getting results in real time that are absent from a batch-processing system.
| Tradeoffs between Storm and Hadoop (MapReduce) | CC BY-SA 3.0 | null | 2014-06-01T10:25:51.163 | 2014-06-10T23:33:28.443 | 2014-06-01T21:59:02.347 | 339 | 339 | [
"bigdata",
"efficiency",
"apache-hadoop",
"distributed"
] |