Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
341 | 2 | null | 253 | 7 | null | You first have to make it clear what you mean by "learn Hadoop". If you mean using Hadoop, such as learning to program in MapReduce, then it is most probably a good idea. But fundamental knowledge (databases, machine learning, statistics) may play a bigger role as time goes on.
| null | CC BY-SA 3.0 | null | 2014-06-12T13:42:05.383 | 2014-06-12T13:42:05.383 | null | null | 743 | null |
343 | 2 | null | 319 | 3 | null | Increasing the number of hidden layers for a standard neural network actually won't improve results in a majority of cases. Changing the size of the hidden layer will.
This fact (that the number of hidden layers does very little) was actually noted historically and is the motivation behind the field of deep learning. Deep learning is effectively a set of clever ways of training multilayer neural networks by, for example, isolating subsets of features when training different layers.
There is a good introductory video on this topic on [YouTube](https://www.youtube.com/watch?v=vXMpKYRhpmI&index=52&list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH).
| null | CC BY-SA 3.0 | null | 2014-06-12T15:22:16.247 | 2014-06-12T18:08:07.507 | 2014-06-12T18:08:07.507 | 754 | 754 | null |
345 | 2 | null | 313 | 22 | null | Introductory:
- Machine Learning: The Art and Science of Algorithms that Make Sense of Data (Flach)
- Learning From Data (Abu-Mostafa et al.)
- Introduction to Statistical Learning (James et al.)
Digging deeper:
- Elements of Statistical Learning (Hastie et al.)
- Pattern Recognition and Machine Learning (Bishop)
Some special interest examples:
- Convex Optimization (Boyd)
- Bayesian Reasoning and Machine Learning (Barber)
- Probabilistic Graphical Models (Koller)
- Neural Networks for Pattern Recognition (Bishop)
Broader reference works on machine learning (not really what you asked for, but for completeness):
- Machine Learning: A Probabilistic Perspective (Murphy)
- Artificial Intelligence: A Modern Approach (Russell & Norvig)
Bonus paper:
- Statistical Modeling: The Two Cultures (Breiman)
| null | CC BY-SA 3.0 | null | 2014-06-12T16:52:46.557 | 2014-06-12T16:52:46.557 | null | null | 554 | null |
346 | 2 | null | 266 | 1 | null | In addition to the courses and tutorials posted, I would suggest something a bit more 'hands on': [Kaggle](http://www.kaggle.com/competitions) has some introductory competitions that might pique your interest (most people start with the Titanic competition). And there's a large variety of subjects to explore and compete in when you want to get more experience.
| null | CC BY-SA 3.0 | null | 2014-06-12T17:03:15.733 | 2014-06-12T17:03:15.733 | null | null | 554 | null |
347 | 2 | null | 266 | 1 | null | As mentioned in the answers above, grasp the basics of ML by following the MOOCs by Prof. Andrew Ng and ['Learning From Data'](http://work.caltech.edu/telecourse.html) by Prof. Yaser Abu-Mostafa.
R is the [clear winner](http://www.kaggle.com/wiki/Software) as the most used tool in Kaggle competitions. (Don't forget to check the resources on the Kaggle wiki and forums.)
Learn basic R and Python. The Coursera 'Data Science' track has an [introductory R course](https://www.coursera.org/course/rprog). Almost all the algorithms can be found in Python and R libraries. Feel free to apply the algorithms you have learned in a few Kaggle competitions. As a starting point, compare the performance of several algorithms on the Titanic and Digit Recognizer datasets on [kaggle](http://www.kaggle.com/).
And do continue practising on various datasets!
| null | CC BY-SA 3.0 | null | 2014-06-12T17:58:21.467 | 2014-06-12T17:58:21.467 | null | null | 733 | null |
348 | 2 | null | 334 | 6 | null | Not sure about the Cloudera one, but one of my friends joined the Johns Hopkins one and, in his words, it's "brilliant to get you started". It has also been recommended by a lot of people. I am planning to join it in a few weeks. As far as seriousness is concerned, I don't think these certifications are going to help you land a job, but they sure will help you learn.
| null | CC BY-SA 3.0 | null | 2014-06-12T18:33:21.540 | 2014-06-12T18:33:21.540 | null | null | 456 | null |
349 | 2 | null | 334 | 33 | null | As a former analytics manager and a current lead data scientist, I am very leery of the need for data science certificates. The term data scientist is pretty vague and the field of data science is in its infancy. A certificate implies some sort of uniform standard, which is just lacking in data science; it is still very much the Wild West.
While a certificate is probably not going to hurt you, I think your time would be better spent developing the experience to know when to use a certain approach, and the depth of understanding to be able to explain that approach to a non-technical audience.
| null | CC BY-SA 3.0 | null | 2014-06-12T20:51:59.930 | 2014-06-13T15:42:46.987 | 2014-06-13T15:42:46.987 | 780 | 780 | null |
350 | 1 | 359 | null | 0 | 3324 | Could you give some examples of typical tasks that a data scientist does in his daily job, and the must-know minimum for each of the levels (like junior, senior, etc. if there are any)? If possible, something like a [Programmer competency matrix](http://www.starling-software.com/employment/programmer-competency-matrix.html).
| Example tasks of a data scientist and the necessary knowledge | CC BY-SA 3.0 | null | 2014-06-12T22:11:46.607 | 2014-06-13T17:36:21.937 | 2014-06-12T22:42:11.590 | 84 | 194 | [
"knowledge-base"
] |
351 | 2 | null | 334 | 8 | null | There are multiple certifications out there, but they have different focus areas and styles of teaching.
I prefer The Analytics Edge on edX a lot more than the Johns Hopkins specialization, as it is more intensive and hands-on. The expectation in the Johns Hopkins specialization is to put in 3 - 4 hours a week vs. 11 - 12 hours a week on The Analytics Edge.
From an industry perspective, I take these certifications as a sign of interest and not of the level of knowledge a person possesses. There are too many dropouts in these MOOCs. I value other experience (like participating in Kaggle competitions) a lot more than undergoing an XYZ certification on a MOOC.
| null | CC BY-SA 3.0 | null | 2014-06-13T03:50:24.610 | 2014-06-13T03:50:24.610 | null | null | 735 | null |
352 | 1 | null | null | 10 | 279 | In some cases, [it may be impossible](http://www.ncbi.nlm.nih.gov/pubmed/20975147) to draw Euler diagrams with overlapping circles to represent all the overlapping subsets in the correct proportions. This type of data then requires using polygons or other figures to represent each set. When dealing with data that describes overlapping subsets, how can I figure out whether a simple Euler diagram is possible?
| How do I figure out if subsets can be plotted in a normal Euler diagram? | CC BY-SA 3.0 | null | 2014-06-13T05:40:39.360 | 2018-09-29T09:01:00.843 | null | null | 62 | [
"visualization"
] |
354 | 1 | 355 | null | 7 | 275 | As I am very interested in programming and statistics, data science seems like a great career path to me - I like both fields and would like to combine them. Unfortunately, I studied political science with a non-statistical-sounding Master's. I focused on statistics in this Master's, taking optional courses and writing a statistical thesis on a rather large dataset.
Since almost all job ads require a degree in informatics, physics, or some other technical field, I am wondering whether there is a chance to become a data scientist, or whether I should drop that idea.
I lack knowledge in machine learning, SQL, and Hadoop, while having a rather strong informatics and statistics background.
So can somebody tell me how feasible my goal of becoming a data scientist is?
| Data Science as a Social Scientist? | CC BY-SA 3.0 | 0 | 2014-06-13T07:28:37.763 | 2014-06-15T01:29:15.240 | null | null | 791 | [
"statistics"
] |
355 | 2 | null | 354 | 12 | null | The downvotes are because of the topic, but I'll attempt to answer your question as best I can since it's here.
Data science is a term that is thrown around as loosely as Big Data. Everyone has a rough idea of what they mean by the term, but when you look at the actual work tasks, a data scientist's responsibilities will vary greatly from company to company.
Statistical analysis could encompass the entirety of the workload in one job, and not even be a consideration for another.
I wouldn't chase after a job title per se. If you are interested in the field, network (like you are doing now) and find a good fit. If you are perusing job ads, just look for the ones that stress statistical and informatics backgrounds. Hadoop and SQL are both easy to become familiar with given the time and motivation, but I would stick with the areas you are strongest in and go from there.
| null | CC BY-SA 3.0 | null | 2014-06-13T10:08:14.087 | 2014-06-13T10:08:14.087 | null | null | 434 | null |
356 | 1 | null | null | 6 | 140 | I frequently attack this problem inefficiently, because it's always pretty low on the priority list and my clients are resistant to change until things break. I would like some input on how to speed things up.
I have multiple datasets of information in a SQL database. The database is vendor-designed, so I have little control over the structure. It's a SQL representation of a class-based structure. It looks a little bit like this:
```
Main-class table
-sub-class table 1
-sub-class table 2
-sub-sub-class table
...
-sub-class table n
```
Each table contains fields for each attribute of the class. For each sub-class, a join (view) exists that contains all of the fields of that class's table plus all of the fields in each parent class's table, joined by a unique identifier.
There are hundreds of classes, which means thousands of views and tens of thousands of columns.
Beyond that, there are multiple datasets, indicated by a field value in the Main-class table. There is the production dataset, visible to all end users, and there are several other datasets comprised of the most current version of the same data from various integration sources.
Daily, we run jobs that compare the production dataset to the live datasets and based on a set of rules we merge the data, purge the live datasets, then start all over again. The rules are in place because we might trust one source of data more than another for a particular value of a particular class.
The jobs are essentially a series of SQL statements that go row-by-row through each dataset, and field by field within each row. The common changes are limited to a handful of fields in each row, but since anything can change we compare each value.
There are tens of millions of rows of data, and in some environments the merge jobs can take longer than 24 hours. We generally resolve that problem by throwing more hardware at it, but this isn't a Hadoop environment currently, so there's a pretty finite limit to what can be done in that regard.
How would you go about scaling a solution to this problem such that there were no limitations? And how would you go about accomplishing the most efficient data-merge? (currently it is field by field comparisons... painfully slow).
| How to best accomplish high speed comparison of like data? | CC BY-SA 3.0 | null | 2014-06-13T10:57:10.623 | 2014-06-13T17:14:26.260 | 2014-06-13T15:47:16.340 | 84 | 434 | [
"efficiency",
"scalability",
"sql"
] |
357 | 2 | null | 334 | 10 | null | I lead data science teams for a major Internet company and I have screened hundreds of profiles and interviewed dozens for our teams around the world. Many candidates have passed the aforementioned courses and programs or bring similar credentials. Personally, I have also taken the courses, some are good, others are disappointing but none of them makes you a "data scientist".
In general, I agree with the others here. A certificate from Coursera or Cloudera just signals an interest, but it does not move the needle. There is a lot more to consider, and you can have a bigger impact by providing a comprehensive repository of your work (a GitHub profile, for example) and by networking with other data scientists. Anyone hiring for a data science profile will always prefer to see your previous work and coding style/abilities.
| null | CC BY-SA 3.0 | null | 2014-06-13T12:18:53.830 | 2014-06-13T12:18:53.830 | null | null | 797 | null |
358 | 2 | null | 356 | 3 | null | Sounds interesting. Could the solution be to dump the data out, build a fast custom processing thingie to run it through and then import it back to the database? I've seen some blazing fast Java-based text processing tools for topic modeling that handle millions of lines of text per second.
If it's an option then you can build a shell script to first dump the data in as good as format as possible. Then some pre-processing to separate the datasets, then real processing of comparison and changes. Lastly something that writes it back to a good format for input into database.
Definately not a one-afternoon project, but you could probably get it to work in a couple of weeks.
| null | CC BY-SA 3.0 | null | 2014-06-13T12:30:45.290 | 2014-06-13T12:30:45.290 | null | null | 587 | null |
359 | 2 | null | 350 | 3 | null | [Becoming a Data Scientist – Curriculum via Metromap](http://nirvacana.com/thoughts/becoming-a-data-scientist/) is a popular reference for this kind of question.
| null | CC BY-SA 3.0 | null | 2014-06-13T13:49:35.777 | 2014-06-13T13:49:35.777 | null | null | 743 | null |
360 | 2 | null | 61 | 5 | null | That's because of something called the [bias-variance dilemma](http://en.wikipedia.org/wiki/Bias%E2%80%93variance_dilemma). An overfitted model means we have a more complex decision boundary if we give the model more variance. The thing is, not only overly simple models but also overly complex models are likely to misclassify unseen data. Consequently, an over-fitted model is no better than an under-fitted model. That's why overfitting is bad and we need to fit the model somewhere in the middle.
| null | CC BY-SA 3.0 | null | 2014-06-13T14:46:30.393 | 2014-06-13T14:46:30.393 | null | null | 801 | null |
361 | 1 | null | null | 65 | 42718 | Logic often states that by underfitting a model, its capacity to generalize is increased. That said, clearly at some point underfitting causes a model to become worse regardless of the complexity of the data.
How do you know when your model has struck the right balance and is not underfitting the data it seeks to model?
---
Note: This is a followup to my question, "[Why Is Overfitting Bad?](https://datascience.stackexchange.com/questions/61/why-is-overfitting-bad/)"
| When is a Model Underfitted? | CC BY-SA 3.0 | null | 2014-06-13T16:44:29.323 | 2020-04-26T15:03:13.663 | 2017-04-13T12:50:41.230 | -1 | 158 | [
"efficiency",
"algorithms",
"parameter"
] |
362 | 2 | null | 356 | 4 | null | Can't you create a hash for each class, and then merge row by row, field by field, only for the classes where the hash changed? It should be faster if most of the classes don't change.
Or a hash of each row, or maybe of each column, depending on how the data normally change.
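As a rough sketch of this hash idea (not tied to the vendor schema in the question; the row layout, the MD5 choice, and the function names are assumptions for illustration):
```
# Hash each row once, then do the expensive field-by-field merge only for
# rows whose hash differs between the production and live datasets.
import hashlib

def row_hash(row):
    # Join field values in a fixed column order so two rows hash equal
    # exactly when every field matches.
    joined = "|".join(str(row[col]) for col in sorted(row))
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def ids_needing_merge(production, live):
    # production / live: dicts mapping the unique identifier -> row dict.
    return [uid for uid, row in live.items()
            if uid not in production or row_hash(production[uid]) != row_hash(row)]
```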
| null | CC BY-SA 3.0 | null | 2014-06-13T17:14:26.260 | 2014-06-13T17:14:26.260 | null | null | 737 | null |
363 | 2 | null | 361 | 8 | null | Models are but abstractions of what is seen in real life. They are designed to abstract away the nitty-gritty details of the real system under observation, while keeping sufficient information to support the desired analysis.
If a model is overfit, it takes into account too many details about what is being observed, and small changes on such an object may cause the model to lose precision. On the other hand, if a model is underfit, it evaluates so few attributes that noteworthy changes on the object may be ignored.
Note also that underfit may be seen as overfit, depending on the dataset. If your input can be 99%-correctly-classified with a single attribute, you overfit the model to the data by simplifying the abstraction to a single characteristic. And, in this case, you'd be over-generalizing the 1% of the base into the 99% class -- or specifying the model so much that it can only see one class.
A reasonable way to say that a model is neither over- nor underfit is by performing cross-validation. You split your dataset into k parts, then, say, pick one of them to perform your evaluation, while using the other k - 1 parts to train your model. Considering that the input itself is not biased, you should be able to have as much variance of data to train and evaluate on as you'd have while using the model in real-life processing.
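A minimal sketch of that k-fold procedure, using scikit-learn's `KFold` on placeholder data (the classifier, arrays, and fold count are only stand-ins for illustration):
```
# Train on k-1 folds, evaluate on the held-out fold, and look at how
# stable the scores are across folds.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(200, 5)              # placeholder features
y = np.random.randint(0, 2, 200)        # placeholder labels

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print(np.mean(scores), np.std(scores))  # large gaps between folds hint at fitting problems
```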
| null | CC BY-SA 3.0 | null | 2014-06-13T17:14:57.517 | 2014-06-13T17:14:57.517 | null | null | 84 | null |
364 | 2 | null | 350 | 0 | null | The [Programmer Competency Matrix](http://sijinjoseph.com/programmer-competency-matrix/) is just a set of skills, which are more likely to occur in a real programmer than other skills; it is not a checklist for being a programmer, nor, for that matter, required to be one. The most common way to know someone is a programmer is that they're paid to be a programmer, which honestly has nothing to do with programming skills.
To be a data scientist, do data science.
| null | CC BY-SA 3.0 | null | 2014-06-13T17:36:21.937 | 2014-06-13T17:36:21.937 | null | null | 158 | null |
365 | 2 | null | 334 | 4 | null | @OP: Choosing answers by votes is the WORST idea.
Your question becomes a popularity contest. You should seek the right answer; I doubt you know what you are asking or what you are looking for.
To answer your question:
Q: how seriously DS certifications are viewed at this point by the community.
A: What is your goal in taking these courses? For work, for school, for self-improvement, etc.? Coursera classes are very applied; you will not learn much theory, as that is intentionally reserved for a classroom setting.
Nonetheless, Coursera classes are very useful. I'd say they are equivalent to one year of graduate statistics classes, out of a two-year Master's program.
I am not sure of their industry recognition yet, because of the question of how you actually took the course: how much time did you spend? It's a lot easier to get A's in these courses than in a classroom paper-and-pencil exam. So there can be a huge quality variation from person to person.
| null | CC BY-SA 3.0 | null | 2014-06-13T18:59:11.493 | 2014-06-15T15:56:06.780 | 2014-06-15T15:56:06.780 | 386 | 386 | null |
366 | 2 | null | 361 | 11 | null | To answer your question, it is important to understand the frame of reference you are looking for. If you are looking for what you are philosophically trying to achieve in model fitting, check out Rubens' answer; he does a good job of explaining that context.
However, in practice your question is almost entirely defined by business objectives.
To give a concrete example, let's say that you are a loan officer, you issue loans of \$3,000, and when people pay you back you make \$50. Naturally, you are trying to build a model that predicts whether a person will default on their loan. Let's keep this simple and say that the outcomes are either full payment or default.
From a business perspective you can sum up a models performance with a contingency matrix:
![enter image description here](https://i.stack.imgur.com/RgWr9.png)
When the model predicts someone is going to default, do they? To determine the downsides of over- and under-fitting, I find it helpful to think of it as an optimization problem, because in each cross section of predicted versus actual model performance there is either a cost or a profit to be made:
![enter image description here](https://i.stack.imgur.com/78kH2.png)
In this example, predicting a default that is indeed a default means avoiding any risk, and predicting a non-default that doesn't default will make \$50 per loan issued. Where things get dicey is when you are wrong: if a loan defaults when you predicted non-default, you lose the entire loan principal, and if you predict default when a customer actually would not have defaulted, you suffer \$50 of missed opportunity. The numbers here are not important, just the approach.
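As a small sketch of that framing (using the payoff numbers from the example above; the labels and function are purely illustrative), you can score a model by the business outcome of each predicted-versus-actual cell instead of by raw accuracy:
```
# Payoff per (predicted, actual) cell: +$50 for a repaid loan we issued,
# -$3,000 of principal for a default we failed to predict, -$50 of missed
# opportunity for a wrongly predicted default, and $0 for a correct decline.
PAYOFF = {
    ("non-default", "non-default"): 50,
    ("non-default", "default"): -3000,
    ("default", "non-default"): -50,
    ("default", "default"): 0,
}

def total_profit(predicted, actual):
    return sum(PAYOFF[(p, a)] for p, a in zip(predicted, actual))

print(total_profit(["non-default", "default", "non-default"],
                   ["non-default", "non-default", "default"]))  # 50 - 50 - 3000 = -3000
```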
With this framework we can now begin to understand the difficulties associated with over and under fitting.
Over-fitting in this case would mean that your model works far better on your development/test data than it does in production. Or, to put it another way, your model in production will far underperform what you saw in development; this false confidence will probably cause you to take on far riskier loans than you otherwise would, and leaves you very vulnerable to losing money.
On the other hand, under-fitting in this context will leave you with a model that just does a poor job of matching reality. While the results of this can be wildly unpredictable (the opposite of the word you want to describe your predictive models), what commonly happens is that standards are tightened up to compensate, leading to fewer customers overall and to losing good customers.
Under-fitting suffers from a kind of opposite difficulty to over-fitting, which is that under-fitting gives you lower confidence. Insidiously, the lack of predictability still leads you to take on unexpected risk, all of which is bad news.
In my experience the best way to avoid both of these situations is validating your model on data that is completely outside the scope of your training data, so you can have some confidence that you have a representative sample of what you will see 'in the wild'.
Additionally, it is always a good practice to revalidate your models periodically, to determine how quickly your model is degrading, and if it is still accomplishing your objectives.
Just to sum things up, your model is under-fitted when it does a poor job of predicting both the development and production data.
| null | CC BY-SA 3.0 | null | 2014-06-13T20:13:01.913 | 2015-04-30T22:18:55.023 | 2015-04-30T22:18:55.023 | 325 | 780 | null |
368 | 2 | null | 235 | 4 | null | If you know R and its ggplot library, you could try ggplot for Python.
I like it because I work in both R and Python, and the two versions are virtually identical.
But if you are not familiar with it, you have to deal with a very "unpythonic" syntax. Still, I think it's an easy library overall.
| null | CC BY-SA 3.0 | null | 2014-06-14T07:34:37.643 | 2014-06-14T07:34:37.643 | null | null | 791 | null |
369 | 1 | 465 | null | 9 | 3911 | What kind of error measures do RMSE and nDCG give while evaluating a recommender system, and how do I know when to use one over the other? If you could give an example of when to use each, that would be great as well!
| Difference between using RMSE and nDCG to evaluate Recommender Systems | CC BY-SA 3.0 | null | 2014-06-14T18:53:32.243 | 2014-10-09T02:35:24.533 | 2014-06-16T19:30:46.940 | 84 | 838 | [
"machine-learning",
"recommender-system",
"model-evaluations"
] |
370 | 1 | 372 | null | 11 | 1695 | I'd like to explore 'data science'. The term seems a little vague to me, but I expect it to require:
- machine learning (rather than traditional statistics);
- a large enough dataset that you have to run analyses on clusters.
What are some good datasets and problems, accessible to a statistician with some programming background, that I can use to explore the field of data science?
To keep this as narrow as possible, I'd ideally like links to open, well used datasets and example problems.
| Data Science oriented dataset/research question for Statistics MSc thesis | CC BY-SA 3.0 | null | 2014-06-14T19:54:53.193 | 2014-06-18T13:53:25.307 | 2014-06-18T13:53:25.307 | 322 | 839 | [
"statistics",
"education",
"knowledge-base",
"definitions"
] |
371 | 1 | 374 | null | 12 | 9876 | I'm curious about natural language querying. Stanford has what looks to be a strong set of [software for processing natural language](http://nlp.stanford.edu/software/index.shtml). I've also seen the [Apache OpenNLP library](http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html), and the [General Architecture for Text Engineering](http://gate.ac.uk/science.html).
There is an incredible number of uses for natural language processing, and that makes the documentation of these projects difficult to absorb quickly.
Can you simplify things for me a bit and at a high level outline the tasks necessary for performing a basic translation of simple questions into SQL?
The first rectangle on my flow chart is a bit of a mystery.
![enter image description here](https://i.stack.imgur.com/wJPx9.png)
For example, I might want to know:
```
How many books were sold last month?
```
And I'd want that translated into
```
Select count(*)
from sales
where
item_type='book' and
sales_date >= '5/1/2014' and
sales_date <= '5/31/2014'
```
| How to process natural language queries? | CC BY-SA 3.0 | null | 2014-06-14T20:32:06.143 | 2021-05-16T12:13:06.787 | 2014-06-14T20:39:25.657 | 434 | 434 | [
"nlp"
] |
372 | 2 | null | 370 | 8 | null | Just head to kaggle.com; it'll keep you busy for a long time. For open data there's the [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/). In fact, there's a whole [Stackexchange site](https://opendata.stackexchange.com/) devoted to this; look there.
| null | CC BY-SA 3.0 | null | 2014-06-14T20:52:00.873 | 2014-06-14T20:52:00.873 | 2017-04-13T12:57:29.190 | -1 | 381 | null |
373 | 2 | null | 370 | 5 | null | The [Sunlight Foundation](http://sunlightfoundation.com/) is an organization that is focused on opening up and encouraging non-partisan analysis of government data.
There is a ton of analysis out there in the wild that can be used for comparison, and a wide variety of topics.
They provide [tools](http://sunlightfoundation.com/tools/) and [apis](http://sunlightfoundation.com/api/) for accessing data, and have helped push to make data available in places like [data.gov](http://www.data.gov/).
One interesting project is [Influence Explorer](http://influenceexplorer.com/). You can get [source data here](http://data.influenceexplorer.com/) as well as access to real time data.
You might also want to take a look at one of our more popular questions:
[Publicly available datasets](https://datascience.stackexchange.com/questions/155/publicly-available-datasets).
| null | CC BY-SA 3.0 | null | 2014-06-14T21:13:20.110 | 2014-06-14T21:13:20.110 | 2017-04-13T12:50:41.230 | -1 | 434 | null |
374 | 2 | null | 371 | 7 | null | Natural language querying poses very many intricacies which can be very difficult to generalize. From a high level, I would start with trying to think of things in terms of nouns and verbs.
So for the sentence: How many books were sold last month?
You would start by breaking the sentence down with a parser which will return a tree format similar to this:
![enter image description here](https://i.stack.imgur.com/ogoiY.png)
You can see that there is a subject books, a compound verbal phrase indicating the past action of sell, and then a noun phrase where you have the time focus of a month.
We can further break down the subject for modifiers: "how many" for books, and "last" for month.
Once you have broken the sentence down, you need to map those elements to the SQL language, e.g.: how many => count, books => book, sold => sales, month => sales_date (interval), and so on.
Finally, once you have the elements of the language you just need to come up with a set of rules for how different entities interact with each other, which leaves you with:
Select count(*)
from sales
where
item_type='book' and
sales_date >= '5/1/2014' and
sales_date <= '5/31/2014'
This is, at a high level, how I would begin. While almost every step I have mentioned is non-trivial and the rabbit hole really can be endless, this should give you many of the dots to connect.
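To illustrate the mapping step, here is a toy sketch; the lexicon, table, and column names are hypothetical, and a real system would derive the date range from the parse rather than hard-coding it:
```
# Map recognized sentence elements onto pieces of an SQL template.
LEXICON = {
    "how many": "count(*)",         # aggregate
    "sold": "sales",                # verb -> table
    "books": ("item_type", "book")  # noun -> column filter
}

def to_sql(aggregate, table, item_filter, start, end):
    column, value = item_filter
    return (f"select {aggregate} from {table} "
            f"where {column} = '{value}' "
            f"and sales_date >= '{start}' and sales_date <= '{end}'")

print(to_sql(LEXICON["how many"], LEXICON["sold"], LEXICON["books"],
             "5/1/2014", "5/31/2014"))
```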
| null | CC BY-SA 3.0 | null | 2014-06-14T21:39:15.833 | 2014-06-14T21:39:15.833 | null | null | 780 | null |
375 | 1 | 394 | null | 4 | 2209 | The majority of people use S3. However, Google Drive seems a promising alternative solution for storing large amounts of data. Are there specific reasons why one is better than the other?
| Amazon S3 vs Google Drive | CC BY-SA 3.0 | null | 2014-06-14T23:52:10.490 | 2021-07-12T21:20:40.697 | null | null | 418 | [
"bigdata"
] |
376 | 1 | null | null | 8 | 177 | I'm developing a distributed algorithm, and to improve efficiency, it relies both on the number of disks (one per machine), and on an efficient load balance strategy. With more disks, we're able to reduce the time spent with I/O; and with an efficient load balance policy, we can distribute tasks without much data replication overhead.
There are many studies in the literature that deal with the same problem, and each of them runs different experiments to evaluate its proposal. Some experiments are specific to the strategy presented, and some others, like weak scaling (scalability) and strong scaling (speedup), are common to all of the works.
The problem is that the experiments are usually executed over entirely different infrastructures (disks, processors, number of machines, network), and depending on what is being evaluated, this may lead to false/unfair comparisons. For example, I may get 100% speedup in my application running on 10 machines with an Infiniband connection, whereas I could get the same or even worse results if my connection were Ethernet.
So, how can one honestly compare different experiments to point out efficiency gains?
| How to compare experiments run over different infrastructures | CC BY-SA 3.0 | null | 2014-06-15T00:00:51.657 | 2014-07-18T04:50:58.287 | 2014-06-15T00:05:58.457 | 84 | 84 | [
"bigdata",
"efficiency",
"performance",
"scalability",
"distributed"
] |
377 | 2 | null | 375 | 4 | null | From our perspective on here, the big benefit of S3 is the ease of accessing the data from within EC2.
Google Drive is directly accessible from the Google Cloud platform.
There are a host of other differences that might matter depending on your usage requirements, but that's the one that would matter most around here.
The only other difference I can think of that would matter to the DS community is that when you are sharing something, you have no control over the address of a given file on Google Drive.
| null | CC BY-SA 3.0 | null | 2014-06-15T00:11:54.950 | 2014-06-15T00:11:54.950 | null | null | 434 | null |
379 | 2 | null | 307 | 6 | null | Financial Services is a big user of Big Data, and innovator too. One example is mortgage bond trading. To answer your questions for it:
>
What kinda data these companies used. What was the size of the data?
- Long histories of each mortgage issued for the past many years, and payments by month against them. (Billions of rows)
- Long histories of credit histories. (Billions of rows)
- Home price indices. (Not as big)
>
What kinda of tools technologies they used to process the data?
It varies. Some use in-house solutions built on databases like Netezza or Teradata. Others access the data via systems provided by the data providers (CoreLogic, Experian, etc.). Some banks use columnar database technologies like KDB or 1010data.
>
What was the problem they were facing and how the insight they got the
data helped them to resolve the issue.
The key issue is determining when mortgage bonds (mortgage-backed securities) will prepay or default. This is especially important for bonds that lack the government guarantee. By digging into payment histories and credit files, and by understanding the current value of the house, it's possible to predict the likelihood of a default. Adding an interest rate model and a prepayment model also helps predict the likelihood of a prepayment.
>
How they selected the tool\technology to suit their need.
If the project is driven by internal IT, usually it's based off of a large database vendor like Oracle, Teradata or Netezza. If it's driven by the quants, then they are more likely to go straight to the data vendor, or a 3rd party "All in" system.
>
What kinda pattern they identified from the data & what kind of
patterns they were looking from the data.
Linking the data gives great insights into who is likely to default on their loans or prepay them. When you aggregate the loans into bonds, it can be the difference between a bond issued at $100,000,000 being worth that amount or as little as $20,000,000.
| null | CC BY-SA 3.0 | null | 2014-06-15T01:25:48.563 | 2014-06-15T01:25:48.563 | null | null | 842 | null |
380 | 2 | null | 354 | 5 | null | I suspect this will get closed since it is very narrow, but my 2 cents...
Data Science requires 3 skills:
- Math/Stats
- Programming
- Domain Knowledge
It can be very hard to show all three. #1 and #2 can be signaled via degrees, but a hiring manager who may not have them doesn't want to trust a liberal arts degree. If you're looking to get into Data Science, position yourself as a domain expert first. Publish election predictions. If you're correct, cite them. That will get you noticed.
If your domain knowledge is A+ level, you don't need A+ level programming skills, but learn enough programming that you don't need someone else to fetch data for you.
| null | CC BY-SA 3.0 | null | 2014-06-15T01:29:15.240 | 2014-06-15T01:29:15.240 | null | null | 842 | null |
381 | 2 | null | 361 | 4 | null | CAPM (Capital Asset Pricing Model) in Finance is a classic example of an underfit model. It was built on the beautiful theory that "Investors only pay for risk they can't diversify away" so expected excess returns are equal to correlation to market returns.
As a formula [0] Ra = Rf + B (Rm - Rf)
where Ra is the expected return of the asset, Rf is the risk free rate, Rm is the market rate of return, and Beta is the correlation to the Equity premium (Rm - Rf)
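A one-line illustration of the formula, with made-up numbers (a 2% risk-free rate, an 8% market return, and a beta of 1.2; not real market data):
```
def capm_expected_return(rf, rm, beta):
    # Ra = Rf + Beta * (Rm - Rf)
    return rf + beta * (rm - rf)

print(capm_expected_return(0.02, 0.08, 1.2))  # 0.092, i.e. an expected return of 9.2%
```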
This is beautiful, elegant, and wrong. Investors seem to require a higher rate of return for small stocks and for value stocks (defined by book-to-market, or dividend yield).
Fama and French [1] presented an update to the model, which adds additional Betas for Size and Value.
So how do you know in a general sense? When the predictions you are making are wrong, and another variable with a logical explanation increases the prediction quality. It's easy to understand why someone might think small stocks are risky, independent of non-diversifiable risk. It's a good story, backed by the data.
[0] [http://www.investopedia.com/terms/c/capm.asp](http://www.investopedia.com/terms/c/capm.asp)
[1] [http://en.wikipedia.org/wiki/Fama%E2%80%93French_three-factor_model](http://en.wikipedia.org/wiki/Fama%E2%80%93French_three-factor_model)
| null | CC BY-SA 3.0 | null | 2014-06-15T01:36:51.693 | 2014-06-15T01:36:51.693 | null | null | 842 | null |
382 | 1 | null | null | 10 | 408 | I've come across the following problem, which I reckon is rather typical.
I have some large data, say, a few million rows. I run some non-trivial analysis on it, e.g. an SQL query consisting of several sub-queries. I get some result, stating, for example, that property X is increasing over time.
Now, there are two possible things that could lead to that:
- X is indeed increasing over time
- I have a bug in my analysis
How can I test that the first happened, rather than the second? A step-wise debugger, even if one exists, won't help, since intermediate results can still consist of millions of lines.
The only thing I could think of was to somehow generate a small, synthetic data set with the property that I want to test and run the analysis on it as a unit test. Are there tools to do this? Particularly, but not limited to, SQL.
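To make the idea concrete, something like the following is what I have in mind; the table, columns, and planted trend are made up purely for illustration:
```
# Build a tiny synthetic dataset with a deliberately planted increase,
# run the analysis query against it, and assert that the increase is detected.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (t INTEGER, x REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(t, 10.0 + 2.0 * t) for t in range(12)])  # known upward trend

first, last = conn.execute("SELECT MIN(x), MAX(x) FROM events").fetchone()
assert last > first, "the analysis should detect the increase we planted"
```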
| How to debug data analysis? | CC BY-SA 3.0 | null | 2014-06-15T12:26:50.060 | 2014-06-16T13:21:58.777 | null | null | 846 | [
"data-mining",
"sql",
"experiments"
] |
384 | 1 | 395 | null | 20 | 28189 | I have a binary classification problem:
- Approximately 1000 samples in training set
- 10 attributes, including binary, numeric and categorical
Which algorithm is the best choice for this type of problem?
By default I'm going to start with SVM (after first converting nominal attribute values to binary features), as it is considered the best for relatively clean and not noisy data.
| Choose binary classification algorithm | CC BY-SA 3.0 | null | 2014-06-15T14:01:38.233 | 2015-04-12T16:12:39.063 | 2014-06-16T14:02:42.467 | 97 | 97 | [
"classification",
"binary",
"svm",
"random-forest",
"logistic-regression"
] |
386 | 2 | null | 384 | 12 | null | With few parameters, a pretty limited sample size, and a binary classifier, logistic regression should be plenty powerful. You can use a more advanced algorithm, but it's probably overkill.
| null | CC BY-SA 3.0 | null | 2014-06-15T14:23:19.793 | 2014-06-15T14:23:19.793 | null | null | 780 | null |
387 | 1 | 388 | null | 8 | 374 | I once heard that filtering spam by using blacklists is not a good approach, since some users searching for entries in your dataset may be looking for particular information from the blocked sources. Also, it would become a burden to continuously validate the current state of each blocked spammer, checking whether the site/domain still disseminates spam data.
Considering that any approach must be efficient and scalable, so as to support filtering on very large datasets, what are the strategies available to get rid of spam in a non-biased manner?
Edit: if possible, any example of strategy, even if just the intuition behind it, would be very welcome along with the answer.
| Filtering spam from retrieved data | CC BY-SA 3.0 | null | 2014-06-15T15:11:29.970 | 2014-06-16T20:18:50.123 | 2014-06-15T15:23:01.007 | 84 | 84 | [
"bigdata",
"efficiency"
] |
388 | 2 | null | 387 | 5 | null | Spam filtering, especially in email, has been revolutionized by neural networks, here are a couple papers that provide good reading on the subject:
On Neural Networks And The Future Of Spam
A. C. Cosoi, M. S. Vlad, V. Sgarciu
[http://ceai.srait.ro/index.php/ceai/article/viewFile/18/8](http://ceai.srait.ro/index.php/ceai/article/viewFile/18/8)
Intelligent Word-Based Spam Filter Detection Using
Multi-Neural Networks
Ann Nosseir, Khaled Nagati and Islam Taj-Eddin
[http://www.ijcsi.org/papers/IJCSI-10-2-1-17-21.pdf](http://www.ijcsi.org/papers/IJCSI-10-2-1-17-21.pdf)
Spam Detection using Adaptive Neural Networks: Adaptive Resonance Theory
David Ndumiyana, Richard Gotora, and Tarisai Mupamombe
[http://onlineresearchjournals.org/JPESR/pdf/2013/apr/Ndumiyana%20et%20al.pdf](http://onlineresearchjournals.org/JPESR/pdf/2013/apr/Ndumiyana%20et%20al.pdf)
EDIT:
The basic intuition behind using a neural network to help with spam filtering is to provide a weight to terms based on how often they are associated with spam.
Neural networks can be trained most quickly in a supervised environment -- where you explicitly provide the classification of each sentence in the training set. Without going into the nitty-gritty, the basic idea can be illustrated with these sentences:
Text = "How is the loss of the Viagra patent going to affect Pfizer", Spam = false
Text = "Cheap Viagra Buy Now", Spam = true
Text = "Online pharmacy Viagra Cialis Lipitor", Spam = true
For a two-stage neural network, the first stage will calculate the likelihood of spam based on whether a word exists in the sentence. So from our example:
viagra => 66%
buy => 100%
Pfizer => 0%
etc..
Then for the second stage the results in the first stage are used as variables in the second stage:
viagra & buy => 100%
Pfizer & viagra=> 0%
This basic idea is run for many of the permutations of all the words in your training data. The end result, once trained, is basically just an equation that, based on the context of the words in the sentence, can assign a probability of being spam. Set a spamminess threshold, and filter out any data scoring higher than that threshold.
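As a bare-bones sketch of that first stage (here just counting word occurrences on the three example sentences above, rather than actually training a network):
```
# Estimate, per word, the fraction of training sentences containing it
# that are labeled spam; this reproduces the 66% / 100% / 0% figures above.
from collections import defaultdict

training = [
    ("How is the loss of the Viagra patent going to affect Pfizer", False),
    ("Cheap Viagra Buy Now", True),
    ("Online pharmacy Viagra Cialis Lipitor", True),
]

spam_count, total_count = defaultdict(int), defaultdict(int)
for text, is_spam in training:
    for word in set(text.lower().split()):
        total_count[word] += 1
        spam_count[word] += int(is_spam)

spam_rate = {w: spam_count[w] / total_count[w] for w in total_count}
print(spam_rate["viagra"], spam_rate["buy"], spam_rate["pfizer"])  # ~0.67, 1.0, 0.0
```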
| null | CC BY-SA 3.0 | null | 2014-06-15T15:20:31.300 | 2014-06-15T21:22:10.500 | 2014-06-15T21:22:10.500 | 780 | 780 | null |
389 | 2 | null | 382 | 4 | null | Here is a suggestion:
- Code your analysis in such a way that it can be run on sub-samples.
- Code a complementary routine which can sample, either randomly, or by time, or by region, or ... This may be domain-specific. This is where your knowledge enters.
- Combine the two and see if the results are stable across subsamples.
| null | CC BY-SA 3.0 | null | 2014-06-15T15:49:12.907 | 2014-06-15T15:49:12.907 | null | null | 515 | null |
390 | 2 | null | 384 | 5 | null | When categorical variables are in the mix, I reach for Random Decision Forests, as it handles categorical variables directly without the 1-of-n encoding transformation. This loses less information.
| null | CC BY-SA 3.0 | null | 2014-06-15T16:07:35.543 | 2014-06-15T16:07:35.543 | null | null | 21 | null |
391 | 2 | null | 382 | 1 | null | This is what I normally do - take up the most important variables (basis your business understanding and hypothesis - you can always revise it later), group by on these attributes to reduce the number of rows, which can then be imported into a Pivot. You should include the sum and count of the relevant metrics on each row.
Make sure that you don't put any filters in the previous step. Once you have entire data at a summarized level, you can play around in Pivot tables and see what things are changing / increasing or decreasing.
If the data is too big to be summarized even on important parameters, you need to partition it in 3 - 4 subsets and then do this again.
Hope it helps.
| null | CC BY-SA 3.0 | null | 2014-06-15T16:59:50.190 | 2014-06-15T16:59:50.190 | null | null | 735 | null |
393 | 2 | null | 384 | 5 | null | Linear SVM should be a good starting point. Take a look at [this](http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) guide to choose the right estimator.
| null | CC BY-SA 3.0 | null | 2014-06-15T22:33:17.670 | 2014-06-15T22:33:17.670 | null | null | 478 | null |
394 | 2 | null | 375 | 5 | null | Personally, we use S3 on top of GCE and really love it. Depending on how much data you're dealing with, Google Drive just doesn't quite match the 5 TB max that S3 gives you. Also, if you're using Python, `boto` does a pretty fantastic job of making most AWS services pretty accessible regardless of what stack you're dealing with. Even if you're not using Python, they've got a pretty straightforward API that is generally more accessible than Google Drive's.
As an alternative to Google Drive, though, Google did recently release a cloud storage service, separate from Drive, that lets you more closely integrate your storage with any GCE instance you've got: [Google Cloud Storage](https://cloud.google.com/products/cloud-storage/).
They've got an API which seems to be pretty comparable to S3's, but I can't profess to having really played around with it much. Pricing-wise the two are identical, but I think that the large community and experience with AWS in general still puts S3 squarely above both Google's Cloud Storage and Google Drive.
| null | CC BY-SA 4.0 | null | 2014-06-16T04:21:36.340 | 2021-07-12T21:20:40.697 | 2021-07-12T21:20:40.697 | 120060 | 548 | null |
395 | 2 | null | 384 | 15 | null | It's hard to say without knowing a little more about your dataset, and how separable your dataset is based on your feature vector, but I would probably suggest using extreme random forest over standard random forests because of your relatively small sample set.
Extreme random forests are pretty similar to standard random forests, with the one exception that instead of optimizing splits on trees, extreme random forests make splits at random. Initially this would seem like a negative, but it generally means that you have significantly better generalization and speed, though the AUC on your training set is likely to be a little worse.
Logistic regression is also a pretty solid bet for these kinds of tasks, though with your relatively low dimensionality and small sample size I would be worried about overfitting. You might want to check out using K-Nearest Neighbors, since it often performs very well with low dimensionality, but it doesn't usually handle categorical variables very well.
If I had to pick one without knowing more about the problem I would certainly place my bets on extreme random forest, as it's very likely to give you good generalization on this kind of dataset, and it also handles a mix of numerical and categorical data better than most other methods.
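A minimal sketch of that suggestion using scikit-learn's `ExtraTreesClassifier`; the arrays stand in for the ~1000-sample, 10-attribute dataset, with categorical attributes assumed to be encoded numerically already:
```
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 10)           # placeholder features
y = np.random.randint(0, 2, 1000)      # placeholder binary labels

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```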
| null | CC BY-SA 3.0 | null | 2014-06-16T04:37:58.817 | 2014-06-16T04:37:58.817 | null | null | 548 | null |
396 | 2 | null | 103 | 10 | null | Alex made a number of good points, though I might have to push back a bit on his implication that DBSCAN is the best clustering algorithm to use here. Depending on your implementation, and whether or not you're using accelerated indices (many implementations do not), your time and space complexity will both be `O(n^2)`, which is far from ideal.
Personally, my go-to clustering algorithms are OpenOrd for winner-takes-all clustering and FLAME for fuzzy clustering. Both methods are indifferent to whether the metrics used are similarity or distance (FLAME in particular is nearly identical in both constructions). The implementation of OpenOrd in Gephi is `O(n log n)` and is known to be more scalable than any of the other clustering algorithms present in the Gephi package.
FLAME on the other hand is great if you're looking for a fuzzy clustering method. While the complexity of FLAME is a little harder to determine since it's an iterative process, it has been shown to be sub-quadratic, and similar in run speed to kNN.
| null | CC BY-SA 3.0 | null | 2014-06-16T04:51:47.847 | 2014-06-16T04:51:47.847 | null | null | 548 | null |
397 | 2 | null | 382 | 1 | null | First you need to verify that your implementation of the algorithm is accurate. For that use a small sample of data and check whether the result is correct. At this stage the sample doesn't need to be representative of the population.
Once the implementation is verified, you need to verify that there is a significant relationship among the variables that you are trying to predict. To do that, define a null hypothesis and try to reject it at a significant confidence level ([hypothesis testing for linear regression](http://stattrek.com/regression/slope-test.aspx)).
There might be unit test frameworks for your SQL distribution, but using a programming language like R will be easier.
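A minimal sketch of such a slope test with `scipy.stats.linregress`, on hypothetical (time, metric) pairs:
```
from scipy.stats import linregress

t = list(range(24))                                  # e.g. 24 monthly observations
x = [5.0 + 0.3 * ti + (-1) ** ti * 0.5 for ti in t]  # placeholder series with an upward drift

result = linregress(t, x)
# Reject the null hypothesis of "no trend" only when the p-value is small.
print(result.slope, result.pvalue)
```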
| null | CC BY-SA 3.0 | null | 2014-06-16T06:34:53.683 | 2014-06-16T06:34:53.683 | null | null | 733 | null |
398 | 1 | 405 | null | 8 | 990 | I'm currently in the very early stages of preparing a new research-project (still at the funding-application stage), and expect that data-analysis and especially visualisation tools will play a role in this project.
In view of this I face the following dilemma: Should I learn Python to be able to use its extensive scientific libraries (Pandas, Numpy, Scipy, ...), or should I just dive into similar packages of a language I'm already acquainted with (Racket, or to a lesser extent Scala)?
(Ideally I would learn Python in parallel with using statistical libraries in Racket, but I'm not sure I'll have time for both)
I'm not looking for an answer to this dilemma, but rather for feedback on my different considerations:
My current position is as follows:
In favour of Python:
- Extensively used libraries
- Widely used (may be decisive in case of collaboration with others)
- A lot of online material to start learning it
- Conferences that are specifically dedicated to Scientific Computing with Python
- Learning Python won't be a waste of time anyway
In favour of a language I already know:
- It's a way to deepen my knowledge of one language rather than getting superficial knowledge of one more language (under the motto: you should at least know one language really well)
- It is feasible. Both Racket and Scala have good mathematics and statistics libraries
- I can start right away with learning what I need to know rather than first having to learn the basics
Two concrete questions:
- What am I forgetting?
- How big of a nuisance could the Python 2 vs 3 issue be?
| What to consider before learning a new language for data analysis | CC BY-SA 3.0 | null | 2014-06-16T07:32:29.137 | 2014-06-16T15:00:04.577 | null | null | 872 | [
"python",
"visualization"
] |
399 | 2 | null | 398 | 1 | null | In my view, all the factors you have mentioned are superficial in nature. You have not considered the core of tool selection. In this case, there are 2 aspects you mentioned:
- Data analysis - What kind of analysis are you working on? There might be some analyses which are easier in some languages and more difficult in others.
- Visualization - R provides a similar community and learning material (as Python does) and has the best visualizations compared to the other languages here.
At this stage, you can be flexible with what language to learn, since you are starting from scratch.
Hope this helps.
| null | CC BY-SA 3.0 | null | 2014-06-16T09:52:24.913 | 2014-06-16T09:52:24.913 | null | null | 735 | null |
400 | 2 | null | 398 | 5 | null | From my experience, the points to keep in mind when considering a data analysis platform are:
- Can it handle the size of the data that I need? If your data sets fit in memory, there's usually no big trouble, although AFAIK Python is somewhat more memory-efficient than R. If you need to handle larger-than-memory data sets, the platform needs to handle it conveniently. In this case, SQL would cover basic statistics; Python + Apache Spark is another option.
- Does the platform cover all of my analysis needs? The greatest annoyance I've encountered in data mining projects is having to juggle between several tools, because tool A handles web connections well, tool B does the statistics, and tool C renders nice pictures. You want your weapon of choice to cover as many aspects of your projects as possible. When considering this issue, Python is very comprehensive, but R has a lot of built-in statistical tests ready to use, if that's what you need.
| null | CC BY-SA 3.0 | null | 2014-06-16T11:45:05.800 | 2014-06-16T11:45:05.800 | null | null | 846 | null |
401 | 2 | null | 382 | 1 | null | I like a multiple step strategy:
- Write clean, easy-to-understand code, as opposed to short, tricky code. I know statisticians like tricky code, but spotting problems in tricky code is dangerous.
(I am mentioning this because a supervisor of mine was fond of undocumented 500-line Python scripts - have fun debugging that mess - and I have seen that pattern a lot, especially from people who are not from an IT background.)
- Break down your code into smaller functions, which can be tested and evaluated in smaller steps.
- Look for connected elements, e.g. the number of cases with condition X is Y - so this query MUST return Y. Most often this is more complex, but doable.
- When you are running your script for the first time, test it with a small subsample and carefully check whether everything is in order. While I like unit tests in IT, bugs in statistics scripts are often so pronounced that they are easily visible with a careful check. Or they are methodological errors, which are probably never caught by unit tests.
That should suffice to ensure a clean "one-off" job. But for a time series, as you seem to have, I would add that you should check for values out of range, impossible combinations, etc. For me, most scripts that have reached step 4 are probably bug-free - and they will stay that way unless something changes. And most often, the data are changing - and that is something which should be checked on every run. Writing code for that can be time-consuming and annoying, but it beats subtle errors caused by data entry mistakes.
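A small sketch of those out-of-range and impossible-combination checks, run on every execution; the columns and bounds here are hypothetical:
```
import pandas as pd

df = pd.DataFrame({"age": [34, 51, 28], "n_visits": [2, 0, 5]})  # stand-in data

assert df["age"].between(0, 120).all(), "age out of plausible range"
assert (df["n_visits"] >= 0).all(), "negative visit counts are impossible"
assert not df.duplicated().any(), "unexpected duplicate rows"
```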
| null | CC BY-SA 3.0 | null | 2014-06-16T13:21:58.777 | 2014-06-16T13:21:58.777 | null | null | 791 | null |
402 | 2 | null | 223 | 7 | null | The [brat annotation tool](http://brat.nlplab.org/) might be useful for you, as per my comment. I have tried many of them and this is the best I have found. It has a nice user interface and can support a number of different types of annotations. The annotations are stored in a separate .annot file which contains each annotation as well as its location within the original document. A word of warning, though: if you ultimately want to feed the annotations into a classifier like the Stanford NER tool, then you will have to do some manipulation to get the data into a format that it will accept.
| null | CC BY-SA 3.0 | null | 2014-06-16T13:25:48.453 | 2014-06-16T13:25:48.453 | null | null | 387 | null |
403 | 1 | null | null | 2 | 431 | ![enter image description here](https://i.stack.imgur.com/4Ih6o.png)
I am trying to do Logistic Regression using SAS Enterprise Miner.
My Independent variables are
```
CPR/Inc (Categorical 1 to 7)
OD/Inc (Categorical 1 to 4)
Insurance (Binary 0 or 1)
Income Loss (Binary 0 or 1)
Living Arrangement (Categorical 1 to 7)
Employment Status (categorical 1 to 8)
```
My Dependent Variable is Default (Binary 0 or 1)
The following is the output from running Regression Model.
Analysis of Maximum Likelihood Estimates
```
Standard Wald
Parameter DF Estimate Error Chi-Square Pr > ChiSq Exp(Est)
Intercept 1 -0.4148 0.0645 41.30 <.0001 0.660
CPR___Inc 1 1 -0.8022 0.1051 58.26 <.0001 0.448
CPR___Inc 2 1 -0.4380 0.0966 20.57 <.0001 0.645
CPR___Inc 3 1 0.3100 0.0871 12.68 0.0004 1.363
CPR___Inc 4 1 -0.00304 0.0898 0.00 0.9730 0.997
CPR___Inc 5 1 0.1331 0.0885 2.26 0.1324 1.142
CPR___Inc 6 1 0.1694 0.0881 3.70 0.0546 1.185
Emp_Status 1 1 -0.2289 0.1006 5.18 0.0229 0.795
Emp_Status 2 1 0.4061 0.0940 18.66 <.0001 1.501
Emp_Status 3 1 -0.2119 0.1004 4.46 0.0347 0.809
Emp_Status 4 1 0.1100 0.0963 1.30 0.2534 1.116
Emp_Status 5 1 -0.2280 0.1007 5.12 0.0236 0.796
Emp_Status 6 1 0.3761 0.0943 15.91 <.0001 1.457
Emp_Status 7 1 -0.3337 0.1026 10.59 0.0011 0.716
Inc_Loss 0 1 -0.1996 0.0449 19.76 <.0001 0.819
Insurance 0 1 0.1256 0.0559 5.05 0.0246 1.134
Liv_Arran 1 1 -0.1128 0.0916 1.52 0.2178 0.893
Liv_Arran 2 1 0.2576 0.0880 8.57 0.0034 1.294
Liv_Arran 3 1 0.0235 0.0904 0.07 0.7950 1.024
Liv_Arran 4 1 0.0953 0.0887 1.16 0.2825 1.100
Liv_Arran 5 1 -0.0493 0.0907 0.29 0.5871 0.952
Liv_Arran 6 1 -0.3732 0.0966 14.93 0.0001 0.689
OD___Inc 1 1 -0.2136 0.0557 14.72 0.0001 0.808
OD___Inc 2 1 -0.0279 0.0792 0.12 0.7248 0.973
OD___Inc 3 1 -0.0249 0.0793 0.10 0.7534 0.975
```
Now I used this Model to Score a new set of data. An example row of my new data is
```
CPR - 7
OD - 4
Living Arrangement - 4
Employment Status - 4
Insurance - 0
Income Loss - 1
```
For this sample row, the model predicted output (Probability of default = 1) as 0.7335
To check this manually, I added the estimates
```
Intercept + Emp Status 4 + Liv Arran 4 + Insurance 0
-0.4148 + 0.1100 + 0.0953 + 0.1256 = -0.0839
```
Odds ratio = Exponential(-0.0839) = 0.9195
Hence probability = 0.9195 / (1 + 0.9195) = 0.4790
I am unable to understand why there is such a mismatch between the Model's predicted probability and theoretical probability.
Any help would be much appreciated .
Thanks
| Why is there such a mismatch between the Model's predicted probability and theoretical probability in logistic regression? | CC BY-SA 3.0 | null | 2014-06-16T13:30:01.320 | 2014-09-16T16:37:33.357 | 2014-06-18T13:53:29.630 | 368 | 880 | [
"categorical-data",
"logistic-regression"
] |
404 | 2 | null | 223 | 16 | null | Personally I would advocate using something that is both not specific to the NLP field and sufficiently general that it can still be used as a tool even when you've started moving beyond this level of metadata. I would especially pick a format that can be used regardless of development environment, and one that can keep some basic structure if that becomes relevant (like tokenization).
It might seem strange, but I would honestly suggest `JSON`. It's extremely well supported, supports a lot of structure, and is flexible enough that you shouldn't have to move from it for not being powerful enough. For your example, something like this:
```
{"text": "I saw the company's manager last day.", "Person": [{"name": "John", "indices": [0, 1]}, ...]}
```
The one big advantage you've got over any NLP-specific formats here is that `JSON` can be parsed in any environment, and since you'll probably have to edit your format anyway, JSON lends itself to very simple edits that give you a short distance to other formats.
You can also implicitly store tokenization information if you want:
```
{"text": ["I", "saw", "the", "company's", "manager", "last", "day."]}
```
EDIT: To clarify, the mapping of metadata is pretty open, but here's an example:
```
{"body": "<some_text>",
 "metadata": {
     "<entity>": {
         "<attribute>": "<value>",
         "location": [<start_index>, <end_index>]
     }
 }
}
```
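As a rough illustration, reading an annotation laid out like that back in Python is just dictionary access (the file name here is made up, and the structure is assumed to follow the template above):
```
import json

with open("annotations.json") as f:           # hypothetical file following the layout above
    record = json.load(f)

for entity, attrs in record["metadata"].items():
    start, end = attrs["location"]
    print(entity, record["body"][start:end])  # the surface text each annotation points at
```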
Hope that helps, let me know if you've got any more questions.
| null | CC BY-SA 3.0 | null | 2014-06-16T14:35:20.980 | 2014-06-16T17:35:41.207 | 2014-06-16T17:35:41.207 | 548 | 548 | null |
405 | 2 | null | 398 | 13 | null | Personally, I'm going to make a strong argument in favor of Python here. There are a large number of reasons for this, but I'm going to build on some of the points that other people have mentioned here:
- Picking a single language: It's definitely possible to mix and match languages, picking d3 for your visualization needs, FORTRAN for your fast matrix multiplies, and python for all of your networking and scripting. You can do this down the line, but keeping your stack as simple as possible is a good move, especially early on.
- Picking something bigger than you: You never want to be pushing up against the barriers of the language you want to use. This is a huge issue when it comes to languages like Julia and FORTRAN, which simply don't offer the full functionality of languages like python or R.
- Pick Community: The single most difficult thing to find in any language is community. Python is the clear winner here. If you get stuck, you can ask something on SO, and someone will answer in a matter of minutes, which is simply not the case for most other languages. If you're learning something in a vacuum, you will simply learn much slower.
In terms of the minus points, I might actually push back on them.
Deepening your knowledge of one language is a decent idea, but knowing only one language, without having practiced generalizing that knowledge to other languages, is a good way to shoot yourself in the foot. I have changed my entire favored development stack three times over as many years, moving from `MATLAB` to `Java` to `haskell` to `python`. Learning to transfer your knowledge to another language is far more valuable than just knowing one.
As far as feasibility, this is something you're going to see again and again in any programming career. Turing completeness means you could technically do everything with `HTML4` and `CSS3`, but you want to pick the right tool for the job. If you see the ideal tool and decide to leave it by the roadside you're going to find yourself slowed down wishing you had some of the tools you left behind.
A great example of that last point is trying to deploy `R` code. `R`'s networking capabilities are hugely lacking compared to `python`'s, and if you want to deploy a service, or use slightly off-the-beaten-path packages, the fact that `pip` has an order of magnitude more packages than `CRAN` is a huge help.
| null | CC BY-SA 3.0 | null | 2014-06-16T15:00:04.577 | 2014-06-16T15:00:04.577 | null | null | 548 | null |
406 | 1 | null | null | 21 | 12113 | If I have a retail store and have a way to measure how many people enter my store every minute, and timestamp that data, how can I predict future foot traffic?
I have looked into machine learning algorithms, but I'm not sure which one to use. In my test data, a year over year trend is more accurate compared to other things I've tried, like KNN (with what I think are sensible parameters and distance function).
It almost seems like this could be similar to financial modeling, where you deal with time series data. Any ideas?
| How can I predict traffic based on previous time series data? | CC BY-SA 3.0 | null | 2014-06-16T15:49:55.673 | 2021-03-11T19:03:30.347 | 2014-06-17T16:17:46.027 | 84 | 886 | [
"machine-learning",
"time-series"
] |
407 | 2 | null | 313 | 8 | null | Other answers recommended a good set of books about the mathematics behind data science. But as you mentioned, it's not just mathematics: activities like data collection and inference from data have their own rules and theories, even if they are not (yet) as rigorous as the mathematical foundations.
For these parts, I suggest the book [Beautiful Data: The Stories Behind Elegant Data Solutions](http://rads.stackoverflow.com/amzn/click/0596157118), which contains twenty case-study-like chapters written by people really engaged with real-world data analysis problems. It does not contain any mathematics, but explores areas like collecting data, finding practical ways of using data in analyses, scaling, and selecting the best solutions very well.
Another really interesting book is [Thinking with Data: How to Turn Information into Insights](http://rads.stackoverflow.com/amzn/click/1449362931), which is not technical (i.e., not a programming tutorial) either, but covers important topics on how to really use the power of data science in decision making and real-world problems.
| null | CC BY-SA 3.0 | null | 2014-06-16T16:10:34.423 | 2014-06-16T16:10:34.423 | null | null | 227 | null |
408 | 2 | null | 406 | 17 | null | The problem with models like KNN is that they do not take into account seasonality (time-dependent variations in trend). To take those into account, you should use Time Series analysis.
For count data, such as yours, you can use generalized linear auto-regressive moving average models (GLARMA). Fortunately, there is an R package that implements them ([glarma](http://cran.r-project.org/web/packages/glarma/index.html)).
The [vignette](http://cran.r-project.org/web/packages/glarma/vignettes/glarma.pdf) is a good resource for the theory behind the tool.
| null | CC BY-SA 3.0 | null | 2014-06-16T16:34:50.317 | 2014-06-16T16:34:50.317 | null | null | 178 | null |
409 | 2 | null | 406 | 10 | null | I think Christopher's answers above are entirely sensible. As an alternate approach (or perhaps just in addition to the advice he's given), I might start by just visualizing the data a bit to try to get a rough sense of what's going on.
If you haven't already done this, you might try adding a date's month and day of week as features -- if you end up sticking with KNN, this will help the model pick up seasonality.
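If your timestamps live in a pandas frame, those features are essentially one-liners (the toy frame and column names below are made up):
```
import pandas as pd

df = pd.DataFrame({"timestamp": ["2014-06-16 09:05", "2014-06-16 09:06"],
                   "visitors": [3, 5]})
df["timestamp"] = pd.to_datetime(df["timestamp"])
df["month"] = df["timestamp"].dt.month            # 1-12, captures yearly seasonality
df["day_of_week"] = df["timestamp"].dt.dayofweek  # 0=Monday ... 6=Sunday
df["hour"] = df["timestamp"].dt.hour              # within-day pattern
```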
As a different way of taking this on, you might consider starting with a really, really basic model (like OLS). These often go a long way in generating reasonable predictions.
Finally, the more we know about your data, the easier it will be for us to help generate suggestions -- What time frame are you observing? What are the features you're currently using? etc.
Hope this helps --
| null | CC BY-SA 3.0 | null | 2014-06-16T17:26:17.697 | 2014-06-16T17:26:17.697 | null | null | 889 | null |
410 | 1 | 414 | null | 114 | 121896 | I'm currently working on implementing Stochastic Gradient Descent, `SGD`, for neural nets using back-propagation, and while I understand its purpose I have some questions about how to choose values for the learning rate.
- Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent?
- If so, how do you use this information to inform your decision about a value?
- If it's not what sort of values should I choose, and how should I choose them?
- It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend?
- Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient?
In short: How do I choose the learning rate for SGD?
| Choosing a learning rate | CC BY-SA 3.0 | null | 2014-06-16T18:08:38.623 | 2020-01-31T16:28:25.547 | 2018-01-17T14:59:36.183 | 28175 | 890 | [
"machine-learning",
"neural-network",
"deep-learning",
"optimization",
"hyperparameter"
] |
411 | 1 | 415 | null | 11 | 7428 | It seems as though most languages have some number of scientific computing libraries available.
- Python has Scipy
- Rust has SciRust
- C++ has several including ViennaCL and Armadillo
- Java has Java Numerics and Colt, as well as several others
Not to mention languages like `R` and `Julia` designed explicitly for scientific computing.
With so many options how do you choose the best language for a task? Additionally which languages will be the most performant? `Python` and `R` seem to have the most traction in the space, but logically a compiled language seems like it would be a better choice. And will anything ever outperform `Fortran`? Additionally compiled languages tend to have GPU acceleration, while interpreted languages like `R` and `Python` don't. What should I take into account when choosing a language, and which languages provide the best balance of utility and performance? Also are there any languages with significant scientific computing resources that I've missed?
| Best languages for scientific computing | CC BY-SA 3.0 | null | 2014-06-16T19:14:38.553 | 2014-06-20T07:11:40.053 | 2014-06-16T19:22:00.133 | 890 | 890 | [
"efficiency",
"statistics",
"tools",
"knowledge-base"
] |
412 | 1 | 446 | null | 44 | 6139 |
# Motivation
I work with datasets that contain personally identifiable information (PII) and sometimes need to share part of a dataset with third parties, in a way that doesn't expose PII and subject my employer to liability. Our usual approach here is to withhold data entirely, or in some cases to reduce its resolution; e.g., replacing an exact street address with the corresponding county or census tract.
This means that certain types of analysis and processing must be done in-house, even when a third party has resources and expertise more suited to the task. Since the source data is not disclosed, the way we go about this analysis and processing lacks transparency. As a result, any third party's ability to perform QA/QC, adjust parameters or make refinements may be very limited.
# Anonymizing Confidential Data
One task involves identifying individuals by their names, in user-submitted data, while taking into account errors and inconsistencies. A private individual might be recorded in one place as "Dave" and in another as "David," commercial entities can have many different abbreviations, and there are always some typos. I've developed scripts based on a number of criteria that determine when two records with non-identical names represent the same individual, and assign them a common ID.
At this point we can make the dataset anonymous by withholding the names and replacing them with this personal ID number. But this means the recipient has almost no information about e.g. the strength of the match. We would prefer to be able to pass along as much information as possible without divulging identity.
# What Doesn't Work
For instance, it would be great to be able to encrypt strings while preserving edit distance. This way, third parties could do some of their own QA/QC, or choose to do further processing on their own, without ever accessing (or being able to potentially reverse-engineer) PII. Perhaps we match strings in-house with edit distance <= 2, and the recipient wants to look at the implications of tightening that tolerance to edit distance <= 1.
But the only method I am familiar with that does this is [ROT13](http://www.techrepublic.com/blog/it-security/cryptographys-running-gag-rot13/) (more generally, any [shift cipher](https://en.wikipedia.org/wiki/Caesar_cipher)), which hardly even counts as encryption; it's like writing the names upside down and saying, "Promise you won't flip the paper over?"
Another bad solution would be to abbreviate everything. "Ellen Roberts" becomes "ER" and so forth. This is a poor solution because in some cases the initials, in association with public data, will reveal a person's identity, and in other cases it's too ambiguous; "Benjamin Othello Ames" and "Bank of America" will have the same initials, but their names are otherwise dissimilar. So it doesn't do either of the things we want.
An inelegant alternative is to introduce additional fields to track certain attributes of the name, e.g.:
```
+-----+----+-------------------+-----------+--------+
| Row | ID | Name | WordChars | Origin |
+-----+----+-------------------+-----------+--------+
| 1 | 17 | "AMELIA BEDELIA" | (6, 7) | Eng |
+-----+----+-------------------+-----------+--------+
| 2 | 18 | "CHRISTOPH BAUER" | (9, 5) | Ger |
+-----+----+-------------------+-----------+--------+
| 3 | 18 | "C J BAUER" | (1, 1, 5) | Ger |
+-----+----+-------------------+-----------+--------+
| 4 | 19 | "FRANZ HELLER" | (5, 6) | Ger |
+-----+----+-------------------+-----------+--------+
```
I call this "inelegant" because it requires anticipating which qualities might be interesting and it's relatively coarse. If the names are removed, there's not much you can reasonably conclude about the strength of the match between rows 2 & 3, or about the distance between rows 2 & 4 (i.e., how close they are to matching).
# Conclusion
The goal is to transform strings in such a way that as many useful qualities of the original string are preserved as possible while obscuring the original string. Decryption should be impossible, or so impractical as to be effectively impossible, no matter the size of the data set. In particular, a method that preserves the edit distance between arbitrary strings would be very useful.
I've found a couple papers that might be relevant, but they're a bit over my head:
- Privacy Preserving String Comparisons Based on Levenshtein Distance
- An Empirical Comparison of Approaches to Approximate String
Matching in Private Record Linkage
| How can I transform names in a confidential data set to make it anonymous, but preserve some of the characteristics of the names? | CC BY-SA 3.0 | null | 2014-06-16T19:48:31.797 | 2015-12-07T17:44:55.910 | 2015-12-07T17:44:55.910 | 322 | 322 | [
"data-cleaning",
"anonymization"
] |
413 | 2 | null | 130 | 18 | null | Dimensionality reduction is typically choosing a basis or mathematical representation within which you can describe most but not all of the variance within your data, thereby retaining the relevant information while reducing the amount of information necessary to represent it. There are a variety of techniques for doing this, including but not limited to `PCA`, `ICA`, and `Matrix Feature Factorization`. These will take existing data and reduce it to the most discriminative components, allowing you to represent most of the information in your dataset with fewer, more discriminative features.
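For instance, a minimal scikit-learn sketch of the `PCA` route (the random low-rank matrix below just stands in for a numeric feature matrix):
```
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# 200 samples, 50 correlated columns built from a rank-5 structure plus a little noise.
X = rng.rand(200, 5) @ rng.rand(5, 50) + 0.01 * rng.rand(200, 50)

X_reduced = PCA(n_components=0.95).fit_transform(X)  # keep ~95% of the variance
print(X_reduced.shape)  # only a handful of components survive
```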
Feature Selection is hand selecting features which are highly discriminative. This has a lot more to do with feature engineering than analysis, and requires significantly more work on the part of the data scientist. It requires an understanding of what aspects of your dataset are important in whatever predictions you're making, and which aren't. Feature extraction usually involves generating new features which are composites of existing features. Both of these techniques fall into the category of feature engineering. Generally feature engineering is important if you want to obtain the best results, as it involves creating information that may not exist in your dataset, and increasing your signal to noise ratio.
| null | CC BY-SA 3.0 | null | 2014-06-16T19:49:24.377 | 2014-06-16T21:44:32.587 | 2014-06-16T21:44:32.587 | 890 | 890 | null |
414 | 2 | null | 410 | 86 | null |
- Is the learning rate related to the shape of the error gradient, as
it dictates the rate of descent?
In plain SGD, the answer is no. A global learning rate is used which is indifferent to the error gradient. However, the intuition you are getting at has inspired various modifications of the SGD update rule.
- If so, how do you use this information to inform your decision about a value?
Adagrad is the most widely known of these and scales a global learning rate η on each dimension, based on the l2 norm of the history of the error gradient g_t on that dimension:
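$$\theta_{t+1,\,i} = \theta_{t,\,i} - \frac{\eta}{\sqrt{\sum_{\tau=1}^{t} g_{\tau,\,i}^{2} + \epsilon}}\, g_{t,\,i}$$
(this is the standard per-dimension Adagrad update; the small ε is added for numerical stability).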
Adadelta is another such training algorithm; it uses both the error gradient history (like Adagrad) and the weight update history, and has the advantage of not requiring a learning rate to be set at all.
- If it's not what sort of values should I choose, and how should I choose them?
Setting learning rates for plain SGD in neural nets is usually a
process of starting with a sane value such as 0.01 and then doing cross-validation
to find an optimal value. Typical values range over a few orders of
magnitude from 0.0001 up to 1.
- It seems like you would want small values to avoid overshooting, but
how do you choose one such that you don't get stuck in local minima
or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient?
Usually, the best value is near the highest stable learning
rate, and learning rate decay/annealing (either linear or
exponential) is used over the course of training. The reason behind this is that early on there is a clear learning signal, so aggressive updates encourage exploration, while later on the smaller learning rates allow for more delicate exploitation of the local error surface.
| null | CC BY-SA 3.0 | null | 2014-06-16T19:53:09.957 | 2016-12-05T19:57:08.293 | 2016-12-05T19:57:08.293 | 26596 | 548 | null |
415 | 2 | null | 411 | 12 | null | This is a pretty massive question, so this is not intended to be a full answer, but hopefully this can help to inform general practice around determining the best tool for the job when it comes to data science. Generally, I have a relatively short list of qualifications I look for when it comes to any tool in this space. In no particular order they are:
- Performance: Basically boils down to how quickly the language does matrix multiplication, as that is more or less the most important task in data science.
- Scalability: At least for me personally, this comes down to ease of building a distributed system. This is somewhere where languages like Julia really shine.
- Community: With any language, you're really looking for an active community that can help you when you get stuck using whichever tool you're using. This is where python pulls very far ahead of most other languages.
- Flexibility: Nothing is worse than being limited by the language that you use. It doesn't happen very often, but trying to represent graph structures in haskell is a notorious pain, and Julia is filled with a lot of code architectures pains as a result of being such a young language.
- Ease of Use: If you want to use something in a larger environment, you want to make sure that setup is straightforward and can be automated. Nothing is worse than having to set up a finicky build on half a dozen machines.
There are a ton of articles out there about performance and scalability, but in general you're going to be looking at a performance differential of maybe 5-10x between languages, which may or may not matter depending on your specific application. As far as GPU acceleration goes, `cudamat` is a really seamless way of getting it working with `python`, and the `cuda` library in general has made GPU acceleration far more accessible than it used to be.
The two primary metrics I use for both community and flexibility are to look at the language's package manager, and the language questions on a site like SO. If there are a large number of high-quality questions and answers, it's a good sign that the community is active. Number of packages and the general activity on those packages can also be a good proxy for this metric.
As far as ease of use goes, I am a firm believer that the only way to actually know is to set it up yourself. There's a lot of superstition around a lot of Data Science tools, specifically things like databases and distributed computing architecture, but there's no way to really know if something is easy or hard to set up and deploy without just building it yourself.
| null | CC BY-SA 3.0 | null | 2014-06-16T20:10:12.167 | 2014-06-16T20:10:12.167 | null | null | 548 | null |
416 | 2 | null | 412 | 7 | null | If feasible I would link related records (e.g., Dave, David, etc.) and replace them with a sequence number (1,2,3, etc.) or a [salted](http://en.wikipedia.org/wiki/Salt_%28cryptography%29) [hash of the string](http://www.cse.yorku.ca/~oz/hash.html) that is used to represent all related records (e.g., David instead of Dave).
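A minimal sketch of the salted-hash idea in Python (the salt string and the truncation to 12 hex characters are arbitrary choices here):
```
import hashlib

def pseudonymize(name, salt="some-long-secret"):
    # One-way: the same (salt, name) pair always yields the same token,
    # but the original name can't be recovered from the token.
    return hashlib.sha256((salt + name.lower()).encode("utf-8")).hexdigest()[:12]

pseudonymize("David")  # use the representative name for the whole linked group
```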
I assume that third parties need not have any idea what the real name is, otherwise you might as well give it to them.
edit: You need to define and justify what kind of operations the third party needs to be able to do. For example, what is wrong with using initials followed by a number (e.g., BOA-1, BOA-2, etc.) to disambiguate Bank of America from Benjamin Othello Ames? If that's too revealing, you could bin some of the letters or names; e.g., [A-E] -> 1, [F-J] -> 2, etc. so BOA would become 1OA, or ["Bank", "Barry", "Bruce", etc.] -> 1 so Bank of America is again 1OA.
For more information see [k-anonymity](http://en.wikipedia.org/wiki/K-anonymity).
| null | CC BY-SA 3.0 | null | 2014-06-16T20:12:39.130 | 2014-06-24T05:05:21.900 | 2014-06-24T05:05:21.900 | 381 | 381 | null |
417 | 2 | null | 387 | 1 | null | Blacklists do have value, for a number of reasons:
- They're easy to set up and scale - it's just a key/value store, and you can probably just re-use some of your caching logic for the most basic implementation.
- Depending on the size and type of the spam attack, there will probably be some very specific terms or URLs being used. It's much faster to throw that term into a blacklist than wait for your model to adapt.
- You can remove items just as quickly as you added them.
- Everybody understands how they work and any admin can use them.
The key to fighting spam is monitoring. Make sure you have some sort of interface showing which items are on your blacklist, how often they've been hit in the last 10 minutes / hour / day / month, and the ability to easily add and remove items.
You'll want to combine a number of different spam detection models and tactics. Neural nets seem to be a good suggestion, and I'd recommend looking at user behavior patterns in addition to just content. Normal humans don't do things like send batches of 1,000 emails every 30 seconds for 12 consecutive hours.
| null | CC BY-SA 3.0 | null | 2014-06-16T20:18:50.123 | 2014-06-16T20:18:50.123 | null | null | 414 | null |
418 | 1 | null | null | 16 | 12645 | I would like to know the best way to classify a data set composed of mixed types of attributes, for example textual and numerical. I know I can convert the textual attributes to boolean features, but the vocabulary is diverse and the data becomes too sparse. I also tried to classify the attribute types separately and combine the results through meta-learning techniques, but that did not work well.
| Best way to classify datasets with mixed types of attributes | CC BY-SA 3.0 | null | 2014-06-17T00:16:24.287 | 2023-04-12T19:18:51.107 | null | null | 900 | [
"machine-learning",
"classification"
] |
419 | 2 | null | 411 | 4 | null | First you need to decide what you want to do, then look for the right tool for that task.
A very general approach is to use R for first versions and to see if your approach is correct. It lacks a little in speed, but has very powerful commands and addon libraries, that you can try almost anything with it:
[http://www.r-project.org/](http://www.r-project.org/)
The second idea is if you want to understand the algorithms behind the libraries, you might wanna take a look at the Numerical Recipies. They are available for different languages and free to use for learning. If you want to use them in commercial products, you need to ourchase a licence:
[http://en.wikipedia.org/wiki/Numerical_Recipes](http://en.wikipedia.org/wiki/Numerical_Recipes)
Most of the time performance will not be the issue but finding the right algorithms and parameters for them, so it is important to have a fast scripting language instead of a monster program that first needs to compile 10 mins before calculating two numbers and putting out the result.
And a big plus in using R is that it has built-in functions or libraries for almost any kind of diagram you might wanna need to visualize your data.
If you then have a working version, it is almost easy to port it to any other language you think is more performant.
| null | CC BY-SA 3.0 | null | 2014-06-17T00:19:09.773 | 2014-06-17T00:19:09.773 | null | null | 901 | null |
420 | 2 | null | 418 | 6 | null | It is hard to answer this question without knowing more about the data. That said, I would offer the following advice:
Most machine learning techniques can handle mixed-type data. Tree based methods (such as AdaBoost and Random Forests) do well with this type of data. The more important issue is actually the dimensionality, about which you are correct to be concerned.
I would suggest that you do something to reduce that dimensionality. For example, look for the words or phrases that separate the data the best and discard the other words (note: tree based methods do this automatically).
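To make the tree-based suggestion concrete, here is a minimal scikit-learn sketch (the toy frame and column names are made up):
```
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({"age":   [23, 45, 31, 52],
                   "job":   ["nurse", "teacher", "nurse", "engineer"],
                   "label": [0, 1, 0, 1]})

# One-hot encode the textual column; the trees then branch on whichever dummies are discriminative.
X = pd.get_dummies(df[["age", "job"]], columns=["job"])
clf = RandomForestClassifier(n_estimators=100).fit(X, df["label"])
```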
| null | CC BY-SA 3.0 | null | 2014-06-17T00:39:15.990 | 2014-06-17T00:39:15.990 | null | null | 178 | null |
421 | 1 | null | null | 10 | 1572 | Does anyone know some good tutorials on online machine learning techniques?
That is, how it can be used in real-time environments, what the key differences are compared to standard (batch) machine learning methods, etc.
UPD: Thank you everyone for the answers; by "online" I mean methods that can be trained in real time, on new inputs one by one.
| Online machine learning tutorial | CC BY-SA 3.0 | null | 2014-06-17T04:31:34.067 | 2015-06-23T15:12:32.633 | 2014-06-18T07:17:45.110 | 88 | 88 | [
"machine-learning",
"education",
"beginner"
] |
422 | 1 | 510 | null | 28 | 26844 | As an extension to our great list of [publicly available datasets](https://datascience.stackexchange.com/questions/155/publicly-available-datasets), I'd like to know if there is any list of publicly available social network datasets/crawling APIs. It would be very nice if, alongside a link to the dataset/API, the characteristics of the available data were added. Such information should include, but is not limited to:
- the name of the social network;
- what kind of user information it provides (posts, profile, friendship network, ...);
- whether it allows for crawling its contents via an API (and rate: 10/min, 1k/month, ...);
- whether it simply provides a snapshot of the whole dataset.
Any suggestions and further characteristics to be added are very welcome.
| Publicly available social network datasets/APIs | CC BY-SA 3.0 | null | 2014-06-17T05:29:11.830 | 2021-02-09T04:27:20.537 | 2017-04-13T12:50:41.230 | -1 | 84 | [
"open-source",
"dataset",
"crawling"
] |
423 | 1 | null | null | 4 | 802 | I'm planning to run experiments with large datasets on a distributed system in order to evaluate efficiency gains in comparison with previous proposals.
I have a limited number of machines (roughly ten), each with 200 GB of free hard-disk space. However, I would like to run experiments on more nodes than are available to me, in order to measure scalability more precisely. Since I don't have them, I thought about using a commodity cluster, but I'm not sure about the usage policies, and I need to reliably measure execution times.
Are there commodity services that will guarantee that only my application is running at a given time?
| How to measure execution time on distributed system | CC BY-SA 3.0 | null | 2014-06-17T05:55:04.710 | 2014-08-02T14:55:37.347 | 2014-07-30T22:36:59.310 | null | 84 | [
"bigdata",
"scalability",
"distributed"
] |
424 | 1 | 440 | null | 23 | 5223 | I recently saw a cool feature that [was once available](https://support.google.com/docs/answer/3543688?hl=en) in Google Sheets: you start by writing a few related keywords in consecutive cells, say: "blue", "green", "yellow", and it automatically generates similar keywords (in this case, other colors). See more examples in [this YouTube video](http://youtu.be/dlslNhfrQmw).
I would like to reproduce this in my own program. I'm thinking of using Freebase, and it would work like this intuitively:
- Retrieve the list of given words in Freebase;
- Find their "common denominator(s)" and construct a distance metric based on this;
- Rank other concepts based on their "distance" to the original keywords;
- Display the next closest concepts.
As I'm not familiar with this area, my questions are:
- Is there a better way to do this?
- What tools are available for each step?
| How to grow a list of related words based on initial keywords? | CC BY-SA 3.0 | null | 2014-06-17T06:05:39.653 | 2020-08-06T16:18:05.960 | 2014-06-19T05:48:43.540 | 322 | 906 | [
"nlp",
"text-mining",
"freebase"
] |
425 | 2 | null | 422 | 5 | null | An example from Germany: Xing, a site similar to LinkedIn but limited to German-speaking countries.
Link to its developer central: [https://dev.xing.com/overview](https://dev.xing.com/overview)
Provides access to: user profiles, conversations between users (limited to the user itself), job advertisements, contacts and contacts of contacts, news from the network, and some geolocation API.
Yes, it has an API, but I did not find information about the rate limits. It also seems that access to some information requires the consent of the user.
| null | CC BY-SA 3.0 | null | 2014-06-17T06:32:22.883 | 2014-06-17T06:32:22.883 | null | null | 791 | null |
426 | 2 | null | 421 | 0 | null | There are plenty on YouTube, and here's a famous one by Andrew Ng on Coursera: [https://www.coursera.org/course/ml](https://www.coursera.org/course/ml)
| null | CC BY-SA 3.0 | null | 2014-06-17T06:53:44.007 | 2014-06-17T06:53:44.007 | null | null | 122 | null |
427 | 1 | 436 | null | 5 | 1666 | Is anyone using `Julia` ([http://julialang.org/](http://julialang.org/)) for professional jobs?
Or using it instead of R, Matlab, or Mathematica?
Is it a good language?
If you had to predict the next 5-10 years: do you think it will grow enough to become a standard in data science, like R or similar tools?
| Does anyone use Julia programming language? | CC BY-SA 3.0 | null | 2014-06-17T07:46:39.783 | 2017-05-10T03:58:56.370 | 2017-05-10T03:58:56.370 | 31513 | 908 | [
"tools",
"julia"
] |
428 | 2 | null | 59 | 3 | null | This critique is no longer justified:
While it is true that most of the standard and most respected R libraries were restricted to in-memory computations, there is a growing number of specialized libraries to deal with data that doesn't fit into memory.
For instance, for random forests on large datasets, you have the library `bigrf`. More info here: [http://cran.r-project.org/web/packages/bigrf/](http://cran.r-project.org/web/packages/bigrf/)
Another area of growth is R's connectedness to big data environments like hadoop, which opens another world of possibilities.
| null | CC BY-SA 3.0 | null | 2014-06-17T09:33:37.230 | 2014-06-17T09:33:37.230 | null | null | 906 | null |
429 | 2 | null | 422 | 8 | null | It's not a social network per se, but Stack Exchange publishes its entire database dump periodically:
- Stackexchange data dump hosted on the archive.org
- Post describing the database dump schema
You can extract some social information by analyzing which users ask and answer each other's questions. One nice thing is that since posts are tagged, you can analyze sub-communities easily.
| null | CC BY-SA 3.0 | null | 2014-06-17T09:56:49.180 | 2014-06-17T09:56:49.180 | 2017-03-20T09:39:09.267 | -1 | 846 | null |
430 | 1 | 525 | null | 14 | 1612 | I'm trying to understand how all the "big data" components play together in a real-world use case, e.g. Hadoop, MongoDB/NoSQL, Storm, Kafka, ... I know that this is quite a wide range of tools used for different kinds of tasks, but I'd like to get to know more about their interaction in applications, e.g. thinking of machine learning for an app, web app, or online shop.
I have visitor/session data, transaction data, etc., and I store that; but if I want to make recommendations on the fly, I can't run slow map/reduce jobs for that on some big database of logs I have. Where can I learn more about the infrastructure aspects? I think I can use most of the tools on their own, but plugging them into each other seems to be an art of its own.
Are there any public examples/use cases etc available? I understand that the individual pipelines strongly depend on the use case and the user, but just examples will probably be very useful to me.
| Looking for example infrastructure stacks/workflows/pipelines | CC BY-SA 3.0 | null | 2014-06-17T10:37:22.987 | 2014-06-23T13:36:51.493 | 2014-06-17T13:37:47.400 | 84 | 913 | [
"machine-learning",
"bigdata",
"efficiency",
"scalability",
"distributed"
] |
431 | 2 | null | 422 | 2 | null | A small collection of such links can be found at [here](http://lgylym.github.io/big-graph/dataset.html). Many of them are social graphs.
| null | CC BY-SA 3.0 | null | 2014-06-17T12:37:17.150 | 2014-06-17T12:37:17.150 | null | null | 743 | null |
432 | 2 | null | 421 | 5 | null |
- There is a very nice library of online machine learning algorithms from a group at NTU, called LIBOL. This would be a very good place to start experimenting with the algorithms.
The accompanying user guide, and associated JMLR publication are very nice introductions to the basic algorithms in this field.
- Avrim Blum has an older and more technical survey paper on online learning algorithms.
| null | CC BY-SA 3.0 | null | 2014-06-17T12:41:58.900 | 2015-06-23T15:12:32.633 | 2015-06-23T15:12:32.633 | 241 | 241 | null |
433 | 2 | null | 430 | 8 | null | One of the most detailed and clear explanations of setting up a complex analytics pipeline is from the folks over at [Twitch](http://www.twitch.tv/).
They give detailed motivations of each of the architecture choices for collection, transportation, coordination, processing, storage, and querying their data.
Compelling reading! Find it [here](http://blog.twitch.tv/2014/04/twitch-data-analysis-part-1-the-twitch-statistics-pipeline/) and [here](http://blog.twitch.tv/2014/04/twitch-data-analysis-part-2-architectural-decisions/).
| null | CC BY-SA 3.0 | null | 2014-06-17T13:31:30.503 | 2014-06-17T13:31:30.503 | null | null | 241 | null |
434 | 2 | null | 427 | 7 | null | There is really no question here, as you ask for pure conjecture, but consider at least that
- this week brings JuliaCon, the first Julia conference
- you could search GitHub and/or the registered Julia modules
| null | CC BY-SA 3.0 | null | 2014-06-17T13:44:43.807 | 2014-06-17T13:44:43.807 | null | null | 515 | null |
435 | 1 | null | null | 9 | 331 | I have a huge dataset from a relational database which I need to create a classification model for. Normally for this situation I would use [Inductive Logic Programming](http://en.wikipedia.org/wiki/Inductive_logic_programming) (ILP), but due to special circumstances I can't do that.
The other way to tackle this would be just to try to aggregate the values when I have a foreign relation. However, I have thousands of important and distinct rows for some nominal attributes (e.g.: A patient with a relation to several distinct drug prescriptions). So, I just can't do that without creating a new attribute for each distinct row of that nominal attribute, and furthermore most of the new columns would have NULL values if I do that.
Is there any non-ILP algorithm that allows me to data mine relational databases without resorting to techniques like pivoting, which would create thousands of new columns?
| Relational Data Mining without ILP | CC BY-SA 3.0 | null | 2014-06-17T13:46:06.367 | 2015-04-26T18:23:32.323 | 2014-06-17T14:26:40.380 | 84 | 917 | [
"data-mining",
"classification",
"relational-dbms"
] |
436 | 2 | null | 427 | 15 | null | I personally have used `Julia` for a good number of professional projects, and while, as Dirk mentioned, this is purely conjecture, I can give some insights on where Julia really stands out. The question of whether or not these reasons will prove enough to have `Julia` succeed as a language is anyone's guess.
- Distributed Systems: Julia is the easiest language I've ever dealt with in terms of building distributed systems. This is becoming more and more relevant in computing, and will potentially become a deciding factor, but the question of whether or not Julia's relative ease decides this is up for debate
- JIT Performance: Julia's JIT compiler is extremely fast, and while there is a lot of debate as to how accurate these benchmark numbers are, the Julia Website shows a series of relevant benchmarks
- Community: This is an area where Julia just isn't quite there. The community that is there is generally supportive, but not quite as knowledgeable as the R or python communities, which is a definite minus.
- Extensibility: This is another place where Julia is currently lacking; there is a large disconnect between the implied code patterns that Julia steers you toward and what it can actually support. The type system is currently overly bulky and difficult to use effectively.
Again, can't say what this means for the future, but these are just a couple of relevant points when it comes to evaluating `Julia` in my opinion.
| null | CC BY-SA 3.0 | null | 2014-06-17T14:55:47.103 | 2014-06-17T14:55:47.103 | null | null | 548 | null |
437 | 1 | 444 | null | 5 | 157 | I think that the bootstrap can be useful in my work, where we have a lot of variables whose distributions we don't know. So, simulations could help.
What are good sources to learn about Bootstrap/other useful simulation methods?
| What are good sources to learn about Bootstrap? | CC BY-SA 3.0 | null | 2014-06-17T18:13:46.230 | 2014-06-17T22:29:36.720 | null | null | 199 | [
"data-mining",
"statistics",
"education"
] |
439 | 2 | null | 412 | 8 | null | Halfway through reading your question, I realized Levenshtein distance could be a nice solution to your problem. It's good to see that you have a link to a paper on the topic; let me see if I can shed some light on what a Levenshtein solution would look like.
Levenshtein distance is used across many industries for entity resolution. What makes it useful is that it is a measure of the difference between two sequences. In the case of string comparison, it is just sequences of characters.
This could help solve your problem by giving you one number that measures how similar the text of another field is.
Here is an example of a basic way of using Levenshtein with the data you gave:
![enter image description here](https://i.stack.imgur.com/vpaAH.png)
This provides an OK solution: the distance of 8 gives some indication of a relationship, and it is very PII-compliant. However, it is still not super useful. Let's see what happens if we do some text magic to take only the first initial of the first name and the full last name, dropping anything in the middle:
![enter image description here](https://i.stack.imgur.com/JlPu9.png)
As you can see the Levenshtein distance of 0 is pretty indicative of a relationship. Commonly data providers will combine a bunch of Levenshtein permutations of the first and last names with 1, 2, or all of the characters just to give some dimensionality as to how entities are related while still maintaining anonymity within the data.
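For completeness, a small pure-Python implementation of the distance used above:
```
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

levenshtein("kitten", "sitting")  # 3
```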
| null | CC BY-SA 3.0 | null | 2014-06-17T18:42:55.423 | 2014-06-17T18:42:55.423 | null | null | 780 | null |
440 | 2 | null | 424 | 16 | null | The [word2vec algorithm](https://code.google.com/p/word2vec/) may be a good way to retrieve more elements for a list of similar words. It is an unsupervised "deep learning" algorithm that has previously been demonstrated with Wikipedia-based training data (helper scripts are provided on the Google code page).
There are currently [C](https://code.google.com/p/word2vec/) and [Python](http://radimrehurek.com/gensim/models/word2vec.html) implementations. This [tutorial](http://radimrehurek.com/2014/02/word2vec-tutorial) by [Radim Řehůřek](http://radimrehurek.com/), the author of the [Gensim topic modelling library](http://radimrehurek.com/gensim/), is an excellent place to start.
The ["single topic"](http://radimrehurek.com/2014/02/word2vec-tutorial#single) demonstration on the tutorial is a good example of retreiving similar words to a single term (try searching on 'red' or 'yellow'). It should be possible to extend this technique to find the words that have the greatest overall similarity to a set of input words.
| null | CC BY-SA 3.0 | null | 2014-06-17T18:59:14.947 | 2014-06-17T18:59:14.947 | null | null | 922 | null |
441 | 1 | 449 | null | 34 | 12878 | With Hadoop 2.0 and YARN Hadoop is supposedly no longer tied only map-reduce solutions. With that advancement, what are the use cases for Apache Spark vs Hadoop considering both sit atop of HDFS? I've read through the introduction documentation for Spark, but I'm curious if anyone has encountered a problem that was more efficient and easier to solve with Spark compared to Hadoop.
| What are the use cases for Apache Spark vs Hadoop | CC BY-SA 3.0 | null | 2014-06-17T20:48:35.267 | 2020-04-23T16:00:54.130 | 2020-04-23T16:00:54.130 | 94454 | 426 | [
"apache-spark",
"apache-hadoop",
"distributed",
"knowledge-base",
"cloud-computing"
] |
442 | 2 | null | 418 | 11 | null | Christopher's answers seem very reasonable. In particular tree based methods do well with this sort of data because they branch on discriminative features. It's a little hard to say without knowing your specific application, but in general if you think that some of your features might be significantly more discriminative than others, you could try some dimensionality reduction techniques to clean this up a bit.
Also if you use a dimensionality reduction technique you end up getting a slightly more robust format for your feature vector (they generally end up being straight numerical vectors instead of mixed data types), which might let you leverage different methods. You could also look into hand engineering features. With properly hand engineered features `Random Forest` will get you very close to state of the art on most tasks.
| null | CC BY-SA 3.0 | null | 2014-06-17T21:17:31.210 | 2014-06-17T21:17:31.210 | null | null | 548 | null |
443 | 2 | null | 412 | 6 | null | One option (depending on your dataset size) is to just provide edit distances (or other measures of similarity you're using) as an additional dataset.
E.g. (a rough sketch in code follows this list):
- Generate a set of unique names in the dataset
- For each name, calculate edit distance to each other name
- Generate an ID or irreversible hash for each name
- Replace names in the original dataset with this ID
- Provide matrix of edit distances between ID numbers as new dataset
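A rough sketch of those steps in Python (the salt, the ID truncation, and the use of difflib's ratio as the similarity measure are placeholder choices; `names` stands in for the real name column, and you could swap in an edit distance instead):
```
import hashlib
from difflib import SequenceMatcher
from itertools import combinations

names = ["Dave", "David", "Tim"]   # stand-in for the real column of unique names
ids = {n: hashlib.sha1(("salt" + n).encode()).hexdigest()[:8] for n in names}

# Pairwise similarity keyed by the anonymous IDs, shipped instead of the names.
similarity = {(ids[a], ids[b]): SequenceMatcher(None, a, b).ratio()
              for a, b in combinations(names, 2)}
```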
Though there's still a lot that could be done to de-anonymise the data even from these.
E.g. if "Tim" is known to be the most popular name for a boy, frequency counting of IDs that closely match the known percentage of Tims across the population might give that away. From there you could then look for names with an edit distance of 1, and conclude that those IDs might refer to "Tom" or "Jim" (when combined with other info).
| null | CC BY-SA 3.0 | null | 2014-06-17T22:28:36.070 | 2014-06-18T08:08:02.163 | 2014-06-18T08:08:02.163 | 474 | 474 | null |
444 | 2 | null | 437 | 4 | null | A classic book is by B. Efron who created the technique:
- Bradley Efron; Robert Tibshirani (1994). An Introduction to the Bootstrap. Chapman & Hall/CRC. ISBN 978-0-412-04231-7.
| null | CC BY-SA 3.0 | null | 2014-06-17T22:29:36.720 | 2014-06-17T22:29:36.720 | null | null | 418 | null |
445 | 2 | null | 370 | 5 | null | Is your Masters in Computer Science? Statistics?
Is 'data science' going to be at the center of your thesis? Or a side topic?
I'll assume your in Statistics and that you want to focus your thesis on a 'data science' problem. If so, then I'm going to go against the grain and suggest that you should not start with a data set or an ML method. Instead, you should seek an interesting research problem that's poorly understood or where ML methods have not yet been proven successful, or where there are many competing ML methods but none seem better than others.
Consider this data source: [Stanford Large Network Dataset Collection](http://snap.stanford.edu/data/). While you could pick one of these data sets, make up a problem statement, and then run some list of ML methods, that approach really doesn't tell you very much about what data science is all about, and in my opinion doesn't lead to a very good Masters thesis.
Instead, you might do this: look for all the research papers that use ML on some specific category -- e.g. Collaboration networks (a.k.a. co-authorship). As you read each paper, try to find out what they were able to accomplish with each ML method and what they weren't able to address. Especially look for their suggestions for "future research".
Maybe they all use the same method, but never tried competing ML methods. Or maybe they don't adequately validate their results, or maybe there data sets are small, or maybe their research questions and hypothesis were simplistic or limited.
Most important: try to find out where this line of research is going. Why are they even bothering to do this? What is significant about it? Where and why are they encountering difficulties?
| null | CC BY-SA 3.0 | null | 2014-06-17T23:30:45.897 | 2014-06-17T23:30:45.897 | null | null | 609 | null |
446 | 2 | null | 412 | 19 | null | One of the references I mentioned in the OP led me to a potential solution that seems quite powerful, described in "Privacy-preserving record linkage using Bloom filters" ([doi:10.1186/1472-6947-9-41](http://www.biomedcentral.com/1472-6947/9/41)):
>
A new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers.
The article goes into detail about the method, which I will summarize here to the best of my ability.
A Bloom filter is a fixed-length series of bits storing the results of a fixed set of independent hash functions, each computed on the same input value. The output of each hash function should be an index value from among the possible indexes in the filter; i.e., if you have a 0-indexed series of 10 bits, hash functions should return (or be mapped to) values from 0 to 9.
The filter starts with each bit set to 0. After hashing the input value with each function from the set of hash functions, each bit corresponding to an index value returned by any hash function is set to 1. If the same index is returned by more than one hash function, the bit at that index is only set once. You could consider the Bloom filter to be a superposition of the set of hashes onto the fixed range of bits.
The protocol described in the above-linked article divides strings into n-grams, which are in this case sets of characters. As an example, `"hello"` might yield the following set of 2-grams:
```
["_h", "he", "el", "ll", "lo", "o_"]
```
Padding the front and back with spaces seems to be generally optional when constructing n-grams; the examples given in the paper that proposes this method use such padding.
Each n-gram can be hashed to produce a Bloom filter, and this set of Bloom filters can be superimposed on itself (bitwise OR operation) to produce the Bloom filter for the string.
If the filter contains many more bits than there are hash functions or n-grams, arbitrary strings are relatively unlikely to produce exactly the same filter. However, the more n-grams two strings have in common, the more bits their filters will ultimately share. You can then compare any two filters `A, B` by means of their Dice coefficient:
>
D(A, B) = 2h / (a + b)
Where `h` is the number of bits that are set to 1 in both filters, `a` is the total number of bits set to 1 in filter A, and `b` is the total number of bits set to 1 in filter B. If the strings are exactly the same, the Dice coefficient will be 1; the more they differ, the closer the coefficient will be to `0`.
Because the hash functions are mapping an indeterminate number of unique inputs to a small number of possible bit indexes, different inputs may produce the same filter, so the coefficient indicates only a probability that the strings are the same or similar. The number of different hash functions and the number of bits in the filter are important parameters for determining the likelihood of false positives - pairs of inputs that are much less similar than the Dice coefficient produced by this method predicts.
I found [this tutorial](http://billmill.org/bloomfilter-tutorial/) to be very helpful for understanding the Bloom filter.
There is some flexibility in the implementation of this method; see also [this 2010 paper](https://www.uni-due.de/~hq0215/documents/2010/Bachteler_2010_An_Empirical_Comparison_Of_Approaches_To_Approximate_String_Matching_In_Private_Record_Linkage.pdf) (also linked at the end of the question) for some indications of how performant it is in relation to other methods, and with various parameters.
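A toy illustration of the idea in Python (the filter length, the number of hash functions, and the salted-SHA1 trick used to stand in for independent hashes are all arbitrary choices here, not the parameters from the paper):
```
import hashlib

def bigrams(name, pad="_"):
    s = pad + name.lower() + pad
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name, n_bits=100, n_hashes=4):
    bits = [0] * n_bits
    for gram in bigrams(name):
        for k in range(n_hashes):
            # Salting one hash function with k simulates k independent hashes.
            digest = hashlib.sha1(("%d:%s" % (k, gram)).encode()).hexdigest()
            bits[int(digest, 16) % n_bits] = 1
    return bits

def dice(a, b):
    h = sum(x & y for x, y in zip(a, b))   # bits set in both filters
    return 2.0 * h / (sum(a) + sum(b))     # denominators: bits set in each filter

dice(bloom("CHRISTOPH BAUER"), bloom("C J BAUER"))     # fairly high
dice(bloom("CHRISTOPH BAUER"), bloom("FRANZ HELLER"))  # much lower
```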
| null | CC BY-SA 3.0 | null | 2014-06-17T23:38:20.133 | 2014-06-18T15:08:44.507 | 2014-06-18T15:08:44.507 | 322 | 322 | null |
447 | 1 | null | null | 9 | 247 | I have a large number of samples which represent Manchester encoded bit streams as audio signals. The frequency at which they are encoded is the primary frequency component when it is high, and there is a consistent amount of white noise in the background.
I have manually decoded these streams, but I was wondering if I could use some sort of machine learning technique to learn the encoding schemes. This would save a great deal of time manually recognizing these schemes. The difficulty is that different signals are encoded differently.
Is it possible to build a model which can learn to decode more than one encoding scheme? How robust would such a model be, and what sort of techniques would I want to employ? [Independent Component Analysis](http://en.wikipedia.org/wiki/Independent_component_analysis) (ICA) seems like could be useful for isolating the frequency I care about, but how would I learn the encoding scheme?
| Learning signal encoding | CC BY-SA 3.0 | null | 2014-06-18T03:19:07.557 | 2014-06-25T06:58:43.670 | 2014-06-19T14:55:59.483 | 84 | 890 | [
"machine-learning",
"data-mining",
"scalability",
"algorithms",
"feature-selection"
] |
448 | 2 | null | 441 | 1 | null | Not sure about YARN, but I think that Spark makes a real difference compared to Hadoop (advertised as 100 times faster) if the data fits nicely in the memory of the computational nodes, simply because it avoids hard-disk access. If the data doesn't fit in memory, there's still some gain because of buffering.
| null | CC BY-SA 3.0 | null | 2014-06-18T05:35:35.700 | 2014-06-18T05:35:35.700 | null | null | 418 | null |
449 | 2 | null | 441 | 43 | null | Hadoop means HDFS, YARN, MapReduce, and a lot of other things. Do you mean Spark vs MapReduce? Because Spark runs on/with Hadoop, which is rather the point.
The primary reason to use Spark is for speed, and this comes from the fact that its execution can keep data in memory between stages rather than always persist back to HDFS after a Map or Reduce. This advantage is very pronounced for iterative computations, which have tens of stages each of which is touching the same data. This is where things might be "100x" faster. For simple, one-pass ETL-like jobs for which MapReduce was designed, it's not in general faster.
Another reason to use Spark is its nicer high-level language compared to MapReduce. It provides a functional programming-like view that mimics Scala, which is far nicer than writing MapReduce code. (Although you have to either use Scala, or adopt the slightly-less-developed Java or Python APIs for Spark). [Crunch](http://crunch.apache.org) and [Cascading](http://cascading.org) already provide a similar abstraction on top of MapReduce, but this is still an area where Spark is nice.
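For a sense of the style, a small PySpark sketch (the HDFS path is made up, and `sc` is assumed to be an existing `SparkContext`):
```
# Word count with the RDD API -- each step is a transformation that Spark can
# keep in memory between stages.
counts = (sc.textFile("hdfs:///logs/*.txt")
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))
counts.take(10)
```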
Finally Spark has as-yet-young but promising subprojects for ML, graph analysis, and streaming, which expose a similar, coherent API. With MapReduce, you would have to turn to several different other projects for this (Mahout, Giraph, Storm). It's nice to have it in one package, albeit not yet 'baked'.
Why would you not use Spark? [paraphrasing](https://www.quora.com/Apache-Spark/Assuming-you-have-a-system-with-both-Hadoop-and-Spark-installed-say-under-Yarn-is-there-any-reason-to-use-Hadoop-map-reduce-instead-of-the-equivalent-Spark-commands) myself:
- Spark is primarily Scala, with ported Java APIs; MapReduce might be friendlier and more native for Java-based developers
- There is more MapReduce expertise out there now than Spark
- For the data-parallel, one-pass, ETL-like jobs MapReduce was designed for, MapReduce is lighter-weight compared to the Spark equivalent
- Spark is fairly mature, and so is YARN now, but Spark-on-YARN is still pretty new. The two may not be optimally integrated yet. For example until recently I don't think Spark could ask YARN for allocations based on number of cores? That is: MapReduce might be easier to understand, manage and tune
| null | CC BY-SA 3.0 | null | 2014-06-18T07:27:22.677 | 2014-06-18T07:27:22.677 | null | null | 21 | null |