Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
228 | 1 | 249 | null | 8 | 75 | Going through the presentation and material on Summingbird by Twitter, one of the reasons mentioned for using Storm and Hadoop clusters together in Summingbird is that processing through Storm results in a cascading of error. To avoid this cascading and accumulation of error, the Hadoop cluster is used to batch-process the data, and the Storm results are discarded after the same data has been processed by Hadoop.
What causes this accumulation of error, and why is it not present in Hadoop? Since I have not worked with Storm, I do not know the reason for it. Is it because Storm uses some approximate algorithm to process the data in real time, or is the cause something else?
| Cascaded Error in Apache Storm | CC BY-SA 3.0 | null | 2014-06-01T12:51:25.040 | 2014-06-10T03:42:51.637 | null | null | 339 | [
"bigdata",
"apache-hadoop"
] |
229 | 2 | null | 227 | 9 | null | This is kind of like asking about the tradeoffs between a frying pan and your drawer of silverware. They are not two things you compare, really. You might use them together as part of a larger project.
Hadoop itself is not one thing, but a name for a federation of services, like HDFS, Hive, HBase, MapReduce, etc. Storm is something you use with some of these services, like HDFS or HBase. It is a stream-processing framework. There are others within the extended Hadoop ecosystem, like Spark Streaming.
When would you choose a stream-processing framework? When you need to react to new data in near real time. If you need this kind of tool, you deploy this kind of tool too.
| null | CC BY-SA 3.0 | null | 2014-06-01T19:48:41.693 | 2014-06-01T19:48:41.693 | null | null | 21 | null |
230 | 2 | null | 223 | 7 | null | In general, you don't want to use XML tags to tag documents in this way because tags may overlap.
[UIMA](http://uima.apache.org/), [GATE](http://gate.ac.uk/) and similar NLP frameworks denote the tags separately from the text. Each tag, such as `Person`, `ACME`, `John` etc. is stored as the position that the tag begins and the position that it ends. So, the tag `ACME` would be stored as starting at position 11 and ending at position 17.
| null | CC BY-SA 3.0 | null | 2014-06-02T15:03:35.940 | 2014-06-02T15:03:35.940 | null | null | 178 | null |
231 | 1 | 287 | null | 10 | 6442 | I want to test the accuracy of a methodology. I ran it ~400 times, and I got a different classification for each run. I also have the ground truth, i.e., the real classification to test against.
For each classification I computed a confusion matrix. Now I want to aggregate these results in order to get the overall confusion matrix. How can I achieve it?
May I sum all confusion matrices in order to obtain the overall one?
| How to get an aggregate confusion matrix from n different classifications | CC BY-SA 3.0 | null | 2014-06-05T09:00:27.950 | 2014-06-11T09:39:34.373 | 2014-06-05T15:21:40.640 | 84 | 133 | [
"classification",
"confusion-matrix",
"accuracy"
] |
232 | 2 | null | 231 | 4 | null | There are a few ways to achieve your "master confusion matrix".
- Sum all the confusion matrices together: like you suggested, summing them gives an overall confusion matrix. The problem with this is that the raw totals can be hard to interpret.
- Average the entries. This method is the same as number one, but you divide each entry by the number of trials (~400 in your case). This would be my preferred method because then you can actually translate each category to a (mean) +- (an error measurement) and actually see which categories are the most volatile or stable. Be careful with interpreting this 'error measurement' though.
- Report a problem-specific measurement of the confusion numbers. For example, if your numbers have outliers, medians would be preferred over means.
There are other statistics that are possible to report as well. You can redo the method to keep track of individual classifications. Then you can report other important stats like '% of classifications that stay the same and are accurate', etc.
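A minimal NumPy sketch of the first two options; the matrices below are random placeholders standing in for your ~400 confusion matrices:

```python
import numpy as np

# Placeholder input: a list of ~400 confusion matrices from the repeated runs,
# each an (n_classes x n_classes) array with true classes on the rows.
matrices = [np.random.randint(0, 50, size=(3, 3)) for _ in range(400)]

stacked = np.stack(matrices)       # shape: (n_runs, n_classes, n_classes)
summed = stacked.sum(axis=0)       # option 1: overall counts
mean = stacked.mean(axis=0)        # option 2: average entry per run
std = stacked.std(axis=0, ddof=1)  # a rough per-entry "error measurement"

print(summed)
print(mean, "+/-", std)
```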
| null | CC BY-SA 3.0 | null | 2014-06-06T14:55:18.867 | 2014-06-06T14:55:18.867 | null | null | 375 | null |
233 | 2 | null | 130 | 3 | null | To complete Damien's answer, an example of dimensionality reduction in NLP is a [topic model](http://en.wikipedia.org/wiki/Topic_model), where you represent the document by a vector indicating the weights of its constituent topics.
| null | CC BY-SA 3.0 | null | 2014-06-08T07:03:43.710 | 2014-06-08T07:03:43.710 | null | null | 381 | null |
234 | 1 | null | null | 5 | 470 | As Yann LeCun [mentioned](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/chisdw1), a number of PhD programs in data science will be popping up in the next few years.
[NYU](http://datascience.nyu.edu/academics/programs/) already has one, where Prof. LeCun is right now.
A statistics or CS PhD in machine learning is probably more rigorous than a data science one. Is a data science PhD for less mathy people like myself?
Are these cash cow programs?
There is a huge industry demand for big data, but what is the academic value of these programs, as you probably can't be a professor or publish any papers?
| Data science Ph.D. program, what do you think? | CC BY-SA 3.0 | null | 2014-06-09T04:43:03.497 | 2014-06-10T03:21:56.473 | null | null | 386 | [
"knowledge-base"
] |
235 | 1 | 237 | null | 3 | 1572 | Data visualization is an important sub-field of data science, and Python programmers need toolkits available to them.
Is there a Python API to Tableau?
Are there any Python based data visualization toolkits?
| Are there any python based data visualization toolkits? | CC BY-SA 4.0 | null | 2014-06-09T08:34:29.337 | 2019-06-08T03:11:24.957 | 2019-06-08T03:11:24.957 | 29169 | 122 | [
"python",
"visualization"
] |
236 | 2 | null | 234 | 2 | null | No-one knows since no-one's completed one of these PhD programs yet! However, I would base my decision on the syllabus and the teachers. It all depends on what you want to do: industry or academia?
| null | CC BY-SA 3.0 | null | 2014-06-09T18:02:00.613 | 2014-06-09T18:02:00.613 | null | null | 381 | null |
237 | 2 | null | 235 | 11 | null | There is a Tableau API and you can use Python with it, but maybe not in the sense that you think. There is a Data Extract API that you could use to import your data into Python and do your visualizations there, so I do not know if this is going to answer your question entirely.
As in the first comment, you can use Matplotlib from the [Matplotlib website](http://www.matplotlib.org), or you could install Canopy from Enthought, which has it available. There is also Pandas, which you could also use for data analysis and some visualizations. And there is a package called `ggplot`, which is used in `R` a lot but is also made for Python; you can find it here: [ggplot for python](https://pypi.python.org/pypi/ggplot).
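To show how little is needed to get started, a minimal pandas/matplotlib sketch (the data and column names are made up purely for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up data for illustration only.
df = pd.DataFrame({"month": range(1, 13),
                   "sales": [3, 4, 6, 5, 8, 9, 11, 10, 12, 14, 13, 15]})

# pandas delegates plotting to matplotlib under the hood.
df.plot(x="month", y="sales", kind="line", marker="o", title="Monthly sales")
plt.xlabel("Month")
plt.ylabel("Sales")
plt.show()
```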
The Tableau data extract API and some information about it can be found [at this link](http://www.tableausoftware.com/new-features/data-engine-api-0). There are a few web sources that I found concerning it using duckduckgo [at this link](https://duckduckgo.com/?q=tableau%20PYTHON%20API&kp=1&kd=-1).
Here are some samples:
[Link 1](https://www.interworks.com/blogs/bbickell/2012/12/06/introducing-python-tableau-data-extract-api-csv-extract-example)
[Link 2](http://ryrobes.com/python/building-tableau-data-extract-files-with-python-in-tableau-8-sample-usage/)
[Link 3](http://nbviewer.ipython.org/github/Btibert3/tableau-r/blob/master/Python-R-Tableau-Predictive-Modeling.ipynb)
As far as an API like matplotlib goes, I cannot say for certain that one exists. Hopefully this gives some sort of reference to help answer your question.
Also, to help avoid closure flags and downvotes, you should try to show some of what you have tried to do or find; this makes for a better question and helps to elicit responses.
| null | CC BY-SA 3.0 | null | 2014-06-09T19:52:41.847 | 2014-06-09T19:52:41.847 | null | null | 59 | null |
238 | 2 | null | 234 | 1 | null | I think this question assumes a false premise. As a student at NYU, I only know of a Masters in Data Science. You linked to a page that confirms this.
It's hard to gauge the benefit of a program that doesn't exist yet.
| null | CC BY-SA 3.0 | null | 2014-06-09T21:36:44.297 | 2014-06-09T21:36:44.297 | null | null | 395 | null |
241 | 2 | null | 234 | 3 | null | It seems to me that the premise of a PhD is to expand knowledge in some little slice of the world. Since a "data scientist" is by nature somewhat of a jack-of-all-trades, it does seem a little odd to me. A masters program seems much more appropriate.
What do you hope to gain from a PhD? If the rigor scares (or bores) you, then what about a more applied area? Signal processing, robotics, applied physics, operations research, etc.
| null | CC BY-SA 3.0 | null | 2014-06-09T21:51:53.793 | 2014-06-09T21:51:53.793 | null | null | 403 | null |
242 | 2 | null | 227 | 13 | null | MapReduce: A fault tolerant distributed computational framework. MapReduce allows you to operate over huge amounts of data- with a lot of work put in to prevent failure due to hardware. MapReduce is a poor choice for computing results on the fly because it is slow. (A typical MapReduce job takes on the order of minutes or hours, not microseconds)
A MapReduce job takes a file (or some data store) as an input and writes a file of results. If you want these results available to an application, it is your responsibility to put this data in a place that is accessible. This is likely slow, and there will be a lag between the values you can display, and the values that represent your system in its current state.
An important distinction to make when considering using MapReduce in building realtime systems is that of training your model, and applying your model. If you think your model parameters do not change quickly, you can fit them with MapReduce, and then have a mechanism for accessing these pre-fit parameters when you want to apply your model.
Storm: A real-time, streaming computational system. Storm is an online framework, meaning, in this sense, a service that interacts with a running application. In contrast to MapReduce, it receives small pieces of data (not a whole file) as they are processed in your application. You define a DAG of operations to perform on the data. A common and simple use case for Storm is tracking counters, and using that information to populate a real-time dashboard.
Storm doesn't have anything (necessarily) to do with persisting your data. Here, streaming is another way to say keeping the information you care about and throwing the rest away. In reality, you probably have a persistence layer in your application that has already recorded the data, and so this is a good and justified separation of concerns.
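To make the batch-vs-stream contrast concrete, here is a plain-Python sketch; this is not the Storm or Hadoop API, just the two processing styles in miniature:

```python
from collections import Counter

# Batch style (MapReduce-like): read a complete input file, emit results at the end.
def batch_word_count(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts  # only available after the whole file has been processed

# Streaming style (Storm-like): update state for every incoming event,
# so a dashboard can read `live_counts` at any moment.
live_counts = Counter()

def on_event(event_words):
    live_counts.update(event_words)
```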
If you want to know more...
If you would like to learn more about real-time systems that fit parameters with MR and apply the models a different way, [here are slides for a talk I gave on building real-time recommendation engines on HBase.](http://www.slideshare.net/cloudera/hbasecon-2013-24063525)
An excellent paper that marries real-time counting and persistence in an interesting way is [Google News Personalization: Scalable Online Collaborative Filtering](http://dl.acm.org/citation.cfm?id=1242610)
Another interesting marriage of MR and Storm is [SummingBird](https://github.com/twitter/summingbird/wiki). Summingbird allows you to define data analysis operations that can be applied via Storm or MR.
| null | CC BY-SA 3.0 | null | 2014-06-09T21:57:30.240 | 2014-06-10T23:33:28.443 | 2014-06-10T23:33:28.443 | 406 | 406 | null |
243 | 2 | null | 184 | 5 | null | A variety of methods are available to the user. The support documentation gives walkthroughs and tips for when one or another model is most appropriate.
[This page](https://developers.google.com/prediction/docs/pmml-schema) shows the following learning methods:
- "AssociationModel"
- "ClusteringModel"
- "GeneralRegressionModel"
- "MiningModel"
- "NaiveBayesModel"
- "NeuralNetwork"
- "RegressionModel"
- "RuleSetModel"
- "SequenceModel"
- "SupportVectorMachineModel"
- "TextModel"
- "TimeSeriesModel"
- "TreeModel"
EDIT: I don't see any specific information about the algorithms, though. For example, does the tree model use information gain or gini index for splits?
| null | CC BY-SA 3.0 | null | 2014-06-10T01:36:27.520 | 2014-06-10T01:43:47.883 | 2014-06-10T01:43:47.883 | 432 | 432 | null |
244 | 2 | null | 138 | 4 | null | Coming from a programmer's perspective, frameworks rarely target performance as the highest priority. If your library is going to be widely leveraged, the things people are likely to value most are ease of use, flexibility, and reliability.
Performance is generally valued in secondary competitive libraries. "X library is better because it's faster." Even then very frequently those libraries will trade off the most optimal solution for one that can be widely leveraged.
By using any framework you are inherently taking a risk that a faster solution exists. I might go so far as to say that a faster solution almost always exists.
Writing something yourself is not a guarantee of performance, but if you know what you are doing and have a fairly limited set of requirements it can help.
An example might be JSON parsing. There are a hundred libraries out there for a variety of languages that will turn JSON into a referable object and vice versa. I know of one implementation that does it all in CPU registers. It's measurably faster than all other parsers, but it is also very limited and that limitation will vary based on what CPU you are working with.
Is the task of building a high-performance, environment-specific JSON parser a good idea? I would leverage a respected library 99 times out of 100. In that one separate instance, a few extra CPU cycles multiplied by a million iterations would make the development time worth it.
| null | CC BY-SA 3.0 | null | 2014-06-10T01:40:23.263 | 2014-06-10T01:40:23.263 | null | null | 434 | null |
245 | 2 | null | 234 | 3 | null | Computer Science is itself a multi-disciplinary field which has varying requirements among universities. For example, Stockholm University does not require any math above algebra for its CS programs (some courses may have higher requirements, but not often).
I am not sure what you mean by a machine learning program being more rigorous. They are just two different programs. Data Science would likely take a broader view and focus on application and management (business courses are maybe on offer?). The research could be rigorous in its own right, but it definitely won't be tailored to someone who wants to optimize new algorithms or solve the low-level problems of machine learning.
I don't see the Ph.D program listed yet in the link you provided. Will you please follow up here if you get more specific information?
| null | CC BY-SA 3.0 | null | 2014-06-10T01:54:40.647 | 2014-06-10T01:54:40.647 | null | null | 432 | null |
246 | 2 | null | 134 | 4 | null | There will definitely be a translation task at the end if you prototype using just mongo.
When you run a MapReduce task on mongodb, it has the data source and structure built in. When you eventually convert to hadoop, your data structures might not look the same. You could leverage the mongodb-hadoop connector to access mongo data directly from within hadoop, but that won't be quite as straightforward as you might think. The time to figure out how exactly to do the conversion most optimally will be easier to justify once you have a prototype in place, IMO.
While you will need to translate mapreduce functions, the basic pseudocode should apply well to both systems. You won't find anything that can be done in MongoDB that can't be done using Java or that is significantly more complex to do with Java.
| null | CC BY-SA 3.0 | null | 2014-06-10T02:42:02.050 | 2014-06-10T02:42:02.050 | null | null | 434 | null |
247 | 2 | null | 235 | 12 | null | [Bokeh](http://bokeh.pydata.org/) is an excellent data visualization library for python.
[NodeBox](http://www.cityinabottle.org/nodebox/) is another that comes to mind.
| null | CC BY-SA 3.0 | null | 2014-06-10T02:50:51.153 | 2014-06-10T02:50:51.153 | null | null | 434 | null |
248 | 2 | null | 234 | 3 | null | A cash cow program? No. PhD programs are never cash cows.
I don't know why you couldn't be a professor with a PhD in data science. Rarely does a professor of a given course have to have a specific degree in order to teach it.
As far as publishing goes, there are any number of related journals that would accept papers from somebody on topics that would be covered by the topic of Data Science.
When I went to college, MIS, Computer Engineering, and Computer Science were new subjects. Most of the people in my graduating class for Computer Science couldn't program anything significant at graduation. Within a few years, CS programs around the country matured significantly.
When you are part of a new program, sometimes it's possible to help define what it is that's required for graduation. Being a part of that puts you in rare company for that field.
As far as mathematical rigor is concerned, I would expect Data Science to leverage a heavy dose of mathematically based material. I wouldn't expect anything particularly new - statistics, calculus, etc. should have been covered in undergrad. Masters and PhD programs should be more about applying that knowledge and not so much about learning it.
| null | CC BY-SA 3.0 | null | 2014-06-10T03:21:56.473 | 2014-06-10T03:21:56.473 | null | null | 434 | null |
249 | 2 | null | 228 | 4 | null | Twitter uses Storm for real-time processing of data. Problems can happen with real-time data. Systems might go down. Data might be inadvertently processed twice. Network connections can be lost. A lot can happen in a real-time system.
They use hadoop to reliably process historical data. I don't know specifics, but for instance, getting solid information from aggregated logs is probably more reliable than attaching to the stream.
If they simply relied on Storm for everything - Storm would have problems due to the nature of providing real-time information at scale. If they relied on hadoop for everything, there's a good deal of latency involved. Combining the two with Summingbird is the next logical step.
| null | CC BY-SA 3.0 | null | 2014-06-10T03:42:51.637 | 2014-06-10T03:42:51.637 | null | null | 434 | null |
250 | 2 | null | 184 | 3 | null | Google does not publish the models they use, but they specifically do not support models from the PMML specification.
If you look closely at the documentation on [this page](https://developers.google.com/prediction/docs/pmml-schema), you will notice that the model selection within the schema is greyed out indicating that it is an unsupported feature of the schema.
The [documentation does spell out](https://developers.google.com/prediction/docs/developer-guide#whatisprediction) that by default it will use a regression model for training data that has numeric answers, and an unspecified categorization model for training data that results in text based answers.
The Google Prediction API also supports hosted models (although only a few demo models are currently available), and models specified with a PMML transform. The documentation does contain an [example of a model defined by a PMML transform](https://developers.google.com/prediction/docs/pmml-schema). (There is also a note on that page stating that PMML ...Model elements are not supported).
The PMML standard that google partially supports is [version 4.0.1](http://www.dmg.org//pmml-v4-0-1.html).
| null | CC BY-SA 3.0 | null | 2014-06-10T04:47:13.040 | 2014-06-10T04:47:13.040 | null | null | 434 | null |
251 | 2 | null | 138 | 4 | null | Having done the rewriting game over and over myself (and still doing it), my immediate reaction was adaptability.
While frameworks and libraries have a huge arsenal of (possibly intertwinable) routines for standard tasks, their framework property often (always?) disallows shortcuts. In fact, most frameworks have some sort of core infrastructure around which a core layer of basic functionality is implemented. More specific functionality makes use of the basic layer and is placed in a second layer around the core.
Now by shortcuts I mean going straight from a second layer routine to another second layer routine without using the core. Typical example (from my domain) would be timestamps: You have a timestamped data source of some kind. Thus far the job is simply to read the data off the wire and pass it to the core so your other code can feast on it.
Now your industry changes the default timestamp format for a very good reason (in my case they went from unix time to GPS time). Unless your framework is industry-specific it is very unlikely that they're willing to change the core representation of time, so you end up using a framework that almost does what you want. Every time you access your data you have to convert it to industry-time-format first, and every time you want it modified you have to convert it back to whatever the core deems appropriate. There is no way that you can hand over data straight from the source to a sink without double conversion.
This is where your hand-crafted frameworks will shine, it's just a minor change and you're back modelling the real world whereas all other (non-industry-specific) frameworks will now have a performance disadvantage.
Over time, the discrepancy between the real world and the model will add up. With an off-the-shelf framework you'd soon be facing questions like: How can I represent `this` in `that`, or how do I make routine `X` accept/produce `Y`?
So far this wasn't about C/C++. But if, for some reason, you can't change the framework, i.e. you do have to put up with double conversion of data to go from one end to another, then you'd typically employ something that minimises the additional overhead. In my case, a TAI->UTC or UTC->TAI converter is best left to raw C (or an FPGA). There is no elegance possible, no profound smart data structure that makes the problem trivial. It's just a boring switch statement, and why not use a language whose compilers are good at optimising exactly that?
| null | CC BY-SA 3.0 | null | 2014-06-10T05:57:13.897 | 2014-06-10T05:57:13.897 | null | null | 451 | null |
252 | 1 | null | null | 3 | 852 | There seem to be at least 2 ways to connect to HBase from an external application, with a language other than Java (e.g., Python):
- HBase Thrift API
- HBase Stargate (REST API)
Does anyone know which one should be used in which circumstances?
I.e. what are their main differences, and pros/cons?
| HBase connector - Thrift or REST | CC BY-SA 3.0 | null | 2014-06-10T06:19:46.510 | 2014-06-10T07:56:45.587 | null | null | 88 | [
"bigdata",
"apache-hadoop"
] |
253 | 1 | 256 | null | 39 | 21145 | An aspiring data scientist here. I don't know anything about Hadoop, but as I have been reading about Data Science and Big Data, I see a lot of talk about Hadoop. Is it absolutely necessary to learn Hadoop to be a Data Scientist?
| Do I need to learn Hadoop to be a Data Scientist? | CC BY-SA 3.0 | null | 2014-06-10T06:20:20.817 | 2022-08-26T05:51:09.493 | null | null | 456 | [
"bigdata",
"apache-hadoop"
] |
254 | 2 | null | 169 | 7 | null | Actually, there is more than one question to answer here:
- How to work on schemaless/loose/missing data
- How to label a person (from what I understand unsupervised) and create an identifier
- How to train your system so that it can tell you which attributes you should use in order to identify the person
As Rubens mentioned, you can use decision tree methods, specifically [Random Forests](http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm) for calculating the most important attributes based on information gain if you have already found a way to identify how to label a person.
However, if you do not have any label information, you could use some expert view for preliminary attribute selection. After that, you can perform unsupervised classification in order to retrieve your labels. Lastly, you can select the most important fields using Random Forests or other methods like Bayesian Belief Networks.
In order to achieve all that, you also need a complete data set. If your data set is loose, you have to manually or heuristically find a way to couple attributes that indicate the same thing under different names. What is more, you can use imputation techniques such as the [Expectation Maximization](http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm) method to complete your data set. Or you can work with Bayesian Networks and leave missing fields as they are.
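As an illustration of the attribute-ranking step with Random Forests, here is a minimal scikit-learn sketch; the feature matrix `X` and labels `y` are placeholders you would build from your own records:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: replace with your own attribute matrix and labels.
rng = np.random.RandomState(0)
X = rng.rand(200, 5)                                   # 200 records, 5 candidate attributes
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)        # labels driven by attributes 0 and 2

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Higher importance -> the attribute contributes more to predicting the label.
for idx in np.argsort(forest.feature_importances_)[::-1]:
    print("attribute", idx, "importance", forest.feature_importances_[idx])
```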
| null | CC BY-SA 3.0 | null | 2014-06-10T07:10:45.057 | 2014-06-10T07:10:45.057 | null | null | 463 | null |
255 | 2 | null | 253 | 2 | null | You should learn Hadoop if you want to work as a data scientist, but maybe before starting with Hadoop you should read something about ETL or Big Data... this book could be a good starting point: [http://www.amazon.com/Big-Data-Principles-practices-scalable/dp/1617290343](http://rads.stackoverflow.com/amzn/click/1617290343)
Hope it helps and good luck!
| null | CC BY-SA 3.0 | null | 2014-06-10T07:28:49.127 | 2014-06-10T07:28:49.127 | null | null | 466 | null |
256 | 2 | null | 253 | 48 | null | Different people use different tools for different things. Terms like Data Science are generic for a reason. A data scientist could spend an entire career without having to learn a particular tool like hadoop. Hadoop is widely used, but it is not the only platform that is capable of managing and manipulating data, even large scale data.
I would say that a data scientist should be familiar with concepts like MapReduce, distributed systems, distributed file systems, and the like, but I wouldn't judge someone for not knowing about such things.
It's a big field. There is a sea of knowledge and most people are capable of learning and being an expert in a single drop. The key to being a scientist is having the desire to learn and the motivation to know that which you don't already know.
As an example: I could hand the right person a hundred structured CSV files containing information about classroom performance in one particular class over a decade. A data scientist would be able to spend a year gleaning insights from the data without ever needing to spread computation across multiple machines. You could apply machine learning algorithms, analyze it using visualizations, combine it with external data about the region, ethnic makeup, changes to environment over time, political information, weather patterns, etc. All of that would be "data science" in my opinion. It might take something like hadoop to test and apply anything you learned to data comprising an entire country of students rather than just a classroom, but that final step doesn't necessarily make someone a data scientist. And not taking that final step doesn't necessarily disqualify someone from being a data scientist.
| null | CC BY-SA 3.0 | null | 2014-06-10T07:30:53.510 | 2014-06-10T08:21:19.197 | 2014-06-10T08:21:19.197 | 434 | 434 | null |
257 | 2 | null | 253 | 4 | null | Yes, you should learn a platform that is capable of dissecting your problem as a data-parallel problem. Hadoop is one. For your simple needs (design patterns like counting, aggregation, filtering, etc.) you need Hadoop, and for more complex machine learning stuff like Bayesian methods or SVMs you need Mahout, which in turn needs Hadoop (now Apache Spark) to solve your problem using a data-parallel approach.
So Hadoop is a good platform to learn and really important for your batch-processing needs. Beyond Hadoop, you also need to know Spark (Mahout runs its algorithms utilizing Spark) and Twitter Storm (for your real-time analytics needs). This list will continue to evolve, so if you are good with the building blocks (distributed computing, data-parallel problems and so on) and know how one such platform (say Hadoop) operates, you will fairly quickly be up to speed on the others.
| null | CC BY-SA 3.0 | null | 2014-06-10T07:42:57.470 | 2014-06-10T07:42:57.470 | null | null | 11 | null |
258 | 2 | null | 252 | 2 | null | Thrift is generally faster because the data being exchanged is smaller. Stargate offers a web service which is an integration method that is widely supported, which is a concern when you are working with commercial products with limited integration possibilities.
In a closed environment where everything is controlled, I would prefer Thrift. If I'm exposing data to external teams or systems I would prefer Stargate.
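For a rough feel of the two access paths from Python, a sketch assuming default ports, a hypothetical table named `mytable`, and that the `happybase` and `requests` packages are installed; this is not tested against any particular cluster:

```python
# Thrift path, via the happybase client (talks to the HBase Thrift server, default port 9090).
import happybase

connection = happybase.Connection('hbase-host', port=9090)
table = connection.table('mytable')
row = table.row(b'row-key-1')            # dict of {column: value}
print(row)

# REST path, via Stargate (default port 8080); the URL layout is /<table>/<row>.
import requests

resp = requests.get('http://hbase-host:8080/mytable/row-key-1',
                    headers={'Accept': 'application/json'})
print(resp.json())                       # cell values come back base64-encoded
```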
| null | CC BY-SA 3.0 | null | 2014-06-10T07:56:45.587 | 2014-06-10T07:56:45.587 | null | null | 434 | null |
259 | 2 | null | 155 | 20 | null | I would like to point to [The Open Data Census](http://national.census.okfn.org/). It is an initiative of the Open Knowledge Foundation based on contributions from open data advocates and experts around the world.
The value of the Open Data Census lies in its open, community-driven, and systematic effort to collect and update the database of open datasets globally at the country level and, in some cases, [like the U.S., at the city level](http://us-city.census.okfn.org/).
Also, it presents an opportunity to compare different countries and cities in selected areas of interest.
| null | CC BY-SA 3.0 | null | 2014-06-10T08:04:20.400 | 2014-06-10T08:04:20.400 | null | null | 454 | null |
260 | 2 | null | 191 | 2 | null | The algorithm that is used in this case is called [one-vs-all classifier](https://class.coursera.org/ml-003/lecture/38) or multiclass classifier.
In your case you have to take one class, e.g., number 1, mark it as positive, and combine the remaining seven classes into one negative class. The neural network will output the probability of a case being class number 1 vs. the rest of the classes.
Afterwards, you assign another class, e.g., number 2, as positive, assign all other classes as one big negative class, and get the predicted probability from the network again.
After repeating this procedure for all eight classes, assign each case to the class that had the maximum probability among all the classes output by the neural network.
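A minimal sketch of this one-vs-all procedure, using scikit-learn's logistic regression in place of the neural network just to keep it short; the data here is made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(400, 10)                    # placeholder features
y = rng.randint(0, 8, size=400)          # 8 classes, labelled 0..7

# Train one binary model per class: current class = positive, all others = negative.
models = []
for k in range(8):
    clf = LogisticRegression().fit(X, (y == k).astype(int))
    models.append(clf)

# For each case, pick the class whose model gives the highest positive probability.
probs = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
predictions = probs.argmax(axis=1)
```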
| null | CC BY-SA 3.0 | null | 2014-06-10T08:38:27.093 | 2014-06-10T08:38:27.093 | null | null | 454 | null |
261 | 2 | null | 196 | 3 | null | You could take a look at the CN2 rule learner in [Orange 2](http://orange.biolab.si/orange2/).
| null | CC BY-SA 4.0 | null | 2014-06-10T09:21:14.013 | 2020-08-06T11:04:09.857 | 2020-08-06T11:04:09.857 | 98307 | 480 | null |
262 | 1 | 293 | null | 40 | 22477 | What are the advantages of HDF compared to alternative formats? What are the main data science tasks where HDF is really suitable and useful?
| What are the advantages of HDF compared to alternative formats? | CC BY-SA 4.0 | null | 2014-06-10T09:26:06.593 | 2021-07-02T00:43:22.377 | 2020-04-14T16:54:08.463 | null | 97 | [
"data-formats",
"hierarchical-data-format"
] |
263 | 2 | null | 172 | 8 | null | Unfortunately, parallelization is not yet implemented in pandas. You can join [this github issue](http://github.com/pydata/pandas/issues/5751) if you want to participate in the development of this feature.
I don't know of any "magic unicorn package" for this purpose, so the best thing will be to write your own solution. But if you don't want to spend time on that and want to learn something new, you can try the two methods built into MongoDB (map-reduce and the aggregation framework). See [mongodb_agg_framework](http://docs.mongodb.org/manual/core/aggregation/).
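As one possible do-it-yourself approach (not a pandas feature), you can split the DataFrame into chunks and process them with `multiprocessing`; `heavy_function` below is a placeholder for whatever per-chunk work you need:

```python
import numpy as np
import pandas as pd
from multiprocessing import Pool

def heavy_function(chunk):
    # Placeholder for the expensive per-chunk computation you want to parallelize.
    return chunk.sum()

if __name__ == "__main__":
    df = pd.DataFrame(np.random.rand(1000000, 4), columns=list("abcd"))
    chunks = np.array_split(df, 8)              # one chunk per worker
    with Pool(processes=8) as pool:
        results = pool.map(heavy_function, chunks)
    # Combine the partial results (here: per-column sums across all chunks).
    total = pd.concat(results, axis=1).sum(axis=1)
```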
| null | CC BY-SA 3.0 | null | 2014-06-10T09:41:34.697 | 2014-06-10T09:41:34.697 | null | null | 478 | null |
264 | 2 | null | 22 | 4 | null | You can also give the Expectation Maximization clustering algorithm a try. It can work on categorical data and will give you a statistical likelihood of which categorical value (or values) a cluster is most likely to take on.
| null | CC BY-SA 3.0 | null | 2014-06-10T10:48:58.457 | 2014-06-10T10:48:58.457 | null | null | 490 | null |
265 | 1 | 285 | null | 42 | 45677 | I have a variety of NFL datasets that I think might make a good side-project, but I haven't done anything with them just yet.
Coming to this site made me think of machine learning algorithms and I wondering how good they might be at either predicting the outcome of football games or even the next play.
It seems to me that there would be some trends that could be identified - on 3rd down and 1, a team with a strong running back theoretically should have a tendency to run the ball in that situation.
Scoring might be more difficult to predict, but the winning team might be.
My question is whether these are good questions to throw at a machine learning algorithm. It could be that a thousand people have tried it before, but the nature of sports makes it an unreliable topic.
| Can machine learning algorithms predict sports scores or plays? | CC BY-SA 3.0 | null | 2014-06-10T10:58:58.447 | 2020-08-20T18:25:42.540 | 2015-03-02T12:33:11.007 | 553 | 434 | [
"machine-learning",
"sports"
] |
266 | 1 | 272 | null | 12 | 3010 | Being new to machine-learning in general, I'd like to start playing around and see what the possibilities are.
I'm curious as to what applications you might recommend that would offer the fastest time from installation to producing a meaningful result.
Also, any recommendations for good getting-started materials on the subject of machine-learning in general would be appreciated.
| What are some easy to learn machine-learning applications? | CC BY-SA 3.0 | null | 2014-06-10T11:05:47.273 | 2014-06-12T17:58:21.467 | null | null | 434 | [
"machine-learning"
] |
268 | 2 | null | 266 | 5 | null | I think [Weka](http://www.cs.waikato.ac.nz/ml/weka/) is a good starting point. You can do a bunch of stuff like supervised learning or clustering and easily compare a large set of algorithms and methodologies.
Weka's manual is actually a book on machine learning and data mining that can be used as introductory material.
| null | CC BY-SA 3.0 | null | 2014-06-10T11:36:19.287 | 2014-06-10T11:36:19.287 | null | null | 418 | null |
269 | 2 | null | 265 | 9 | null | Definitely they can.
I can point you to a [nice paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.7448&rep=rep1&type=pdf). I once used it to implement a soccer league results prediction algorithm, primarily aiming at gaining some edge against bookmakers.
From paper's abstract:
>
a Bayesian dynamic generalized model to estimate the time dependent skills of all teams in a league, and to predict next weekend's soccer matches.
Keywords:
>
Dynamic Models, Generalized Linear Models, Graphical Models, Markov
Chain Monte Carlo Methods, Prediction of Soccer Matches
Citation:
>
Rue, Havard, and Oyvind Salvesen. "Prediction and retrospective analysis of soccer matches in a league." Journal of the Royal Statistical Society: Series D (The Statistician) 49.3 (2000): 399-418.
| null | CC BY-SA 3.0 | null | 2014-06-10T11:37:28.293 | 2014-06-25T16:03:21.447 | 2014-06-25T16:03:21.447 | 322 | 97 | null |
270 | 2 | null | 265 | 7 | null | Machine learning and statistical techniques can improve the forecast, but nobody can predict the real result.
There was a Kaggle competition a few months ago about [predicting the 2014 NCAA Tournament](https://www.kaggle.com/c/march-machine-learning-mania). You can read the competition forum to get a better idea of what people did and what results they achieved.
| null | CC BY-SA 3.0 | null | 2014-06-10T11:39:19.603 | 2014-06-10T11:39:19.603 | null | null | 478 | null |
271 | 2 | null | 265 | 8 | null | It has been shown before that machine learning techniques can be applied for predicting sport results. A simple Google search should give you a bunch of results.
However, it has also been shown (for the NFL, by the way) that very complex predictive models, simple predictive models, asking people, and crowd knowledge derived from betting info all perform more or less the same. Source: "[Everything is obvious once you know the answer - How common sense Fails](http://everythingisobvious.com/)", Chapter 7, by Duncan Watts.
| null | CC BY-SA 3.0 | null | 2014-06-10T11:49:23.777 | 2014-06-10T11:49:23.777 | null | null | 418 | null |
272 | 2 | null | 266 | 13 | null | I would recommend starting with a MOOC on machine learning, for example Andrew Ng's [course](https://www.coursera.org/course/ml) at Coursera.
You should also take a look at [Orange](http://orange.biolab.si/) application. It has a graphical interface and probably it is easier to understand some ML techniques using it.
| null | CC BY-SA 3.0 | null | 2014-06-10T11:53:07.737 | 2014-06-10T11:53:07.737 | null | null | 478 | null |
273 | 2 | null | 253 | 2 | null | You can apply data science techniques to data on one machine, so the answer to the question, as the OP phrased it, is no.
| null | CC BY-SA 3.0 | null | 2014-06-10T12:10:28.713 | 2014-06-10T15:04:01.177 | 2014-06-10T15:04:01.177 | 498 | 498 | null |
274 | 2 | null | 262 | 12 | null | One benefit is wide support - C, Java, Perl, Python, and R all have HDF5 bindings.
Another benefit is speed. I haven't ever seen it benchmarked, but HDF is supposed to be faster than SQL databases.
I understand that it is very good when used with both large sets of scientific data and time series data - network monitoring, usage tracking, etc.
I don't believe there is a size limitation for HDF files (although OS limits would still apply).
| null | CC BY-SA 3.0 | null | 2014-06-10T12:57:04.307 | 2014-06-10T12:57:04.307 | null | null | 434 | null |
275 | 2 | null | 155 | 26 | null | For time series data in particular, [Quandl](http://www.quandl.com/) is an excellent resource -- an easily browsable directory of (mostly) clean time series.
One of their coolest features is [open-data stock prices](http://blog.quandl.com/blog/quandl-open-data/) -- i.e. financial data that can be edited wiki-style, and isn't encumbered by licensing.
| null | CC BY-SA 3.0 | null | 2014-06-10T13:17:48.433 | 2014-06-10T13:17:48.433 | null | null | 508 | null |
276 | 2 | null | 155 | 10 | null | Not all government data is listed on data.gov - [Sunlight Foundation](http://sunlightfoundation.com/blog/2014/02/21/open-data-inventories-ready-for-human-consumption/) put together a [set of spreadsheets](https://drive.google.com/folderview?id=0B4QuErjcV2a0WXVDOURwbzh6S2s&usp=sharing) back in February describing sets of available data.
| null | CC BY-SA 3.0 | null | 2014-06-10T13:38:31.207 | 2014-06-10T13:38:31.207 | null | null | 434 | null |
277 | 2 | null | 266 | 11 | null | To be honest, I think that doing some projects will teach you much more than doing a full course. One reason is that doing a project is more motivating and open-ended than doing assignments.
A course, if you have the time AND motivation (real motivation), is better than doing a project. The other commentators have made good platform recommendations on tech.
I think, from a fun project standpoint, you should ask a question and get a computer to learn to answer it.
Some good classic questions that have good examples are:
- Neural Networks for recognizing hand written digits
- Spam email classification using logistic regression
- Classification of objects using Gaussian Mixture models
- Some use of linear regression, perhaps forecasting of grocery prices given neighborhoods
These projects have the math done, code done, and can be found with Google readily.
Other cool subjects can be done by you!
Lastly, I research robotics, so for me the most FUN applications are behavioral.
Examples can include (if you can play with an Arduino):
Create an application, using logistic regression perhaps, that learns when to turn a fan off and on given the indoor temperature and the status of the light in the room.
Create an application that teaches a robot to move an actuator, perhaps a wheel, based on sensor input (perhaps a button press), using Gaussian Mixture Models (learning from demonstration).
Anyway, those are pretty advanced. The point I'm making is that if you pick a project that you (really really) like, and spend a few weeks on it, you will learn a massive amount and understand so much more than you would from doing a few assignments.
| null | CC BY-SA 3.0 | null | 2014-06-10T14:25:41.903 | 2014-06-10T14:25:41.903 | null | null | 518 | null |
278 | 2 | null | 266 | 2 | null | Assuming you're familiar with programming I would recommend looking at [scikit-learn](http://scikit-learn.org/stable/). It has especially nice help pages that can serve as mini-tutorials/a quick tour through machine learning. Pick an area you find interesting and work through the examples.
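For instance, a first complete example takes only a few lines with a current scikit-learn, using the bundled iris dataset (the choice of classifier here is arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Small, built-in dataset: 150 flowers, 4 features, 3 classes.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```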
| null | CC BY-SA 3.0 | null | 2014-06-10T14:30:33.667 | 2014-06-10T14:30:33.667 | null | null | 524 | null |
279 | 2 | null | 155 | 19 | null | There is also another resource provided by The Guardian, the British daily, on its website. The datasets published by the Guardian Datablog are all hosted there: datasets related to Football Premier League clubs' accounts, UK inflation and GDP details, Grammy awards data, etc.
The datasets are available at
- http://www.theguardian.com/news/datablog/interactive/2013/jan/14/all-our-datasets-index
Some more resources: some of the datasets are in R format, or R commands exist for directly importing the data into R.
- http://www.inside-r.org/howto/finding-data-internet
| null | CC BY-SA 3.0 | null | 2014-06-10T14:57:47.810 | 2014-06-11T16:30:06.930 | 2014-06-11T16:30:06.930 | 514 | 514 | null |
280 | 1 | null | null | 8 | 164 | I am developing a system that is intended to capture the "context" of user activity within an application; it is a framework that web applications can use to tag user activity based on requests made to the system. It is hoped that this data can then power ML features such as context aware information retrieval.
I'm having trouble deciding on what features to select in addition to these user tags - the URL being requested, approximate time spent with any given resource, estimating the current "activity" within the system.
I am interested to know if there are good examples of this kind of technology or any prior research on the subject - a cursory search of the ACM DL revealed some related papers but nothing really spot-on.
| Feature selection for tracking user activity within an application | CC BY-SA 3.0 | null | 2014-06-10T15:08:54.073 | 2020-06-19T08:28:09.320 | null | null | 531 | [
"feature-selection"
] |
282 | 2 | null | 265 | 14 | null | Yes. Why not?!
With so much data being recorded in each sport in each game, smart use of that data could yield important insights regarding player performance.
Some examples:
- Baseball: In the movie Moneyball (which is an adaptation of the Moneyball book), Brad Pitt plays a character who analyses player statistics to come up with a team that performs tremendously well! It was a depiction of the real-life story of Oakland Athletics baseball team. For more info.
- Cricket: SAP Labs has come up with an auction analytics tool that has given insights about impact players to buy in the 2014 Indian Premier League auction for the Kolkata Knight Riders team, which eventually went on to win the 2014 IPL Championship. For more info.
So, yes, statistical analysis of player records can give us insights about which players are more likely to perform, but not which players will perform. Machine learning, a close cousin of statistical analysis, could prove to be a game changer.
| null | CC BY-SA 4.0 | null | 2014-06-10T16:25:24.223 | 2020-08-20T18:25:04.213 | 2020-08-20T18:25:04.213 | 98307 | 514 | null |
284 | 2 | null | 280 | 5 | null | Well, this may not answer the question thoroughly, but since you're dealing with information retrieval, it may be of some use. [This page](http://moz.com/search-ranking-factors) maintains a set of features and associated correlations with page-ranking methods of search engines. As a disclaimer from the webpage itself:
>
Note that these factors are not "proof" of what search engines use to rank websites, but simply show the characteristics of web pages that tend to rank higher.
The list pointed may give you some insights on which features would be nice to select. For example, considering the second most correlated feature, # of google +1's, it may be possible to add some probability of a user making use of such service if he/she accesses many pages with high # of google +1 (infer "user context"). Thus, you could try to "guess" some other relations that may shed light on interesting features for your tracking app.
| null | CC BY-SA 3.0 | null | 2014-06-10T17:06:54.950 | 2014-06-10T17:06:54.950 | null | null | 84 | null |
285 | 2 | null | 265 | 19 | null | There are a lot of good questions about Football (and sports, in general) that would be awesome to throw to an algorithm and see what comes out. The tricky part is to know what to throw to the algorithm.
A team with a good RB could just pass on 3rd-and-short because the opponents would probably expect a run, for instance. So, in order to actually produce some worthy results, I'd break the problem into smaller pieces and analyse them statistically while throwing them at the machines.
There are a few (good) websites that try to do the same, you should check'em out and use whatever they found to help you out:
- Football Outsiders
- Advanced Football Analytics
And if you truly want to explore Sports Data Analysis, you should definitely check the [Sloan Sports Conference](http://www.sloansportsconference.com/) videos. There's a lot of them spread on Youtube.
| null | CC BY-SA 4.0 | null | 2014-06-10T17:15:52.953 | 2019-01-23T14:36:40.430 | 2019-01-23T14:36:40.430 | 553 | 553 | null |
287 | 2 | null | 231 | 5 | null | I do not know a standard answer to this, but I thought about it some time ago and I have some ideas to share.
When you have one confusion matrix, you have more or less a picture of how your classification model confuses (mis-classifies) classes. When you repeat classification tests you will end up with multiple confusion matrices. The question is how to get a meaningful aggregate confusion matrix. The answer depends on what "meaningful" means (pun intended). I think there is not a single version of meaningful.
One way is to follow the rough idea of multiple testing. In general, you test something multiple times in order to get more accurate results. As a general principle, one can reason that averaging the results of the multiple tests reduces the variance of the estimates and, as a consequence, increases their precision. You can proceed in this way, of course, by summing position by position and then dividing by the number of tests. You can go further, and instead of estimating only a value for each cell of the confusion matrix, you can also compute some confidence intervals, t-values and so on. This is OK from my point of view. But it tells only one side of the story.
The other side of the story, which might be investigated, is how stable the results are for the same instances. To exemplify that, I will take an extreme example. Suppose you have a classification model for 3 classes, and that these classes are in the same proportion. If your model is able to predict one class perfectly and the other 2 classes with random-like performance, you will end up with 0.33 + 0.166 + 0.166 = 0.66 correct classification ratio. This might seem good, but even if you take a look at a single confusion matrix you will not know that your performance on the last 2 classes varies wildly. Multiple tests can help. But would averaging the confusion matrices reveal this? My belief is that it would not. The averaging will give more or less the same result, and doing multiple tests will only decrease the variance of the estimation. However, it says nothing about the wild instability of the predictions.
So another way to compose the confusion matrices would involve a prediction density for each instance. One can build this density by counting, for each instance, the number of times each class was predicted for it. After normalization, you will have for each instance a prediction density rather than a single prediction label. You can see that a single prediction label is similar to a degenerate density where you have a probability of 1 for the predicted class and 0 for the other classes for each separate instance. Now, having these densities, one can build a confusion matrix by adding the probabilities from each instance and predicted class to the corresponding cell of the aggregated confusion matrix.
One can argue that this would give similar results to the previous method. However, I think that this might be the case only sometimes, often when the model has low variance; the second method is less affected by how the samples for the tests are drawn, and is thus more stable and closer to reality.
The second method might also be altered to obtain a third method, where one assigns as the prediction the label with the highest density for a given instance.
I have not implemented these things, but I plan to study them further because I believe they might be worth spending some time on.
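A small NumPy sketch of the density-based aggregation described above; the ground truth and per-run predictions below are random placeholders:

```python
import numpy as np

n_runs, n_instances, n_classes = 400, 150, 3
rng = np.random.RandomState(0)

y_true = rng.randint(0, n_classes, size=n_instances)            # ground truth
preds = rng.randint(0, n_classes, size=(n_runs, n_instances))   # predicted label per run/instance

# Per-instance prediction density: fraction of runs predicting each class.
density = np.zeros((n_instances, n_classes))
for k in range(n_classes):
    density[:, k] = (preds == k).mean(axis=0)

# Aggregated confusion matrix: add each instance's density to its true-class row.
confusion = np.zeros((n_classes, n_classes))
for i in range(n_instances):
    confusion[y_true[i]] += density[i]

print(confusion)
```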
| null | CC BY-SA 3.0 | null | 2014-06-10T17:32:19.120 | 2014-06-11T09:39:34.373 | 2014-06-11T09:39:34.373 | 108 | 108 | null |
288 | 2 | null | 280 | 5 | null | The goal determines the features, so I would initially take as many as possible, then use cross validation to select the optimal subset.
My educated guess is that a Markov model would work. If you discretize the action space (e.g., select this menu item, press that button, etc.), you can predict the next action based on the past ones. It's a sequence or [structured prediction](http://en.wikipedia.org/wiki/Structured_prediction) problem.
For commercial offerings, search app analytics.
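A toy sketch of such a first-order Markov model over a discretized action space; the action names are made up:

```python
from collections import defaultdict, Counter

# Hypothetical, discretized log of one user's actions, in order.
log = ["open_menu", "click_report", "export_pdf", "open_menu", "click_report", "share"]

# Count transitions between consecutive actions.
transitions = defaultdict(Counter)
for current, nxt in zip(log, log[1:]):
    transitions[current][nxt] += 1

def predict_next(action):
    """Return the most frequently observed next action, or None if the action is unseen."""
    counts = transitions.get(action)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("open_menu"))   # -> "click_report"
```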
| null | CC BY-SA 3.0 | null | 2014-06-10T17:36:11.580 | 2014-06-11T01:19:18.183 | 2014-06-11T01:19:18.183 | 381 | 381 | null |
289 | 1 | 291 | null | 10 | 580 | Yann LeCun mentioned in his [AMA](http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_yann_lecun/) that he considers having a PhD very important in order to get a job at a top company.
I have a masters in statistics and my undergrad was in economics and applied math, but I am now looking into ML PhD programs. Most programs say there are no absolutely necessary CS courses; however I tend to think most accepted students have at least a very strong CS background. I am currently working as a data scientist/statistician but my company will pay for courses. Should I take some intro software engineering courses at my local University to make myself a stronger candidate? What other advice you have for someone applying to PhD programs from outside the CS field?
edit: I have taken a few MOOCs (Machine Learning, Recommender Systems, NLP) and code R/python on a daily basis. I have a lot of coding experience with statistical languages and implement ML algorithms daily. I am more concerned with things that I can put on applications.
| Qualifications for PhD Programs | CC BY-SA 3.0 | null | 2014-06-10T17:56:34.847 | 2015-09-10T20:17:35.897 | 2015-09-10T20:17:35.897 | 560 | 560 | [
"education"
] |
290 | 2 | null | 266 | 2 | null | I found the pluralsight course [Introduction to machine learning encog](http://pluralsight.com/training/courses/TableOfContents?courseName=introduction-to-machine-learning-encog&highlight=abhishek-kumar_introduction-to-machine-learning-encog-m2-applications!abhishek-kumar_introduction-to-machine-learning-encog-m3-tasks!abhishek-kumar_introduction-to-machine-learning-encog-m1-intro%2a1#introduction-to-machine-learning-encog-m2-applications) a great resource so start with. It uses the [Encog library](http://www.heatonresearch.com/encog) to quickly explore different ml techniques.
| null | CC BY-SA 3.0 | null | 2014-06-10T18:22:32.610 | 2014-06-10T18:22:32.610 | null | null | 571 | null |
291 | 2 | null | 289 | 10 | null | If I were you I would take a MOOC or two (e.g., [Algorithms, Part I](https://www.coursera.org/course/algs4partI), [Algorithms, Part II](https://www.coursera.org/course/algs4partII), [Functional Programming Principles in Scala](https://www.coursera.org/course/progfun)), a good book on data structures and algorithms, then just code as much as possible. You could implement some statistics or ML algorithms, for example; that would be good practice for you and useful to the community.
For a PhD program, however, I would also make sure I were familiar with the type of maths they use. If you want to see what it's like at the deep end, browse the papers at the [JMLR](http://jmlr.org/papers/). That will let you calibrate yourself in regards to theory; can you sort of follow the maths?
Oh, and you don't need a PhD to work at top companies, unless you want to join research departments like his. But then you'll spend more time doing development, and you'll need good coding skills...
| null | CC BY-SA 3.0 | null | 2014-06-10T18:55:39.010 | 2014-06-10T18:55:39.010 | null | null | 381 | null |
292 | 2 | null | 289 | 7 | null | Your time would probably be better spent on Kaggle than in a PhD program. When you read the stories by winners ([Kaggle blog](http://blog.kaggle.com/)) you'll see that it takes a large amount of practice and the winners are not just experts of one single method.
On the other hand, being active and having a plan in a PhD program can get you connections that you otherwise would probably not get.
I guess the real question is for you - what are the reasons for wanting a job at a top company?
| null | CC BY-SA 3.0 | null | 2014-06-10T19:43:11.860 | 2014-06-10T19:43:11.860 | null | null | 587 | null |
293 | 2 | null | 262 | 37 | null | Perhaps a good way to paraphrase the question is, what are the advantages compared to alternative formats?
The main alternatives are, I think: a database, text files, or another packed/binary format.
The database options to consider are probably a columnar store or NoSQL, or for small self-contained datasets SQLite. The main advantage of the database is the ability to work with data much larger than memory, to have random or indexed access, and to add/append/modify data quickly. The main *dis*advantage is that it is much slower than HDF, for problems in which the entire dataset needs to be read in and processed. Another disadvantage is that, with the exception of embedded-style databases like SQLite, a database is a system (requiring administration, setup, maintenance, etc) rather than a simple self-contained data store.
The text file format options are XML/JSON/CSV. They are cross-platform/language/toolkit, and are a good archival format due to the ability to be self-describing (or obvious :). If uncompressed, they are huge (10x-100x HDF), but if compressed, they can be fairly space-efficient (compressed XML is about the same as HDF). The main disadvantage here is again speed: parsing text is much, much slower than HDF.
The other binary formats (npy/npz numpy files, blz blaze files, protocol buffers, Avro, ...) have very similar properties to HDF, except they are less widely supported (may be limited to just one platform: numpy) and may have specific other limitations. They typically do not offer a compelling advantage.
HDF is a good complement to databases, it may make sense to run a query to produce a roughly memory-sized dataset and then cache it in HDF if the same data would be used more than once. If you have a dataset which is fixed, and usually processed as a whole, storing it as a collection of appropriately sized HDF files is not a bad option. If you have a dataset which is updated often, staging some of it as HDF files periodically might still be helpful.
To summarize, HDF is a good format for data which is read (or written) typically as a whole; it is the lingua franca or common/preferred interchange format for many applications due to wide support and compatibility, decent as an archival format, and very fast.
P.S. To give this some practical context, my most recent experience comparing HDF to alternatives, a certain small (much less than memory-sized) dataset took 2 seconds to read as HDF (and most of this is probably overhead from Pandas); ~1 minute to read from JSON; and 1 hour to write to database. Certainly the database write could be sped up, but you'd better have a good DBA! This is how it works out of the box.
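As a concrete illustration of the "cache a memory-sized dataset as HDF" workflow, a pandas sketch (requires PyTables; the file name and sizes are arbitrary):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000000, 5), columns=list("abcde"))

# Write the whole frame to an HDF5 file, with compression.
df.to_hdf("cache.h5", key="dataset", mode="w", complevel=9, complib="blosc")

# Later (or in another process): read it back as a whole,
# much faster than re-parsing CSV/JSON or re-running the database query.
df2 = pd.read_hdf("cache.h5", "dataset")
```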
| null | CC BY-SA 3.0 | null | 2014-06-10T20:28:54.613 | 2014-06-10T20:28:54.613 | null | null | 26 | null |
294 | 2 | null | 253 | 9 | null | As a former Hadoop engineer, I can say it is not needed, but it helps. Hadoop is just one system - the most common one, based on Java, with an ecosystem of products that apply a particular technique called "Map/Reduce" to obtain results in a timely manner. Hadoop is not used at Google, though I assure you they use big data analytics. Google uses their own systems, developed in C++. In fact, Hadoop was created as a result of Google publishing their Map/Reduce and BigTable (HBase in Hadoop) white papers.
Data scientists will interface with Hadoop engineers, though at smaller places you may be required to wear both hats. If you are strictly a data scientist, then whatever you use for your analytics (R, Excel, Tableau, etc.) will operate only on a small subset, and that work will then need to be converted to run against the full data set on Hadoop.
| null | CC BY-SA 3.0 | null | 2014-06-10T20:40:25.623 | 2014-06-10T20:40:25.623 | null | null | 602 | null |
295 | 2 | null | 289 | 5 | null | I am glad you also found Yann LeCun's AMA page, it's very useful.
Here are my opinions
Q: Should I take some intro software engineering courses at my local University to make myself a stronger candidate?
A: No, you need to take more math courses. It's not the applied stuff that's hard, it's the theory stuff. I don't know what your school offers. Take theoretical math courses, along with some computer science courses.
Q:What other advice you have for someone applying to PhD programs from outside the CS field?
A: How closely related a field are you looking for? Without a specific question, it's hard to give a specific answer.
| null | CC BY-SA 3.0 | null | 2014-06-10T20:43:28.533 | 2014-06-10T20:43:28.533 | null | null | 386 | null |
296 | 2 | null | 128 | 44 | null | HDP is an extension of LDA, designed to address the case where the number of mixture components (the number of "topics" in document-modeling terms) is not known a priori. So that's the reason why there's a difference.
Using LDA for document modeling, one treats each "topic" as a distribution of words in some known vocabulary. For each document a mixture of topics is drawn from a Dirichlet distribution, and then each word in the document is an independent draw from that mixture (that is, selecting a topic and then using it to generate a word).
For HDP (applied to document modeling), one also uses a Dirichlet process to capture the uncertainty in the number of topics. So a common base distribution is selected which represents the countably-infinite set of possible topics for the corpus, and then the finite distribution of topics for each document is sampled from this base distribution.
As far as pros and cons, HDP has the advantage that the maximum number of topics can be unbounded and learned from the data rather than specified in advance. I suppose though it is more complicated to implement, and unnecessary in the case where a bounded number of topics is acceptable.
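To make the practical difference concrete, here is a small, hypothetical sketch of how the two models are typically set up with the gensim library (assuming gensim is installed): LDA takes `num_topics` up front, while HDP does not:

```python
from gensim import corpora
from gensim.models import LdaModel, HdpModel

# A tiny toy corpus of tokenized "documents".
texts = [["human", "interface", "computer"],
         ["survey", "user", "computer", "system"],
         ["graph", "trees", "minors", "survey"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# LDA: the number of topics must be fixed in advance.
lda = LdaModel(corpus, id2word=dictionary, num_topics=2)

# HDP: no num_topics argument; the number of topics is inferred from the data.
hdp = HdpModel(corpus, id2word=dictionary)

print(lda.print_topics())
print(hdp.print_topics(num_topics=5))
```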
| null | CC BY-SA 3.0 | null | 2014-06-10T21:50:51.347 | 2014-06-10T21:50:51.347 | null | null | 14 | null |
297 | 2 | null | 130 | 7 | null | As in @damienfrancois' answer, feature selection is about selecting a subset of features. In NLP this would mean selecting a set of specific words (the typical setup in NLP is that each word represents a feature, with a value equal to the frequency of the word or some other weight based on TF-IDF or similar).
Dimensionality reduction is the introduction of a new feature space in which the original features are represented. The new space is of lower dimension than the original space. In the case of text, an example would be the [hashing trick](http://en.wikipedia.org/wiki/Feature_hashing), where a piece of text is mapped to a fixed-size vector by hashing its tokens (using, say, 16- or 32-bit hashes). The amazing thing is that the geometry of the space is preserved (given enough bits), so relative distances between documents remain roughly the same as in the original space, and you can deploy standard machine learning techniques without having to deal with the unbounded (and huge) number of dimensions found in text.
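For illustration, a minimal sketch of the hashing trick using scikit-learn's `HashingVectorizer` (assuming scikit-learn is available); the `2**16` bucket count is just an example choice:

```python
from sklearn.feature_extraction.text import HashingVectorizer

docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "stock markets fell sharply today"]

# Each document is hashed into a fixed-length sparse vector,
# regardless of how large the underlying vocabulary is.
vectorizer = HashingVectorizer(n_features=2**16)
X = vectorizer.transform(docs)

print(X.shape)  # (3, 65536): fixed dimensionality, independent of vocabulary size
```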
| null | CC BY-SA 3.0 | null | 2014-06-10T22:26:53.623 | 2015-10-20T03:28:37.247 | 2015-10-20T03:28:37.247 | 381 | 418 | null |
298 | 2 | null | 289 | 7 | null | You already have a Masters in Statistics, which is great! In general, I'd suggest to people to take as much statistics as they can, especially Bayesian Data Analysis.
Depending on what you want to do with your PhD, you would benefit from foundational courses in the discipline(s) of your application area. You already have Economics, but if you want to do Data Science on social behavior, then courses in Sociology would be valuable. If you want to work in fraud prevention, then courses in banking and financial transactions would be good. If you want to work in information security, then taking a few security courses would be good.
There are people who argue that it's not valuable for Data Scientists to spend time on courses in sociology or other disciplines. But consider the recent case of the Google Flu Trends project. In [this article](http://www.uvm.edu/~cdanfort/csc-reading-group/lazer-flu-science-2014.pdf) their methods were strongly criticized for making avoidable mistakes. The critics call it "Big Data hubris".
There's another reason for building strength in social science disciplines: personal competitive advantage. With the rush of academic degree programs, certificate programs, and MOOCs, there is a mad rush of students into the Data Science field. Most will come out with capabilities for core Machine Learning methods and tools. PhD graduates will have more depth and more theoretical knowledge, but they are all competing for the same sorts of jobs, delivering the same sorts of value. With this flood of graduates, I expect that they won't be able to command premium salaries.
But if you can differentiate yourself with a combination of formal education and practical experience in a particular domain and application area, then you should be able to set yourself apart from the crowd.
(Context: I'm in a PhD program in Computational Social Science, which has a heavy focus on modeling, evolutionary computation, and social science disciplines, and less emphasis on ML and other empirical data analysis topics).
| null | CC BY-SA 3.0 | null | 2014-06-10T22:29:52.873 | 2014-06-10T22:29:52.873 | null | null | 609 | null |
300 | 2 | null | 103 | 4 | null | Topological Data Analysis is a method explicitly designed for the setting you describe. Rather than a global distance metric, it relies only on a local metric of proximity or neighborhood. See: [Topology and data](http://www.ams.org/bull/2009-46-02/S0273-0979-09-01249-X/S0273-0979-09-01249-X.pdf) and [Extracting insights from the shape of complex data using topology](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3566620/). You can find additional resources at the website for Ayasdi.
| null | CC BY-SA 3.0 | null | 2014-06-10T23:20:16.670 | 2014-06-10T23:20:16.670 | null | null | 609 | null |
301 | 2 | null | 52 | 11 | null | One reason that data cleaning is rarely fully automated is that there is so much judgment required to define what "clean" means given your particular problem, methods, and goals.
It may be as simple as imputing values for any missing data, or it might be as complex as diagnosing data entry errors or data transformation errors from previous automated processes (e.g. coding, censoring, transforming). In these last two cases, the data looks good by outward appearance but it's really erroneous. Such diagnosis often requires manual analysis and inspection, and also out-of-band information such as information about the data sources and methods they used.
Also, some data analysis methods work better when erroneous or missing data is left blank (or N/A) rather than imputed or given a default value. This is true when there are explicit representations of uncertainty and ignorance, such as Dempster-Shafer belief functions.
Finally, it's useful to have specific diagnostics and metrics for the cleaning process. Are missing or erroneous values randomly distributed, or are they concentrated in any way that might affect the outcome of the analysis? It's useful to test the effects of alternative cleaning strategies or algorithms to see if they affect the final results.
Given these concerns, I'm very suspicious of any method or process that treats data cleaning in a superficial, cavalier or fully automated fashion. There are many devils hiding in those details and it pays to give them serious attention.
| null | CC BY-SA 3.0 | null | 2014-06-11T00:32:54.887 | 2014-06-11T00:32:54.887 | null | null | 609 | null |
302 | 2 | null | 280 | 3 | null | I've seen a few similar systems over the years. I remember a company called ClickTrax which if I'm not mistaken got bought by Google and some of their features are now part of Google Analytics.
Their purpose was marketing, but the same concept can be applied to user experience analytics. The beauty of their system was that what was tracked was defined by the webmaster - in your case the application developer.
I can imagine as an application developer I would want to be able to see statistical data on two things - task accomplishment, and general feature usage.
As an example of task accomplishment, I might have 3 ways to print a page - Ctrl+P, File->Print, and a toolbar button. I would want to be able to compare usage to see if the screenspace utilized by the toolbar button was actually worth it.
As an example of general feature usage, I would want to define a set of features within my application and focus my development efforts on expanding the features used most by my end users. For some popular features that take maybe 5 clicks, I might want to provide a hotkey, or slim down the number of clicks needed to activate them. There is also event timing. Depending on the application, I might want to know the average amount of time spent on a particular feature.
Another thing I would want to look at are click streams. How are people getting from point A to point B in my application? What are the most popular point B's? What are the most popular starting points?
| null | CC BY-SA 3.0 | null | 2014-06-11T00:44:20.517 | 2014-06-11T00:44:20.517 | null | null | 434 | null |
303 | 2 | null | 224 | 3 | null | This isn't my area of specialty and I'm not familiar with Moses, but I found this after some searching.
I think you are looking for GIZA++. You'll see GIZA++ listed in the "Training" section (left menu) on the Moses home page, as the second step. GIZA++ is briefly described in tutorial fashion [here](https://stackoverflow.com/questions/5752043/is-there-a-tutorial-about-giza). Here are a few tutorial PowerPoint slides: [http://www.tc.umn.edu/~bthomson/wordalignment/GIZA.ppt](http://www.tc.umn.edu/~bthomson/wordalignment/GIZA.ppt)
| null | CC BY-SA 3.0 | null | 2014-06-11T01:09:06.100 | 2014-06-11T01:09:06.100 | 2017-05-23T12:38:53.587 | -1 | 609 | null |
305 | 1 | 309 | null | 12 | 3082 | There is plenty of hype surrounding Hadoop and its eco-system. However, in practice, where many data sets are in the terabyte range, is it not more reasonable to use [Amazon RedShift](http://aws.amazon.com/redshift/) for querying large data sets, rather than spending time and effort building a Hadoop cluster?
Also, how does Amazon Redshift compare with Hadoop with respect to setup complexity, cost, and performance?
| Does Amazon RedShift replace Hadoop for ~1XTB data? | CC BY-SA 3.0 | null | 2014-06-11T04:24:04.183 | 2015-01-28T18:42:06.763 | 2014-06-11T15:02:46.890 | 434 | 534 | [
"apache-hadoop",
"map-reduce",
"aws"
] |
306 | 2 | null | 305 | 3 | null | Personally, I don't think it's all that difficult to set up a hadoop cluster, but I know that it is sometimes painful when you are getting started.
HDFS size limitations well exceed a TB (or did you mean exabyte?). If I'm not mistaken it scales to yottabytes or some other measurement that I don't even know the word for. Whatever it is, it's really big.
Tools like Redshift have their place, but I always worry about vendor-specific solutions. My main concern is always "what do I do when I am dissatisfied with their service?" - I can go to Google and shift my analysis work into their paradigm, or I can go to Hadoop and shift that same work into that system. Either way, I'm going to have to learn something new and do a lot of work translating things.
That being said, it's nice to be able to upload a dataset and get to work quickly - especially if what I'm doing has a short lifecycle. Amazon has done a good job of answering the data security problem.
If you want to avoid hadoop, there will always be an alternative. But it's not all that difficult to work with once you get going with it.
| null | CC BY-SA 3.0 | null | 2014-06-11T05:17:12.253 | 2014-06-11T05:17:12.253 | null | null | 434 | null |
307 | 1 | null | null | 14 | 3467 | I have read a lot of blogs/articles on how different types of industries are using big data analytics, but most of these articles fail to mention:
- What kind of data these companies used, and what the size of the data was
- What kind of tools/technologies they used to process the data
- What problem they were facing, and how the insight they got from the data helped them to resolve the issue
- How they selected the tool/technology to suit their need
- What kind of patterns they identified from the data, and what kind of patterns they were looking for in the data
I wonder if someone can provide answers to all these questions, or a link which at least answers some of them. I am looking for real-world examples.
It would be great if someone could share how the finance industry is making use of big data analytics.
| Big data case study or use case example | CC BY-SA 3.0 | null | 2014-06-11T06:07:45.767 | 2020-08-16T16:54:32.553 | 2016-08-17T10:41:41.383 | 3151 | 496 | [
"data-mining",
"bigdata",
"usecase"
] |
308 | 2 | null | 307 | 14 | null | News outlets tend to use "Big Data" pretty loosely. Vendors usually provide case studies surrounding their specific products. There aren't a lot out there for open source implementations, but they do get mentioned. For instance, Apache isn't going to spend a lot of time building a case study on hadoop, but vendors like Cloudera and Hortonworks probably will.
Here's an [example case study from Cloudera](http://www.cloudera.com/content/cloudera/en/resources/library/casestudy/joint-success-story-major-retail-bank-case-study-datameer.html) in the finance sector.
Quoting the study:
>
One major global financial services conglomerate uses Cloudera and Datameer to help
identify rogue trading activity. Teams within the firm’s asset management group are
performing ad hoc analysis on daily feeds of price, position, and order information. Having
ad hoc analysis to all of the detailed data allows the group to detect anomalies across
certain asset classes and identify suspicious behavior. Users previously relied solely on
desktop spreadsheet tools. Now, with Datameer and Cloudera, users have a powerful
platform that allows them to sift through more data more quickly and avert potential
losses before they begin.
.
>
A leading retail bank is using Cloudera and Datameer to validate data accuracy and quality
as required by the Dodd-Frank Act and other regulations. Integrating loan and branch data
as well as wealth management data, the bank’s data quality initiative is responsible for
ensuring that every record is accurate. The process includes subjecting the data to over
50 data sanity and quality checks. The results of those checks are trended over time to
ensure that the tolerances for data corruption and data domains aren’t changing adversely
and that the risk profiles being reported to investors and regulatory agencies are prudent
and in compliance with regulatory requirements. The results are reported through a data
quality dashboard to the Chief Risk Officer and Chief Financial Officer, who are ultimately
responsible for ensuring the accuracy of regulatory compliance reporting as well as
earnings forecasts to investors
I didn't see any other finance related studies at Cloudera, but I didn't search very hard. You can have a look at [their library](http://www.cloudera.com/content/cloudera/en/resources/library.html?q=bank) here.
Also, Hortonworks has a [case study on Trading Strategies](http://hortonworks.com/blog/building-stock-trading-strategies-20-faster-with-hadoop/) where they saw a 20% decrease in the time it took to develop a strategy by leveraging K-means, Hadoop, and R.
![Each color indicates a group of strategies with similar probability of a profit and loss](https://i.stack.imgur.com/qX3Qy.png)
![how the trading system was improved by using Hadoop (Hortonworks Data Platform), and the k-means algorithm](https://i.stack.imgur.com/Sj8U9.png)
These don't answer all of your questions. I'm pretty sure both of these studies covered most of them. I don't see anything about tool selection specifically. I imagine sales reps had a lot to do with getting the overall product in the door, but the data scientists themselves leveraged the tools they were most comfortable with. I don't have a lot of insight into that area in the big data space.
| null | CC BY-SA 3.0 | null | 2014-06-11T06:49:04.070 | 2014-06-11T06:54:42.593 | 2014-06-11T06:54:42.593 | 434 | 434 | null |
309 | 2 | null | 305 | 12 | null | tl;dr: They differ markedly in many aspects, and I don't think Redshift will replace Hadoop.
-Function
You can't run anything other than SQL on Redshift. Perhaps most importantly, you can't run any type of custom functions on Redshift. In Hadoop you can, using many languages (Java, Python, Ruby.. you name it). For example, NLP in Hadoop is easy, while it's more or less impossible in Redshift. I.e. there are lots of things you can do in Hadoop but not on Redshift. This is probably the most important difference.
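To illustrate the point about custom code: a word count needs only a few lines of Python when run through Hadoop Streaming. This is a hypothetical sketch (the script names and the exact streaming invocation depend on your cluster), and nothing comparable is possible through Redshift's SQL-only interface:

```python
#!/usr/bin/env python
# mapper.py -- a hypothetical Hadoop Streaming mapper: emits (word, 1) pairs.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        # Hadoop Streaming expects tab-separated key/value pairs on stdout.
        print("%s\t1" % word.lower())
```

```python
#!/usr/bin/env python
# reducer.py -- sums the counts for each word (input arrives sorted by key).
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))
```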
-Performance Profile
Query execution on Redshift is in most cases significantly more efficient than on Hadoop. However, this efficiency comes from the indexing that is done when the data is loaded into Redshift (I'm using the term `indexing` very loosely here). Therefore, it's great if you load your data once and execute multiple queries, but if you want to execute only one query, for example, you might actually lose out in performance overall.
-Cost Profile
Which solution wins out in cost depends on the situation (like performance), but you probably need quite a lot of queries in order to make it cheaper than Hadoop (more specifically Amazon's Elastic Map Reduce). For example, if you are doing OLAP, it's very likely that Redshift comes out cheaper. If you do daily batch ETLs, Hadoop is more likely to come out cheaper.
Having said that, we've replaced part of our ETL that was done in Hive to Redshift, and it was a pretty great experience; mostly for the ease of development. Redshift's Query Engine is based on PostgreSQL and is very mature, compared to Hive's. Its ACID characteristics make it easier to reason about it, and the quicker response time allows more testing to be done. It's a great tool to have, but it won't replace Hadoop.
EDIT: As for setup complexity, I'd even say it's easier with Hadoop if you use AWS's EMR. Their tools are so mature that it's ridiculously easy to have your Hadoop job running. Tools and mechanisms surrounding Redshift's operation aren't that mature yet. For example, Redshift can't handle trickle loading and thus you have to come up with something that turns that into a batched load, which can add some complexity to your ETL.
| null | CC BY-SA 3.0 | null | 2014-06-11T06:51:19.143 | 2014-06-11T09:07:33.570 | 2014-06-11T09:07:33.570 | 638 | 638 | null |
310 | 1 | null | null | 17 | 2865 | I'm working on improving an existing supervised classifier, for classifying {protein} sequences as belonging to a specific class (Neuropeptide hormone precursors), or not.
There are about 1,150 known "positives", against a background of about 13 million protein sequences ("Unknown/poorly annotated background"), or about 100,000 reviewed, relevant proteins, annotated with a variety of properties (but very few annotated in an explicitly "negative" way).
My previous implementation looked at this as a binary classification problem:
Positive set = Proteins marked as Neuropeptides.
Negative set: Random sampling of 1,300 samples (total) from among the remaining proteins of a roughly similar length-wise distribution.
That worked, but I want to greatly improve the machine's discriminatory abilities (currently it's at about 83-86% in terms of accuracy, AUC, F1, measured by CV on multiple randomly sampled negative sets).
My thoughts were to:
1) Make this a multiclass problem, choosing 2-3 different classes of protein that will definitely be negatives, by their properties/functional class, along with (maybe) another randomly sampled set.
(Priority here would be negative sets that are similar in their characteristics/features to the positive set, while still having defining characteristics).
2) One class learning - Would be nice, but as I understand it, it's meant just for anomaly detection, and has poorer performance than discriminatory approaches.
*) I've heard of P-U learning, which sounds neat, but I'm a programming N00b, and I don't know of any existing implementations for it. (In Python/sci-kit learn).
So, does approach 1 make sense from a theoretical POV? Is there a best way to make multiple negative sets? (I could also simply use a massive [50K] pick of the "negative" proteins, but they're all very, very different from each other, so I don't know how well the classifier would handle them as one big, unbalanced mix).
Thanks!
| One-Class discriminatory classification with imbalanced, heterogenous Negative background? | CC BY-SA 3.0 | null | 2014-06-11T10:11:59.397 | 2020-08-16T13:00:45.530 | null | null | 555 | [
"machine-learning",
"data-mining",
"python",
"classification"
] |
311 | 2 | null | 310 | 5 | null | The way I would attack the problem, in general, is to leverage statistical analysis like Principal Component Analysis or Ordinary Least Squares to help determine what attributes within these protein sequences are best suited to classify proteins as Neuropeptide hormone precursors.
In order to do that, you'll have to convert the protein sequences into numeric data, but I believe some work has already been done in that regard using formulas leveraged in Amino Acid PCA.
See these two links:
- Principal components analysis of protein sequence clusters
- Amino Acid Principal Component Analysis (AAPCA) and its applications in protein structural class prediction
Once that work has been done, I would attempt to classify using the entire dataset and a probabilistic supervised learning algorithm like [Naive Bayes](http://findingscience.com/ankusa/hbase/hadoop/ruby/2010/12/02/naive-bayes-classification-in-ruby-using-hadoop-and-hbase.html), while slimming the data down to the attributes which PCA has identified as important.
The reason I would try to use Bayes is that it has proven to be one of the best methods for determining spam vs. regular email, which has a similarly skewed dataset.
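A minimal, hypothetical scikit-learn sketch of that PCA-then-Naive-Bayes idea, assuming the sequences have already been converted into a numeric feature matrix `X` with binary labels `y` (the random data below is only a placeholder for such features):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for numerically encoded protein sequences:
# 2000 sequences, 50 numeric features, ~10% positives.
rng = np.random.RandomState(0)
X = rng.randn(2000, 50)
y = (rng.rand(2000) < 0.1).astype(int)

# Reduce to the leading principal components, then classify with Naive Bayes.
model = make_pipeline(PCA(n_components=10), GaussianNB())

# ROC AUC is more informative than accuracy on a skewed class distribution.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Mean AUC: %.3f" % scores.mean())
```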
Having said all of that...
Slimming down the number or type of negative classifications might skew your results a few points one way or the other, but I don't think you'll see the long term effectiveness change substantially until you do the leg work of determining how to best remove the fuzziness from your training data. That will either require a field expert or statistical analysis.
I could be completely off base. I am interested in seeing some other answers, but that is my 2 cents.
| null | CC BY-SA 4.0 | null | 2014-06-11T11:24:19.963 | 2020-08-16T13:00:45.530 | 2020-08-16T13:00:45.530 | 50406 | 434 | null |
312 | 2 | null | 215 | 5 | null | Therriault, really happy to hear you are using Vertica! Full disclosure, I am the chief data scientist there :) . The workflow you describe is exactly what I encounter quite frequently and I am a true believer in preprocessing those very large datasets in the database prior to any pyODBC and pandas work. I'd suggest creating a view or table via a file based query just to ensure reproducible work. Good Luck
| null | CC BY-SA 3.0 | null | 2014-06-11T11:32:13.433 | 2014-06-11T11:32:13.433 | null | null | 655 | null |
313 | 1 | null | null | 29 | 3323 | What are the books about the science and mathematics behind data science? It feels like so many "data science" books are programming tutorials and don't touch things like data generating processes and statistical inference. I can already code, what I am weak on is the math/stats/theory behind what I am doing.
If I am ready to burn $1000 on books (so around 10 books... sigh), what could I buy?
Examples: Agresti's [Categorical Data Analysis](http://rads.stackoverflow.com/amzn/click/0470463635), [Linear Mixed Models for Longitudinal Data](http://rads.stackoverflow.com/amzn/click/1441902996), etc... etc...
| Books about the "Science" in Data Science? | CC BY-SA 3.0 | null | 2014-06-11T13:28:35.980 | 2016-02-21T04:02:38.847 | 2016-02-21T04:02:38.847 | 11097 | 663 | [
"statistics",
"reference-request"
] |
314 | 2 | null | 313 | 14 | null | If I could only recommend one to you, it would be: [The Elements of Statistical Learning](http://rads.stackoverflow.com/amzn/click/0387848576) by Hastie, Tibshirani and Friedman. It provides the math/statistics behind a lot of commonly used techniques in data science.
For Bayesian Techniques, [Bayesian Data Analysis](http://rads.stackoverflow.com/amzn/click/1439840954) by Gelman, Carlin, Stern, Dunson, Vehtari and Rubin is excellent.
[Statistical Inference](http://rads.stackoverflow.com/amzn/click/0534243126) by Casella and Berger is a good graduate-level textbook on the theoretical foundation of statistics. This book does require a pretty high level of comfort with math (probability theory is based on measure theory, which is not trivial to understand).
With respect to data generating processes, I don't have a recommendation for a book. What I can say is that a good understanding of the assumptions of the techniques used and ensuring that the data was collected or generated in a manner that does not violate those assumptions goes a long way towards a good analysis.
| null | CC BY-SA 3.0 | null | 2014-06-11T13:49:08.970 | 2014-06-11T13:49:08.970 | null | null | 178 | null |
316 | 2 | null | 59 | 10 | null | As Konstantin has pointed out, R performs all its computation in the system's memory, i.e. RAM. Hence, RAM capacity is a very important constraint for computation-intensive operations in R. To overcome this constraint, data is these days stored in HDFS systems, where the data isn't loaded into memory; instead, the program goes to the data and performs the operations there, thus overcoming the memory constraint. RHadoop ([https://github.com/RevolutionAnalytics/RHadoop/wiki](https://github.com/RevolutionAnalytics/RHadoop/wiki)) is the connector you are looking for.
Coming to the impact on computation-intensive algorithms: Random Forests/Decision Trees/Ensemble methods on a considerable amount of data (minimum 50,000 observations in my experience) take up a lot of memory and are considerably slow. To speed up the process, parallelization is the way to go, and parallelization is inherently available in Hadoop! That's where Hadoop is really efficient.
So, if you are going for ensemble methods which are compute-intensive and slow, you would want to try them out on the HDFS system, which gives a considerable performance improvement.
| null | CC BY-SA 3.0 | null | 2014-06-11T16:25:34.747 | 2014-06-11T16:25:34.747 | null | null | 514 | null |
317 | 2 | null | 266 | 2 | null | If you already know R Studio, then the caret package is a good place to start. Here are some tutorials:
- https://class.coursera.org/predmachlearn-002
- http://caret.r-forge.r-project.org/index.html
With R and caret you can easily load and splice data sets, perform feature reduction and principal component analysis, and train and predict using various algorithms.
| null | CC BY-SA 3.0 | null | 2014-06-11T16:56:59.660 | 2014-06-11T16:56:59.660 | null | null | 680 | null |
318 | 2 | null | 266 | 2 | null | If you can reproduce the 6x3 grid of graphs from the banner of the [http://scikit-learn.org/](http://scikit-learn.org/) page then you will have learnt some ML and some Python. You didn't mention a language. Python is easy enough to learn very quickly, and scikit-learn has a wide range of algorithms implemented.
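As a warm-up, here is a small, hypothetical scikit-learn sketch in the spirit of that banner, fitting two different classifiers on a built-in toy dataset:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# A two-class toy dataset similar to those used in the scikit-learn banner.
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit two very different classifiers and compare their test accuracy.
for clf in (SVC(kernel="rbf"), DecisionTreeClassifier(max_depth=5)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```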
Then try on your own data!
| null | CC BY-SA 3.0 | null | 2014-06-11T17:24:53.610 | 2014-06-11T17:24:53.610 | null | null | 471 | null |
319 | 1 | null | null | 10 | 2177 | I've built an artificial neural network in python using the scipy.optimize.minimize (Conjugate gradient) optimization function.
I've implemented gradient checking, double checked everything etc and I'm pretty certain it's working correctly.
I've run it a few times and it reaches 'Optimization terminated successfully'; however, when I increase the number of hidden layers, the cost of the hypothesis after successful termination increases (everything else is kept the same).
Intuitively it feels as if the cost should decrease when the number of hidden layers is increased, since the network can represent a more complex hypothesis which can fit the data better; however, this appears not to be the case.
I'd be interested to understand what's going on here, or whether I've implemented the neural net incorrectly.
| Debugging Neural Networks | CC BY-SA 3.0 | null | 2014-06-11T18:22:36.267 | 2014-06-12T18:08:07.507 | 2014-06-12T16:43:59.513 | 381 | 691 | [
"machine-learning",
"python",
"neural-network"
] |
320 | 2 | null | 319 | 9 | null | There are so many ways to go wrong with a neural net that it's going to be difficult to debug. Also, to address your intuition, each additional hidden layer makes learning much harder. With that said, here are some possibilities:
- You have added weight decay. Adding more layers adds more weights which increases your regularization cost.
- The problem is simple enough that a model with a single hidden layer is sufficient. Adding more hidden layers makes it harder for the network to learn (harder optimization problem).
- The optimization method is not doing a great job (I prefer climin to scipy.optimize).
- You are using the sigmoid/tanh activation function. The sigmoid function causes the vanishing gradient problem, which makes learning hard with more layers. Try using the ReLU function instead (see the sketch below).
Training neural nets takes a lot of practice, luck, and patience. Good luck.
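To make the vanishing-gradient point concrete, here is a small, hypothetical NumPy sketch comparing the derivative of the sigmoid with that of ReLU; backpropagation through many sigmoid layers multiplies many such small factors together:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1 - s)            # never exceeds 0.25, and vanishes for large |x|

def d_relu(x):
    return (x > 0).astype(float)  # 1 for positive inputs, so gradients pass through

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
print("sigmoid'(x):", d_sigmoid(x))   # ~4.5e-05 at |x| = 10
print("relu'(x):   ", d_relu(x))

# Multiplying many such sub-0.25 factors across layers is one reason deeper
# sigmoid networks can be harder to optimize than ReLU networks.
```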
| null | CC BY-SA 3.0 | null | 2014-06-11T20:34:51.873 | 2014-06-11T20:34:51.873 | null | null | 574 | null |
323 | 1 | null | null | 5 | 9516 | The setup is simple: binary classification using a simple decision tree, where each node of the tree applies a single threshold to a single feature. In general, building a ROC curve requires moving a decision threshold over different values and computing the effect of that change on the true positive rate and the false positive rate of predictions. What's that decision threshold in the case of a simple fixed decision tree?
| How can we calculate AUC for a simple decision tree? | CC BY-SA 3.0 | null | 2014-06-11T23:52:37.823 | 2021-02-18T20:33:30.027 | null | null | 418 | [
"machine-learning"
] |
324 | 1 | null | null | -2 | 619 | I would like to extract news about a company from online news by using the RODBC package in R. I would then like to use the extracted data for sentiment analysis. I want to accomplish this in such a way that the positive news is assigned a value of +1, the negative news is assigned a value of -1, and the neutral news is assigned a value of 0.
| How can I extract news about a particular company from various websites using RODBC package in R? And perform sentiment analysis on the data? | CC BY-SA 4.0 | null | 2014-06-12T03:11:00.033 | 2019-06-11T14:43:33.343 | 2019-06-11T14:43:33.343 | 29169 | 714 | [
"r",
"text-mining",
"sentiment-analysis"
] |
325 | 2 | null | 324 | 2 | null | This isn't a question with a simple answer, so all I can really do is point you in the right direction.
The RODBC package isn't meant to extract data from the web; it's meant to pull data from a database. If you will be leveraging that package, it will be after you pull data down from the web.
Jeffrey Breen put together a [slideshow tutorial](http://jeffreybreen.wordpress.com/2011/07/04/twitter-text-mining-r-slides/) for doing sentiment analysis with Twitter data a few years back. He used the Twitter stream as well as some data pulled in from web scraping. It's a good starting point.
There's also this site that discusses a few different approaches to this problem in detail, including Breen's, the sentiment package, and ViralHeat (which is a commercial sentiment analysis service whose data you can pull into R). Sentiment has since been removed ([archived versions here](http://cran.us.r-project.org/src/contrib/Archive/sentiment/)), but the [qdap package](http://cran.us.r-project.org/web/packages/qdap/index.html) is available and is designed for use in transcript analysis.
| null | CC BY-SA 3.0 | null | 2014-06-12T05:17:46.453 | 2014-06-12T05:17:46.453 | null | null | 434 | null |
326 | 1 | null | null | 127 | 119318 | I'm just starting to develop a [machine learning](https://en.wikipedia.org/wiki/Machine_learning) application for academic purposes. I'm currently using R and training myself in it. However, in a lot of places, I have seen people using Python.
What are people using in academia and industry, and what is the recommendation?
| Python vs R for machine learning | CC BY-SA 4.0 | null | 2014-06-12T06:04:48.243 | 2022-07-11T21:50:49.357 | 2019-06-10T15:56:58.013 | 29169 | 721 | [
"machine-learning",
"r",
"python"
] |
327 | 2 | null | 326 | 26 | null | There is nothing like "Python is better" or "R is much better than X".
The only fact I know is that in industry a lot of people stick to Python because that is what they learned at university. The Python community is really active and has a few great frameworks for ML, data mining, etc.
But to be honest, if you get a good C programmer, they can do the same things people do in Python or R, and a good Java programmer can also do (nearly) everything in Java.
So just stick with the language you are comfortable with.
| null | CC BY-SA 3.0 | null | 2014-06-12T07:05:05.653 | 2014-06-12T07:05:05.653 | null | null | 115 | null |
328 | 2 | null | 326 | 7 | null | There is no "better" language. I have tried both of them and I am comfortable with Python, so I work with Python only. I am still learning, but I haven't encountered any roadblocks with Python so far. The good thing about Python is that the community is very good and you can get a lot of help on the Internet easily. Other than that, I would say go with the language you like, not the one people recommend.
| null | CC BY-SA 3.0 | null | 2014-06-12T08:30:49.757 | 2014-06-12T08:30:49.757 | null | null | 456 | null |
330 | 2 | null | 235 | 4 | null | You can also check out the seaborn package for statistical charts.
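For example, a minimal sketch (assuming seaborn and matplotlib are installed) using one of seaborn's bundled example datasets:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Load one of seaborn's bundled example datasets.
tips = sns.load_dataset("tips")

# A statistical chart in one line: distribution of total bill per day.
sns.boxplot(x="day", y="total_bill", data=tips)
plt.show()
```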
| null | CC BY-SA 3.0 | null | 2014-06-12T09:57:39.890 | 2014-06-12T09:57:39.890 | null | null | 729 | null |
331 | 2 | null | 326 | 18 | null | Some additional thoughts.
The programming language 'per se' is only a tool. All languages were designed to make some types of constructs easier to build than others. And the knowledge and mastery of a programming language is more important and effective than the features of that language compared to others.
As far as I can see there are two dimensions to this question. The first dimension is the ability to explore, build proofs of concept or models at a fast pace, eventually having at hand enough tools to study what is going on (like statistical tests, graphics, measurement tools, etc). This kind of activity is usually preferred by researchers and data scientists (I always wonder what that means, but I use this term for its loose definition). They tend to rely on well-known and verified instruments, which can be used for proofs or arguments.
The second dimension is the ability to extend, change, improve or even create tools, algorithms or models. In order to achieve that you need a proper programming language. Roughly all of them are the same. If you work for a company, then you depend a lot on the company's infrastructure and internal culture, and your choices diminish significantly. Also, when you want to implement an algorithm for production use, you have to trust the implementation. And implementing in another language which you do not master will not help you much.
For the first type of activity, I tend to favor the R ecosystem. You have a great community, a huge set of tools, and proof that these tools work as expected. Also, you can consider Python or Octave (to name a few), which are reliable candidates.
For the second task, you have to think first about what you really want. If you want robust, production-ready tools, then C/C++, Java and C# are great candidates. I consider Python a second-class citizen in this category, together with Scala and friends. I do not want to start a flame war; it's my opinion only. But after more than 17 years as a developer, I tend to prefer a strict contract and my own knowledge over the freedom to do whatever you might think of (as happens with a lot of dynamic languages).
Personally, I want to learn as much as possible. I decided that I have to choose the hard way, which means implementing everything from scratch myself. I use R as a model and inspiration. It has great treasures in its libraries and a lot of distilled experience. However, R as a programming language is a nightmare for me. So I decided to use Java, with no additional libraries. That is only because of my experience, and nothing else.
If you have time, the best thing you can do is to spend some time with all these things. In this way you will earn for yourself the best possible answer, fitted to you. Dijkstra once said that the tools influence the way you think, so it is advisable to know your tools before letting them model how you think. You can read more about that in his famous paper called [The Humble Programmer](http://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html)
| null | CC BY-SA 4.0 | null | 2014-06-12T10:09:23.887 | 2018-11-14T18:58:00.230 | 2018-11-14T18:58:00.230 | 62609 | 108 | null |
332 | 2 | null | 323 | 3 | null | In order to build the ROC curve and AUC (area under the curve) you have to have a binary classifier which provides you, at classification time, with a distribution (or at least a score), not just the classification label. To give you an example, suppose you have a binary classification model with classes $c1$ and $c2$. For a given instance, your classifier would have to return a score for $c1$ and another for $c2$. If this score is probability-like (preferable), then something like $p(c1)$, $p(c2)$ would work. In plain English this translates to "I (the model) classify this instance as $c1$ with probability $p(c1)$, and as $c2$ with probability $p(c2)=1-p(c1)$".
This applies to all types of classifiers, not only decision trees. Having these scores you can then compute ROC or AUC by varying a threshold on the $p(c1)$ values, from the smallest to the greatest value.
Now, if you have an implementation of a decision tree and you want to change that implementation to return scores instead of labels, you have to compute those values. The most common way for decision trees is to use the proportion of classes in the leaf nodes. So, for example, suppose you have built a decision tree and, when you predict the class for an instance, you arrive at a leaf node which has (stored from the learning phase) 10 instances of class $c1$ and 15 instances of class $c2$; you can then use the ratios as the scores. So, in our example, you would return $p(c1) = 10 / (10+15) = 0.4$ probability of class $c1$ and $p(c2) = 15/(10+15)=0.6$ probability of being class $c2$.
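For instance, scikit-learn's decision tree exposes exactly these leaf-proportion scores through `predict_proba`, and they can be fed straight into an ROC/AUC computation; a small, hypothetical sketch assuming scikit-learn is available:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, roc_curve

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Class proportions in the leaf each test instance falls into, i.e. p(c2).
scores = tree.predict_proba(X_test)[:, 1]

fpr, tpr, thresholds = roc_curve(y_test, scores)   # one point per distinct leaf score
print("AUC:", roc_auc_score(y_test, scores))
```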
For further reading on ROC curves, the best and most inspiring source of information I have found is the Tom Fawcett paper called [An Introduction to ROC Analysis](https://www.google.ie/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCsQFjAA&url=https://ccrma.stanford.edu/workshops/mir2009/references/ROCintro.pdf&ei=DYCZU96QGPOV7AaAx4CgBA&usg=AFQjCNECLoecin6ieT-0ymLQ--FoMjkMZw&sig2=3bGE2y1N3I2Re_YWn0EDFA); it's solid gold on this topic.
| null | CC BY-SA 4.0 | null | 2014-06-12T10:27:14.480 | 2021-02-18T20:33:30.027 | 2021-02-18T20:33:30.027 | 29169 | 108 | null |
334 | 1 | 338 | null | 37 | 19856 | I've now seen two data science certification programs - the [Johns Hopkins one available at Coursera](https://www.coursera.org/specialization/jhudatascience/1?utm_medium=listingPage) and the [Cloudera one](http://cloudera.com/content/cloudera/en/training/certification/ccp-ds.html).
I'm sure there are others out there.
The Johns Hopkins set of classes is focused on R as a toolset, but covers a range of topics:
- R Programming
- cleaning and obtaining data
- Data Analysis
- Reproducible Research
- Statistical Inference
- Regression Models
- Machine Learning
- Developing Data Products
- And what looks to be a Project based completion task similar to Cloudera's Data Science Challenge
The Cloudera program looks thin on the surface, but looks to answer the two important questions - "Do you know the tools", "Can you apply the tools in the real world". Their program consists of:
- Introduction to Data Science
- Data Science Essentials Exam
- Data Science Challenge (a real world data science project scenario)
I am not looking for a recommendation on a program or a quality comparison.
I am curious about other certifications out there, the topics they cover, and how seriously DS certifications are viewed at this point by the community.
EDIT: These are all great answers. I'm choosing the correct answer by votes.
| What do you think of Data Science certifications? | CC BY-SA 3.0 | null | 2014-06-12T10:52:03.410 | 2019-07-11T07:02:42.803 | 2014-06-13T11:35:51.697 | 434 | 434 | [
"education"
] |
335 | 2 | null | 334 | 12 | null | The certification programs you mentioned are really entry-level courses. Personally, I think these certificates show only a person's persistence, and they can only be useful to those who are applying for internships, not real data science jobs.
| null | CC BY-SA 3.0 | null | 2014-06-12T11:11:35.600 | 2014-06-12T11:11:35.600 | null | null | 478 | null |
336 | 2 | null | 326 | 13 | null | There isn't a silver-bullet language that can be used to solve each and every data-related problem. The language choice depends on the context of the problem and the size of the data, and if you are working at a company you have to stick to what they use.
Personally I use R more often than Python due to its visualization libraries and interactive style. But if I need more performance or more structured code I definitely use Python, since it has some of the best libraries, such as scikit-learn, NumPy, SciPy, etc. I use both R and Python in my projects interchangeably.
So if you are starting on data science work I suggest you learn both, and it's not difficult since Python also provides a similar interface to R with [Pandas](http://pandas.pydata.org/).
If you have to deal with much larger datasets, you can't escape ecosystems built with Java (Hadoop, Pig, HBase, etc.).
| null | CC BY-SA 3.0 | null | 2014-06-12T11:30:20.943 | 2014-06-12T11:30:20.943 | null | null | 733 | null |
337 | 2 | null | 326 | 15 | null | I would add to what others have said till now. There is no single answer that one language is better than other.
Having said that, R has a better community for data exploration and learning, and it has extensive visualization capabilities. Python, on the other hand, has become better at data handling since the introduction of pandas. Learning and development time is much shorter in Python than in R (R being a lower-level language for this kind of work).
I think it ultimately boils down to the ecosystem you are in and personal preferences. For more details, you can look at this comparison [here](http://www.analyticsvidhya.com/blog/2014/03/sas-vs-vs-python-tool-learn/).
| null | CC BY-SA 3.0 | null | 2014-06-12T11:54:59.140 | 2014-06-12T11:54:59.140 | null | null | 735 | null |
338 | 2 | null | 334 | 13 | null | I did the first 2 courses and I'm planning to do all the others too. If you don't know R, it's a really good program. There are assignments and quizzes every week. Many people find some courses very difficult. You are going to have hard time if you don't have any programming experience (even if they say it's not required).
Just remember... it's not because you can drive a car that you are an F1 pilot ;)
| null | CC BY-SA 3.0 | null | 2014-06-12T12:13:26.940 | 2014-06-12T12:13:26.940 | null | null | 737 | null |
339 | 2 | null | 326 | 110 | null | Some real important differences to consider when you are choosing R or Python over one another:
- Machine Learning has 2 phases: model building and prediction. Typically, model building is performed as a batch process and predictions are done in real time. The model building process is compute-intensive, while prediction happens in a jiffy. Therefore, the performance of an algorithm in Python or R doesn't really affect the turn-around time for the user. Python 1, R 1.
- Production: The real difference between Python and R comes in being production-ready. Python, as such, is a full-fledged programming language and many organisations use it in their production systems. R is statistical programming software favoured by academia, and due to the rise of data science, the availability of libraries and its being open source, industry has started using R as well. Many of these organisations have their production systems in Java, C++, C#, Python, etc. So, ideally, they would like to have the prediction system in the same language to reduce latency and maintenance issues.
Python 2, R 1.
- Libraries: Both the languages have enormous and reliable libraries. R has over 5000 libraries catering to many domains while Python has some incredible packages like Pandas, NumPy, SciPy, Scikit Learn, Matplotlib. Python 3, R 2.
- Development: Both languages are interpreted. Many say that Python is easy to learn; it's almost like reading English (to put it on a lighter note), while R requires more initial studying effort. Also, both of them have good IDEs (Spyder etc. for Python and RStudio for R). Python 4, R 2.
- Speed: R initially had problems with large computations (say, n x n matrix multiplications). But this issue is addressed with the R distribution from Revolution Analytics, which has re-written computation-intensive operations in C, making them blazingly fast. Python, being a high-level language, is relatively slow. Python 4, R 3.
- Visualizations: In data science, we frequently tend to plot data to showcase patterns to users. Therefore, visualisation becomes an important criterion in choosing software, and R completely kills Python in this regard. Thanks to Hadley Wickham for the incredible ggplot2 package. R wins hands down. Python 4, R 4.
- Dealing with Big Data: One of the constraints of R is it stores the data in system memory (RAM). So, RAM capacity becomes a constraint when you are handling Big Data. Python does well, but I would say, as both R and Python have HDFS connectors, leveraging Hadoop infrastructure would give substantial performance improvement. So, Python 5, R 5.
So, both the languages are equally good. Therefore, depending upon your domain and the place you work, you have to smartly choose the right language. The technology world usually prefers using a single language. Business users (marketing analytics, retail analytics) usually go with statistical programming languages like R, since they frequently do quick prototyping and build visualisations (which is faster done in R than Python).
| null | CC BY-SA 3.0 | null | 2014-06-12T12:59:00.663 | 2017-01-17T14:14:47.423 | 2017-01-17T14:14:47.423 | 28021 | 514 | null |
340 | 2 | null | 235 | 5 | null | There's plenty. If you've ever used ggplot2 in R and want to do the same in Python, there is [ggplot](https://pypi.python.org/pypi/ggplot/0.5.9).
If you want to use a similar visualisation grammar (Vega) and go via D3, there is [Vincent](https://github.com/wrobstory/vincent).
Or if you want full-on 3D, there is [Mayavi](http://docs.enthought.com/mayavi/mayavi/).
| null | CC BY-SA 4.0 | null | 2014-06-12T13:38:10.877 | 2019-04-25T18:59:20.977 | 2019-04-25T18:59:20.977 | 29575 | 471 | null |