Columns: id (string, 5-27 chars) | question (string, 19-69.9k chars) | title (string, 1-150 chars) | tags (string, 1-118 chars) | accepted_answer (string, 4-29.9k chars; "null" when no accepted answer)
_unix.58179
Is it correct to say that the only difference between the 'input redirection operator' and the 'pipeline operator' is that '<' redirects standard input from a file, and '|' redirects the output of one program to the input of another?
Understanding output redirection?
bash;shell
Both set the standard input of a command. The difference is that the pipe operator connects one command's standard output to another command's standard input, while the file redirection operator connects a file to a command's standard input.

There is also the use of an anonymous pipe to connect the programs when using the pipe operator, which is not required when redirecting from a file. Another thing to bear in mind is that the pipe operator creates a subshell, whereas I/O redirection does not.
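To make the difference concrete, here is a minimal shell sketch (words.txt is a hypothetical input file):

# Input redirection: the shell opens words.txt and attaches it
# to sort's standard input; only one process runs.
sort < words.txt

# Pipe: the shell creates an anonymous pipe (and a subshell),
# connecting cat's standard output to sort's standard input.
cat words.txt | sort

Both runs give sort the same bytes on standard input; only the plumbing differs.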
_softwareengineering.186790
I recently started at a company and am working on statistical data analysis and data handling (market research). A lot of my tasks were previously done entirely by hand, so I created a few tools to automate much of my work.

My boss asked if I could go ahead and look for other automation possibilities. I certainly have the time and skill, but I don't know how to find processes which would be good candidates to be automated. I could ask co-workers, but most of them don't understand what is easy to automate and what isn't. I believe that there are processes for this sort of thing - but I don't know enough to start Googling.

What comes before requirements discovery and analysis? How do I identify business processes that will be more efficient as automated tasks?
How to identify (business) processes for automation?
requirements;automation;systems analysis;business process
null
_softwareengineering.288231
I understand the structure of binary trees and how to traverse them. However, I am struggling to realize their actual uses and purposes in programs and programming. When I think about 'real life' examples of hierarchical data, they almost certainly have more than 2 children. For example, in a family tree, a mother may often have more than two children.

Are 'binary trees' really only useful to store linearly related data, due to the faster processing times over arrays and lists? Alternatively, do they serve a specific purpose in storing hierarchical data? If so, what examples are there of the application of binary trees? What data is such that a node has at most 2 children?
Do binary trees serve a specific purpose in storing hierarchical data? What is their canonical use?
data structures;binary tree
No, binary trees are not for storing hierarchical data in the sense you're thinking of. The primary use case for n-ary trees, where n is a fixed number, is fast search capability, not a semantic hierarchy.

Remember the old game where one person thinks of a number between 1 and 100, and the other has to guess it in as few guesses as possible, and if you guess wrong, the person thinking of the number has to tell you if you're too high or too low? It gets boring after a while because you quickly figure out that you should always start at 50, then go to 25 or 75, and keep dividing the range to be searched in half with each new guess after that; eventually you can guess any number in at most 7 guesses, guaranteed.

It may not make for a fun game, but that property is what makes binary (and other n-ary) trees useful: you can use them to search a very large data set in a very small amount of time.
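To illustrate the halving property in code, here is a minimal binary search sketch (a Python illustration, not part of the original answer):

def guess(sorted_values, target):
    # Each comparison discards half of the remaining range, so a list
    # of n items needs at most about log2(n) steps (7 for 100 items).
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid       # found it
        elif sorted_values[mid] < target:
            lo = mid + 1     # target is in the upper half
        else:
            hi = mid - 1     # target is in the lower half
    return -1                # not present

print(guess(list(range(1, 101)), 42))

A balanced binary search tree gives the same guarantee while also supporting fast insertion and deletion.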
_softwareengineering.75275
This is more a web design / user experience question, but since programmers.stackexchange.com is more opinion based, I posted this here: does it make sense to have a single tab in a web app? I don't see any reason to have a tab if you can't tab to it from some other tab. My PM is demanding a single tab.

I honestly think my PM is making horrible decisions regarding the design (we have no official designer) and eventually we will lose customers. The design of the application was made in the early 2000's and it shows (HTML tables, lots of inline css, etc). I am afraid that if we don't update it to match this decade's expectations of an application, it will just turn off our potential customers. The application is an enterprise app that we sell to potential customers.

We are a small company - our team for this application is 6 (3 developers, 2 QA (one being the PM), and one of the company's partners). Everyone's opinion matters (so I have been told), but if someone happens to disagree with the PM (who can do no wrong), she gets all huffy... and I am rambling.

So my question remains - what are your thoughts on a single tab?
Does it make sense to have a single tab interface?
web development;user interface
I wouldn't create a single tab unless I knew there would be a second tab soon and I wanted to save myself the trouble of rewriting part of the interface later. Maybe your PM is just thinking ahead?
_cs.70395
I aim to gain more intuition with regard to the role of a hidden layer in a neural network architecture, and further to understand whether my classification problem is a linear one or not.

Here is what I have: I have an input of size 128, which is the output of the FFT of a multi-tone signal. Below you can find one input sample (a vector of 128 features). In my dataset, for each input sample the peaks are randomly located across the 128 frequencies [90-127Hz]. The output is simply the number of peaks in the input sample (FFT); for example, the output/label for the provided sample in the figure is 8. I have a total of 10 labels/classes, ranging from 1 to 10. My dataset consists of 1000 samples/rows.

I chose a fully connected feedforward network with one hidden layer to be trained using my dataset. That is, my network has 128 (+1 bias) units in the input layer, 1 hidden layer, and an output layer with 10 units in it.

To understand the effect of the hidden layer on training and testing, I set up a simulation where I train my network 100 times per number of units in the hidden layer. The number of units in the hidden layer ranges from 0 (no hidden layer) to 40 units.

I used the Neural Network Toolbox in MATLAB to program this. Moreover, I utilized the patternnet command and used the default settings for the configuration of the network: the training function is trainscg, the loss function to minimize error is the cross-entropy loss, and I use the softmax function for the output layer. I configure the network, as well as initialize the weights, before training. And finally, for training the network, I partition my dataset in a 70/15/15 ratio corresponding to train/validation/test data blocks.

To gain better insight, I plotted the misclassification error of the network vs. the number of units in the hidden layer on the testing data block, which you can find below.

Now here are my questions:

a) Given the fact that I have the best performance when the network does not have a hidden layer: is my classification problem linear or non-linear? Are my classes linearly separable or not? (Consider the high dimension of my input, 128.)

b) Referring to Fig. 2: why does the network using one hidden layer with 10 units (or around 10) produce the second-best performance? Is it because I have 10 classes? Moreover, if I had only two classes, would a network with a hidden layer of size two get better classification accuracy, or can't we answer this?

c) What is the role of the hidden layer?

Given the proven power of such techniques in many complex real-world problems, utilizing a neural network for this problem is overkill. As the main goal is to gain deep intuition about the neural network's power in classification, I chose this toy problem.

I would appreciate it if someone could answer the above questions to help me gain better insight regarding choosing a suitable network architecture. I can provide more details in case more clarification is needed.
What is the Effect of Hidden Layer Size?
machine learning;performance
null
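No accepted answer is recorded. Purely as an illustration of the experiment the question describes, here is a sketch in Python with scikit-learn instead of MATLAB's toolbox; the synthetic peak data, the particular layer sizes, and the 70/30 split are assumptions for demonstration only:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the FFT data described above:
# 1000 samples, 128 bins, label = number of randomly placed peaks (1..10).
n, d = 1000, 128
X = np.zeros((n, d))
y = rng.integers(1, 11, size=n)
for i in range(n):
    X[i, rng.choice(d, size=y[i], replace=False)] = 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "No hidden layer" baseline: a plain linear (softmax) classifier.
lin = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear baseline:", lin.score(X_te, y_te))

# Sweep hidden-layer sizes as in the question (sklearn needs >= 1 unit).
for h in (1, 5, 10, 20, 40):
    net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)
    print("hidden size", h, ":", net.score(X_te, y_te))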
_unix.360763
Tried out ProxyJump as suggested by @JensErat. I configured it like this in my ssh config:

Host jump-to-server
HostName server.hostname
ProxyJump [email protected] ubuntu

But it does not work, it just hangs during connection. Do I need gnupg installed on the jump server also?

I have 2 computers running OSX: home and laptop. I also have a number of servers that I need to access; let's refer to them all as server. On home and laptop I have installed gnupg 2.1.20 and I have a yubikey that works on them both. I can connect to a server using the yubikey over ssh. Servers only have regular ssh, no gnupg.

This works great using gnupg and yubikey:

home > server
laptop > server
laptop > home

I have added the following in .bash_profile on home and laptop to make this work:

if [ -f ${HOME}/.gpg-agent-info ]; then
  . ${HOME}/.gpg-agent-info
  export GPG_AGENT_INFO
  export SSH_AUTH_SOCK
fi

I would also like to do this:

laptop > home > server

To do this I have read that I need to open an extra socket, so this is .gnupg/gpg-agent.conf on laptop:

pinentry-program /usr/local/bin/pinentry-mac
extra-socket /Users/deadlock/.gnupg/S.gpg-agent.extra
enable-ssh-support
write-env-file
use-standard-socket
default-cache-ttl 600
max-cache-ttl 7200
allow-preset-passphrase

This is the same on home:

pinentry-program /usr/local/bin/pinentry-mac
enable-ssh-support
write-env-file
use-standard-socket
default-cache-ttl 600
max-cache-ttl 7200

gpg-agent is running on both laptop and home; I have made sure ssh-agent is NOT running. I have configured home like this in ~/.ssh/config:

Host home
HostName 12.34.45.67
Port 22
User jens
ForwardAgent no
RemoteForward /Users/jens/.gnupg/S.gpg-agent /Users/jens/.gnupg/S.gpg-agent.extra

This does not work. On laptop, ssh-add -l lists my keys, but after I ssh to home I cannot ssh further to server. It just hangs, or fails with a message that it could not authenticate.

If possible I would also like to be able to do this:

laptop > server > server

but since server is not running gnupg at all, I don't know if it is possible?
GnuPG 2.1.20 ssh agent forwarding with yubikey on OSX fails
ssh;gpg;gpg agent;gnupg;yubikey
null
_webmaster.91455
I work for an agency and I need to add the agency's analytics Google account to a client's YouTube channel, so the agency account can see the YouTube channel and specifically access its analytics. How do I do that?

I have added the agency account as a manager to the YouTube / Google+ account of the client, but when I try to use Google Apps Script to call the YouTubeAnalytics.Reports.query('channel==clientChannel',....) method, I receive a Forbidden error, even though when I click through to the client channel I can look at the analytics page.
Add Google Account to YouTube Analytics Access
analytics;youtube;google plus
null
_cs.67149
I've been stuck on this for a while now; I've tried reading the related topics on cs.stackexchange as well as the textbook and YouTube videos.

Suppose we have an 8KB direct-mapped data cache with 64-byte blocks.

Offset $= \log_2(64) = 6$ bits
Number of blocks $= 8\text{K} / 64 = 128$
Index $= \log_2(128) = 7$ bits
Tag $= 32 - 13 = 19$ bits

Then how do I tell whether I have a hit or a miss given an address? Is it (block address) modulo (number of blocks in the cache), where block address = byte address / bytes per block? For example, if (block address) mod (number of blocks in cache) = 2, then does that mean row 2 of my picture is a hit?
What determines a hit or a miss for direct mapped cache?
computer architecture;cpu cache
null
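No accepted answer is recorded. As an illustration of the address breakdown the question derives (6 offset bits, 7 index bits, 19 tag bits), here is a minimal Python sketch; the example address is made up:

OFFSET_BITS = 6   # log2(64-byte block)
INDEX_BITS = 7    # log2(128 blocks)

def split_address(addr):
    # Decompose a 32-bit byte address into (tag, index, offset).
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# A hit means: the cache line at 'index' is valid AND its stored tag
# equals 'tag'; 'offset' then selects the byte within the block.
print(split_address(0x1ABC))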
_cs.19916
What is the point of non-binary printed letters on a Turing machine? I understand that these need to be omitted to get a computable number, but why are they used in the first place?
Why have additional symbols on a Turing machine?
computability;turing machines
Turing Machines are an abstract model of computation; their purpose is to define in a mathematical way which problems are theoretically computable and which are not. It is true that a binary alphabet is enough to reach the full expressive power of Turing Machines. However, when playing with them in practice to prove theorems and to communicate these proofs, it can be more convenient to use more symbols, instead of encoding everything in binary. We could always go back to binary if it's absolutely needed.

An extreme example of the same sort would be: why are we still using 26 characters to write texts, now that we know that two are enough? It would save time when learning the alphabet!
_unix.355979
Hi guys :) I am using a BeagleBone Black with Debian Wheezy to make a project. I have a small problem.

I have my index.php in /var/www; from there I call a Python file called send_email.py using ajax:

$.ajax({ url: '/cgi-bin/send_email.py' });

It works properly (it sends me an email and I receive it). But when I try to do the same thing with send_sms.py, which has the following code inside:

import nexmo

client = nexmo.Client(key='XXXXX', secret='XXXXXXXXX')
client.send_message({'from': 'Nexmo number', 'to': 'My own number', 'text': 'Hello World'})

When I run it from the terminal using python send_sms.py, it works properly, but when I call it using ajax it does not. I am confused, as I thought that calling any .py file in cgi-bin by using ajax would execute it (and it works for my send_email.py), but with send_sms.py it does not.

Thank you for your help, it is appreciated!
Unable to run a .py file in cgi-bin by using ajax call from my .php file
python;webserver;cgi;javascript;sms
null
_unix.79966
I cannot boot the operating system on my laptop. I have three versions of the kernel installed and none of them will boot. Booting into Windows (installed in a separate partition) still works, so I suspect the hardware is not at fault. I may have tried to update drivers before the problem occurred, so that could be the cause. I have also tried resetting the BIOS, to no effect. I am using GRUB v1.99.

Selecting Fedora (3.6.11-1.fc16.x86_64) from the GRUB menu, the following is displayed:

Fedora (3.6.11-1.fc16.x86_64)
Loading initial ramdisk ...

Then I get the normal splash screen. But then it returns to the black screen with just the above two lines being displayed and hangs indefinitely. Enabling verbose mode yields the following:

Fedora (3.6.11-1.fc16.x86_64)
Loading initial ramdisk ...
... (many lines - can transcribe if relevant) ...
Started Machine Check Exception Logging Daemon [OK]
Started Install ABRT coredump hook [OK]
Started Console Mouse manager [OK]
Started irqbalance daemon [OK]
Started SSH server keys generation [OK]
Started Kernel Samepage Merging [OK]
Started Harvest vmcores for ABRT [OK]
Started ACPI Event Daemon [OK]
Started Display Manager [OK]
_ (hangs here)

No obvious errors are displayed - it just stops. The grub config looks like:

setparams 'Fedora (3.6.11-1.fc16.x86_64)'
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod ext2
set root='(hd0,msdos2)'
search --no-floppy --fs-uuid --set=root dd61afbf-2b76-44ab-b2ca-0e65f0664425
echo 'Loading Fedora (3.6.11-1.fc16.x86_64)'
linux /boot/vmlinuz-3.6.11-1.fc16.x86_64 root=UUID=dd61afbf-2b76-44ab-b2ca-0e65f0664425 ro rd.md=0 rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrhebsun16 rhgb KEYTABLE=uk rd.luks=0 LANG=en_US.UTF-8
echo 'Loading initial ramdisk ...'
initrd /boot/initramfs-3.6.11-1.fc16.x86_64.img

/boot is in /dev/sda2; using a livedisk to inspect the directory yields the following:

cd /mnt/sda2/boot
ls -ld *
-rw-r--r--. 1 root root   119880 2012-08-15 17:01 config-3.4.9-1.fc16.x86_64
-rw-r--r--. 1 root root   122870 2012-12-17 16:33 config-3.6.11-1.fc16.x86_64
-rw-r--r--. 1 root root   122897 2012-10-31 23:53 config-3.6.5-2.fc16.x86_64
drwxr-xr-x. 2 root root     4096 2013-02-02 13:44 extlinux
drwxr-xr-x. 2 root root     4096 2011-12-03 11:22 grub
drwxr-xr-x. 3 root root     4096 2013-01-21 03:37 grub2
-rw-r--r--. 1 root root 17757091 2012-08-31 05:50 initramfs-3.4.9-1.fc16.x86_64.img
-rw-------. 1 root root 18065462 2013-01-21 03:37 initramfs-3.6.11-1.fc16.x86_64.img
-rw-------. 1 root root 18052180 2012-11-07 17:15 initramfs-3.6.5-2.fc16.x86_64.img
-rw-r--r--. 1 root root   593313 2012-01-16 17:29 initrd-plymouth.img
-rw-------. 1 root root  2444127 2012-08-15 17:01 System.map-3.4.9-1.fc16.x86_64
-rw-------. 1 root root  2497974 2012-12-17 16:33 System.map-3.6.11-1.fc16.x86_64
-rw-------. 1 root root  2496741 2012-10-31 23:53 System.map-3.6.5-2.fc16.x86_64
-rwxr-xr-x. 1 root root  4728480 2012-08-15 17:01 vmlinuz-3.4.9-1.fc16.x86_64
-rwxr-xr-x. 1 root root  4824784 2012-12-17 16:33 vmlinuz-3.6.11-1.fc16.x86_64
-rwxr-xr-x. 1 root root  4822224 2012-10-31 23:53 vmlinuz-3.6.5-2.fc16.x86_64

I'm not very good at sysadmin tasks, so I apologise if I am being stupid. However, I really cannot figure out what is going wrong - I would be incredibly grateful if anyone can help?
Cannot boot Fedora Linux
fedora;boot;grub
null
_codereview.84001
This is a follow-up to Log probe requests of WiFi devices, focussing on a specific element of the code. How can I indent this code to make it look great and be well formatted?

parser = argparse.ArgumentParser(description='Collect WiFi probe requests')
parser.add_argument('-i', '--interface', default=default_interface, help='the interface used for monitoring')
parser.add_argument('--tshark-path', default=distutils.spawn.find_executable('tshark'), help='path to tshark binary')
parser.add_argument('--ifconfig-path', default=distutils.spawn.find_executable('ifconfig'), help='path to ifconfig')
parser.add_argument('--iwconfig-path', default=distutils.spawn.find_executable('iwconfig'), help='path to iwconfig')
parser.add_argument('-o', '--output', default='-', help='output file (path or - for stdout)')
parser.add_argument('-c', '--channel', default='all', help='channel/s to hop (i.e. 3 or 3,6,9 or 3-14 or all or 0 for current channel')
parser.add_argument('--verbose', action='store_true', help='verbose information')
parser.add_argument('-p', '--only-probes', action='store_true', help='only saves probe data split by newline')
parser.add_argument('--delay', default=5, help='delay between channel change')
args = parser.parse_args()
Argparse implementation
python;beginner;parsing
Python has a style guide, PEP 8, which lays out guidelines for (among other things) code formatting. Your current code is compliant with the style guide, except that the help parameter for --channel means the line is too long. You can easily avoid this by simply breaking the string across multiple lines:

parser.add_argument('-c', '--channel', default='all',
                    help='channel/s to hop (i.e. 3 or 3,6,9 or 3-14 or all or 0 '
                         'for current channel')

Alternatively, you can shorten the message slightly and use one of the other recommended indent styles, which uses less horizontal space:

parser.add_argument(
    '-c', '--channel', default='all',
    help="channel/s to hop (e.g. '3', '3,6,9', '3-14', 'all', '0' (current channel))")
_softwareengineering.194834
I'm working in a small team with up to 5 (web)developers. Because our team is growing frequently and we ran into problems with multiple people working on the same code we decided to set up a VCS.Current SituationCurrently we are working with a central development server (LAMP). So every developer works on the same code base and if the code is tested and ready for our live server we just copy it via ftp. I know this is some sort of an anno 1600 workflow but yeah - it is what it is and also the reason for this question.On the development server our directory structure looks like this:/var/www /Project1 /Project2 /Project3 ...Additionally there are some small non-webapplications - Android/iPhone/Windows 8 etc. apps and some C# tools which also should be included in the VCS.Goal and problemsOur goal is to get a clean setup for a VCS, which works together with an issue tracking software, enables us to work togheter on the same project at the same time without overwriting our codes and simply gives us the advantage of version-control. I think the first question for us is, which technology should we use. Some of us have allready experienced subversion. But because git is some sort of becoming the standard and there are a lot of pro git arguments among the web users we tend to using git.There starts our uncertainty. For using git - a decentralised VCS - it looks like we have to start using seperate development servers on each developer's computer. The problems with that are:Some times we work on different computers, so when we forget to push our code we have a problem.We would have to work with virtual machines because the dev servers should be the same as our live server (This would simply not be enforceable in our environment, believe me it's not possible). The development server usually also served as a tryout or presentation server where non-developers had a look at what's going on.Is there another possible setup with git so that we are able to benefit from the system while still using a single(!) development server? Maybe with different directories for each developer. Or can we still work on the same code base with maybe locking the files we are working on and then commit them to a repository. It's maybe important to say that while it became a factor for us it is still uncommon that multiple developers work on the same part of an application at the same time.
Manage version control with a central development server (LAMP)
web development;version control;git;svn
You're on the right track, here.

"Maybe with different directories for each developer."

Yes, definitely different directories for each developer. The machine you're on is pretty uninteresting. Just ensure that each developer logs in as himself and checks out a copy of the git repo under his own home directory. You'll wind up with several copies of the code living in your filesystem, but hey, disk is free.

It's true that git supports decentralized operation. But you won't be using it that way in your typical workflow. Just make sure that a bare repo is available on some convenient server, and have everyone pull from that. Access it via http, ssh, or even via the filesystem if you like.

You mentioned a tryout area as part of your workflow. Do yourself a favor: hire another developer, named Jenkins, who also checks out your code using git. He's implemented here: http://jenkins-ci.org/ (and runs under http://tomcat.apache.org/download-70.cgi). That way every push to the central repo will immediately make an updated version of your website available under Jenkins' home directory, where you can quickly try it out and run sanity checks. We call this Continuous Integration, and it will definitely improve your workflow.
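A minimal shell sketch of the central bare-repo workflow the answer describes (host and path names are made up for illustration):

# On the central server: create a bare repository once.
git init --bare /srv/git/project.git

# Each developer, in his own home directory:
git clone ssh://devserver/srv/git/project.git ~/project
cd ~/project
# ... edit, then share changes:
git add -A
git commit -m "describe the change"
git push origin master
git pull origin master   # pick up everyone else's work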
_cogsci.1685
Usoh et al. (PDF) applied presence questionnaires (which are usually designed to measure one's feeling of presence in a virtual environment) in both a virtual and a real world environment in a between-subjects design, and found no difference in people's sense of presence between the two environments. While the authors only argue that presence questionnaires are not valid unless they pass a reality check, i.e. participants self-report a higher feeling of presence in the real environment, in my opinion results might have been very different if they had employed a within-subjects design, exposing participants to both environments and thereby arriving at a relative sense-of-presence score for each environment. If this were true, it would imply that such questionnaires are essentially not applicable in between-subjects designs, as people would only be able to reliably rate their sense of presence in a particular environment relative to another environment.

Hence I've recently asked whether it is valid to treat repeated-measures data as both within- and between-subject data and compare the analyses of both to see if there are differences. But my underlying assumption was questioned by some of the answers, and I posted a follow-up question to ask if it ever makes sense for between- and within-analyses of the same data to differ.

By way of a very contrived example, assume I get subjects to rate on a scale of 1 to 10 how much they like an offer of free beer. In one condition, people are offered one litre of beer for free; in the second condition, people are offered two litres of beer for free. If I ran this experiment with a between-subjects design, I would personally assume that there isn't any discernible difference between the two conditions, because - hey, free beer! But as a within-subjects design, I think I stand a reasonable chance of seeing an effect, because more free beer is better than less free beer.

One comment asked for the question to be migrated here, so, to give the whole thing a more cog-sci spin, my intent here is to question my (admittedly intuitive) assumption that people cannot relate their absolute sense of presence to a number from 1-7, but can relate their relative sense of presence compared to another environment. Is there any research other than Usoh's paper on presence in particular, or are there similar phenomena in other measurement instruments?
Are presence questionnaires valid in a between-subjects design?
measurement;survey
The underlying question here is: does the context of a questionnaire affect the answer? I think you already answered the question yourself. Yes, it does. Let's make the comparison even more extreme:

1. Please rate on a scale of 1 (not at all) to 7 (very) how happy you would be if you won 1 million dollars because you were the 1 millionth customer of a supermarket.
2. Please rate on a scale of 1 (not at all) to 7 (very) how happy you would be to win 100 dollars because you were the 1 millionth customer of a supermarket.

Now imagine you had been asked only the second question. Will the answer to the second question be affected by the fact that there was question 1? Very likely.

For an excellent introduction to research about such context effects and other survey-related phenomena, check out "Self-reports: How the questions shape the answers" by Norbert Schwarz (1999). Schwarz summarizes several ways in which pragmatic aspects of questions affect answers in questionnaires. There is an important distinction with regard to how people construct the meaning of such questions. On the one hand, there is the literal meaning of the question, its semantic content. On the other hand, there is also the pragmatic meaning of a question, which involves an assessment of the intention of the person (the researcher) who has asked the question. (What is implied? Also called the implicature.) To infer this pragmatic meaning, people rely on several communication norms or maxims (Grice, 1975). The maxim that is most relevant here is the maxim of relation:

"a maxim of relation enjoins speakers to make their contribution relevant to the aims of the ongoing conversation. In research situations, this maxim licenses the use of contextual information in question interpretation and invites respondents to relate the question to the context of the ongoing exchange." (Schwarz, 1999, p. 94)

So in the context of question 1 (1 million dollars / presence in environment 1 / 2 liters of free beer), the pragmatic meaning of question 2 (100 dollars / presence in environment 2 / 1 liter of free beer) may be interpreted as comparative ("compared to the first option, how much do you like it?") and the answer may be different than when it is asked in isolation.

Another way in which the context may affect the answer is that the implied meaning of the scale anchors changes (what does "very happy" vs. "not at all happy" mean?). An example of research that has investigated such issues is the research on shifting standards in stereotyping (e.g. Biernat & Manis, 1994). For example, if you judge woman X with regard to aggressiveness, she may be rated as highly aggressive in the context of other women (because the stereotype is that women are low in aggressiveness). However, if the question is asked in the context of men, she might be rated lower in terms of aggressiveness (also because of the stereotype that women are less aggressive). The point is that the inferred meaning of the scale anchor "very aggressive" changes based on the context the question is asked in.

References

Biernat, M., & Manis, M. (1994). Shifting standards and stereotype-based judgments. Journal of Personality and Social Psychology, 66, 5-20. doi:10.1037/0022-3514.66.1.5
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Vol. 3. Speech acts (pp. 41-58). New York: Academic Press.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93-105. doi:10.1037/0003-066X.54.2.93
_unix.369185
I want to see the page table that the kernel manages for one of my processes. In my case PID 4680 is mapped to dhclient. So in order to view the page table I tried the following:

sudo cat /proc/4680/pagemap

However this command just hangs on my Ubuntu 14.04 without any output. I have tried waiting 2 minutes and then have to kill it. Is there a better way of doing this?
Viewing pagetable for a process
kernel;virtual memory
null
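No accepted answer is recorded. For illustration, here is a minimal Python sketch of reading a single pagemap entry instead of streaming the whole file; the PID and virtual address are placeholders, and the entry layout (bit 63 = present, bits 0-54 = page frame number) follows the kernel's pagemap documentation:

import struct

PID = 4680            # placeholder PID (dhclient in the question)
VADDR = 0x400000      # placeholder virtual address to look up
PAGE_SIZE = 4096

# pagemap holds one 64-bit entry per virtual page; seek straight to the
# entry for VADDR instead of reading the whole (huge, sparse) file,
# which is part of why a plain cat of it can appear to hang. Root is
# typically required to see real page frame numbers.
with open("/proc/%d/pagemap" % PID, "rb") as f:
    f.seek((VADDR // PAGE_SIZE) * 8)
    (entry,) = struct.unpack("<Q", f.read(8))

present = bool(entry >> 63)        # bit 63: page is present in RAM
pfn = entry & ((1 << 55) - 1)      # bits 0-54: page frame number
print("present:", present, "pfn:", hex(pfn))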
_webmaster.57143
I come from a development background but just recently started learning SEO along with RDFa. I am working on our company's new website where the designer is using our different divisions' logos at the top of their respective pages. There currently are not any H1 tags. I know I need to add an H1, but the way the pages were designed, the logos are where I would usually put an H1 heading, and if I added an H1 with text it would throw off the design. I also know that using images inside H1 isn't good practice, even if I use the alt attribute.

So my question is: if I put H1 tags around the image, then use RDFa, will Google find this acceptable or will this be penalized? Here is my code example:

<div class="twelve columns" typeof="gr:BusinessEntity">
  <h1 property="gr:name" content="Thoughtwire Marketing">
    <img rel="foaf:depiction" src='http://www.thoughtwiremarketing.com/images/TW-marketing-logo.png' class='twmarketing' alt="Thoughtwire Marketing"/>
  </h1>
</div>

I defined my prefixes in the body tag, and when I run my code through the validator at http://rdfa.info/play/ it seems like it would work; I'm just not knowledgeable enough yet with SEO to know for sure.
Using RDFa with an image inside of an H1 tag
seo;semantic web;rdfa
null
_unix.270288
All this while I have been thinking that mkdir can only create directories, but I am surprised to see that it can create files as well under some conditions. I recently started working with cgroups, and when I run the mkdir command under /cgroup, it creates files along with directories.

[abc@master ~]$ which mkdir
/bin/mkdir
[abc@master ~]$ mkdir /cgroup/cpu/group0
[abc@master ~]$ ls /cgroup/cpu/group0/
cgroup.event_control  cpu.cfs_period_us  cpu.rt_period_us   cpu.shares  notify_on_release
cgroup.procs          cpu.cfs_quota_us   cpu.rt_runtime_us  cpu.stat    tasks

How is the mkdir command able to create files along with directories? I can see that this only happens under cgroups. How does the OS distinguish between mkdir under cgroup and mkdir elsewhere? I tried to find the answer online but could not find anything really helpful. Any relevant information will be really appreciated.
mkdir under /cgroup creates files along with directories
linux;linux kernel;cgroups
null
_softwareengineering.198594
Since I discovered the joys of AJAX, I tend to do all my requests to the server using AJAX. Is this a good idea? In your opinion, what should I do in PHP and what should I do in AJAX? I like to do my requests to the server with AJAX because I can use it as a kind of web service.
What should I do in AJAX or PHP?
php;ajax
null
_webmaster.60889
I have an old site, site1.com, but I want to move it to a folder in another site, site2.com. So site1.com/about.htm content should be accessible as site2.com/new/about.htm.

Beyond doing a 301/302, I think I'd need to add the content of robots.txt from site1.com into site2.com's. Similarly, site1.com's sitemap URLs would need to be changed and should appear below site2.com's. Is there a checklist for doing such a migration?
Migrating website to subfolder in another domain
migration;subdirectory
null
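No accepted answer is recorded. As an illustration of the 301 step only (assuming site1.com runs Apache with mod_rewrite; other servers differ):

# .htaccess (or vhost config) on site1.com:
# permanently redirect every path to the same path under site2.com/new/
RewriteEngine On
RewriteRule ^(.*)$ http://site2.com/new/$1 [R=301,L]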
_unix.234331
I know that I can use the seq command to generate a sequence of numbers, such as:

seq 100 999

...and I know I can create a file with the addition of:

seq 100 999 > file.txt

...but what if I wanted to perform a calculation on each number before writing it to a file? I want to basically create a file of numbers which contain the results of:

seq 100 999 x(times) date +%s > file.txt

I know this isn't the way to do it, but I'm curious about how it could be done. Ultimately, the numbers that are created will be serial numbers, which can never (ever) be duplicated. The resulting numbers will actually need to be added to a MySQL database (not a file.txt file) and I will need to add more numbers to said MySQL database on an hourly/daily basis. Any help/suggestions would be appreciated.
Create a file with a list of randomized/serialized numbers
timestamps
Is Perl ok? For every number in the range 100-999, this multiplies that number by the Perl function time() (which is the same as doing date +%s).

perl -E 'for(100..999){say $_ * time();}' > file.txt

1438382172768
1439826331576
1441270490384
1442714649192
etc.
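For comparison, an equivalent plain-shell sketch (an illustrative alternative, not part of the original answer):

# awk multiplies each number from seq by the timestamp captured once.
seq 100 999 | awk -v t="$(date +%s)" '{ print $1 * t }' > file.txt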
_codereview.91622
Printing to stdout is thread-safe in many systems when using printf or std::cout, but not in all systems (Windows!). So I decided to make my own thread-safe and type-safe printing function with some help of C++11 variadic template functions.

#include <iostream>
#include <iomanip>
#include <mutex>

class Mutex : private std::mutex
{
public:
    class Lock : private std::lock_guard<std::mutex>
    {
    public:
        Lock(Mutex& mutex) : std::lock_guard<std::mutex>(mutex) {}
    };
};

class
{
private:
    Mutex m_mutex;

    void Print() {}

    template<typename T, typename... Ts>
    void Print(const T& t, const Ts&... ts)
    {
        std::cout << t << std::flush;
        Print(ts...);
    }

public:
    template<typename T, typename... Ts>
    void operator()(const T& t, const Ts&... ts)
    {
        Mutex::Lock lock(m_mutex);
        Print(t, ts...);
    }
} Print;

template<typename T>
void PrintLine()
{
    Print('\n');
}

template<typename T, typename... Ts>
void PrintLine(const T& t, const Ts&... ts)
{
    Print(t, ts..., '\n');
}

int main()
{
    Print(10, " divided by ", 3, " equals ");
    PrintLine(std::fixed, std::setprecision(5), 10.0/3);
    return 0;
}
Thread-Safe Variadic Printing Function
c++;c++11;thread safety;template;variadic
Mutex

class Mutex : private std::mutex {
public:

Don't see the need for your custom Mutex/Lock class. Just use the standard ones directly - there is no need to add a layer of indirection that future developers need to go and check.

Excessive flushing

Calling flush after each item:

std::cout << t << std::flush;

is probably not a good idea. Let the user of your code decide when to flush - he probably has more of a concept about what is about to be printed and thus when flush needs to be called. If you must do it, then do it after all the items have been printed.

Template recursion

Rather than using a recursive call to print all the items:

Print(ts...);

use a sub class and create a std::initializer_list:

// std::cout << t << std::flush;
// Print(ts...);
// Create a list. This will act more like a loop than recursion.
// Items inside the {} are initialized left to right.
auto printer = { ItemPrinter(ts)... };

Then ItemPrinter just prints the item in the constructor.

template<typename T>
struct ItemPrinter
{
    ItemPrinter(T const& t) { std::cout << t; }
};

As a nice benefit, you will not need the terminating case.

void Print() {}

Naming convention

Print

It is more standard to use an initial capital letter to define a type. An initial lower-case letter denotes an object (which also encompasses functions). Since you have a nameless type with an object called Print, I would have called it streamPrinter, then used your wrapper functions to call that directly.

Declaration

The code works well as-is for a single-file program. But you have problems when using it from header files. Because the class is nameless, you cannot use it in any declarations to mark the object external, and thus you will be getting an instantiation of the object in every compilation unit. This will break your guarantee that it is thread safe, as each has its own mutex. I think your best bet is to put the Print object in its own file, then expose all printing via wrapper functions which can be made external in the header file.

Expansion

Currently your printer is only used for std::cout. Why not expand it so that it can be used for any stream?
_unix.15055
I have been trying to start a small home server, using Ubuntu 10.04 Server edition. The installation process finished, and I got an error from Grub saying that it was out of disk. After a bit of debugging, I created and ran Grub from a CD, but the best I could do was get to a Grub shell, where using the boot command gave the error message error: no loaded kernel.

After more playing around, I decided to try re-installing Ubuntu, and booted it up to find a Grub terminal (not a splash menu, but not recovery mode) telling me that it had an error - no loaded kernel again. The same thing happens when trying to follow instructions on loading an OS from Grub, at the linux /vmlinux root=/dev/sda1 command.

After many searches, all of the information I can find is this:

The error has been reported when upgrading in Ubuntu 9, and can be solved by installing a later version of Grub.
The Grub shell will load without selection if Grub can't find a configuration file.

The first doesn't seem to be applicable, but the second, along with the exact commands that fail, seems to point to the problem being getting info off of the hard drive. The operating system is Ubuntu 10.04.2 Server LTS, running on the internal hard drive of a Compaq Armada m700 (very old, very slow, but I just want a text-based/LAMP server). Any suggestions on how to get the kernel to load, or another solution? Again, I have tried re-installing the OS, booting multiple times, and running Grub off of a CD.
Grub2 Error Loading Kernel
ubuntu;grub2
You can try installing grub at /dev/sda.

For manually loading the kernel, you can try the following:

set root=(hd0,1)
linux /vmlinuz root=/dev/sda1
initrd /initrd.img

Here please note that you need to put in your kernel version. For example, my kernel version is 3.0.0-12 (initrd.img-3.0.0-12-generic & vmlinuz-3.0.0-12-generic). To load this kernel, you have to try the following:

set root=(hd0,1)
linux /vmlinuz-3.0.0-12-generic root=/dev/sda1
initrd /initrd.img-3.0.0-12-generic

You will find your available versions by pressing Tab after typing the linux or initrd command. Another thing is, make sure your root resides on /dev/sda1.

Best luck :)
_unix.264415
I'd like to clone a running Debian machine (ext4, no LVM) to identical hardware, preferably over the network, to create a test box.

Q1) Which tools/scripts are available that allow me to do this? AFAIK clonezilla and gparted need to be started from a separate medium. Would dd or ddrescue work?

Q2) In what ways do I need to adapt the clone so that both will run in the same network segment (my home network)? The original machine has a static DHCP IP via its MAC, so that won't be an issue. I thought of changing the hostname and disabling crontabs. Anything else?
Clone running machine over network
linux;debian;cloning;clonezilla
null
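No accepted answer is recorded. Purely as a sketch of the dd-over-network idea from Q1 (device and host names are placeholders; the source disk should be idle, e.g. booted from a live medium, while copying):

# Stream the whole source disk over ssh to the identical disk
# on the target box.
dd if=/dev/sda bs=64K status=progress | ssh root@testbox 'dd of=/dev/sda bs=64K'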
_reverseengineering.13013
I have the STM32L151's firmware, which I extracted via JTAG, but I cannot find a start point in IDA. I have tried two methods:

1) I start IDA, drag the binary into the workspace, select ARM Little-endian for the processor type, and click OK. The disassembly memory organization window appears; I enter the relevant information found here on page 48 and click OK. A window pops up saying IDA can not identify the entry point..., and in the workspace I see RAM:08000000 DCB [some hex number].

2) I converted the binary to ELF using my ARM toolchain's objcopy, used readelf -h [my binary file] to find the entry point, and got output where the entry point is 0xff810000. I dragged the ELF into IDA's workspace, selected ARM Little-endian processor under processor type, and clicked OK; the workspace shows lines that look like .data:0000002C [several hex values separated by commas].

If I try to jump to the entry point address (0xff810000 from readelf), I get a JumpAsk fail. How do I find my start point so I can start reading the disassembled ARM assembly code?
Reverse Engineer STM32L151's Firmware
ida;firmware;entry point
From the PDF to which you linked:

3.3.4 Boot modes

At startup, boot pins are used to select one of three boot options:

Boot from Flash memory
Boot from System Memory
Boot from embedded RAM

The boot loader is located in System Memory. It is used to reprogram the Flash memory by using USART1 or USART2. See STM32 microcontroller system memory boot mode AN2606 for details.

If we Google for AN2606, we find the documentation for the STM32 microcontroller system memory boot mode, which suggests in the table below that the bootloader begins at memory location 0x1FF00FFE. Additionally, the bootloader configuration table for each chip specifies the address of the bootloader firmware. For example, for STM32L01xxx/02xxx chips, page 174 specifies that the bootloader's firmware is a 4 KB chunk that begins at address 0x1FF00000.
_unix.27661
I recently got a new laptop for work, and I was wondering whether it'd be good practice to keep using the same RSA keypair as I'm using on my old work laptop. I'd really like to not have to create another keypair to keep track of. Is this, generally speaking, an acceptable practice? Since the keypair does have a passphrase, it should be fairly secure, as long as my physical machines are secure, right?
Good practice to use same SSH keypair on multiple machines?
ssh;security
Yes, it is safe as long as it is in safe hands i.e. physical machines are secure. Of course, if an attacker gets access and is able to ssh into one machine, he can then get the key from that machine, and use the key for other computers as well. See this for more information.
_codereview.59447
Inspired by the recent surge of questions dealing with changing numbers to their English equivalents (see here, here, and here), I decided to write my own version in Python.

zero_to_nineteen = (
    "zero", "one", "two", "three", "four",
    "five", "six", "seven", "eight", "nine",
    "ten", "eleven", "twelve", "thirteen", "fourteen",
    "fifteen", "sixteen", "seventeen", "eighteen", "nineteen")
tens = (
    "zero", "ten", "twenty", "thirty", "forty",
    "fifty", "sixty", "seventy", "eighty", "ninety")
suffixes = {
    3: "thousand", 6: "million", 9: "billion", 12: "trillion",
    15: "quadrillion", 18: "quintillion", 21: "sextillion",
    24: "septillion", 27: "octillion", 30: "nonillion",
    33: "decillion", 36: "undecillion", 39: "duodecillion",
    42: "tredicillion", 45: "quattuordecillion", 48: "quinquadecillion",
    51: "sedecillion", 54: "septendecillion", 57: "octodecillion",
    60: "novendecillion", 63: "vigintillion", 66: "unvigintillion",
    69: "duovigintillion", 72: "tresvigintillion", 75: "quattuorvigintillion",
    78: "quinquavigintillion", 81: "sesvigintillion", 84: "septemvigintillion",
    87: "octovigintillion", 90: "novemvigintillion", 93: "trigintillion",
    96: "untrigintillion", 99: "duotrigintillion", 102: "trestrigintilion",
    105: "quattuortrigintillion", 108: "quinquatrigintillion",
    111: "sestrigintillion", 114: "septentrigintillion",
    117: "octotrigintillion", 120: "noventrigintillion",
    123: "quadragintillion"}

def spell_out(number):
    """Returns a string representation of the number, in english"""
    if isinstance(number, float):
        raise ValueError("number must be an integer")
    if number < 0:
        return "negative " + spell_out(-1 * number)
    if number < 20:
        return zero_to_nineteen[number]
    if number < 100:
        tens_digit = number // 10
        ones_digit = number % 10
        if ones_digit == 0:
            return tens[tens_digit]
        else:
            return "{}-{}".format(tens[tens_digit], zero_to_nineteen[ones_digit])
    if number < 1000:
        hundreds_digit = zero_to_nineteen[number // 100]
        rest = number % 100
        if rest == 0:
            return "{} hundred".format(hundreds_digit)
        else:
            return "{} hundred {}".format(hundreds_digit, spell_out(rest))
    suffix_index = log(number)
    suffix_index -= suffix_index % 3
    prefix = spell_out(number // 10 ** suffix_index)
    suffix = suffixes[suffix_index]
    rest_of_number = number % (10 ** suffix_index)
    if suffix_index in suffixes:
        if number % (10 ** suffix_index) == 0:
            return "{} {}".format(prefix, suffix)
        else:
            return "{} {} {}".format(prefix, suffix, spell_out(rest_of_number))
    return "infinity"

def log(number):
    """Returns integer which is log base 10 of number"""
    answer = 0
    while number > 9:
        answer += 1
        number //= 10
    return answer

I would appreciate any comments on my code.
Numbers to English Strings in Python3
python;python 3.x;converting
Following up on Davidmh's answer, here are some other improvements:

Consistency

Instead of using tuples for only tens and zero_to_nineteen, stay consistent and use one for the suffixes as well, by making it 1000's based. That is, the indices would correspond to the power of a thousand that the number relates to. For example, 1000 could be the 1st element, while 10 ** 30 would be the 10th.

Code style

Your code repeatedly utilizes blocks of the format:

a = number // x
b = number % x

This can be simplified using Python's built-in divmod function like so:

a, b = divmod(number, x)

Also, your code repeatedly uses code segments like this:

if second_part == 0:
    return first_part
else:
    return first_part + separator + second_part

You are violating the DRY principle by doing this and should instead abstract this logic out into a function of its own, like so:

def _create_number(prefix, suffix, seperator=" "):
    # Returns a number with given prefix and suffix (if needed)
    if not isinstance(prefix, str):
        prefix = spell_out(prefix)
    return prefix if suffix == 0 else prefix + seperator + spell_out(suffix)

Other suggestions

Normally, numbers are expected to be separated by commas. So "one million three hundred thousand forty-five" would be "one million, three hundred thousand, forty-five". This is easy to do with the suggested function, as you simply make the separator ", " instead of " ".

Instead of fumbling around with floats and such, let Python handle it via int(), and then the only thing you would want to catch is when that throws an OverflowError for large floating-point values (like 1e584). This is done like so:

try:
    number = int(number)
except OverflowError:
    # This will be triggered if trying to use this with numbers like 1e584
    return "infinity"

Python will raise a ValueError automatically anyway if it's not something that can be interpreted by int(). This code also has the benefit of being a bit more future-proof: if int() starts to work with tuples or other data types someday (aka (1, 2, 3) could become 123), then this code will still work.

Lastly, you could add a special case for a thousand, and then remove the -illion from each of the suffixes for a more compact form.
_webapps.55046
A | B
-----
1 | a
2 | b
3 | c

and I'd like to have a third column that merges the data like this:

A | B
---------
1 | 1, a
2 | 2, b
3 | 3, c

Currently I can kind of do this one row at a time in a column C with this formula:

= A:A1 & ", " & B:B1

How can it be done for the entire column B?
How do I merge cell data into a new column?
google spreadsheets
null
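No accepted answer is recorded. For illustration, the usual Google Sheets approach to filling a whole column at once is ARRAYFORMULA (a sketch assuming the data starts in row 1):

=ARRAYFORMULA(A1:A & ", " & B1:B)

Blank rows will still produce ", ", so wrapping it in a condition, e.g. =ARRAYFORMULA(IF(A1:A="", "", A1:A & ", " & B1:B)), may be needed.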
_unix.255350
After upgrading to OpenRC 0.20 the system fails to boot properly: mounted into runlevel unknown (kernel 3.17.1). The / partition is mounted read-only:

/dev/sda3 on / type ext4 (ro, relatime, data=ordered)

so I did the following:

# mount / -o remount,rw

...which worked. After that I did:

# mount -a

which mounted my /dev/sda4 (/home). But any service I try to start gets me a segfault, e.g.

# service root start
Segmentation fault

I am running OpenRC 0.20, which seems to have been installed yesterday in my latest emerge world.
Gentoo system mounts read-only, won't boot, services segfault
boot;gentoo;openrc
null
_unix.215849
I removed a faulty disk from my RAID5 with mdadm --manage /dev/md0 -r /dev/sdd1 and replaced it with a new one. I tried to add it with mdadm --manage /dev/md0 -a /dev/sdd1, but it's only added as a spare disk.

mdadm --detail /dev/md0

/dev/md0:
        Version : 1.0
  Creation Time : Mon Jul 13 20:08:27 2015
     Raid Level : raid5
     Array Size : 14651324160 (13972.59 GiB 15002.96 GB)
  Used Dev Size : 2930264832 (2794.52 GiB 3000.59 GB)
   Raid Devices : 6
  Total Devices : 7
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Jul 14 09:11:21 2015
          State : active
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 128K

           Name : creadata:0  (local to host creadata)
           UUID : c41c15fd:6e7deae3:5cace8e0:4bf7e244
         Events : 7459

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       4       8       97        4      active sync   /dev/sdg1
       6       8      113        5      active sync   /dev/sdh1

       7       8       49        -      spare         /dev/sdd1

How can I tell mdadm to resync the array with the new disk?
Replacing a disk in RAID5 failed
linux;raid;mdadm
null
_softwareengineering.140826
After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to only interact with the DB through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transactional script.

Some of the issues I've found are:

Cannot navigate the object graph, as the stored procedures just load the minimum amount of data, which means that sometimes we have similar objects with different fields. One example is: we have a stored procedure to retrieve all the data from a customer, and another to retrieve account information plus a few fields from the customer.

Lots of the logic ends up in helper classes, so the code becomes more structured (with entities used as old C structs).

More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them in an entity.

My questions are:

Has anyone been in a similar situation and didn't agree with the stored procedure approach? What did you do?

Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"?

Is there a way to create a rich domain using stored procedures? I know that there's the possibility of using AOP to inject DAOs/repositories into entities to be able to navigate the object graph. I don't like this option, as it's very close to voodoo.

Conclusion

First, thank you all for your answers. The conclusion that I've arrived at is that ORMs don't enable the creation of rich domain models (as some people mentioned), but they do reduce the amount of (often repetitive) work. The following is a more detailed explanation of the conclusion, but it is not based on any hard data.

Most applications request and send information to other systems. To do this, we create an abstraction in the model terms (e.g. a business event) and the domain model sends or receives the event. The event usually needs a small subset of information from the model, but not the whole model. For example, in an online shop, a payment gateway requests some user information and the total to charge a user, but doesn't require the purchase history, available products, and the whole customer base. So the event has a small and specific set of data.

If we take the database of an application as an external system, then we need to create an abstraction that allows us to map the domain model entities to the database (as NimChimpsky mentioned, using a data mapper). The obvious difference is that now we need to handcraft a mapping for each model entity to the database (either a legacy schema or stored procedures), with the extra pain that, since the two are not in sync, one domain entity might map partially to a database entity (e.g. a UserCredentials class that only contains username and password is mapped to a Users table that has other columns), or one domain model entity might map to more than one database entity (for example if there's a one-to-one mapping on the table, but we want all the data in just one class).

In an application with a few entities, the amount of extra work might be small if there's no need to traverse the entities, but it increases when there's a conditional need to traverse the entities (and thus we might want to implement some kind of 'lazy loading'). As an application grows to have more entities, this work just increases (and I have the feeling that it increases non-linearly). My assumption here is that we don't try to reinvent an ORM.

One benefit of treating the DB as an external system is that we can code around situations in which we want 2 different versions of an application running, in which each application has a different mapping. This becomes more interesting in the scenario of continuous deliveries to production... but I think this is also possible with ORMs, to a lesser extent.

I'm going to dismiss the security aspect, on the basis that a developer, even if he doesn't have access to the database, can obtain most if not all the information stored in a system just by injecting malicious code (e.g. I can't believe I forgot to remove the line that logs the credit card details of the customers, dear lord!).

Small update (6/6/2012)

Stored procedures (at least in Oracle) prevent doing anything like continuous delivery with zero downtime, as any change to the structure of the tables will invalidate the procedures and triggers. So while the DB is being updated, the application will be down too. Oracle provides a solution for this called Edition-Based Redefinition, but the few DBAs I've asked about this feature mentioned that it was poorly implemented and they wouldn't put it in a production DB.
Do ORMs enable the creation of rich domain models?
java;design;orm;stored procedures;domain driven design
Your application should still be modelled from domain-driven design principles. Whether you use an ORM, straight JDBC, or calls to SPs (or whatever) should not matter. Hopefully a thin layer abstracting your model from the SPs should do the trick in this case. As another poster stated, you should view the SPs and their results as a service and map the results to your domain model.
_softwareengineering.201152
This last semester I've had lectures about OOP design. I understood most of what I was supposed to, but there is something that I can't get right: I'm pretty sure that the models I create are wrong because they cannot be implemented. I wrote quite a lot because I can't tell where the problem is. This isn't homework; it is a sample of a problem from the past year.

An example: design a Stack Overflow-like thing for a class.

Use case: posting a question
Pre-conditions: there is an authenticated student
Post-conditions: the question was recorded and everyone that can read the question has been notified.

Scenario:
1. The student tells the system he wants to submit a question and supplies the title
2. The system asks for the question body.
3. The student submits the question body.
4. The system records this.
5. The system reports all used question tags on that student's class.
6. The student submits the chosen question tag
7. The system records everything and reports the used tags
8. The student tells if he wants the question to be public
9. The system records everything and notifies the people it should

Steps 6, 7 can be repeated until the user tells us he's done. There are extensions, but they are not necessary to demonstrate my problem.

How I would do it - System Sequence Diagram:

newQuestion(title)
------------------>
                OK
<------------------
submitBody(bodytext)
------------------->
                         _
Existing tags            |  LOOP
<-------------------     |
choosetag(tag)           |
------------------->     |
                         _
chooseVisibility(b)
------------------->
                OK
<-------------------

The domain model would have: Student, Class, Instructor, Question, Tag, StudentCatalog, InstructorCatalog. The relations between each other are simple (I believe); that's why I'm not sketching them up.

I noticed my issue when making the interaction diagram for this use case. I decided that the use-case controller would be a made-up handler class QuestionHandler, so the first two interactions would be something like:

newQuestion(title) - it has to create a question with the proper title
submitBody(bodytext) - it has to set bodytext as the text for the question we are creating (and I don't know where it is!)

The actual problem: from all this I imagine the code to be:

class QuestionHandler
    method postQuestion(title) {
        newQuestion(title);
        submitBody(bodytext);
        ... etc
    }

And I can't see this working like this. My problem: where does submitBody(bodytext) get the current question from? How do I handle the context of each use case, which in this case I would make messy (that's how it feels to me) by using the return values to make it work? But what if I have a use-case context that requires lots of things to be changed and moved around? I'm totally lost. I thought it would solve itself, but it turns out I can't see how things would be implemented with this issue.
OOP Design - Possible wrong approach makes it impossible to implement it in code
object oriented;design patterns;design
Objects carry state. A sequence diagram doesn't convey this state, but your objects will keep state in between requests. You could go further and transmit the state as a method argument, as in submitBody(bodyText, session), but this is a bit dangerous in that it invites procedural-style programming if you intend to simply translate this into code.

Instead, I would accept that your use-case and sequence diagrams have some level of abstraction in them and that they don't need to convey all the details. Your objects' details don't come entirely from diagrams.

Looking at the vocabulary of your domain, you should have a type along the lines of Question that constructs its state iteratively as more information is given to it by way of method calls into it. As stated already, this object would be maintained as state across requests, a typical scenario in workflow-style interactions such as the one you describe here.

Fair enough?
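A minimal sketch of that idea (all names are hypothetical, chosen to match the question): the handler keeps the partially built Question as its own state between requests, which answers "where does submitBody get the current question from?":

    // Builds its state iteratively as the use case progresses.
    class Question {
        private final String title;
        private String body;
        private boolean visible;

        Question(String title) { this.title = title; }
        void setBody(String body) { this.body = body; }
        void setVisible(boolean visible) { this.visible = visible; }
    }

    // Use-case controller: one instance per running use case.
    class QuestionHandler {
        private Question currentQuestion;   // the state kept between requests

        void newQuestion(String title) {
            currentQuestion = new Question(title);
        }

        void submitBody(String bodyText) {
            currentQuestion.setBody(bodyText);   // the handler knows where the question is
        }

        void chooseVisibility(boolean visible) {
            currentQuestion.setVisible(visible);
        }
    }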
_codereview.10040
Is it possible to make this code more efficient & elegant?

    int HexToDecimal(char ch)
    {
        ch = tolower(ch);
        if ('a' <= ch && ch <= 'f')
        {
            return ch - 'a' + 10;
        }
        else if ('0' <= ch && ch <= '9')
        {
            return ch - '0';
        }
        printf("Error\n");
        return 0;
    }

Update: printing the error could be ignored. By returning 0 I only mean to show that the input char is not a hex digit.
Converting hex character to decimal
c++;converting
From memory (I haven't checked it) this should be the most efficient in terms of speed, at the cost of a small increase in memory footprint. Note that it does not repeat your printf("Error\n") smell.

    return
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // <nul> ... <si>
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // <dle> ... <us>
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // ' ' ... '/'
        "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x00\x00\x00\x00\x00\x00" // '0' ... '?'
        "\x00\x0a\x0b\x0c\x0d\x0e\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00" // '@' 'A' ... 'O'
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // 'P' ... 'Z' ... '_'
        "\x00\x0a\x0b\x0c\x0d\x0e\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00" // '`' 'a' ... 'o'
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // 'p' ... 'z' ... '~' <del>
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" // Accented/Extended
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
        [c];

Added

Apologies for not being clear. This solution is not better than the original, just different and provably faster. I would use the originally posted code in all cases except when maximum speed is critical, and only then after careful deliberation. I posted this for completeness; it is not elegant in any way, it is only more efficient ... almost brutally so.
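For comparison, here is my own Java rendering of the same lookup-table idea (not part of the original answer); it builds the table once at class load and mirrors the C version's behaviour of returning 0 for non-hex input:

    final class HexLookup {
        private static final int[] TABLE = new int[128];
        static {
            for (int c = '0'; c <= '9'; c++) TABLE[c] = c - '0';
            for (int c = 'a'; c <= 'f'; c++) TABLE[c] = c - 'a' + 10;
            for (int c = 'A'; c <= 'F'; c++) TABLE[c] = c - 'A' + 10;
        }

        // Returns 0 for non-hex input, mirroring the original's behaviour.
        static int hexToDecimal(char ch) {
            return ch < 128 ? TABLE[ch] : 0;
        }
    }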
_unix.86387
I'm not sure if this is more of a SuperUser or Unix/Linux question, but I'll try here...

Recently, I found this: "#710689 - aptitude: use unicode character in the trees - Debian Bug report logs":

    It would be nice when aptitude would use unicode characters for the trees in the dependency lists, e.g. instead of:

    --\ Depends (3)
      --- libc-dev-bin (= 2.17-3)
      --- libc6 (= 2.17-3)
      --- linux-libc-dev
    --\ Suggests (2)
      --- glibc-doc (UNSATISFIED)
      --\ manpages-dev
    ...

... and I thought - wow, I really like that ASCII-art tree output, wasn't aware that aptitude could do that! So, I start messing for an hour with aptitude command line switches - and I simply cannot get that output? So my initial question was - where does that output come from in the first place?!

After a while, I realized that on my system, aptitude ultimately symlinks to /usr/bin/aptitude-curses; and I finally realized that aptitude has a curses interface! :/ So, I finally run aptitude without any arguments - and so the curses interface starts, and I can see something like this:

[screenshot of the aptitude curses interface showing the dependency tree]

... so quite obviously, those ASCII tree characters come from the curses interface.

So I was wondering - is there a Debian/apt tool which will output such a visual ASCII tree - but with actual dependencies of packages? I know about "debtree - Package dependency graphs" (also "software recommendation - How to visually display dependencies of a package? - Ask Ubuntu"); but I'd rather have something in the terminal, resembling a directory tree (rather than the unordered [in terms of node position] graphs from debtree, generated by graphviz's dot). I've also seen "Is there anything that will show dependencies visually, like a tree?", which recommends:

    $ apt-rdepends aptitude
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    aptitude
      Depends: libapt-pkg4.10
      Depends: libboost-iostreams1.42.0 (>= 1.42.0-1)
      Depends: libc6 (>= 2.4)
      Depends: libcwidget3
      Depends: libept1
      Depends: libgcc1 (>= 1:4.1.1)
      Depends: libncursesw5 (>= 5.7+20100313)
      Depends: libsigc++-2.0-0c2a (>= 2.0.2)
      Depends: libsqlite3-0 (>= 3.7.3)
      Depends: libstdc++6 (>= 4.5)
      Depends: libxapian22
    libapt-pkg4.10
    libboost-iostreams1.42.0
      Depends: libbz2-1.0
      Depends: libc6 (>= 2.3.6-6~)
      Depends: libgcc1 (>= 1:4.1.1)
      Depends: libstdc++6 (>= 4.2.1)
      Depends: zlib1g (>= 1:1.1.4)
    ...

... which is good, because it lists first the immediate dependencies of the required package, and then the dependencies of the first-level dependency packages, and so on - but it's not visualized as a tree (and actually, aptitude's curses interface simply shows installed info when you expand a dependency node; it does not expand to further dependencies). So, the question is - is there a tool that would produce a dependency tree graph with terminal characters - like, say, in the following pseudocode:

    $ pseudo-deb-graph --show-package=aptitude
    aptitude
      --- Depends: libapt-pkg4.10
      --\ Depends: libboost-iostreams1.42.0 (>= 1.42.0-1)
        --- Depends: libbz2-1.0
        --- Depends: libc6 (>= 2.4)
        --\ Depends: libc6 (>= 2.3.6-6~)
          --\ Depends: libc-bin (= 2.13-0ubuntu13)
            --- ...
        --\ Depends: libgcc1
          --- ...
        --\ Depends: tzdata
          --- ...
    ...
Output visual (ASCII) Debian dependency tree to terminal?
linux;debian;apt;aptitude
You can do it with a bash script.

Source code: apt-rdepends-tree
https://gist.github.com/damphat/6214499

Run:

    # save code as apt-rdepends-tree
    # chmod +x apt-rdepends-tree
    # ./apt-rdepends-tree gcc

Output looks like this:

    # ./apt-rdepends-tree gcc
    gcc
      cpp (>= 4:4.7.2-1)
      gcc-4.7 (>= 4.7.2-1)
        package-a
        package-b
        package-c
_softwareengineering.153468
I'm developing an application that has methods of this kind:

    attackIfIsFar();
    protectIfIsNear();
    helpAfterDeadOf();
    helpBeforeAttackOf();
    etc.

The initialization of my application for n players is something like:

    player1.attackIfIsFar(player2);
    player2.protectIfIsNear(player4);
    player3.helpAfterDeadOf(player1);
    player4.helpBeforeAttackOf(player3);
    etc.

I don't know how to configure a JTable that can allow me to set the equivalent of this code block. In other words, I simply need a way to create a JTable with 3 columns and n rows, where I can set the players in columns 1 and 3, and in the central column one of the available methods that each player in column 1 must invoke on each player in column 3.
How to configure simple game AI setting with JTable
java
First you need a class to represent the expression:

    public class Expression {
        private Person lhs;
        private Person rhs;
        private Action action;
    }

Then you need to make a TableModel that is backed by a list of Expressions. The column count will be 3 and the row count will be the size of the list. The rest of the methods are fairly straightforward to implement by mapping the row and/or column to the respective Expression and/or field.

Then you need a JTable that sets the TableCellEditor for each column, each of which would be a combo box with the available options for the respective field. If you need more help with tables, read this tutorial.

Once all the values are configured, you will need to create some logic that will evaluate the expressions and call the correct methods.

Edit

Here is a very basic cell editor example. You could take steps to not populate the combo boxes with a value that will allow the user to select an invalid value.

    @Override
    public TableCellEditor getCellEditor(int row, int col)
    {
        switch (col)
        {
            case 0:
            case 2:
                return new DefaultCellEditor(new JComboBox(LIST_OF_PEOPLE));
            case 1:
                return new DefaultCellEditor(new JComboBox(LIST_OF_ACTIONS));
            default:
                return super.getCellEditor(row, col);
        }
    }
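To flesh out the TableModel described above, here is a minimal sketch; it assumes Expression gains ordinary getters and setters, and that Person and Action are the types hinted at in the answer (all of that is my assumption, not code from the answer):

    import java.util.List;
    import javax.swing.table.AbstractTableModel;

    class ExpressionTableModel extends AbstractTableModel {
        private final List<Expression> expressions;

        ExpressionTableModel(List<Expression> expressions) {
            this.expressions = expressions;
        }

        @Override public int getRowCount()    { return expressions.size(); }
        @Override public int getColumnCount() { return 3; }
        @Override public boolean isCellEditable(int row, int col) { return true; }

        @Override
        public Object getValueAt(int row, int col) {
            Expression e = expressions.get(row);
            switch (col) {
                case 0:  return e.getLhs();
                case 1:  return e.getAction();
                default: return e.getRhs();
            }
        }

        @Override
        public void setValueAt(Object value, int row, int col) {
            Expression e = expressions.get(row);
            switch (col) {
                case 0:  e.setLhs((Person) value);    break;
                case 1:  e.setAction((Action) value); break;
                default: e.setRhs((Person) value);    break;
            }
            fireTableCellUpdated(row, col);  // notify the JTable of the edit
        }
    }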
_unix.193263
I have Oracle jdk7 installed on my CentOS 6. I noticed that /etc/profile has the lines below:

    #below lines are added for Java
    export JAVA_HOME=/usr/java/latest

    ## export JAVA_HOME JDK ##
    export JAVA_HOME=/usr/java/jdk1.7.0_75

In order to test my assumptions on env variables and PATHs, I commented out all the lines above (the export lines). I tried to load the new /etc/profile by sourcing it (. /etc/profile) and issued echo $JAVA_HOME; it still returned the above path. So, I rebooted the machine, as the source didn't work. After the reboot, echo $JAVA_HOME returns nothing, which is expected. There is nothing in ~/.bash_profile for Java. But if I issue the command java -version in the shell, it still returns:

    [root@localhost ~]# java -version
    java version "1.7.0_75"
    Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

Where is the Java defined in the PATH? The PATH in ~/.bash_profile is as below; it doesn't have anything for Java:

    # User specific environment and startup programs
    PATH=$PATH:$HOME/bin
    export PATH

set on the shell returns the line below for the PATH variable:

    PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
Question on environment variables
environment variables;java;etc
The default PATH is set in /etc/profile. Users can modify their PATH by editing ~/.profile, ~/.bash_profile or ~/.bashrc (if they're running bash), but if they don't, they will still have a PATH as defined in /etc/profile. That's why the line was

    PATH=$PATH:$HOME/bin

and not just

    PATH=$HOME/bin

That way, the original value of PATH is kept and the new directory is simply appended. On my system, the PATH set in /etc/profile is

    PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

You probably have a very similar line in yours, and that's where /usr/bin comes from.
_unix.3854
Asked this on Super User, got no response:

Can anyone tell me the status/state of WLM (Workload Management) kernel scheduler systems in Linux? Alternatively, are there any user-space, goal-based process load management programs?

This is a good start, but I don't know whether these proposals were ever implemented:

http://www.computer.org/plugins/dl/pdf/proceedings/icac/2004/2114/00/21140314.pdf
http://ckrm.sourceforge.net/downloads/ckrm-linuxtag04-paper.pdf

AIX includes WLM; is there anything comparable for Linux?
Are there any Workload Management subsystems for Linux?
linux;kernel;scheduling
Not very sure, but the closest I can think of is cgroups:

    Control Groups provide a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour.

For more information, see one of:

- Arch Wiki page for cgroups
- Wikipedia cgroups page
- Red Hat cgroups page
_cs.54241
How would you add numbers recursively? Just in terms of pseudocode, how would one approach such a problem with the most efficient running time?
Addition recursively
algorithms;arithmetic
null
_webapps.19631
I would like to print an entire list in Trello for offline review. How can I do this?
How do I print a list in Trello for offline use?
trello;printing
null
_webmaster.26807
Possible Duplicate: My site disappeared from Google search, how long does it take to get back?

I got this message:

    Google Analytics web property: link has been removed from http://aglassmenagerie.net

etc. etc. I don't know why, or what I should do to fix this. I have been trying to add stuff to my site on a regular basis. How do I find out what the problem is and how do I fix it? I am a stained glass artist and have an online store. I am not that knowledgeable about this stuff. Someone please look at my site or code and tell me what I did wrong.

Pam
Help! Google analytics removed and don't know why
google;analytics
null
_codereview.154223
Here is a simple app that I am trying to implement in Java. My goal is to write a program that can convert ASCII to binary and back.

Questions:

1. Am I violating any OOP rules?
2. Is my code readable?
3. How can I make it more efficient?
4. Anything else that you want to suggest.

Code:

    public class FXMLDocumentController implements Initializable {

        @FXML
        TextArea asciiTextArea;
        @FXML
        TextArea binaryTextArea;
        @FXML
        StackPane animationPane;

        private static final int AMOUNT_OF_BITS = 8;

        @FXML
        private void handleBinaryToText(ActionEvent event) {
            Callable<String> task = () -> binaryToAscii(binaryTextArea.getText());
            try {
                runTask(task, asciiTextArea);
            } catch (InterruptedException | ExecutionException ex) {
                asciiTextArea.setText(ex.toString());
            }
        }

        @FXML
        private void handleTextToBinary(ActionEvent event) {
            Callable<String> task = () -> asciiToBinary(asciiTextArea.getText());
            try {
                runTask(task, binaryTextArea);
            } catch (InterruptedException | ExecutionException ex) {
                binaryTextArea.setText(ex.toString());
            }
        }

        @Override
        public void initialize(URL location, ResourceBundle resources) {
            // Validate the binary TextArea
            final Pattern binaryRegex = Pattern.compile("\\A[01\\n]*\\Z");
            Predicate<String> tester = binaryRegex.asPredicate();
            binaryTextArea.setTextFormatter(new TextFormatter<>(change -> {
                if (!tester.test(change.getControlNewText())) {
                    return null;
                }
                return change;
            }));
            //binaryTextArea.setTextFormatter();
        }

        // The binary-to-text conversion
        private String binaryToAscii(final String input) {
            if (input.length() % AMOUNT_OF_BITS != 0) {
                String msg = "Input must be a multiple of " + AMOUNT_OF_BITS;
                binaryTextArea.setText(msg);
                throw new IllegalArgumentException(msg);
            }
            final int INPUT_LEN = input.length();
            final int BUILDER_SIZE = INPUT_LEN / 8;
            StringBuilder result = new StringBuilder(BUILDER_SIZE);
            for (int i = 0; i < INPUT_LEN; i += AMOUNT_OF_BITS) {
                char charCode = (char) Integer.parseInt(input.substring(i, i + AMOUNT_OF_BITS), 2);
                result.append(charCode);
            }
            return result.toString();
        }

        // The text-to-binary conversion
        private String asciiToBinary(String text) {
            final byte[] bytes = text.getBytes();
            StringBuilder result = new StringBuilder(bytes.length * 8);
            for (byte b : bytes) {
                int val = b;
                for (int i = 0; i < AMOUNT_OF_BITS; i++) {
                    result.append((val & 128) == 0 ? 0 : 1);
                    val <<= 1;
                }
            }
            return result.toString();
        }

        private void runTask(Callable<String> task, TextArea textArea) throws InterruptedException, ExecutionException {
            ExecutorService thread = Executors.newFixedThreadPool(1);
            final String result = thread.submit(task).get();
            Platform.runLater(() -> {
                textArea.setText(result);
            });
            thread.shutdown();
        }
    }

GitHub link
ASCII to binary and reverse in Java
java;object oriented;multithreading
null
_unix.40099
Using Emacs on a Linux system I click:

    Scala > Browser Scala API

which produces a terminal window with this message:

    /usr/bin/xterm: Can't execvp lynx: No such file or directory

How do I inform Emacs that the correct web browser to use is Chrome?
Configure Emacs to use Chrome as the web browser of choice?
configuration;emacs
null
_webapps.49842
What is the reason for the "(1)" in "My Contacts (1)" on the left? I have clicked the link, but it has no effect. I've refreshed the page a dozen times and still get the same count. Is there a default contact in my list? Might there be some sort of pending process in the background?
Wrong number of contacts showing in Google account
google contacts
You do appear to have one (blank) contact line/entry with a checkbox to the left.

You say this record is entirely empty; however, it is possible to create an almost empty contact record with a single space in one of the fields. Whilst it doesn't seem possible to easily create an entirely empty contact record, you can edit that contact record, delete the space (or any data it might contain), save the record, and end up with an entirely blank contact record. Note that Gmail auto-saves contact records, so this could happen without you explicitly doing so.

Presumably you can delete this blank entry to have an entirely empty contacts list?
_unix.241224
I cannot FTP through port 21. I get the error below when trying to FTP:

    ftp: connect: The file access permissions do not allow the specified action.

Has anyone come across the above error, and is there any solution, please?

OS: AIX 6
ftp: connect: The file access permissions do not allow the specified action
aix;ftp;telnet
null
_unix.218607
I have a laptop (with Ubuntu) and a router (with OpenWrt). When I check the iperf version on both, the laptop says I have version 2.05 for multithreads and the router says I have 2.05 for pthreads. Both have an IP address in the same subnet.

When I run client commands on the laptop or the router (i.e. iperf -c <router IP address>), I get a "connection refused" error. If I run server commands (i.e. iperf -s) on either, the iperf header pops up like it's checking the network, but then nothing else ever happens and I have to hit Ctrl+C to kill the process. Then I tried putting iperf on a second router with OpenWrt and trying the commands between the two routers, and the same thing happened.

This is my first time using iperf, so I am not sure if it just isn't working or if there is a configuration step I am missing. I've tried a couple of different tutorials, but the results are the same as above for the different commands.

Is there a configuration step that I missed? Is there a specific line I am supposed to write to start iperf on the server and client? If I did everything correctly, could it not be working because one has multithread installed and the other has pthread installed? Does anyone have a good iperf tutorial for first-time users?

Thanks for any help in advance!
Incorrect iperf configuration?
ubuntu;networking;openwrt
You have to run iperf in server mode on one device:

    iperf -s

and iperf in client mode on the other:

    iperf -c IPADDRESSOFSERVER

at the same time.
_unix.369543
I am building a Linux distro (via Yocto) on an i.MX6 (Congatec Q7) ARM board (4.1 kernel, Xorg 1.18, module version 2.10.1). I have two 800x480 displays connected on the LVDS. I have successfully configured the hardware to run mirrored with X. It comes up and works like a champ.

I need to run the displays independently. I can bring up either one individually with X. The hardware only has 1 graphics accelerator, so I need to use the old-fashioned fb device. If I do not bring up X, I can copy images back and forth from the fb devices (fb0 and fb2) successfully, so I am pretty sure the devices are okay.

When I configure the fbdev (using the same options for size as I had with the acceleration), it reports a size of 211x127. The size is 800x480. I have run into a dead end on what to look at next; I am hoping someone in the community has run into this before.

    [3254904.461] (II) VIVANTE(0): Setting screen physical size to 211 x 127

Snippet of the Xorg.0.log:

    [3254904.080] (II) VIVANTE(0): [drm] Using the DRM lock SAREA also for drawables
    [3254904.080] (II) VIVANTE(0): [drm] framebuffer handle = 0x44800000
    [3254904.080] (II) VIVANTE(0): [drm] added 1 reserved context for kernel
    [3254904.080] (II) VIVANTE(0): X context handle = 0x1
    [3254904.080] (II) VIVANTE(0): [drm] installed DRM signal handler
    [3254904.081] (II) VIVANTE(0): [DRI] installation complete
    [3254904.081] (--) RandR disabled
    [3254904.118] (II) AIGLX: Screen 0 is not DRI2 capable
    [3254904.118] (EE) AIGLX: reverting to software rendering
    [3254904.460] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer
    [3254904.461] (II) AIGLX: Loaded and initialized swrast
    [3254904.461] (II) GLX: Initialized DRISWRAST GL provider for screen 0
    [3254904.461] (II) VIVANTE(0): Setting screen physical size to 211 x 127

My xorg.conf file:

    Section "Device"
        Identifier  "fbB"
        Driver      "fb"
        Option      "fbdev" "/dev/fb0"
        Option      "vivante_fbdev" "/dev/fb0"
        Option      "HWcursor" "false"
        Screen      1
    EndSection

    Section "Device"
        Identifier  "fbA"
        Driver      "vivante"
        Option      "fbdev" "/dev/fb2"
        Option      "vivante_fbdev" "/dev/fb2"
        Option      "HWcursor" "false"
        Screen      0
    EndSection

    Section "Monitor"
        Identifier  "MonAlpha"
        Modeline    "U:800x480p-59" 33.26 800 840 968 1056 480 490 492 525 -hsync -vsync -csync
    EndSection

    Section "Monitor"
        Identifier  "MonBeta"
        Modeline    "U:800x480p-59" 33.26 800 840 968 1056 480 490 492 525 -hsync -vsync -csync
    EndSection

    Section "Screen"
        Identifier  "ScreenAlpha"
        Monitor     "MonAlpha"
        Device      "fbA"
        Subsection "Display"
            Modes "U:800x480p-59"
        EndSubSection
    EndSection

    Section "Screen"
        Identifier  "ScreenBeta"
        Monitor     "MonBeta"
        Device      "fbB"
        Subsection "Display"
            Modes "U:800x480p-59"
        EndSubSection
    EndSection

    Section "ServerLayout"
        Identifier  "Main Layout"
        Screen 0    "ScreenAlpha"
        Screen 1    "ScreenBeta" Absolute 0 480
    EndSection

    Section "ServerFlags"
        Option "BlankTime" "0"
        Option "StandbyTime" "0"
        Option "SuspendTime" "0"
        Option "OffTime" "0"
    EndSection
dual display X Windows configuration arm
linux;x11;display settings;arch arm
null
_webmaster.108412
Our site is prikkabelled.nl. We deal in "prikkabel", which is Dutch for illumination cable. We used to have the number one spot for the keyword "prikkabel" for almost a year, but we dropped to the second spot on the 8th of July 2017. How do I determine the exact cause of this? We had some downtime of a few hours around that day, but according to Matt Cutts of Google this is not a contributing factor for search engine position. How do I more accurately determine the reason behind the shift from the number 1 to the number 2 spot?
Get back to number 1 position for keyword
seo;google;ranking;google ranking
null
_hardwarecs.7485
Which is a good Mac laptop for development: the MacBook Pro MD101HN/A or the MacBook Air MMGF2HN/A? In 2017, is it worth buying a MacBook Pro (MD101HN/A)?

The MacBook Pro MD101HN/A (late 2012 model) is upgradeable. The MacBook Air MMGF2HN/A, released in 2016, has 8 GB of RAM, but its processor is 1.6 GHz; does that make the machine slow?

Please suggest if any other Mac is good for development; the budget is the constraint (80K INR).

Edit: Here I found a processor comparison, which suggests the 1.6 GHz i5 Air is better when compared to the 2.5 GHz i5 Pro model. Please have a look.
MacBook for iOS development
laptop;development;osx
null
_unix.329279
I want to display a video when a wrong password is entered (say, only in the GUI login screen or within the display manager).

I have added a line to /etc/pam.d/common-auth to run my script /usr/local/bin/movie:

    # here are the per-package modules (the "Primary" block)
    auth    [success=2 default=ignore]    pam_unix.so nullok_secure
    auth    [default=ignore]              pam_exec.so seteuid /usr/local/bin/movie

The script /usr/local/bin/movie is simply:

    #!/bin/bash
    /usr/bin/mplayer /usr/local/movie.mp4
    exit 0

When entering a password, I only get 0.1 s of black screen instead of the film. How can I make my script work?
How to display a video when a wrong password is entered
gui;pam;mplayer;startx;display server
To display within a GNOME session, add DISPLAY=<display ID>. For instance:

    #!/bin/bash
    DISPLAY=:0 /usr/bin/mplayer -fs /usr/local/movie.mp4
    exit 0

with -fs for full screen.
_webapps.75085
I am the owner and moderator of a group on Yahoo! Groups. The group has tens of thousands of messages, and I need to delete a few individual, very old messages posted by someone whose Yahoo! account has been deleted. How can I do this?According to Yahoo! Help, posters can delete their own messages by opening them in the web interface and pressing the Delete button which appears at the bottom. Though I know the URLs of all the messages I need to delete, this technique doesn't seem to work for moderators; for me pressing the Delete button has no effect. And the original poster cannot delete his own messages because there is no way to reactivate an account which has been dormant for more than a year.The same Yahoo! Help article says that moderators can delete a message from the message list by ticking the checkbox next to its entry and then pressing the Delete button at the top of the page. However, in Yahoo! Groups's new interface, the message list is now one of those infinitely scrolling pages. In my case I would have to scroll and manually look through tens of thousands of archived messages in order to get to the ones I want to delete. Surely there must be a better way?
How can a moderator delete an old post on Yahoo! Groups?
yahoo;mailing list;forums;yahoo groups
null
_softwareengineering.318842
I am writing a parser for a fairly complicated language in C++. The Parser class is given a list of tokens and builds the AST. Though only part of the parser is completed, the Parser.cpp file is already more than 1.5k lines long and the class has around 25 functions.

So, I plan to break the large Parser class into smaller classes, such that I can have separate classes for parsing different language constructs. For example, I wish to have an ExprParser class that parses expressions and a TypeParser class that parses types. That seems much cleaner. The problem is that the parsing functions must have access to shared state that includes the position of the current token, and to several parsing helper functions.

In C#, it is possible to implement related functions in different classes using partial classes. Is there any specific design pattern or recommended way to do this?
Breaking large class into smaller classes when they need a common state?
design patterns;object oriented
Create a Scanner or Tokenizer class, which takes the input data (the text to be parsed) and holds the position of the current token or similar state. It can also provide some shared helper functions. Then provide a reference (or a shared pointer) to the Scanner object to all your individual xyzParser objects, so they can all access the same scanner. The scanner will be responsible only for accessing the data through basic tokenizing functions; the individual parsers will be responsible for the actual parsing logic.

This will work most easily as long as your scanner does not need to know which individual parsers exist. If the scanner actually needs to know this, you might consider resolving the cyclic dependency by introducing abstract interface base classes, or by implementing some kind of callback or event mechanism where the scanner can notify any kind of observers.
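A minimal sketch of that structure, written in Java for brevity (the same shape carries over directly to C++ with a reference or shared pointer); all names here are invented for illustration:

    import java.util.List;

    // Owns the token stream and the current position; shared by every sub-parser.
    class Scanner {
        private final List<String> tokens;  // a real parser would use a Token type
        private int position = 0;

        Scanner(List<String> tokens) { this.tokens = tokens; }

        String peek()     { return tokens.get(position); }
        String advance()  { return tokens.get(position++); }
        boolean atEnd()   { return position >= tokens.size(); }
    }

    // Each sub-parser holds a reference to the same Scanner instance.
    class ExprParser {
        private final Scanner scanner;
        ExprParser(Scanner scanner) { this.scanner = scanner; }
        // parseExpr() would consume tokens via scanner.advance() ...
    }

    class TypeParser {
        private final Scanner scanner;
        TypeParser(Scanner scanner) { this.scanner = scanner; }
        // parseType() would consume tokens the same way ...
    }

Because every sub-parser advances the one shared position, handing control from ExprParser to TypeParser and back needs no extra bookkeeping.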
_codereview.162380
Problem 3 - Largest prime factor

Exercise:

    The prime factors of 13195 are 5, 7, 13 and 29.
    What is the largest prime factor of the number 600851475143?

My solution:

    package pl.hubot.projecteuler.problem3;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class Main {
        public static void main(String[] args) {
            System.out.print(Collections.max(primeFactors()));
        }

        private static List<Long> primeFactors() {
            long n = 600851475143L;
            List<Long> factors = new ArrayList<>();
            for (long i = 2; i <= n; i++) {
                while (n % i == 0) {
                    factors.add(i);
                    n /= i;
                }
            }
            return factors;
        }
    }

I would like to ask for a review of my code and possible improvements. I am interested in how my code performs.
Project Euler 3 - Largest prime factor
java;programming challenge
Get rid of the list. No, really, it's as simple as that. The sequence of i's in factors will be non-decreasing, so essentially you could switch

    System.out.print(Collections.max(primeFactors()));

with

    List<Long> factors = primeFactors();
    System.out.print(factors.get(factors.size() - 1));

While the list of prime factors might come in handy for other challenges, you're just interested in the largest, so it's fine to return just a single value:

    private static long largestPrimeFactor(long n) {
        long largest = -1;
        for (long i = 2; i <= n; i++) {
            while (n % i == 0) {
                largest = i;
                n /= i;
            }
        }
        return largest;
    }

Since you're interested in performance, you probably want to split the for into two parts to skip the unneeded even integers:

    private static long largestPrimeFactor(long n) {
        long largest = -1;
        while (n % 2 == 0) {
            largest = 2;
            n /= 2;
        }
        for (long i = 3; i <= n; i = i + 2) {
            while (n % i == 0) {
                largest = i;
                n /= i;
            }
        }
        return largest;
    }

Other than that, well done, and the correct approach. But just as in your previous code, ask yourself whether you really need the whole collection.
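As a quick sanity check of the refactored method (my addition, not part of the review above): 600851475143 = 71 * 839 * 1471 * 6857, so the call below should print 6857:

    public static void main(String[] args) {
        // Expected output: 6857
        System.out.println(largestPrimeFactor(600851475143L));
    }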
_unix.222486
I have an application which is an object file (obtained from C source code). When I run this application from the terminal it works fine. I want to run this application on system start-up. Since I currently print all the log data to the terminal, I want to open a terminal and run this application in it (so that I can see the live log and also give input to my application from the terminal).

After searching some tutorials, I am able to create a service which runs a shell script on startup. I modified this script to open a terminal and run the application. If I run the shell script from the terminal it works well, but when I run the script from the service I get the following warning:

    (x-terminal-emulator:16048): Gtk-WARNING **: cannot open display:

Where am I making a mistake? Here I am using a BeagleBone Black running Debian.

This is my service code (application.service):

    [Unit]
    Description=application setup

    [Service]
    WorkingDirectory=/root/application/
    ExecStart=/root/application/start_application
    SyslogIdentifier=application
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target

Here is start_application.sh:

    #! /bin/sh
    #
    # start_app_server
    #
    echo "Starting application server"
    x-terminal-emulator -e app_server/a.out
    echo "Done"
opening a terminal with a command on startup in debian
debian;shell script;services
null
_unix.224296
I use the following tmux code in a script file tmux-dev.sh and run it from my .bashrc using bash /home/rohit/tmux-dev.sh:

    tmux new-session -d
    tmux split-window -h
    tmux split-window -v
    tmux -2 attach-session -d

The script causes a nesting of tmux panes, giving the error "pane too small". To my surprise, the same bash tmux-dev.sh, when put into the title and command box of gnome-terminal, works perfectly fine and gives this screen:

[screenshot of the intended three-pane tmux layout]

Please help me out with this. I am using Ubuntu 14.10.

P.S. -- Please stay away from suggesting any tools; I am here for an explanation of this behavior and a raw shell script solution.
Running script through bashrc causes nesting of tmux panes
bash;ubuntu;terminal;tmux;bashrc
As @fiximan suggested, I tried testing whether a tmux session already exists before executing the code, and finally, with a little tweak, I succeeded in getting the layout I wanted. Here is what I added to my .bashrc:

    test -z $TMUX && (tmux new-session -d && tmux split-window -h && tmux split-window -v && tmux -2 attach-session -d)

I will break down the above for an explanation:

    test -z $TMUX             -> tests whether there is already a tmux session running, thus preventing nesting of tmux sessions
    tmux new-session -d       -> creates a new session
    tmux split-window -h      -> splits the window vertically
    tmux split-window -v      -> splits the window horizontally
    tmux -2 attach-session -d -> attaches the session

NOTE -- I have used the && operator, not the || operator, because the latter would short-circuit.
_webapps.46202
Today I wanted to register an MSN email account (e.g. [email protected]), but the link that previously worked doesn't work anymore:

https://accountservices.passport.net/reg.srf?ns=msn.com&sl=1&lc=2057
https://accountservices.passport.net/reg.srf?ns=msn.com&sl=1&lc=1033

Do you have any idea how I can register a new MSN email account?
How to register an MSN email account (googling the link doesn't work anymore)
windows live messenger;account management
null
_softwareengineering.189191
I was reading this blog by Joel Spolsky about 12 steps to better code. The absence of Test Driven Development really surprised me. So I want to throw the question to the Gurus. Is TDD not really worth the effort?
Why is test driven development missing from Joel's Test?
programming practices;testing;code quality;tdd
Test driven development was virtually unknown before Kent Beck's book came out in 2002, two years after Joel wrote that post. The question then becomes why hasn't Joel updated his test, or if TDD had been better known in 2000 would he have included it among his criteria?I believe he wouldn't have, for the simple reason that the important thing is you have a well-defined process, not the specific details of that process. It's the same reason he recommends version control without specifying a specific version control system, or recommends having a bug database without recommending a specific brand. Good teams continually improve and adapt, and use tools and processes that are a good fit for their particular situation at that particular time. For some teams, that definitely means TDD. For other teams, not so much. If you do adopt TDD, make sure it's not out of a cargo cult mentality.
_softwareengineering.283232
Our team has an idea of implementing a simple declarative DSL that would let users query the enterprise's domain model via a single interface, without caring which specific microservices to call to get specific portions of data and how to then relate and combine them.

The suggested syntax is based on SQL, but:

- It is much more limited: no grouping or aggregation, no explicit subqueries, no functions, etc.
- Joins cannot be specified and are only implicit, based on the predefined schema (entities and relations).

Example:

    SELECT entityTypeOne.name, entityTypeTwo.value, entityTypeTwo.date
    WHERE entityTypeOne.name LIKE 'Sample%'
    AND entityTypeTwo.date BETWEEN (2015-05-01, 2015-05-31)

Expected result:

    name    value  date
    London  1000   01/05/2015
    London  2000   02/05/2015
    London  3000   03/05/2015
    Moscow  2000   02/05/2015
    Moscow  9000   05/05/2015
    Tokyo   1000   30/05/2015

The underlying entity-relation schema knows that the entities are related like this: entityTypeOne.id = entityTypeTwo.parentId, which creates an implicit join. The query engine should know that it will first query the entityTypeTwo microservice, applying the date-range filtering on the server, then the entityTypeOne microservice, applying the id filtering based on the previous query's result.

The problems we currently see:

1. Representing the object-relation schema.
2. Figuring out the optimal order of querying.
3. Denormalizing the resulting data.

I was wondering if this is a known problem and if there are any algorithms to check (maybe something from graph theory)? This is the closest thing I could find so far: "What is a heterogeneous query?" If it makes things simpler, we can assume that the microservices expose data via OData.
How to implement efficient heterogeneous microservice data queries?
graph;distributed computing;dsl;query;microservices
null
_webmaster.532
How do I evaluate my website from a usability standpoint?
How do I test the usability of my website?
usability
Loaded question! You will need to conduct user tests. Try reading "Don't Make Me Think" and "Rocket Surgery Made Easy" by Steve Krug. These books are a must-read for this topic and will show you how to do the testing.
_softwareengineering.334112
The title pretty much speaks for itself, but I'll provide the current decision I am facing. I am migrating Python code towards the use of generators. The current code looks like this:

    ...
    l = returns_a_list(args)
    log.debug('examining {} entries', len(l))
    for e in l:
        do_stuff(e)
    ...

Outside of the debug log, l fits the use case of a generator very well, and its length is not needed anywhere else. However, due to the debug log, using a generator would look something like this:

    ...
    log.debug('examining {} entries', sum(1 for _ in returns_a_generator(args)))
    for e in returns_a_generator(args):
        do_stuff(e)
    ...

This one is less readable and calls the generator twice. However, the production code is straightforward. Another option would be:

    ...
    count = 0
    for e in returns_a_generator(args):
        do_stuff(e)
        count += 1
    log.debug('examining {} entries', count)
    ...

This one does not call the generator twice, which is not a big deal since we do not really care about performance in debug mode. In my opinion it looks a bit more straightforward in terms of counting elements compared to sum(1 for _ in generator), which does not convey the intention as clearly (however, the preceding debug message should hint at what that snippet is doing). However, I am still not certain whether moving the log is acceptable (what if do_stuff fails? what if for some reason the generator yields way too many values? I'd like to have that debug line before the program crashes or starts computing until the universe's heat death). Moreover, the counting is still done even when there is no debug log.

So, what is your opinion on this general issue? What would you think when stumbling onto one of those three options? After having understood them, would you care about the debug log being convoluted and file an issue/patch, or would you think "Ah, got it, it's reasonable like that"?

EDIT: This is the Programmers community. I am not that interested in concrete solutions to this particular problem, but more in opinions on the first, larger question (unless of course there is a good argument that such a choice should never happen).
Is it reasonable to write worse debug code in order to improve production code?
python;readability;iterator
In the generic case: if the logging makes you perform non-production operations that can have a real impact on production performance, wrap your code with a guard like isDebugEnabled() (or a directive like #ifdef DEBUG, or whatever). For instance:

    l = returns_a_list(args)
    if (logger.isDebugEnabled()) {
        // your code
        log.debug('examining {} entries', len(l))
    }
    for e in l:
        do_stuff(e)

Use the wrapping only when there is extra calculation; a simple log doesn't need it, and the guard will make your code less readable.

Another way:

    if (logger.isDebugEnabled()) {
        l = returns_a_list(args);
        log.debug('examining {} entries', len(l))
    } else {
        l = returns_a_generator(args);
    }

You have to be sure, however, that switching between returns_a_generator and returns_a_list doesn't impact the behaviour of your code; for example, not having the same ordering could impact something.

However, don't use your last solution for debug purposes. Adding code like this only for debugging is not worth it, because people won't be sure it's only for debug purposes unless you remembered to put a comment.

Note: I don't know what the real difference between returns_a_generator and returns_a_list is, so I don't know whether the case presented doesn't belong to unnecessary pre-optimization (computing a length generally costs nothing). I'm just showing a standard usage, considering the sample given.

EDIT

To conclude: I value production readability/maintainability/logic over everything about debugging: I don't mix any additional debugging logic within my production logic. Mixing them makes it harder to isolate what is really required for production and what is used only for debugging. Your 3rd option is a breach of that rule. All the code that happens inside the if wrapper mustn't modify variables used by production code. So when I see an if (logger.isDebugEnabled()) while checking some problem with the production code, I don't even read it; I read as if the debug code wasn't there.
_webapps.12932
Facebook recently moved me to the new image viewer, which shows a download link below every picture. With this download link it is possible to download full-resolution original images. I suppose the default is that only the uploader of an image can redownload the original, but I wouldn't mind sharing full versions of certain images directly through Facebook.

On my newest picture, for example:

Right-click > Save as... offers me: 185715_182979465077370_115238648518119_369265_7842179_n.jpg (53 KB)
Download gets me: 172951_182979465077370_115238648518119_369265_7842179_o.jpg (363 KB)

Is there some way to control who can and cannot download my original pictures?
Privacy setting on Facebook for downloading pictures?
facebook;privacy;images;download
null
_cstheory.7739
Given a curve $f(x)$ (for $x \in [0,1]$) and a line $y=a$, let $U$ be the total area below $f$ and above $a$, and let $L$ be the total area above $f$ and below $a$. If $L=U$, this means that $a = \int_0^1 f(x) \, dx$. Or, if $f(x)$ is a sequence of $n$ numbers, this means $a$ is their mean. Equivalently, $U = \int_0^1 \max(f(x)-a, 0) \, dx$ and $L = \int_0^1 \max(a-f(x), 0) \, dx$.

What I want instead is the value $a$ for which $c \cdot L = U$, for some constant $c>0$. This can be computed approximately by binary search, or exactly with a more cumbersome algorithm, but: is there a closed-form expression for this value? And: has this value been studied?

(The motivation is a battery-charging problem, where $a$ is the constant level of energy available and $f$ is the energy usage at each moment, so $L$ corresponds to charging into the battery and $U$ corresponds to discharging from the battery. Because of charge/discharge inefficiencies, after charging 1 unit in you'll only be able to discharge, say, 0.5 units out. For this interpretation to make sense, you might assume $f$ is monotonically increasing.)
mean/integral, except where positive differences between values and mean are weighted differently from negative differences?
reference request;computing over reals;computable analysis
null
_unix.126244
I'm running CentOS 5 with a PATA hard drive. I've used hdparm to tune the hard disk for better performance, but there are 2 settings that don't work:

    hdparm -M 254 /dev/hda

gives the error

    HDIO_DRIVE_CMD:ACOUSTIC failed: Input/output error

and

    hdparm -d1 /dev/hda

gives the error

    HDIO_SET_DMA failed: Operation not permitted

What do I need to check to set these? It's already old hardware, so anything I can do to squeeze out more performance would be helpful. Thanks.

By request, here is the output of hdparm -iI /dev/hda and cat /proc/ide/hda/settings. DMA and acoustic settings do exist, but I just can't set them successfully. Here is the output:

    [root@hptest ~]# hdparm -iI /dev/hda

    /dev/hda:

     Model=ST3500320AS, FwRev=SD15, SerialNo=9QM6WHGY
     Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
     RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
     BuffType=unknown, BuffSize=0kB, MaxMultSect=16, MultSect=off
     CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
     IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
     PIO modes:  pio0 pio1 pio2 pio3 pio4
     AdvancedPM=no WriteCache=enabled
     Drive conforms to: unknown:  ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7

     * signifies the current active mode

    ATA device, with non-removable media
        Model Number:       ST3500320AS
        Serial Number:      9QM6WHGY
        Firmware Revision:  SD15
    Transport:          Serial
    Standards:
        Supported: 8 7 6 5
        Likely used: 8
    Configuration:
        Logical         max     current
        cylinders       16383   65535
        heads           16      1
        sectors/track   63      63
        --
        CHS current addressable sectors:    4128705
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors:  976773168
        device size with M = 1024*1024:      476940 MBytes
        device size with M = 1000*1000:      500107 MBytes (500 GB)
    Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 16
        Recommended acoustic management value: 254, current value: 0
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
    Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    DOWNLOAD_MICROCODE
                SET_MAX security extension
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
           *    General Purpose Logging feature set
           *    64-bit World wide name
           *    Write-Read-Verify feature set
           *    WRITE_UNCORRECTABLE command
           *    {READ,WRITE}_DMA_EXT_GPL commands
           *    SATA-I signaling speed (1.5Gb/s)
           *    SATA-II signaling speed (3.0Gb/s)
           *    Native Command Queueing (NCQ)
           *    Phy event counters
           *    Software settings preservation
    Security:
        Master password revision code = 65534
            supported
        not enabled
        not locked
        not frozen
        not expired: security count
            supported: enhanced erase
        102min for SECURITY ERASE UNIT. 102min for ENHANCED SECURITY ERASE UNIT.
    Checksum: correct

    [root@hptest ~]# cat /proc/ide/hda/settings
    name            value       min     max     mode
    ----            -----       ---     ---     ----
    acoustic        0           0       254     rw
    address         1           0       2       rw
    bios_cyl        60801       0       65535   rw
    bios_head       255         0       255     rw
    bios_sect       63          0       63      rw
    bswap           0           0       1       r
    current_speed   0           0       70      rw
    failures        0           0       65535   rw
    init_speed      0           0       70      rw
    io_32bit        0           0       3       rw
    keepsettings    0           0       1       rw
    lun             0           0       7       rw
    max_failures    1           0       65535   rw
    multcount       0           0       16      rw
    nice1           1           0       1       rw
    nowerr          0           0       1       rw
    number          0           0       3       rw
    pio_mode        write-only  0       255     w
    unmaskirq       0           0       1       rw
    using_dma       0           0       1       rw
    wcache          1           0       1       rw
    [root@hptest ~]#
CentOS 5 - hdparm - how to set DMA mode
centos;disk;hdparm;tuning
null
_unix.340071
G'day.

I found myself exposed to a problem that originated as a high-load problem with the logging server, which has been traced back to a samba file server that floods its syslog with a certain log error (which sums up to log files of some gigabytes, and that drives poor logrotate nuts).

Anyway, this is what samba (according to dpkg it seems to be version 4.2.10, on a Debian Jessie) keeps logging (I think every time the most tiny amount of requests comes in from someone accessing the files stored there):

    Jan 25 06:33:29 samba winbindd[25923]: [2017/01/25 06:33:29.551897, 0] ../source3/winbindd/idmap_ldap.c:95(get_credentials)
    Jan 25 06:33:29 samba winbindd[25923]:   get_credentials: Unable to fetch auth credentials for cn=sambaadmin,dc=secret-domain,dc=de in WORKGROUP

I replaced the true domain with that "secret-domain" (and the second one at the very end wasn't "WORKGROUP" either). Except for the numbers it's always the same... always the sambaadmin. So... after seeing this being printed out over and over again, I began to fear that people could have problems accessing any files on the samba store, but... no one even noticed before me. Looking inside log files from the distant past (last week), I noticed that this appears to have been happening for quite a while now, but it does not affect any work after all.

I think the nsswitch.conf file is okay, the smb.conf too, and the slapd.conf too. All processes are running, except for nscd, which I stopped after reading about its ability to make life hard. I can query that winbind using 'wbinfo -u' for users, and get all users, or groups. Sadly my knowledge of samba is limited, and I have run out of people to ask or people who know more than me about this stuff. No, I didn't install this; I just happened to be inside the recognition range of my boss when the problem-solving dude was designated... So I wonder if you can point me in a direction I might be missing as a samba newbie.

EDIT:

So... I did some more research today. By increasing the log level of everything up to 5, I found something more detailed inside the log.winbind-idmap, which looks like this:

    child daemon request 59
    [2017/01/26 12:24:20.287296, 5] ../source3/lib/smbldap.c:1249(smbldap_search_ext)
      smbldap_search_ext: base => [dc=secret-domain,dc=de], filter => [(&(uidNumber=10005)(objectClass=posixAccount)(objectClass=sambaSamAccount))], scope => [2]
    [2017/01/26 12:24:20.287761, 3] ../source3/passdb/pdb_ldap.c:5039(ldapsam_uid_to_sid)
      ERROR: Got 0 entries for uid 10005, expected one
    [2017/01/26 12:24:20.287869, 0] ../source3/winbindd/idmap_ldap.c:95(get_credentials)
      get_credentials: Unable to fetch auth credentials for cn=sambaadmin,dc=secret-domain,dc=de in WORKGROUP
    [2017/01/26 12:24:20.287947, 1] ../source3/winbindd/idmap_ldap.c:485(idmap_ldap_db_init)
      idmap_ldap_db_init: Failed to get connection credentials (NT_STATUS_ACCESS_DENIED)
    [2017/01/26 12:24:20.287971, 5] ../source3/lib/smbldap.c:1114(smbldap_close)
      The connection to the LDAP server was closed
    Finished processing child request 59

After this I searched for that gid (sometimes it's about a uid too, I think), and I ended up looking inside /var/cache/samba/gencache.tdb, which is a binary file but contains some plain text... which happens to be the only source of that gid I can find. By the way, it's not only the 10005, but 10000, and sometimes 10006 and others too. There is no Linux user with one of these uids or gids in use, so I'm really puzzled about this. At the moment I have no idea where this keeps coming from.

I even removed this tdb file with the services stopped, but as soon as someone dares to even look at something in a random samba share, the error pops up again. Still hoping for help...

EDIT 2:

Today I found tdbtool, which can be used to look inside .tdb databases located in the directory you are in while starting tdbtool... And there is tdbdump, which will enable you to put all the content into a file (or pipe it to less). By doing so I was able to get back on the track of my mysterious gids and uids; the random wrong id of the day is 10005, which happens to appear inside the gencache.tdb:

    key(20) = "IDMAP/UID2SID/10005\00"
    data(15) = "1485506196/-\00"

At the moment I can only guess where this comes from, but the number in data(15) seems to be a Windows user id or something like that. I wonder if there is some place inside Windows (the registry?) where wrong information could be stored, or - even more interesting - who is providing Windows with stuff like this. After all, it usually won't be pulled out of thin air (well... it's Windows, so this could be a valid option too), so it must be offered by samba somehow. But where does this come from?

Thanks
Samba Winbind unable to fetch auth_credentials
samba;ldap;winbind
null
_unix.369002
Tl;dr question

Is there a nice way to interactively edit a string from the shell, storing the edited value in a variable?

Basically exactly like

    a=$(cat hello world > /tmp/command && vim /tmp/command </dev/tty >/dev/tty && cat /tmp/command)

but without clearing my screen while editing (so that I don't lose my place and can still read output from previous commands).

I really want behaviour exactly like imv or icp, but I just want to get the string back rather than moving or copying a file. Or like

    a=$(echo hello world | zenity --entry-text $(cat) --entry)

but without X.

This is rather an "I WANT A PONY" type question, but it feels useful in general situations.

Context

Stack Exchange loves context, so here is my specific use case: I love zsh line editing. I'm using it for a zsh widget to add parts of a command in my history to my current line, like so:

    strace <M-l h> -> <LIST of history commands> -> SELECT COMMAND -> POP UP EDITOR -> TWEAK COMMAND -> strace command with lots of arguments some of which I want to edit

Alternatives considered

- Make use of the shell's command-line editor and history (!!) to do what I want.
- Using zenity: echo hello | zenity --entry-text -entry <- I don't really like using X
Way of interactively editing some text from the shell
shell
You're describing something like the zsh vared builtin. It puts the current value of a variable into ZLE, and when you finish editing, the edited line becomes the new value of the variable.

    % x=foo
    % vared x
    foo
    [Do some edits to change foo to bar and press Return]
    % echo $x
    bar
_unix.312056
I use the gv command in my scripts to view *.pdf or *.ps files. (I do that instead of using acroread or okular because gv has a nice option, -watch, that allows me to see changes in a troff or LaTeX file I am working on while using the vi editor.) However, on my new laptop (a Dell Precision M with Ubuntu 14.04 installed) gv keeps giving me trouble by always complaining like this:

    $ gv test.pdf
    Warning: Cannot convert string "-*-Helvetica-Medium-R-Normal--*-140-*-*-P-*-ISO8859-1" to type FontStruct
    Warning: Cannot convert string "-*-Helvetica-Medium-R-Normal--*-120-*-*-P-*-ISO8859-1" to type FontStruct
    Warning: Cannot convert string "-*-Helvetica-Medium-R-Normal--*-100-*-*-P-*-ISO8859-1" to type FontStruct
    Warning: Cannot convert string "-*-Helvetica-Bold-R-Normal--*-120-*-*-P-*-ISO8859-1" to type FontStruct

Frankly, I don't care about those fonts, and anyway, I think it is some kind of a bug (since neither acroread nor okular ever complained about that). So, I tried to simply suppress those warning messages by using any options available for the gv command to be quiet, silent, etc., which could be found in man gv or gv --help, like -quiet, -infoSilent, -dsc, -eof -- but to no avail. gv is hell-bent on screaming out those four lines no matter what the target file is. I could have lived with that, but in my scripts, a few lines after the gv command I have a vi command, and those complaints mess up my text in vi.

Any way of fixing that?
gv stubborn complaints
ghostscript
I would fix it by installing the fonts. On my Debian system, those are in the xfonts-75dpi and xfonts-100dpi packages. Red Hat uses different package names.

Other people simply ignore warnings, which are usually sent to the standard error:

    gv test.pdf 2>/dev/null
_vi.10093
The 3rd block of code needs to go from ipt1 to ipt4. The 4th block of code needs to go from ipt1 to ipt5. Because it is only 3 selections, creating a macro is probably slower than doing it manually, but doing it manually is still tedious.

What is the fastest way to change these? I come across situations like this a lot, and usually I try to use visual block mode with the change function, but these are not aligned, so that is not an option.
How to quickly replace a single character in a word search, but only for a block of code in Fakevim?
search;visual mode;replace;visual block
null
_webapps.102940
I have a group on Facebook where folks can ask a question inside a comment. I want to be able to respond to these comments quickly, but Facebook doesn't notify me about comments in the group, only about publications and other stuff. Is there any setting to enable this?

I've sent a request to FB support, but haven't got any answer yet.
How to get notified about comments in my group in Facebook?
facebook;facebook notifications
null
_unix.154306
I want a solution for the general case (N folders). I'm using awk to process a file, extract its content, put it in a variable, and then echo it.

This is the file:

    H1 H2 H3 H4 H5 H6 H7 H8 H9
    not important
    not
    not

This is the code:

    value1=$(awk '/H1/ { print $1 }' file)
    value2=$(awk '/H1/ { print $2 }' file)
    value3=$(awk '/H1/ { print $3 }' file)
    echo $value1
    echo $value2
    echo $value3

I get the result:

    H1
    H2
    H3

My question: if I have multiple files with the same format as this file (not the exact content, the same format), located in different folders but with the same name:

    /folder1/file
    /folder2/file
    /folder3/file

how can I echo the first 3 values of the H1 line from each file in those folders, so I get 9 results?
How to use awk through multiple files?
text processing;awk
I wonder whether you're leaving something out of the question, because you seem to be doing more work than you need to for what you say you want to do. That is, if I'm understanding you correctly.

If all you want to do is output (echo) the first three fields (values) from the line in the file that contains H1 (assuming that there is only one such line), all you need to do is

    awk '/H1/ { print $1, $2, $3 }' input_file

or, if you want the values on three separate lines,

    awk '/H1/ { print $1; print $2; print $3 }' input_file

To achieve the same result for multiple files, just list their names, for example, using brace expansion:

    awk '/H1/ { print $1; print $2; print $3 }' /folder{1,2,3}/file

or use a wildcard, as the other answers suggested:

    awk '/H1/ { print $1; print $2; print $3 }' /folder?/file
_codereview.26269
I am working on making page titles responsive. The code I have works and gets the job done, but I know that this is verbose. I decided upon the widths by trial and error, based on how the words were stacking on each other.

Desktop: [screenshot]
Tablet: [screenshot]
Mobile: [screenshot]

This is what I currently have as code.

HTML:

    <div class="row">
        <div id="page-title">
            <h1>BruxZir<sup>&reg;</sup> Solid Zirconia Crowns &amp; Bridges</h1>
        </div>

CSS:

    #page-title, #page-title-video {
        background: url(../img/top-banner.jpg) #273344 no-repeat right top;
        padding: 0.2em 0 0.2em 1em;
    }
    #page-title { margin: 12px 15px 24px 15px; }
    #page-title-video { margin: 12px 0 24px 0; }
    #page-title h1, #page-title-video h1 {
        color: #FFFFFF;
        letter-spacing: 0.08em;
    }

    @media screen and (min-width: 1px) and (max-width: 321px) {
        #page-title h1, #page-title-video h1 { font-size: 18px; }
    }
    @media screen and (min-width: 322px) and (max-width: 569px) {
        #page-title h1, #page-title-video h1 { font-size: 20px; }
    }
    @media screen and (min-width: 570px) and (max-width: 749px) {
        #page-title h1, #page-title-video h1 { font-size: 22px; }
    }
    @media screen and (min-width: 750px) and (max-width: 950px) {
        #page-title h1, #page-title-video h1 { font-size: 24px; }
    }
    @media screen and (min-width: 951px) {
        #page-title h1, #page-title-video h1 { font-size: 36px; }
    }
Minimizing CSS media queries for the page title?
html;css
null
_cs.67118
I know $P$ vs $NP$ is an open problem in Computer Science. However, the people concerned (at least according to Wikipedia) believe $P \neq NP$. I have two questions:

$(1.)$ Given that it is possible to prove lower bounds for problems: does there exist any problem whose solution can be verified in polynomial time, which no known algorithm solves in polynomial time, and which has a proven lower bound?

$(2.)$ Is this proven lower bound polynomial or exponential?
Lower bounds and $P$ vs $NP$
algorithms;time complexity;asymptotics
null
_softwareengineering.191911
https://stackoverflow.com/questions/98734/what-is-separation-of-concerns

In computer science, separation of concerns (SoC) is the process of breaking a computer program into distinct features that overlap in functionality as little as possible. A concern is any piece of interest or focus in a program. Typically, concerns are synonymous with features or behaviors. Progress towards SoC is traditionally achieved through modularity and encapsulation, with the help of information hiding.

From the Pro ASP.NET MVC 4 book (page 375):

The problem with relying on route names to generate outgoing URLs (@Html.RouteLink("Click me", "MyOtherRoute", "Index", "Customer")) is that doing so breaks through the separation of concerns that is so central to the MVC design pattern. When generating a link or a URL in a view or action method, we want to focus on the action and controller that the user will be directed to, not the format of the URL that will be used. By bringing knowledge of the different routes into the views or controllers, we are creating dependencies that we would prefer to avoid.

a) I understand that we create a dependency (between an action method/view and the routing configuration module) by having Html.RouteLink (called within an action method or view) specify the name of the route we want to use. But is introducing such a dependency already considered a violation of SoC? Namely, even though we created a dependency between the two modules, we haven't actually introduced any additional functionality/concern into either of the modules (the definition of SoC implies that a violation of SoC occurs when new functionality/concern is introduced into a module).

b) Anyhow, I don't understand how simply generating a URL (within an action method/view) by specifying a named route brings focus to the format of the URL.

Thank you
Why is using named routes for generating outbound URLs a violation of Separation of Concerns?
asp.net mvc;mvc;design patterns;separation of concerns
I think what the Pro ASP.NET MVC book means is that, by referring to a route by name you have now created a dependency on a particular route definition, rather than relying strictly on the action and controller that will be called.The route is what determines the shape of the URL. If you create a link by using a route name, you are literally saying I want the URL to be this shape, rather than saying I want the URL to invoke this functionality and letting the route engine decide which route is most appropriate.Whether or not this makes sense in your particular application ultimately depends on your needs. Using a named route creates a level of indirection which allows you to change both the shape of the URL, and the controller/method that gets called, by merely changing the entry in the route table.As to the coupling aspect, using a named route does tightly-couple the links to that specific route because no other route would be eligible, and because it requires the view to have knowledge of that specific route. Whether that is a problem or not, again, depends on your needs. If that coupling is a desirable feature, it doesn't really matter whether it is tight or not according to someone else's opinion.
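To make the contrast concrete, here is a hedged sketch (the route and controller names are placeholders, not from the book): the first helper names only the endpoint and lets routing pick the URL shape, while the second pins the link to one specific route definition.

@* Couples only to controller/action; the routing engine chooses the URL shape *@
@Html.ActionLink("Click me", "Index", "Customer")

@* Couples to the route named "MyOtherRoute"; the URL shape is fixed by that route entry *@
@Html.RouteLink("Click me", "MyOtherRoute", new { controller = "Customer", action = "Index" })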
_cs.50304
You are given a directed acyclic graph G = (V, E) in which each node has one left out-arc and one right out-arc, with a distinguished source node s and sink node t. You are also given a list of ties (u, v) which say that if you take the left [right] edge out of u, then you must also take the left [right] edge out of v. Is there a path from s to t subject to the ties? Show that this decision problem is NP-complete.

For this I know we can reduce 3SAT to the problem of checking whether a path exists between 's' and 't', but I am finding it difficult to add the constraints of the ties into it.
Reduction from 3SAT
complexity theory;np complete;reductions;np hard
null
_unix.25982
I want to shut down my X server to be able to install the proprietary NVidia driver, but I couldn't find out how. There are some suggestions that didn't work for me.

I have neither /etc/init.d/gdm nor /etc/init.d/kdm nor /etc/init.d/xdm, on which I could call stop or start. The /etc/init.d/x11-common commands seem to have no effect, and neither does init 2.

What can I try next?
stop Xserver in Xubuntu 11
ubuntu;nvidia;x server;xubuntu
Xubuntu uses upstart, so you should usesudo service gdm stoporsudo service lightdm stopdepending if you are using Ubuntu 11.04 (or prior) or Ubuntu 11.10.
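To sketch the whole driver-install workflow this enables (the installer filename below is an assumption, not something from the question): switch to a text console with Ctrl+Alt+F1, log in, and then:

sudo service lightdm stop                 # free the X server (use gdm on 11.04 or earlier)
sudo sh ./NVIDIA-Linux-x86_64-*.run       # run the proprietary installer (hypothetical filename)
sudo service lightdm start                # bring the desktop back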
_unix.139679
When I copied some files from work onto my external hard drive, it copied them as encrypted files.

NOTE: The drive is not encrypted, just individual files.

Now when I try to copy files from the external drive to my PC, Nautilus crashes. In fact, Nautilus crashes even when I browse a directory that contains an encrypted file. I can browse the file system using the terminal, although I get Permission Denied errors if I try to copy the encrypted files. On Windows the encrypted files are listed in green in Windows Explorer.

Is there any way that I can list which files on the external drive are encrypted?

UPDATE #1

I understand that you cannot decrypt these encrypted files under Linux, per this Q&A: Use Linux to read contents of Windows encrypted folder. However, I can list them in the terminal, so I'm looking for a way to list which ones are encrypted and which ones are not.
How can I tell If a file is NTFS encrypted?
encryption;ntfs
null
_webmaster.3270
I have some extensive website that I need to analyze the images and see if they can be optimized. Can any one recommend a good program for this?
What is a good program for optimizing images on a website?
images;optimization
null
_computergraphics.1644
Most information about memory-mapped displays on the net is about those in which there is essentially a location in main memory for each pixel on the display. A hypothetical 1024 x 512 display would therefore have 524,288 locations, each mapped to a unique pixel. To set a pixel to a particular color, all you basically need to do is write an RGB value into the corresponding memory location for that pixel, eg: MOV 0xFF3C0A, 0x00BB75. The first argument in that instruction is the address holding the RGB value and the second is the address of the pixel you are copying it to.

However, this can use up quite a lot of main memory, which might not be ideal for certain systems. In a machine with a 32-bit or higher word size, it could instead be possible to have a single display output register into which you would write a single word of data containing the X and Y coordinates as well as the RGB value for the pixel you want to set. For a 1024 x 512 display with a 12-bit color depth, 31 bits would suffice to specify all this. The data 'written' into that register would either be sent to a display adapter or sent directly to the display and decoded there. This would save over half a megabyte of memory while still retaining the memory-mapped model.

So is this actually a thing or not?
Does this type of memory-mapping for a display exist?
pixels;memory
null
_unix.313840
I wanted to burn an ISO to a DVD+R. I decided to do a dummy write first (where the laser is off so it's basically a dry run) since I'd never used the command before and I wanted to make sure I was doing it right. I did wodim -v dev=/dev/sr0 speed=4 -dummy -eject path/to.iso and it looked like it was working correctly. Figuring I didn't want to wait for it to finish pretending to burn the large disc image, I hit Ctrl+C and typed the same command without -dummy to start the burn for real.wodim: WARNING: Data may not fit on current disk.wodim: Notice: Most recorders cannot write CD's >= 90 minutes.wodim: Notice: Use -ignsize option to allow >= 90 minutes.wodim: Notice: Use -overburn option to write more than the official disk capacity.wodim: Notice: Most CD-writers do overburning only on SAO or RAW mode.I took the disc out and looked at the bottom, and saw a thin ring with a difference in shade, indicating a small amount of data had been burned to the disc. I can only conclude that for some reason the -dummy option didn't work, and it was burning the image for real, at least until I aborted it.I figure the -overburn option it suggests is used to burn a new track from the beginning, which isn't what I want. I need to boot from this disc, so the actual structure of the data matters, not just that the files I want are accessible. How do I make it finish where it left off so the disc doesn't go to waste? I don't see any exact indication of where it left off (just 145 of 4177 MB written, which isn't exact enough) but it should be easy to determine by reading the disc and the image and finding where they first differ.EDIT: I just used cmp to compare /dev/sr0 to the ISO, and it said the first differing byte is byte 152307713. So that's where it left off. If I cut off the beginning of the ISO so it starts at that byte, and then burn that file to the disc using the same command, will that work? Or will there be a track boundary or something in between that will cause problems?EDIT 2: Here's the output of the commands suggested by Thomas Schmitt:$ dvd+rw-mediainfo /dev/sr0INQUIRY: [MATSHITA][DVD+-RW UJ8C7 ][1.00]GET [CURRENT] CONFIGURATION: Mounted Media: 1Bh, DVD+R Media ID: CMC MAG/M01 Current Write Speed: 8.0x1385=11080KB/s Write Speed #0: 8.0x1385=11080KB/s Write Speed #1: 2.4x1385=3324KB/s Speed Descriptor#0: 01/2295103 [email protected]=4294967040KB/s [email protected]=11080KB/s Speed Descriptor#1: 01/2295103 [email protected]=4294967040KB/s [email protected]=3324KB/sREAD DVD STRUCTURE[#0h]: Media Book Type: 00h, DVD-ROM book [revision 0] Legacy lead-out at: 2295104*2KB=4700372992READ DISC INFORMATION: Disc status: appendable Number of Sessions: 1 State of Last Session: incomplete Next Track: 1 Number of Tracks: 2READ TRACK INFORMATION[#1]: Track State: partial/complete Track Start Address: 0*2KB Next Writable Address: 74384*2KB Free Blocks: 2064480*2KB Track Size: 2138864*2KBREAD TRACK INFORMATION[#2]: Track State: blank Track Start Address: 2138880*2KB Next Writable Address: 2138880*2KB Free Blocks: 156224*2KB Track Size: 156224*2KB ROM Compatibility LBA: 265696READ CAPACITY: 0*2048=0$ cdrskin -v dev=/dev/sr0 -minfocdrskin 1.4.2 : limited cdrecord compatibility wrapper for libburncdrskin: verbosity level : 1cdrskin: NOTE : greying out all drives besides given dev='/dev/sr0'cdrskin: scanning for devices ...cdrskin: ... 
scanning for devices donecdrskin: pseudo-atip on drive 0cdrskin: status 3 BURN_DISC_APPENDABLE There is an incomplete disc in the drivescsidev: '/dev/sr0'Device type : Removable CD-ROMVendor_info : 'MATSHITA'Identifikation : 'DVD+-RW UJ8C7'Revision : '1.00'Drive id : 'WQ36 064543'Driver flags : BURNFREESupported modes: TAO SAOcdrskin: burn_drive_get_write_speed = 11080 (8.0x)Current: DVD+RProfile: 0x0012 (DVD-RAM)Profile: 0x002B (DVD+R/DL)Profile: 0x001B (DVD+R) (current)Profile: 0x001A (DVD+RW)Profile: 0x0013 (DVD-RW restricted overwrite)Profile: 0x0014 (DVD-RW sequential recording)Profile: 0x0016 (DVD-R/DL layer jump recording)Profile: 0x0015 (DVD-R/DL sequential recording)Profile: 0x0011 (DVD-R sequential recording)Profile: 0x0010 (DVD-ROM)Profile: 0x000A (CD-RW)Profile: 0x0009 (CD-R)Profile: 0x0008 (CD-ROM)Profile: 0x0002 (Removable disk)book type: DVD+R (emulated booktype)Product Id: CMC_MAG/M01/48Producer: CMC Magnetics CorporationManufacturer: 'CMC MAG'Media type: 'M01'Mounted media class: DVDMounted media type: DVD+RDisk Is not erasabledisk status: incomplete/appendablesession status: emptyfirst track: 1number of sessions: 1first track in last sess: 1last track in last sess: 2Disk Is unrestrictedDisk type: DVD, HD-DVD or BDTrack Sess Type Start Addr End Addr Size============================================== 1 1 Apdbl 0 2138863 2138864 2 1 Blank 2138880 2295103 156224 Next writable address: 2138880 Remaining writable size: 156224 Warning: Incomplete session encountered !$ xorriso -outdev /dev/sr0 -tocxorriso 1.4.2 : RockRidge filesystem manipulator, libburnia project.Drive current: -outdev '/dev/sr0'Media current: DVD+RMedia status : is written , is appendableMedia summary: 1 session, 2295104 data blocks, 4483m data, 305m freexorriso : WARNING : Incomplete session encountered !Drive current: -outdev '/dev/sr0'Drive type : vendor 'MATSHITA' product 'DVD+-RW UJ8C7' revision '1.00'Drive id : 'WQ36 064543'Media current: DVD+RMedia product: CMC_MAG/M01/48 , CMC Magnetics CorporationMedia status : is written , is appendableMedia blocks : 1 readable , 156224 writable , 2295104 overallTOC layout : Idx , sbsector , Size , Volume IdIncmp session: 1 , 0 , 0s , Media summary: 1 session, 2295104 data blocks, 4483m data, 305m freeMedia nwa : 2138880sxorriso : WARNING : Incomplete session encountered !$ cdrecord -v dev=/dev/sr0 -minfowodim: Bad Option: -minfo.Usage: wodim [options] track1...tracknUse wodim -helpto get a list of valid options.Use wodim blank=helpto get a list of valid blanking options.Use wodim dev=b,t,l driveropts=help -checkdriveto get a list of drive specific options.Use wodim dev=helpto get a list of possible SCSI transport specifiers.
How do I resume a 'wodim' DVD burn aborted with Ctrl+C?
iso;dvd;burning;wodim
Probably you will have to give up this partly written medium and start with a new (blank) DVD.

It is theoretically not impossible to resume a write run on an incompletely written DVD+R track. But I am not aware of any burn program which would do it. I may be wrong, though. So just try what happens if you let a burn program act on that medium.

I'd expect that the burn programs will either complain about an open track and abort, or that they will try to start a new track in the yet unclaimed area on the DVD. Both will not yield a flawless copy of your ISO on the DVD.

Further opinions and info:

wodim is not really suitable for DVD. Use growisofs, cdrskin, xorrecord, or cdrecord.

Drives with DVD+R media in them do not offer simulated writing. Whatever wodim did when you ran it with option -dummy, it was not the same as what you see with CD-R[W], DVD-R, or unformatted DVD-RW. Take its starting of real burning as an indication that wodim has no clue about DVD+R, DVD+RW, DVD-RAM, formatted DVD-RW, or BD media. (It might suffice for DVD-R and unformatted DVD-RW, because they behave quite similarly to CD-R.)

You may inspect the current state of the DVD+R by one of the following commands:

dvd+rw-mediainfo /dev/sr0
cdrskin -v dev=/dev/sr0 -minfo
xorriso -outdev /dev/sr0 -toc
cdrecord -v dev=/dev/sr0 -minfo

Update after Edit 2 in the question:

wodim: Bad Option: -minfo indicates that you did not try original cdrecord but rather its meanwhile quite orphaned clone wodim. There the option would be the older -toc rather than -minfo. The output is harder to interpret. Whatever, the output of dvd+rw-mediainfo tells the story in best detail.

wodim reserved track number 1 with a size of 2138864 blocks =~ 4177 MiB. This track would still be writable beginning at block 74384 =~ 145 MiB. But this writability of existing tracks is a special feature of DVD+R (and maybe BD-R) which does not fit well into the usage model of burn programs. So they rather will try to use the remaining unreserved track number 2, which begins at block 2138880. If they accept this medium state at all.

At least cdrskin and xorriso announce that they would try writing there, by their statements Next writable address: and Media nwa. growisofs source code looks like it will make the same choice. About (original) cdrecord I can only guess.

Of course, a write attempt of the remaining ISO to track 2 will fail because it has only 300 MB free. (It would create a giant gap of unreadable sectors anyway.)

What a burn program would possibly have to do:

It is mainly about determining the Next Writable Address from the existing track rather than from the next track to come. This could be overridden in libburn function burn_disc_track_lba_nwa(), or after cdrskin has called it in its function Cdrskin_obtain_nwa(). In the end, cdrskin variable *nwa would need to get the value 74384.

In growisofs the function to determine the NWA is plusminus_r_C_parm(). The variable next_session would need to get the value 74384. Probably one will have to give the program run the additional option -use-the-force-luke=seek=74384 and use option -Z rather than -M.

Another potential problem is that the programs after such a hack could still issue the SCSI command RESERVE TRACK. This must be prevented. It seems that growisofs sends the command only to DVD-R, DVD-R DL, and unformatted DVD-RW. cdrskin will not send it if its option -tao is present.

It has to be feared that this sketch is not fully sufficient and that experiments spoil the partly written DVD+R beyond repair.
If you want to dare it nevertheless, the starting point would be to get the source code of dvd+rw-tools (for growisofs) or of statically linked cdrskin. Then we could begin to discuss by mail what code change will give the best chances for success on the first and only try. The outcome would then be reported here.

(In case it is not obvious: I am a developer of libburn and cdrskin.)
_datascience.21744
I am attempting to aggregate professional profile info from multiple sources, imposing a consistent taxonomy. Specifically, the current problem is how to impose a preferred taxonomy on profiles with inconsistent or absent in-bound taxonomy terms.

The primary source of profile info is biography pages on people's employer websites. Some of those sites choose to state employees' multiple specialist topics, some make only narrative biographies available, some both. I have collected all available info, using Python's Scrapy, into CSV files (one per company; people are rows); where available, topics at my end now themselves reside in a comma-separated field/string.

Example: in one sheet, cell S7 is: Analytics Applications,Big Data,Cognitive Computing,Competitive Intelligence,eDiscovery,Enterprise Content Management (ECM),Information Architecture,Market Research,Product Information Management (PIM)

The problem is severalfold:

- Taxonomy terms across companies are inconsistent (eg. Cognitive Computing in the above example may, to another company, be AI).
- Some companies use far too many terms in total (eg. one company alone uses approx 450 tags in total).
- Often, none are available at all.
- As biography narratives describe more than just employees' specialist topics (eg. education and upbringing background), their usefulness in automation may be questionable.

My goal is to create a taxonomy that categorises all the collected person bios in a much more harmonious, consistent and briefer fashion.

The system setup is PHP/MySQL/WordPress. Profile CSVs are imported into WordPress, and the system has the ability to perform PHP functions on imported content (not just on the info in WordPress after import, but during import via PHP).

The total profiles count is approx 4,500, so manual taxonomisation is unappealing. So I have examined AI/machine learning techniques. I am not strictly a developer and certainly not a data scientist or mathematician.

So far, I have found text classification tests carried out using Aylien and Monkey Learn to yield poor results. In each case, output results are not granular enough, ie. turning in-bound terms of biogs about granular topics like cloud computing infrastructure and data centres into overly basic terms like Computers & Internet. Aylien uses the off-the-shelf IPTC NewsCodes taxonomy, and I understand I can use Monkey Learn to train. I like the idea of using a standardised off-the-shelf taxonomy like NewsCodes, but a) the results are questionable, and b) it may not be granular enough for my needs.

At this point, I have decided to draw up my preferred hierarchy of taxonomy terms, approx 230, which should each speak roughly to the swathe of inconsistent in-bound terms and profiles (in other words, correlate to the people's topics). That seemed like an important step, assuming I need to steer this manually. But I'm struggling to grasp how to actually implement that correlation.

So, I am looking for some guidance on best methods.

One idea I am toying with is to put my own preferred taxonomy into WordPress as taxonomy terms, and, alongside each, put a cluster of terms from the actual source material so that, if one of the related terms is found in a user's inbound data, the term from my preferred taxonomy should be assigned. But I'm not sure whether this is particularly efficient, or even wise.

This is my first time on the Data Science group at StackExchange. I apologise if I have shot wide of the mark here at all.
What methods to create singular content classification from inconsistent inbound info?
classification;text
null
_unix.103765
It appears that the Postgresql installation is split into three folder locations on Debian:

Configuration: /etc/postgresql
Binaries: /usr/lib/postgresql
Data: /var/lib/postgresql

I understand the benefits of splitting up the configuration files and the data; however, the binaries location is confusing to me: why wouldn't it simply be in /usr/bin?

More to the point, why would some binaries go into /usr/bin and others into /usr/lib?
Logic behind Postgres binary installation path on Debian
debian;postgresql;fhs
This splitting is pretty typical for most services. I'm on Fedora, but most distributions do the same in terms of organizing files into designated areas based on their type.

Taking a look at the Postgres SQL server:

Configuration files go into /etc/
Executables go into /usr/bin
Libraries go into /usr/lib64/pgsql/
Locale information goes into /usr/share/locale/
Man pages and docs go into /usr/share/
Data goes into /var/lib/

The rationale for having a libraries directory (/usr/lib/postgresql in your case, which is equivalent to /usr/lib64/pgsql/ for my install) is that applications can make use of libraries of functions that are provided by Postgres. These functions are contained in these libraries. So as an application developer, you could link against the libraries here to incorporate function calls into Postgres into your application. These libraries will oftentimes include API documentation, and the developers of Postgres make sure to keep their API specified and working correctly through these libraries, so that applications that make use of them can be guaranteed to work correctly with this particular version of Postgres.
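If you want to see exactly how a Debian package spreads its files across these locations, the package manager can list them directly; a small sketch (the exact package names depend on the Postgres version installed, so postgresql-9.1 here is only an example):

dpkg -L postgresql-common | head             # shared infrastructure files
dpkg -L postgresql-9.1 | grep /usr/lib       # version-specific binaries under /usr/lib/postgresql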
_unix.379376
I have a file like:

available_space:1232334343
capacity:123456432
total_space:1232323232

I want to calculate capacity/total_space, so I need to calculate 123456432/1232323232. I can imagine I need to use something like:

cat my_file | awk -F: 'FNR==2 {print $2}'

but I cannot write the division itself; I'm not quite sure about the syntax. So how can I do that?
divide second field of n.th line of a file
linux;shell script;awk
According to your initial approach, the crucial lines are only the 2nd and 3rd:

awk -F':' 'NR==2{ c=$2 } NR==3{ print c/$2 }' my_file

which prints:

0.100182
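If you prefer the result formatted, awk's printf works too; a small sketch that reports the ratio as a percentage with two decimals:

awk -F':' 'NR==2{ c=$2 } NR==3{ printf "%.2f%%\n", 100*c/$2 }' my_file

This prints 10.02% for the numbers in the question.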
_softwareengineering.77160
I heard about a tool named FitNesse, which is supposed to promote better collaboration between development, testing, and product groups.

What are your experiences with it? Does it really improve program quality? Are there any drawbacks to using it?
Does FitNesse improve product quality and collaboration?
productivity;tools;collaboration
FitNesse is an interesting tool. I think it can work well in some cases, and maybe not so well in others. The table-driven tests are very good for testing business rules and the like. If the product group is used to using Excel to communicate requirements, FitNesse is a really good fit.

Where I work, we're not really in a business-y environment, but we use FitNesse in a few places to produce executable documentation of some of our external command protocols. Having the docs and the tests combined in one document ensures that both get maintained well.

One thing I really like about FitNesse is the multiple language bindings. Because of this, the tests can be used as-is in the context of a re-write using a new language. Not a common scenario, obviously, but an interesting one. The fixture code acts as a shearing layer that allows your code and your tests to stay decoupled.

Probably the biggest weakness I see with FitNesse is the lack of tools for maintaining the test suite (refactoring, mass editing, etc.). People are working on those, though.

If you decide to try FitNesse, I highly recommend reading anything you can get your hands on by Rick Mugridge (http://www.rimuresearch.com) and Gojko Adzic (http://gojko.net/ and http://fitnesse.info/), including their books. It's really easy to write unmaintainable, uncommunicative script tests with FitNesse, and these guys will get you on the right path.
_cs.32232
Can every recursively enumerable language be defined with a regular expression? I came across this question when studying for my test:

Prove that for any finite language $L$, there is a Turing machine $M$ with $L(M) = L$ with time and space complexity $t(n) \leq n+1, s(n) \leq n+2$, respectively ($n$ is the length of the input word).

My proof goes as follows: We can create a DFA from a regular expression defining the language. Then we read the input symbol by symbol and output YES if we finished in a final state, or NO otherwise. Every DFA can be transformed into a TM where every transition of the DFA corresponds to one transition of the TM. It is obvious that we only need $n+1$ steps to read the input (+1 for the blank at the end) and $n+2$ cells.

However, I am not sure if I can assume here that there is a regular expression for every recursively enumerable language. And if not, would my proof still be valid? Can I create a DFA for an arbitrary RE language?
Can every recursively enumerable language be defined with regular expression?
formal languages;turing machines
Of course you can't create a DFA for every RE language. The language $\{0^n1^n\mid n\geq 0\}$ is well known to not be regular and is obviously RE.But you don't need that for this question. The question refers to finite languages, which are all regular. Also, given that finite languages are regular, they can be decided by a DFA, which is a Turing machine that uses zero storage space (so much less than the $O(n)$ allowed by the question) and which always moves the head forwards.
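For completeness, the fact about finite languages that the answer relies on can be made explicit with a one-line construction (standard material, not specific to this thread):

$\qquad L = \{w_1, w_2, \ldots, w_k\} \implies L = L(w_1 \mid w_2 \mid \cdots \mid w_k)$

i.e. a finite language is denoted by the regular expression that is simply the union of its finitely many words, so a DFA (and hence the forward-moving Turing machine described in the answer) always exists for it.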
_softwareengineering.290213
I am currently trying to refactor some code, and one of the problems I came across was that a constructor had far too many parameters (15 in fact) and was being initialised from another object which had the same number of properties.

I am meant to be reducing the number of classes we have, so creating a set of classes to represent the different parts of the parameters seemed silly, so I began looking, and the best answer I came across was from this question on stackoverflow.

public class DoSomeActionParameters
{
    readonly string _a;
    readonly int _b;

    public string A { get { return _a; } }
    public int B { get { return _b; } }

    DoSomeActionParameters(Initializer data)
    {
        _a = data.A;
        _b = data.B;
    }

    public class Initializer
    {
        public Initializer()
        {
            A = "(unknown)";
            B = 88;
        }

        public string A { get; set; }
        public int B { get; set; }
    }

    public static DoSomeActionParameters Create(Action<Initializer> assign)
    {
        var i = new Initializer();
        assign(i);
        return new DoSomeActionParameters(i);
    }
}

which can be called like so, using a lambda:

DoSomeAction(
    DoSomeActionParameters.Create(
        i => { i.A = "Hello"; })
);

I really liked this method: even though it doesn't directly solve the problem of actually initialising the class, it does mean that I can have one constructor instead of 3. Also it doesn't require my class to know about objects it doesn't care about.

My problem is, however, that I do not know what this pattern is called, so I can't do any further research on it to find out if it is still valid, is superseded by another pattern, or whether it has severe performance issues. From my research the closest pattern I could find is the builder pattern, but this seems to be more of an extension of that.

As a bonus point, it would be great if you could give me a hand with how to make certain parameters mandatory, as we have about 10 parameters from the database and 5 more which are just flags which we don't really need to know about.
Unknown design pattern
design patterns;refactoring
I am creating my own answer (after a discussion on meta) because I believe that the correct answer is in fact a combination of everything so far.

Firstly, as DavidArno put it, the pattern that I asked about is more of an anti-pattern, because it is just hiding the problem somewhere else. However, I believe the suggestion of using the builder pattern from the answer by Carl Manaster does solve the issue that I have, because I cannot break down the list of parameters without over-engineering the Single Responsibility Principle.

The problem I had with the builder pattern was that there was no way to make it easy for a developer using it to know about the mandatory fields, which would mean lots of trial and error, which would lead to copying and pasting the initialisation everywhere, which would lead to more mistakes. Fuhrmanator, however, linked to the following blog, which demonstrates how to have mandatory fields with the builder pattern (after looking elsewhere, I believe this is called a Step Builder).

Essentially, the principle of the Step Builder is to guide a developer through the initialisation of all the mandatory fields. Your builder implements various interfaces which act as steps to create the object. Each step has a method whose return type is the next step. This aids IntelliSense more than the standard builder pattern as well, because it cuts down the number of methods you can call. See the following example from the blog.

public class Address {

    private String protocol;
    private String url;
    private int port;
    private String path;
    private String description;

    // only builder should be able to create an instance
    private Address(Builder builder) {
        this.protocol = builder.protocol;
        this.url = builder.url;
        this.port = builder.port;
        this.path = builder.path;
        this.description = builder.description;
    }

    public static Url builder() {
        return new Builder();
    }

    public static class Builder implements Url, Port, Build {
        private String protocol;
        private String url;
        private int port;
        private String path;
        private String description;

        /** Mandatory, must be followed by {@link Port#port(int)} */
        public Port url(String url) {
            this.url = url;
            return this;
        }

        /** Mandatory, must be followed by methods in {@link Build} */
        public Build port(int port) {
            this.port = port;
            return this;
        }

        /** Non-mandatory, must be followed by methods in {@link Build} */
        public Build protocol(String protocol) {
            this.protocol = protocol;
            return this;
        }

        /** Non-mandatory, must be followed by methods in {@link Build} */
        public Build path(String path) {
            this.path = path;
            return this;
        }

        /** Non-mandatory, must be followed by methods in {@link Build} */
        public Build description(String description) {
            this.description = description;
            return this;
        }

        /** Creates an instance of {@link Address} */
        public Address build() {
            return new Address(this);
        }
    }

    interface Url {
        public Port url(String url);
    }

    interface Port {
        public Build port(int port);
    }

    interface Build {
        public Build protocol(String protocol);
        public Build path(String path);
        public Build description(String description);
        public Address build();
    }
}

You have the collection of mandatory parameters, which each return the next step, but what I also think is great about this is that you can have optional parameters as well, by making the methods' return type the Build interface.

This solves my problem, because I had mandatory parameters mixed in with optional ones, which made expanding it difficult, as there was lots of copying and pasting and potential problems.
With the step builder I can control how a developer creates my object, so that I have facts to use in the future, and I allow them the flexibility of creating the object in as many ways as they possibly need to. Also, it makes the code very readable!
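For illustration, here is a hedged sketch of what calling the step builder above looks like (the values are made up); the compiler forces url() and then port() before anything optional or build() becomes reachable:

Address address = Address.builder()
        .url("example.com")    // step 1: mandatory, returns Port
        .port(8080)            // step 2: mandatory, returns Build
        .protocol("https")     // optional, stays on Build
        .build();              // finally constructs the Address

IntelliSense at each step only offers the methods of the current interface, which is exactly the guidance through the mandatory fields described above.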
_unix.387574
My server program is running and I am trying to establish some TCP connections from a Redis client. I am trying to establish 1150 client connections, and when I check netstat on the server side I find a few entries in the ESTABLISHED - state, which I don't understand the reason for.

At the client side:

[root@smarak-2storage-testvnf-vm0 src]# ulimit -n
4096
[root@smarak-2storage-testvnf-vm0 src]# ./redis-benchmark -h 10.111.89.230 -p 6379 -c 1150 -t set -n 20000 -d 10000 -r 100000000000000 -I

At the server side:

[root@sdl-blr-vm-1-14 src]# ulimit -n
1024
[root@sdl-blr-vm-1-14 src]# netstat -anp | grep -i 6379
tcp 129 0 0.0.0.0:6379 0.0.0.0:* LISTEN 31535/respAccess
tcp 0 0 10.111.89.230:6379 10.111.89.112:34276 ESTABLISHED 31535/respAccess
tcp 0 0 10.111.89.230:6379 10.111.89.112:35048 ESTABLISHED -
tcp 0 0 10.111.89.230:6379 10.111.89.112:34614 ESTABLISHED 31535/respAccess
tcp 0 0 10.111.89.230:6379 10.111.89.112:34234 ESTABLISHED 31535/respAccess
tcp 0 0 10.111.89.230:6379 10.111.89.112:34984 ESTABLISHED 31535/respAccess
tcp 0 0 10.111.89.230:6379 10.111.89.112:34441 ESTABLISHED -
tcp 0 0 10.111.89.230:6379 10.111.89.112:34441 ESTABLISHED -
tcp 0 0 10.111.89.230:6379 10.111.89.112:34441 ESTABLISHED -

Why this ESTABLISHED -? I think there are 1024 file descriptors at the server side, and hence, as 1150 connections are initiated from the client side, only 1024 connections should be established, i.e. with ESTABLISHED 31535/respAccess as the state (with the program name), and the others should be discarded. If there is a connection with state ESTABLISHED, then why is no program name attached to it? Please help me if anyone has any idea on this.
netstat output is ESTABLISHED - (no program name attached). What is the issue?
linux;ip;tcp;netstat;redis
null
_unix.307132
I set up a nice utility to log and record scanner radio traffic and then remove silence using SOX. To record continuously I keep calling SOX with silence detection:

function dosox() {
  /usr/bin/sox -t alsa -D plughw:2,0 $DATE.wav silence 1 0.1 2% 1 1.0 2%
  dosox
}
dosox

This works very well for ALSA sound devices. However, I now would like to get the stream from a URL. I use the URL source like:

function dosox() {
  /usr/bin/sox -t mp3 $CHAN_URL $DATE.wav silence 1 1.0 2% 1 2.0 2%
  dosox
}
dosox

With the stream, SOX seems to generate many audio output files quickly. Many of these files are near duplicates (they contain the same transmission). I think SOX is outrunning the stream cache and considering this the end of the file, then terminating. Then, when SOX is called the next time, it grabs the same stream, which apparently can contain audio which was already streamed.

Assuming this is what is going on, I am looking for an easy way to have SOX exit on actual silence on the stream, but wait a bit for more audio when it thinks it got to the end of the stream. Things I tried:

- Used mplayer to stream to a fifo and read that in with SOX.
- Tried to adjust the silence settings of the SOX silence effect.

Here is an example stream
SOX detecting silence when reading mp3 from url
audio;streaming;sox
null
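One approach worth trying, sketched here as an untested suggestion since the thread has no accepted answer: let a separate downloader own the network side so that sox only ever sees a steady pipe, reading MP3 from stdin via -:

#!/bin/bash
# curl keeps the HTTP stream open and paces the data; sox reads from
# stdin, so a momentary network stall no longer looks like end-of-file.
curl -sN "$CHAN_URL" | sox -t mp3 - "$DATE.wav" silence 1 1.0 2% 1 2.0 2%

curl's -N disables output buffering; whether this fully cures the premature end-of-stream behaviour depends on how the server paces the stream.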
_opensource.4137
A similar question was already asked here (https://softwareengineering.stackexchange.com/questions/159023/can-cc0-code-use-a-gpl-library), but my question is a little bit different, perhaps more precise. Can I use/apply a public domain license to my program while my program uses the GCC library or GPL libraries (linked, used at runtime, or ...)?

Like this:

main.cpp:

#include <gpl_licensed_library>
#include <my_library>
...
...

And now can I say: the code is licensed under the CC0, and I use the gpl_licensed_library library that is released under the GPL.

AND

If the answer is NO, then what does license compatibility mean?
Can Public Domain use GPL licensed library/program?
gpl;license compatibility;linked libraries;public domain;cc0
You need to release the whole program under the GPL. But nothing prevents you from releasing the additional source code that you wrote under CC0 as well (it is your code; you can give as many permissions on it as you wish). However, if you distribute a binary of your software, it can only be distributed under the GPL to meet the conditions of this license.

Similar to your situation would be a US government employee contributing to a GPL software. Their contribution is automatically in the public domain, but the full software continues to be licensed under the GPL. See this reference: https://www.gnu.org/licenses/old-licenses/gpl-2.0-faq.en.html#GPLUSGovAdd

Note also, from the GNU FAQ:

If a program combines public-domain code with GPL-covered code, can I take the public-domain part and use it as public domain code?

You can do that, if you can figure out which part is the public domain part and separate it from the rest. If code was put in the public domain by its developer, it is in the public domain no matter where it has been.

What compatibility means

Compatibility is understood one way: public domain code can be included in a GPL software, not the other way around.
_softwareengineering.278149
Consider the following requirements:

- Windows software which communicates with a web application using basic authentication
- The software is an MSI package
- The software requires a token to be placed in order to authenticate itself to the web application
- The token is unique to every user, but the user is expected to install the software with the same token on several machines

The question is: what is the best strategy to distribute the software from the web application with the appropriate token in it? (The user should not be asked to enter the token.)

The proposed solution is: when the user clicks the download software button in the web application, instead of giving them the software, we generate a VBScript file which contains the token and the download location of the software package (MSI), and deliver that as the download. After that, when the user invokes the script, the following things will happen: the software will be downloaded, the token is passed to the installer, and a custom action inside the installer will get the token and configure the application accordingly.

Will the Windows world accept the above solution? Do they feel it is unnatural? Is there a better solution to the problem at hand?
Is it the standard accepted practice to install software using VBScript?
web development;windows;packages;installer;vbscript
The standard solution, which you have surely seen from other software vendors, is to let the user download and install the software, maybe with or without the registration, but let the software itself lead the user through the registration process afterwards. Through that process, after the user is authenticated, the token is downloaded in the form of a license file. If you want to let the user install the software on a second machine without a new authentication, allow the license file to be copied to the other machine. You wrote the user should not be asked to enter the token, but actually, where is the difference for the user between having to copy a VBS file to a second machine and having to copy a license file? Note that this does not prevent your user from giving away the software together with the license file to another person, but since your own approach does not prevent that either, I guess you are not after a solution for that problem.

A different approach is not to use VBS, but to generate a personalized downloader in the form of an exe file (it is not too hard to provide C or C++ source code which is personalized by modifying just one small code file and then compiled on the fly by your web server). At least this prohibits modifications by the average user with a simple text editor like Notepad, which would be possible for a VBS downloader. And you do not have to teach your users how to run a VB script.

And if you really want to provide personalized installer packages, there is a third option. It is possible to generate or regenerate your MSI packages on the fly, so you can replace the personal token file inside an MSI package for each individual download. This SO post contains some information about this, and here it is shown how a single file can be replaced inside a compressed MSI. That may be the smoothest solution for your case.
_unix.217994
I want to create a sh file to send e-mail by telnet. Something like this:

read -p "from: " from
read -p "from (friendly name): " fromf
read -p "dest: " dest
read -p "dest (friendly name): " destf
read -p "subject: " subjct
read -p "text: " text

telnet server port   (the user puts these values in the sh file directly)
helo
mail from: $from
rcpt to: $dest
data
from: $fromf <$from>
to: $destf <$dest>
subject: $subjct
$text
.

How can I do this? Thanks in advance.
How to create a script to send e-mail by telnet?
shell script;telnet
I'm going to ignore the aspect of why you're doing this, and assume you already know the caveats; if not, know that many ISPs might block port 25, and some SMTP servers will block requests from dynamic IP addresses.

A little-known feature of bash is that you can direct output to /dev/tcp/hostname/port and it'll connect to the server. So IF you're using bash, you can do something like:

cat > /dev/tcp/server/port <<EOF
HELO
MAIL FROM: $from
[...]
EOF
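The heredoc above only sends; it never reads the server's replies, and some MTAs drop clients that race ahead. A hedged sketch of a two-way variant (server and port are placeholders; the SMTP verbs are standard, but real servers will usually demand authentication or TLS, so treat this purely as an illustration of the /dev/tcp mechanism):

#!/bin/bash
# Open a read/write TCP connection on file descriptor 3.
exec 3<>/dev/tcp/server/25

read -r greeting <&3 && echo "S: $greeting"     # consume the 220 banner

# Send the SMTP dialogue, echoing a reply line after each command.
for cmd in "HELO localhost" "MAIL FROM: <$from>" "RCPT TO: <$dest>" "DATA"; do
    printf '%s\r\n' "$cmd" >&3
    read -r reply <&3 && echo "S: $reply"
done

printf 'Subject: %s\r\n\r\n%s\r\n.\r\nQUIT\r\n' "$subjct" "$text" >&3
cat <&3          # show the remaining server responses
exec 3>&-        # close the connection

The variable names ($from, $dest, $subjct, $text) are the ones read in the question's script.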
_codereview.56322
I'm having a hard time figuring out if the code I wrote is purely combinatorial or sequential logic. I'm designing a simple 16-bit microprocessor (will be implemented on a Spartan 6) and I'm new to Verilog, HDL and FPGAs. The code for the microprocessor is complete, but I'm having second thoughts about the best practices behind the code.I'm aware that since this is the first time coding for me, the code is not up to any standard, but I tried my best.One of the most important elements is the ALU, and it has been designed like any other person would. It has overflow/underflow detection which I also asked about here and I quickly wrote some code for it, may or may not be correct.But my question is whether I should be using the non-blocking operator (<=) or blocking operator (=) in the always block for the ALU. I know the standard practice is to use the blocking operator while designing combinatorial circuits versus using the non-blocking one, which is better for sequential circuits.If I was to use blocking operators in the always block for the ALU, would be synthesized version be slower than if I use non-blocking operators? I plan to use the on-board 100MHz clock on my development board, so I was wondering if the ALU could keep up. Here's the full code for the ALU:module alu(clk, rst, en, re, opcode, a_in, b_in, o, z, n, cond, d_out);// Parameter Definitionsparameter width = 'd16; // ALU Width// Inputsinput wire clk /* System Clock Input */, rst /* Reset Result Register */, en /* Enables ALU Processing */, re /* ALU Read Enable */;input wire [4:0] opcode /* 5-bit Operation Code for the ALU. Refer to documentation. */;input wire signed [width-1:0] a_in /* Operand A Input Port */, b_in /* Operand B Input Port */;// Outputsoutput wire z /* Zero Flag Register (Embedded in res_out) */, n /* Negative/Sign Flag Register */;output reg o /* Overflow/Underflow/Carry Flag Register */;output reg cond /* Conditional Flag Register */;output wire [width-1:0] d_out /* Data Output Port */;// Internalsreg [1:0] chk_oflow /* Check for Overflow/Underflow */;reg signed [width+width:0] res_out /* ALU Process Result Register */;// Flag Logicassign z = ~|res_out; // Zero Flagassign n = res_out[15]; // Negative/Sign Flagassign d_out [width-1:0] = res_out [width-1:0]; // Read Port// Tri-State Read Controlassign d_out [width-1:0] = (re)?res_out [width-1:0]:0; // Assign d_out Port the value of res_out if re is true.// Overflow/Underflow Detection Blockalways@(chk_oflow) begin if(rst) o <= 1'b0; else begin case(chk_oflow) // synthesis parallel-case 2'b00: o <= 1'b0; 2'b01: begin if(res_out [width:width-1] == (2'b01 || 2'b10)) o <= 1'b1; // Scenario only possible on Overflow/Underflow. else o <= 1'b0; end 2'b10: begin if((res_out[width+width]) && (~res_out [width+width-1:width-1] != 0)) o <= 1'b1; // Multiplication result is negative. else if ((~res_out[width+width]) && (res_out [width+width-1:width-1] != 0)) o <= 1'b1; // Multiplication result is positive. 
else o <= 1'b0; end 2'b11: o <= 1'b0; default: o <= 1'b0; endcase endend// ALU Processing Blockalways@(posedge clk) begin if(en && !rst) begin case(opcode) // synthesis parallel-case 5'b00000: begin res_out [width-1:0] <= a_in [width-1:0]; // A end 5'b00001: begin res_out [width-1:0] <= b_in [width-1:0]; // B end 5'b00010: begin res_out [width-1:0] <= a_in [width-1:0] + 1'b1; // Increment A end 5'b00011: begin res_out [width-1:0] <= b_in [width-1:0] + 1'b1; // Increment B end 5'b00100: begin res_out [width-1:0] <= a_in [width-1:0] - 1'b1; // Decrement A end 5'b00101: begin res_out [width-1:0] <= b_in [width-1:0] - 1'b1; // Decrement B end 5'b00110: begin chk_oflow <= 2'b01; res_out [width:0] <= {a_in[width-1], a_in [width-1:0]} + {b_in[width-1], b_in [width-1:0]}; // Add A + B end 5'b00111: begin chk_oflow <= 2'b01; res_out [width:0] <= {a_in[width-1], a_in [width-1:0]} - {b_in[width-1], b_in [width-1:0]}; // Subtract A - B end 5'b01000: begin chk_oflow <= 2'b10; res_out [width+width:0] <= a_in [width-1:0] * b_in [width-1:0]; // Multiply A * B end 5'b01001: begin res_out [width-1:0] <= ~a_in [width-1:0]; // One's Complement of A end 5'b01010: begin res_out [width-1:0] <= ~b_in [width-1:0]; // One's Complement of B end 5'b01011: begin res_out [width-1:0] <= ~a_in [width-1:0] + 1'b1; // Two's Complement of A end 5'b01100: begin res_out [width-1:0] <= ~b_in [width-1:0] + 1'b1; // Two's Complement of B end 5'b01101: begin if(a_in [width-1:0] == b_in [width-1:0]) cond <= 1'b1; // Compare A == B, set Conditional Register as result else cond <= 1'b0; end 5'b01110: begin if(a_in [width-1:0] < b_in [width-1:0]) cond <= 1'b1; // Compare A < B, set Conditional Register as result else cond <= 1'b0; end 5'b01111: begin if(a_in [width-1:0] > b_in [width-1:0]) cond <= 1'b1;// Compare A > B, set Conditional Register as result else cond <= 1'b0; end 5'b10000: begin res_out [width-1:0] <= a_in [width-1:0] & b_in [width-1:0]; // Bitwise AND end 5'b10001: begin res_out [width-1:0] <= a_in [width-1:0] | b_in [width-1:0]; // Bitwise OR end 5'b10010: begin res_out [width-1:0] <= a_in [width-1:0] ^ b_in [width-1:0]; // Bitwise XOR end 5'b10011: begin res_out [width-1:0] <= a_in [width-1:0] ~& b_in [width-1:0]; // Bitwise NAND end 5'b10100: begin res_out [width-1:0] <= a_in [width-1:0] ~| b_in [width-1:0]; // Bitwise NOR end 5'b10101: begin res_out [width-1:0] <= a_in [width-1:0] ~^ b_in [width-1:0]; // Bitwise XNOR end 5'b10110: begin res_out [width-1:0] <= {a_in [width-2:0], 1'b0}; // Logical Left Shift A end 5'b10111: begin res_out [width-1:0] <= {b_in [width-2:0], 1'b0}; // Logical Left Shift B end 5'b11000: begin res_out [width-1:0] <= {1'b0, a_in [width-1:1]}; // Logical Right Shift A end 5'b11001: begin res_out [width-1:0] <= {1'b0, b_in [width-1:1]}; // Logical Right Shift B end 5'b11010: begin res_out [width-1:0] <= {a_in [width-1], a_in [width-1:1]}; // Arithmetic Right Shift A end 5'b11011: begin res_out [width-1:0] <= {b_in [width-1], b_in [width-1:1]}; // Arithmetic Right Shift B end 5'b11100: begin res_out [width-1:0] <= {a_in [width-2:0], a_in [width-1]}; // Rotate Left A end 5'b11101: begin res_out [width-1:0] <= {b_in [width-2:0], b_in [width-1]}; // Rotate Left B end 5'b11110: begin res_out [width-1:0] <= {a_in [0], a_in [width-1:1]}; // Rotate Right A end 5'b11111: begin res_out [width-1:0] <= {b_in [0], b_in [width-1:1]}; // Rotate Right B end default: begin cond <= 1'b0; res_out [width-1:0] <= 0; end end else if(rst) begin cond <= 1'b0; chk_oflow <= 2'b0; res_out [width-1:0] <= 0; 
endendendmoduleThere is a good chance I'll reduce the number of operations it performs to reduce the amount of unnecessary operations it does on both A and B, since less operations translates to better RISC performance.My question is, would a blocking or non-blocking operator be appropriate in this case?Edit: Apologies that the syntax highlighting seems poor.
Verilog coding practices for synthesis
verilog
non-blocking one, which is better for sequential circuits.

It is not better per se, but it is the correct way to simulate a flip-flop.

Combinatorial:

always @* begin
  a = b;

Sequential (flip-flop):

always @(posedge clock) begin
  a <= b;

In the examples above nothing would go wrong if you used the wrong type, but think about:

always @(posedge clock) begin
  b <= c;
  a <= b;

which is the same as:

always @(posedge clock) begin
  a <= b;
  b <= c;

We are specifying a delay line which is c -> b -> a. If we use the wrong type:

always @(posedge clock) begin
  b = c;
  a = b; // = c

we actually get c -> b and c -> a: b does not block and feeds directly into a. Which will give a different result to:

always @(posedge clock) begin
  a = b;
  b = c;

When implying parallel hardware you would not expect an order dependence like this. Mixing the styles is possible, but unless done very carefully bugs can creep in; for the purpose of code review it is best not to mix them so that it is a clear-cut case. Mixing styles, or using the wrong style, can lead to RTL vs gate-level mismatch, i.e. using a <= in a combinatorial section (always @*) will give the desired result in simulation, but synthesis will ignore this and give you the equivalent of =.
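Applied to the ALU in the question, the guideline reduces to a small pattern (a sketch, not the full module): results registered in the clocked block use <=, while any purely combinatorial decode sits in its own always @* block using =:

// Sequential: result register, non-blocking assignment
always @(posedge clk) begin
    if (rst)
        res_out <= 0;
    else if (en)
        res_out <= a_in + b_in;   // one representative opcode
end

// Combinatorial: flag decode, blocking assignment (z declared as reg here)
always @* begin
    z = ~|res_out;
end

In the posted module the flag logic is already combinatorial (via assign statements), so the main thing to keep is the <= inside the always @(posedge clk) block.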
_unix.104714
What is the very fundamental difference between Unix, Linux, BSD and GNU? Unix was the earliest OS, so the term 'Unix-like' is understandable, since their kernel, file system structure, most of the commands, users, etc. are the same as Unix's. Still, why are they different? What sets them apart? Is it the kernel?
What is the difference between Unix, Linux, BSD and GNU?
linux;bsd;gnu
That is a difficult question to answer. First, Unix-like or *nix usually means POSIX. All the systems you listed are POSIX systems. POSIX is a set of standards to implement.

Now for the harder questions.

GNU isn't really an OS. It's more of a set of rules or philosophies that govern free software, which at the same time gave birth to a bunch of tools while trying to create an OS. So GNU tools are basically open versions of tools that already existed but were redone to conform to principles of open software. GNU/Linux is a mesh of those tools and the Linux kernel to form a complete OS, but there are other GNUs. GNU/Hurd for example.

Unix and BSD are older implementations of POSIX that are at various levels of closed source. Unix is usually totally closed source, but there are as many flavors of Unix as there are of Linux, if not more. BSD is not usually considered open by some people, but in truth it was a lot more open than anything else that existed. Its licensing also allowed for commercial use with far fewer restrictions than the more open licenses allowed.

Linux is the newcomer. Strictly speaking it's just a kernel; however, in general it's thought of as a full OS when combined with GNU tools and a bunch of other things.

The main governing difference is ideals. Unix, Linux, and BSD have different ideals that they implement. They are all POSIX, and are all basically interchangeable. They do solve some of the same problems in different ways. So other than ideals and how they choose to implement POSIX standards, there is little difference.

For more info I suggest you read a brief article on the creation of GNU, OSS, Linux, BSD, and UNIX. They will be slanted towards their individual ideas, but when read through you will get a good idea of the differences.
_unix.171087
#
# if MAXFILES is not set, set to 10
#
if [ -z MAXFILES ]
then
    MAXFILES=10
fi
#
# now check to see if the number of files being removed is > MAXFILES
# but only if MAXFILES != 0
#
if [ $# -gt $MAXFILES -a $MAXFILES -ne 0 ]
then
    # if it is, prompt user before removing files
    echo "Remove $# files (y/n)? \c"
    read reply
    if [ $reply = y ]
    then
        rm "$@"
    else
        echo "files not removed"
    fi
else
    # number of args <= MAXFILES
    rm "$@"
fi

The above is the program I have to remove files. However, when I attempt to run it, it tells me line 15: [: : integer expression expected. Can anyone help me? Thanks, I am new to UNIX programming.
BASH Program to remove files, Integer expression expected
bash
null
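A likely diagnosis, sketched here under the assumption that the script is exactly as shown (the thread has no accepted answer): [ -z MAXFILES ] tests the literal string MAXFILES, which is never empty, so the variable never receives the default of 10; the later numeric comparison then runs against an empty string, which is what produces the integer expression expected error. Quoting the parameter expansion fixes it:

if [ -z "$MAXFILES" ]
then
    MAXFILES=10
fi

With MAXFILES guaranteed to hold a number, the comparison on the failing line behaves as intended.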
_webmaster.59182
My main concern is whether dofollow is interpreted as more important than links without any rel attribute. Does it make any difference for search engines?
What is the difference between using dofollow and omitting rel?
links;rel;dofollow
There is no such thing as dofollow. All links are followed unless specifically stated otherwise (nofollow).
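For reference, the markup difference is tiny; a sketch with a placeholder URL:

<a href="https://example.com/">followed by default</a>
<a href="https://example.com/" rel="nofollow">explicitly not followed</a>

There is no rel value that strengthens the default, so writing rel="dofollow" is treated like any other unknown value and ignored.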
_cs.2641
I wanted to ask if you know an algorithm to find the witness for $EU(\phi_1,\phi_2)$ (the CTL formula Exists Until) using BDDs (Binary Decision Diagrams). In practice you should use the fixed point for calculating $EU(\phi_1,\phi_2)$, that is:

$\qquad \displaystyle EU(\phi_1,\phi_2)=\mu Q.\, (\phi_2 \vee (\phi_1 \wedge EX\, Q))$

Unwinding the recursion, we get:

$\qquad \displaystyle \begin{align} Q_0 &= \textrm{false} \\ Q_1 &= \phi_2 \\ Q_2 &= \phi_2 \vee (\phi_1 \wedge EX\, \phi_2) \\ \ \vdots\end{align}$

and so on. To generate a witness (path) we can do a forward reachability check within the sequence of $Q_i$'s, that is, find a path

$\qquad \displaystyle \pi= s_0 \rightarrow s_1 \rightarrow \cdots \rightarrow s_n$

such that $s_i \in Q_{n-i} \cap R(s_{i-1})$ (where $R(s_{i-1})= \{ s \mid R(s_{i-1},s) \}$ and $R(s_{i-1},s)$ is the transition from $s_{i-1}$ to $s$), where $s_0 \in Q_n$ and $s_n \in Q_1=\phi_2$. How can you do this with BDDs?
Witness for the $EU(\phi_1,\phi_2)$ using BDDs
formal methods;model checking
What you describe is symbolic model checking, and it is treated in this set of slides, using reduced ordered BDDs.In a nutshell, you still do the fixpoint iteration, the main issue being how to do the transformation $Q\mapsto \phi_2\vee(\phi_1\wedge EXQ)$ on BDDs. The elementary operations you need are renaming (to replace unprimed by primed variables in $Q$, obtaining $Q'$), boolean operations (to form $\phi_2\vee(\phi_1\wedge R\wedge Q')$) and abstraction (to do existential quantifier elimination on the primed variables). The witness generation can then be done similarly forwards from the initial states, requiring at step $i$ that $s_i$ is in $Q_{n-i}$ as you did above.
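To make the iteration concrete, here is a hedged Python sketch that uses explicit state sets in place of BDDs (with real ROBDDs the set union/intersection become boolean operations on BDDs and the preimage becomes renaming plus existential abstraction, exactly as described above; all names are invented for illustration):

def ex(states, R):
    # EX Q: states with some successor in Q (preimage under relation R,
    # given as a set of (source, target) pairs)
    return {s for (s, t) in R if t in states}

def eu_stages(phi1, phi2, R):
    # Fixpoint for EU(phi1, phi2); returns the increasing Q_i sequence
    stages = [set()]                            # Q_0 = false
    while True:
        q = phi2 | (phi1 & ex(stages[-1], R))   # Q_{i+1}
        if q == stages[-1]:
            return stages
        stages.append(q)

def witness(s0, phi2, R, stages):
    # Walk forward from s0 in Q_n, stepping into ever-earlier stages
    path, s = [s0], s0
    for i in range(len(stages) - 2, 0, -1):
        if s in phi2:
            break                               # reached a phi2-state
        s = next(t for (u, t) in R if u == s and t in stages[i])
        path.append(s)
    return path

Here phi1 and phi2 are the sets of states satisfying each subformula; the BDD version performs the same loop but never enumerates states explicitly.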
_unix.147367
We are using the Cyclone V, which essentially is an SoC comprising an FPGA + an ARM core. Is it possible to encrypt the root file system and decrypt it by using U-Boot with a key which is located in the FPGA?
Encrypt root file system and decrypt using U-Boot with key stored in FPGA
encryption;u boot;root filesystem
null
_codereview.168771
Getting close to a release of a generic server.

Nisse Server: Part 1 Helper Functions

Here is the socket layer code. This has previously been reviewed here, but there have been some changes.

Socket.h

#ifndef THORSANVIL_SOCKET_SOCKET_H
#define THORSANVIL_SOCKET_SOCKET_H

#include <string>
#include <vector>
#include <sstream>

namespace ThorsAnvil
{
    namespace Socket
    {

// An RAII base class for handling sockets.
// Socket is movable but not copyable.
class BaseSocket
{
    int socketId;

    protected:
        static constexpr int invalidSocketId = -1;

        // Designed to be a base class, not used directly.
        BaseSocket(int socketId, bool blocking = false);

    public:
        int getSocketId() const {return socketId;}

    public:
        virtual ~BaseSocket();

        // Moveable but not Copyable
        BaseSocket(BaseSocket&& move) noexcept;
        BaseSocket& operator=(BaseSocket&& move) noexcept;
        void swap(BaseSocket& other) noexcept;
        BaseSocket(BaseSocket const&) = delete;
        BaseSocket& operator=(BaseSocket const&) = delete;

        // User can manually call close
        void close();
};

// A class that can read/write to a socket
class DataSocket: public BaseSocket
{
    public:
        DataSocket(int socketId, bool blocking = false)
            : BaseSocket(socketId, blocking)
        {}

        std::pair<bool, std::size_t> getMessageData(char* buffer, std::size_t size, std::size_t alreadyGot = 0);
        std::pair<bool, std::size_t> putMessageData(char const* buffer, std::size_t size, std::size_t alreadyPut = 0);
        void putMessageClose();
};

// A class that connects to a remote machine
// Allows read/write accesses to the remote machine
class ConnectSocket: public DataSocket
{
    public:
        ConnectSocket(std::string const& host, int port, bool blocking = false);
};

// A server socket that listens on a port for a connection
class ServerSocket: public BaseSocket
{
    static constexpr int maxConnectionBacklog = 5;
    public:
        ServerSocket(int port, bool blocking = false);

        // An accept waits for a connection and returns a socket
        // object that can be used by the client for communication
        DataSocket accept(bool blocking = false);
};

    }
}

#endif

Socket.cpp

#include "Socket.h"
#include "Utility.h"
#include "event.h"
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <fcntl.h>
#include <sstream>
#include <stdexcept>

using namespace ThorsAnvil::Socket;

#pragma vera_pushoff
using SocketAddr    = struct sockaddr;
using SocketStorage = struct sockaddr_storage;
using SocketAddrIn  = struct sockaddr_in;
#pragma vera_pop

BaseSocket::BaseSocket(int socketId, bool blocking)
    : socketId(socketId)
{
    if (socketId == -1)
    {
        throw std::runtime_error(buildErrorMessage("ThorsAnvil::Socket::BaseSocket::", __func__, ": bad socket: ", systemErrorMessage()));
    }
    if (!blocking && evutil_make_socket_nonblocking(socketId) != 0)
    {
        throw std::runtime_error(buildErrorMessage("ThorsAnvil::Socket::BaseSocket::", __func__, ": evutil_make_socket_nonblocking: failed to make non blocking: "));
    }
}

BaseSocket::~BaseSocket()
{
    if (socketId == invalidSocketId)
    {
        // This object has been closed or moved.
        // So we don't need to call close.
        return;
    }

    try
    {
        close();
    }
    catch (...)
    {
        // We should log this
        // TODO: LOGGING CODE HERE

        // If the user really wants to catch close errors
        // they should call close() manually and handle
        // any generated exceptions. By using the
        // destructor they are indicating that failure is
        // an OK condition.
}}void BaseSocket::close(){ return; if (socketId == invalidSocketId) { throw std::logic_error(buildErrorMessage(ThorsAnvil::Socket::BaseSocket::, __func__, : accept called on a bad socket object (this object was moved))); } while (true) { int state = ::close(socketId); if (state == invalidSocketId) { break; } switch (errno) { case EBADF: throw std::domain_error(buildErrorMessage(ThorsAnvil::Socket::BaseSocket::, __func__, : close: , socketId, , systemErrorMessage())); case EIO: throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::BaseSocket::, __func__, : close: , socketId, , systemErrorMessage())); case EINTR: { // TODO: Check for user interrupt flags. // Beyond the scope of this project // so continue normal operations. break; } default: throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::BaseSocket::, __func__, : close: , socketId, , systemErrorMessage())); } } socketId = invalidSocketId;}void BaseSocket::swap(BaseSocket& other) noexcept{ using std::swap; swap(socketId, other.socketId);}BaseSocket::BaseSocket(BaseSocket&& move) noexcept : socketId(invalidSocketId){ move.swap(*this);}BaseSocket& BaseSocket::operator=(BaseSocket&& move) noexcept{ move.swap(*this); return *this;}ConnectSocket::ConnectSocket(std::string const& host, int port, bool blocking) : DataSocket(::socket(PF_INET, SOCK_STREAM, 0), blocking){ SocketAddrIn serverAddr{}; serverAddr.sin_family = AF_INET; serverAddr.sin_port = htons(port); serverAddr.sin_addr.s_addr = inet_addr(host.c_str()); if (::connect(getSocketId(), reinterpret_cast<SocketAddr*>(&serverAddr), sizeof(serverAddr)) != 0) { close(); throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::ConnectSocket::, __func__, : connect: , systemErrorMessage())); }}ServerSocket::ServerSocket(int port, bool blocking) : BaseSocket(::socket(PF_INET, SOCK_STREAM, 0), blocking){ SocketAddrIn serverAddr = {}; serverAddr.sin_family = AF_INET; serverAddr.sin_port = htons(port); serverAddr.sin_addr.s_addr = INADDR_ANY; if (::bind(getSocketId(), reinterpret_cast<SocketAddr*>(&serverAddr), sizeof(serverAddr)) != 0) { close(); throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::ServerSocket::, __func__, : bind: , systemErrorMessage())); } if (::listen(getSocketId(), maxConnectionBacklog) != 0) { close(); throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::ServerSocket::, __func__, : listen: , systemErrorMessage())); }}DataSocket ServerSocket::accept(bool blocking){ if (getSocketId() == invalidSocketId) { throw std::logic_error(buildErrorMessage(ThorsAnvil::Socket::ServerSocket::, __func__, : accept called on a bad socket object (this object was moved))); } SocketStorage serverStorage; socklen_t addr_size = sizeof serverStorage; int newSocket = ::accept(getSocketId(), reinterpret_cast<SocketAddr*>(&serverStorage), &addr_size); if (newSocket == -1) { throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::ServerSocket:, __func__, : accept: , systemErrorMessage())); } return DataSocket(newSocket, blocking);}std::pair<bool, std::size_t> DataSocket::getMessageData(char* buffer, std::size_t size, std::size_t alreadyGot){ if (getSocketId() == 0) { throw std::logic_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : accept called on a bad socket object (this object was moved))); } std::size_t dataRead = alreadyGot; while (dataRead < size) { // The inner loop handles interactions with the socket. 
std::size_t get = ::read(getSocketId(), buffer + dataRead, size - dataRead); if (get == static_cast<std::size_t>(-1)) { switch (errno) { case EBADF: case EFAULT: case EINVAL: case ENXIO: { // Fatal error. Programming bug throw std::domain_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : read: critical error: , systemErrorMessage())); } case EIO: case ENOBUFS: case ENOMEM: { // Resource acquisition failure or device error throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : read: resource failure: , systemErrorMessage())); } case EINTR: { // TODO: Check for user interrupt flags. // Beyond the scope of this project // so continue normal operations. continue; } case ETIMEDOUT: case EAGAIN: //case EWOULDBLOCK: { // Temporary error. // Simply retry the read. return {true, dataRead}; } case ECONNRESET: case ENOTCONN: { // Connection broken. // Return the data we have available and exit // as if the connection was closed correctly. return {false, dataRead}; } default: { throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : read: returned -1: , systemErrorMessage())); } } } if (get == 0) { return {false, dataRead}; } dataRead += get; } return {true, dataRead};}std::pair<bool, std::size_t> DataSocket::putMessageData(char const* buffer, std::size_t size, std::size_t alreadyPut){ std::size_t dataWritten = alreadyPut; while (dataWritten < size) { std::size_t put = ::write(getSocketId(), buffer + dataWritten, size - dataWritten); if (put == static_cast<std::size_t>(-1)) { switch (errno) { case EINVAL: case EBADF: case ECONNRESET: case ENXIO: case EPIPE: { // Fatal error. Programming bug throw std::domain_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : write: critical error: , systemErrorMessage())); } case EDQUOT: case EFBIG: case EIO: case ENETDOWN: case ENETUNREACH: case ENOSPC: { // Resource acquisition failure or device error throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : write: resource failure: , systemErrorMessage())); } case EINTR: { // TODO: Check for user interrupt flags. // Beyond the scope of this project // so continue normal operations. continue; } case ETIMEDOUT: case EAGAIN: //case EWOULDBLOCK: { // Temporary error. // Simply retry the read. return {true, dataWritten}; } default: { throw std::runtime_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : write: returned -1: , systemErrorMessage())); } } } dataWritten += put; } return {true, dataWritten};}void DataSocket::putMessageClose(){ if (::shutdown(getSocketId(), SHUT_WR) != 0) { throw std::domain_error(buildErrorMessage(ThorsAnvil::Socket::DataSocket::, __func__, : shutdown: critical error: , systemErrorMessage())); }}
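To give reviewers context on intended use, here is a minimal usage sketch (not part of the library itself; the host, port, and message are placeholders, and a blocking socket is requested to keep the example simple):

#include "Socket.h"
#include <iostream>
#include <string>

int main()
{
    using namespace ThorsAnvil::Socket;

    // Connect to a hypothetical local service (placeholder address/port).
    ConnectSocket connection("127.0.0.1", 8080, true);

    // Send a request, then close the write side so the peer sees EOF.
    std::string message = "Hello";
    connection.putMessageData(message.c_str(), message.size());
    connection.putMessageClose();

    // Read until the buffer is full or the peer closes the connection.
    char buffer[1024];
    std::pair<bool, std::size_t> result = connection.getMessageData(buffer, sizeof(buffer));
    std::cout << "Read " << result.second << " bytes\n";
}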
Nisse Server: Part 2 Socket Layer
c++;c++14;socket
null
_webapps.17417
How do I do a Google search for webpages last updated within 2 years? In other words, I would like to use Google, but I do not want any of the results to be over 2 years old.
How to do a Google search for webpages last updated within 2 years?
google search
You can restrict a search to webpages indexed by Google in a particular time period by opening the More search options on the left side of the Google results page. Select a time period such as past hour or past year, or enter a custom time period to specify an exact date range.
_reverseengineering.10921
I was making C++ addon headers for a BlockLauncher addon with IDA, reverse engineering the libminecraftpe.so file. While doing that, I ran into trouble: I want to see a specific class's non-static members, but I can't find a non-static member view. Is there a way to see non-static members?
Can I see non-static members of a class with IDA?
ida;c++
Non-static members are laid out as a structure. If the class has virtual functions, there is also a pointer to a table of function pointers (the vtable), and that pointer occupies the first offset in the structure. Each class member sits at a specific offset from the beginning of the structure. The easiest way to see this is to create a class and assign different members values that you know, then look in IDA and see which offsets are modified with those known values. Here is a good article that describes the layout: http://www.openrce.org/articles/full_view/23
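As a rough illustration, consider a hypothetical class (not taken from libminecraftpe.so; exact offsets depend on the ABI):

#include <cstdint>

// Hypothetical class for illustration. On a typical 32-bit ARM ABI
// (as used by libminecraftpe.so at the time), pointers are 4 bytes,
// so the vtable pointer occupies offset 0x00.
class Entity
{
public:
    virtual void tick() {}  // presence of a virtual forces a vtable pointer at 0x00
    std::int32_t health;    // offset 0x04
    float        posX;      // offset 0x08
    float        posY;      // offset 0x0C
};

int main()
{
    // Assign a recognisable constant, then search the disassembly for it.
    Entity e;
    e.health = 0x12345678;
    // In IDA the store shows up as a fixed offset from 'this', e.g.
    //     str r1, [r0, #4]   ; this->health
    // which tells you 'health' lives at offset 4.
    return 0;
}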
_datascience.21616
LightGBM has its original training API and also a scikit-learn API (I believe xgboost has the same split). I'm working on a model for binary classification using LightGBM. On my first try, the result of the original training API was significantly different from the scikit-learn API's result, even though both were tested with the same parameters: the original API returned 82.74% accuracy (20 iterations), but the scikit-learn version returned 94.60%. Can someone tell me what's going wrong in my case?
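For reference, here is a minimal sketch of the kind of comparison I mean; the data is synthetic and the parameter values are placeholders, not my actual setup:

import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

params = {"objective": "binary", "learning_rate": 0.1, "num_leaves": 31}

# Original training API: the iteration count is passed as num_boost_round.
booster = lgb.train(params, lgb.Dataset(X_tr, label=y_tr), num_boost_round=20)
pred_orig = (booster.predict(X_te) > 0.5).astype(int)

# scikit-learn API: the iteration count is n_estimators (default 100 if
# omitted, which alone could explain a large accuracy gap at 20 vs 100 rounds).
clf = lgb.LGBMClassifier(n_estimators=20, **params).fit(X_tr, y_tr)
pred_sk = clf.predict(X_te)

print(accuracy_score(y_te, pred_orig), accuracy_score(y_te, pred_sk))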
Why does the original training API of LightGBM perform worse than the scikit-learn API version?
machine learning;xgboost
null
_unix.363965
I'm a bit new to Linux (running Ubuntu 14.04; I've used it for about a year), and I've recently started having issues with apt-get. I can't install or remove any package (error below). I tried reinstalling these packages and deleting the .deb files, but I still get the same error.

The following extra packages will be installed:
  python3-software-properties software-properties-common software-properties-gtk
The following packages will be upgraded:
  python3-software-properties software-properties-common software-properties-gtk
3 upgraded, 0 newly installed, 0 to remove and 412 not upgraded.
108 not fully installed or removed.
Need to get 0 B/126 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
WARNING: The following packages cannot be authenticated!
  dh-python software-properties-common software-properties-gtk python3-software-properties
Install these packages without verification? [y/N] y
(Reading database ... 305261 files and directories currently installed.)
Preparing to unpack .../software-properties-common_0.92.37.7_all.deb ...
/var/lib/dpkg/info/software-properties-common.prerm: 6: /var/lib/dpkg/info/software-properties-common.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/software-properties-common_0.92.37.7_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/software-properties-common.postinst: 6: /var/lib/dpkg/info/software-properties-common.postinst: py3compile: not found
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../software-properties-gtk_0.92.37.7_all.deb ...
/var/lib/dpkg/info/software-properties-gtk.prerm: 6: /var/lib/dpkg/info/software-properties-gtk.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/software-properties-gtk_0.92.37.7_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/software-properties-gtk.postinst: 6: /var/lib/dpkg/info/software-properties-gtk.postinst: py3compile: not found
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-software-properties_0.92.37.7_all.deb ...
/var/lib/dpkg/info/python3-software-properties.prerm: 6: /var/lib/dpkg/info/python3-software-properties.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-software-properties_0.92.37.7_all.deb (--unpack):
 subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/python3-software-properties.postinst: 6: /var/lib/dpkg/info/python3-software-properties.postinst: py3compile: not found
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 127
Errors were encountered while processing:
 /var/cache/apt/archives/software-properties-common_0.92.37.7_all.deb
 /var/cache/apt/archives/software-properties-gtk_0.92.37.7_all.deb
 /var/cache/apt/archives/python3-software-properties_0.92.37.7_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
apt-get failing for all packages
ubuntu;apt
null
_webmaster.95980
Is it possible, when a user types xxxx.com, to change the URL to www.xxxx.com, or better, to have all requests go to https://www.xxxx.com?
Redirect bare domain to www and HTTPS
redirects;godaddy
null
_softwareengineering.267204
I have a very common function that I have always unit tested in the same way, but I'm wondering if there is a better solution, or if a code smell is even involved. It seems like a very simple case, but I have a function that clears the properties of an object. Working in JavaScript, here is a simple example:

function Dog(name, owner) {
    this.name = name;
    this.owner = owner;
    this.reset = function() {
        this.name = '';
        this.owner = '';
    };
}

var puppy = new Dog('Max', 'Timmy');
console.log(puppy.name); // logs "Max"
puppy.reset();
console.log(puppy.name); // logs ""

I would normally unit test by setting the properties, calling the clear function, and then asserting that the properties were indeed set back to the defaults or cleared out. The reason I'm asking about such a simple case is the dogma that unit tests should only have one assertion. I also think that a reset-type function could get way out of hand when it is dealing with a large number of properties (e.g. an object that is meant to store a SPA's state). I'm sure that I am over-thinking this, but I wanted to get some outside opinion/criticism on something I have been doing the same way for many years. I just cannot think of a better way to do it.

Another question could be: are unit tests surrounding a reset function necessary? To me they seem to almost just test the language implementation - similar to a getter/setter property.
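For concreteness, the test I would normally write looks something like this (using Node's built-in assert module just for illustration; run it together with the Dog definition above - any test framework would do):

var assert = require('assert');

// Arrange: set the properties. Act: call reset. Assert: everything cleared.
var testPuppy = new Dog('Max', 'Timmy');
testPuppy.reset();
assert.strictEqual(testPuppy.name, '');
assert.strictEqual(testPuppy.owner, '');
console.log('reset test passed');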
How do you unit test a function that clears properties?
javascript;unit testing
Unit tests should cover logic, and as a matter of fact, reset doesn't contain any logic - there are no ifs, no switches, no loops in it - basically, no conditional statements of any kind.

And yes, it means that testing it sort of boils down to testing JavaScript as such, as you say. Set a, b, and c to empty strings! Have a, b and c been set to empty strings? Good. Good JavaScript!

So, given there's no logic, why would we want unit test coverage here at all? I guess we'd wish to have it in order to protect ourselves against the scenario in which you add another property to the class but then forget to reset it in your reset function.

The problem here is that you would also have to update your unit test to reveal this bug, and if you forgot about updating your reset function, it stands to reason you would have failed to update testReset, too. Or your little special function that returns all the contents of your singleton, nicely packed for testing purposes.

One possible alternative would be to use reflection (in the case of JavaScript, it's just iterating over properties, of course) for resetting all properties in existence, and then only unit test it as a universal utility, even on an arbitrary stub class. Of course you're likely to run into more problems if you want to actually preserve the values of some of your properties rather than wipe everything clean.

All in all, it's a difficult task because that's a singleton you have to reset. Singletons are notoriously bad for testability. Misko Hevery devoted a series of articles and presentations to that. See:

- Root cause of singletons (article)
- Singletons are pathological liars (article)
- The Clean Code Talks - Global State and Singletons (video)
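To make the reflection idea concrete, here is a rough sketch; the property names are illustrative, and the typeof guard is there so methods (like reset itself) survive the wipe:

// A generic reset that wipes every own data property back to a default,
// so adding a new property can't be "forgotten" by reset itself.
function resetOwnProperties(obj, defaultValue) {
    Object.keys(obj).forEach(function(key) {
        if (typeof obj[key] !== 'function') {
            obj[key] = defaultValue;
        }
    });
}

// Usage with the Dog example from the question:
var puppy = { name: 'Max', owner: 'Timmy' };
resetOwnProperties(puppy, '');
console.log(puppy.name);  // ""
console.log(puppy.owner); // ""

The trade-off, as noted above, is that this wipes everything uniformly; preserving some properties would require an exclusion list, which reintroduces the same maintenance problem.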