id | title | body | tags | label |
---|---|---|---|---|
1,919 |
Equality testing of arrays and integers in a procedural language
|
<p>In terms of references and their implementation on the heap and the stack, how is
equality testing for arrays different from that for integers? </p>

<p>This is to do with Java programming: if you have a stack and a heap, would equality testing, for example <code>j == i</code>, be the same for arrays and for integers? I understand that arrays are stored in the heap and the stack, as they hold bulk data, but integers are only stored in the stack and referenced in the heap.</p>

<p><img src="https://i.stack.imgur.com/xtIHW.png" alt="this is a picture on how integer variables are stored on the heap and referenced on the heap"></p>

<p>I understand that for equality testing <code>j==i</code> (variables), the stack pointer will point to the same location.</p>

<p>I'm confused about how <code>j==i</code> would be different for arrays and integers.</p>

<p>Could someone explain? </p>

|
programming languages arrays semantics equality memory management
| 1 |
1,921 |
Making random sources uniformly distributed
|
<p>How do I build a random source that outputs the bits 0 and 1 with $prob(0) = prob(1) = 0.5$? We have access to another random source $S$ that outputs $a$ or $b$ with independent probabilities $prob(a)$ and $prob(b) = 1 - prob(a)$ that are unknown to us.</p>

<p>How do I state an algorithm that does the job and does not consume more than an expected number of
$(prob(a) \cdot prob(b))^{-1}$ symbols of $S$ between two output bits, and how do I prove its correctness?</p>
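
<p>For concreteness, here is a minimal Python sketch of the classic von Neumann approach I am considering (the names <code>biased_source</code> and <code>p_a</code> are placeholders of my own): symbols are drawn in pairs, <code>ab</code> yields 0, <code>ba</code> yields 1, and equal pairs are discarded, so both outputs have the same probability $prob(a)\cdot prob(b)$.</p>

<pre><code>import random

def biased_source(p_a=0.3):
    """Stand-in for the unknown source S: emits 'a' with probability p_a, else 'b'."""
    return 'a' if random.random() < p_a else 'b'

def fair_bit(source=biased_source):
    """Draw symbols in pairs; 'ab' -> 0, 'ba' -> 1, equal pairs are discarded.
    Each pair succeeds with probability 2*prob(a)*prob(b), so the expected number
    of symbols consumed per output bit is 2 / (2*prob(a)*prob(b)) = (prob(a)*prob(b))^-1."""
    while True:
        x, y = source(), source()
        if x != y:
            return 0 if (x, y) == ('a', 'b') else 1
</code></pre>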

|
algorithms probability theory randomized algorithms randomness
| 1 |
1,922 |
Deterministic and randomized communication complexity of set equality
|
<p>Two processors $A, B$ with inputs $a \in \{0, 1\}^n$ (for $A$) and $b \in \{0, 1\}^n$
(for $B$) want to decide whether $a = b$. $A$ does not know $B$’s input and vice versa.</p>

<p>$A$ can send a message $m(a) \in \{0, 1\}^n$ which $B$ can use to decide whether $a = b$. The communication and computation rules are called a <em>protocol</em>.</p>

<ul>
<li>Show that every deterministic protocol must satisfy $|m(a)| \ge n$.</li>
<li>State a randomized protocol that uses only $O(\log_2 n)$ bits. The protocol should always accept if $a = b$ and accept with probability at most $1/n$ otherwise. Prove its correctness. (One standard approach is sketched below.)</li>
</ul>
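
<p>To make the randomized part concrete, here is a small Python sketch of the usual fingerprinting idea (not necessarily the intended solution; the prime bound $n^3$ is my own choice): $A$ interprets $a$ as an integer, picks a random prime $p$, and sends the pair $(p, a \bmod p)$, which takes only $O(\log n)$ bits.</p>

<pre><code>import random

def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def random_prime(limit):
    """Rejection sampling; fine for a sketch, not for performance."""
    while True:
        p = random.randrange(2, limit)
        if is_prime(p):
            return p

def equality_protocol(a_bits, b_bits):
    """A sends (p, a mod p); B accepts iff b mod p matches.
    If a == b this always accepts.  If a != b, then |a - b| < 2^n has fewer than n
    distinct prime factors, so a prime drawn from [2, n^3) fools the test with
    probability about n / pi(n^3) = O(log n / n^2), below 1/n for large n."""
    n = len(a_bits)
    a, b = int(a_bits, 2), int(b_bits, 2)
    p = random_prime(max(5, n ** 3))
    message = (p, a % p)          # both components need only O(log n) bits
    return b % message[0] == message[1]
</code></pre>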

|
algorithms probability theory randomized algorithms
| 1 |
1,923 |
Online generation of uniform samples
|
<p>A source provides a stream of items $x_1, x_2,\dots$. At each step $n$ we want to save a random sample $S_n \subseteq \{ (x_i, i)|1 \le i \le n\}$ of size $k$, i.e. $S_n$ should be a uniformly chosen sample from all $\tbinom{n}{k}$ possible samples consisting of seen items. So at each step $n \ge k$ we must decide whether to add the next item to $S$ or not. If so, we must also decide which of the current items to remove from $S$.</p>

<p>State an algorithm for the problem. Prove its correctness.</p>
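
<p>A minimal Python sketch of the scheme I have in mind (reservoir sampling, "Algorithm R"); the names are my own, and the uniformity claim in the comment is exactly what needs to be proved:</p>

<pre><code>import random

def reservoir_sample(stream, k):
    """Keep the first k items; afterwards replace items with decreasing probability.
    After step n, each of the n items seen so far is in S with probability k/n,
    and S is uniform over all C(n, k) possible samples."""
    S = []
    for n, x in enumerate(stream, start=1):
        if n <= k:
            S.append((x, n))
        else:
            j = random.randrange(n)       # uniform in 0 .. n-1
            if j < k:
                S[j] = (x, n)             # evict a uniformly chosen current item
    return S
</code></pre>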

|
algorithms probability theory randomized algorithms randomness online algorithms
| 1 |
1,936 |
When do structural hazards occur in pipelined architectures?
|
<p>I'm looking for some relatively simple examples of when <a href="http://en.wikipedia.org/wiki/Hazard_%28computer_architecture%29#Structural_hazards">structural hazards</a> occur in a pipelined architecture.</p>

<p>The only scenario I can think of is when memory needs to be accessed during different stages of the pipeline (i.e., the initial instruction fetch stage and the later memory read/write stage).</p>

<p>I'm thinking that there are many more structural hazards in more complex architectures, such as superscalar. Does it class as a structural hazard when an instruction is dispatched to an execution unit but is queued because the unit is in use?</p>

<p>If this is highly architecture-specific, then just assume MIPS or something similar.</p>

|
computer architecture cpu pipelines
| 1 |
1,939 |
Does the language of Regular Expressions need a pushdown automaton to parse it?
|
<p>I want to convert a user-entered regular expression into an NFA so that I can then run the NFA against a string for matching purposes. What is the minimum machine that can be used to parse regular expressions? </p>

<p>I assume it must be a pushdown automaton because the presence of brackets means the need to count, and a DFA/NFA cannot perform arbitrary counting. Is this assumption correct? For example, the expression <code>a(bc*)d</code> would require a PDA so that the sub-expression in brackets is handled correctly.</p>

|
formal languages parsers regular expressions pushdown automata
| 0 |
1,940 |
If a point is a vertex of convex hull
|
<p>The exercise is </p>

<blockquote>
 <p>Given a set of points $S$ and a point $p$, decide in $O(n)$ time whether $p$ is a vertex of the convex polygon formed from the points of $S$.</p>
</blockquote>

<p>The problem is that I am a little bit confused by the time complexity $O(n)$. The more naive solution would be to construct the convex polygon in $O(n\log n)$ time and test whether $p$ is one of its vertices. </p>

|
algorithms computational geometry
| 1 |
1,949 |
Use closure properties to transform languages to $L := \{ a^nb^n : n\in \mathbb N \}$
|
<p>For the purpose of proving that they are not regular, what closure properties can I use to transform the languages</p>

<ol>
<li>$L_a = \{ a^*cw \mid w \in \{a,b \}^* \land |w|_a = |w|_b \}$ and</li>
<li>$L_b = \{ab^{i_1}ab^{i_2}\ldots ab^{i_n} \mid i_j∈\mathbb N \land \exists j∈[1,n] \ i_j \not= j \}$</li>
</ol>

<p>to $L := \{ a^nb^n \mid n\in \mathbb N \}$, respectively?</p>

<p>I tried: </p>

<ol>
<li><p>$L_a = \{ a^*cw \mid w \in \{a,b \}^* \land |w|_a = |w|_b \}$ </p>

<p>$L_a' = \{ \{a,d\}^*cw \mid w \in \{a,b,d \}^* \land |w|_a + |w|_d = |w|_b \}$ (union?)</p>

<p>$L_a'' = \{ d^*cw \mid w \in \{a,b \}^* \land |w|_a = |w|_b \}$
(concatenation?)</p>

<p>$L_a''' = \{ w \mid w \in \{a,b \}^* \land |w|_a = |w|_b \}$
(homomorphism?)</p></li>
<li><p>$L_b = \{ab^{i_1}ab^{i_2}\ldots ab^{i_n} \mid i_j∈\mathbb N \land\exists j∈[1,n] \ i_j \not= j \}$</p>

<p>$L_b' = \{ab^{i_1}ab^{i_2}\ldots ab^{i_n} \mid i_j∈\mathbb N \land\forall j∈[1,n] \ i_j = j \}$ (complement?)</p>

<p>$L_b'' = \{ac^{i_1}ac^{i_2}\ldots ac^{i_n} \mid i_j∈\mathbb N \land\forall j∈[1,n] \ i_j = j \}$ (homomorphism?)</p></li>
</ol>

|
formal languages context free closure properties
| 1 |
1,954 |
Criteria for selecting language for first programming course
|
<p>As a university-level CS educator, I often find that the issue of which programming language to teach in the first programming course comes up for discussion. There are thousands of languages to choose between, and lots of religious fervor (or fervour) supporting one language camp over another. All of this subjective bias surrounding each programming language makes it very difficult for an educator to choose one.</p>

<p>My question is: </p>

<blockquote>
 <p>What <strong>objective</strong> criteria can an educator use to select a programming language to use as the basis for a first year university programming course? What is the basis for these criteria?</p>
</blockquote>

<p><strong>Note</strong>: I do not want to see a list of programming languages and why they are the best one to use. The question isn't about the best language, it is about <em>the criteria for selecting a language</em>. Answers may, however, use programming languages to illustrate particular points.</p>

<hr>

<p>This question was inspired by another question which was deemed off-topic: <a href="https://cs.stackexchange.com/questions/1946/criteria-for-choosing-a-first-programming-language">https://cs.stackexchange.com/questions/1946/criteria-for-choosing-a-first-programming-language</a>. </p>

|
programming languages education
| 0 |
1,957 |
Master theorem not applicable?
|
<p>Given the following recursive equation</p>

<p>$$ T(n) = 2T\left(\frac{n}{2}\right)+n\log n$$ we want to apply the Master theorem and note that</p>

<p>$$ n^{\log_2(2)} = n.$$</p>

<p>Now we check the first two cases for $\varepsilon > 0$, that is whether</p>

<ul>
<li>$n\log n \in O(n^{1-\varepsilon})$ or</li>
<li>$n\log n \in \Theta(n)$.</li>
</ul>

<p>The two cases are not satisfied. So we have to check the third case, that is whether</p>

<ul>
<li>$n\log n \in \Omega(n^{1+\varepsilon})$ .</li>
</ul>

<p>I think the third condition is not satisfied either. But why? And what would be a good explanation for why the Master theorem cannot be applied in this case?</p>
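
<p>To spell out why I believe the third case fails: for every fixed $\varepsilon > 0$,</p>

<p>$$ \lim_{n\to\infty} \frac{n\log n}{n^{1+\varepsilon}} = \lim_{n\to\infty} \frac{\log n}{n^{\varepsilon}} = 0, $$</p>

<p>so $n\log n \in o(n^{1+\varepsilon})$ and hence $n\log n \notin \Omega(n^{1+\varepsilon})$ for any $\varepsilon > 0$: the function is larger than $n$, but only by a logarithmic rather than a polynomial factor, which is exactly the gap between cases two and three of the basic Master theorem (if I am not mistaken, the extended version of case two with an extra $\log^k n$ factor does cover it and gives $\Theta(n\log^2 n)$).</p>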

|
proof techniques asymptotics recurrence relation master theorem
| 1 |
1,958 |
Relation between simple and regular grammars
|
<p>I am reading "An Introduction to Formal Languages and Automata" written by Peter Linz and after reading the first five chapters I face below problem with
simple and regular (especially right linear) grammars which are very similar to each other.</p>

<p>What relation exists between these? What is the difference?
Can you create (non-deterministic) finite automata for simple grammars (obviously without using a stack)?</p>

|
regular languages automata context free formal grammars
| 1 |
1,959 |
How to go from a recurrence relation to a final complexity
|
<p>I have an algorithm, shown below, that I need to analyze. Because it's recursive in nature I set up a recurrence relation. </p>

<pre><code>//Input: Adjacency matrix A[1..n, 1..n] of an undirected graph G
//Output: 1 (true) if G is complete and 0 (false) otherwise
GraphComplete(A[1..n, 1..n]) {
    if ( n = 1 )
        return 1    //one-vertex graph is complete by definition
    else
        if not GraphComplete(A[1..n − 1, 1..n − 1])
            return 0
        else
            for ( j ← 1 to n − 1 ) do
                if ( A[n, j] = 0 )
                    return 0
            end
            return 1
}
</code></pre>

<p>Here is what I believe is a valid and correct recurrence relation: </p>

<p>$\qquad \begin{align}
 T(1) &= 0 \\
 T(n) &= T(n-1) + n - 1 \quad \text{for } n \geq 2
\end{align}$</p>

<p>The "$n - 1$" is how many times the body of the for loop, specifically the "if A[n,j]=0" check, is executed.</p>

<p>The problem is, where do I go from here? How do I convert the above into something that actually shows what the resulting complexity is?</p>
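
<p>One way to proceed (unless I am missing something) is to unroll the recurrence:</p>

<p>$\qquad \begin{align}
 T(n) &= T(n-1) + (n-1) \\
 &= T(n-2) + (n-2) + (n-1) \\
 &= \dots = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2},
\end{align}$</p>

<p>so the number of comparisons is in $\Theta(n^2)$.</p>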

|
algorithms algorithm analysis runtime analysis recursion
| 1 |
1,970 |
Data structure with search, insert and delete in amortised time $O(1)$?
|
<p>Is there a data structure to maintain an ordered list that supports the following operations in $O(1)$ amortized time? </p>

<ul>
<li><p><strong>GetElement(k)</strong>: Return the $k$th element of the list.</p></li>
<li><p><strong>InsertAfter(x,y)</strong>: Insert the new element y into the list immediately after x. </p></li>
<li><p><strong>Delete(x)</strong>: Remove x from the list.</p></li>
</ul>

<p>For the last two operations, you can assume that x is given as a pointer directly into the data structure; InsertAfter returns the corresponding pointer for y. InsertAfter(NULL, y) inserts y at the beginning of the list.</p>

<p>For example, starting with an empty data structure, the following operations update the ordered list as shown below:</p>

<ul>
<li>InsertAfter(NULL, a) $\implies$ [a]</li>
<li>InsertAfter(NULL, b) $\implies$ [b, a]</li>
<li>InsertAfter(b, c) $\implies$ [b, c, a]</li>
<li>InsertAfter(a, d) $\implies$ [b, c, a, d]</li>
<li>Delete(c) $\implies$ [b, a, d]</li>
</ul>

<p>After these five updates, GetElement(2) should return d, and GetElement(3) should return an error.</p>

|
data structures time complexity asymptotics amortized analysis
| 1 |
1,972 |
Decide whether a context-free languages can be accepted by a deterministic pushdown automaton
|
<p>Given a context-free grammar G, there exists a Nondeterministic Pushdown Automaton N that accepts exactly the language G generates (and vice versa).</p>

<p>There <strong>may</strong> also exist a Deterministic Pushdown Automaton D that accepts exactly the language G accepts too. It depends on the grammar.</p>

<p>By what algorithm on the productions of G can we determine if D exists?</p>

|
automata context free pushdown automata
| 1 |
1,974 |
Compare-and-Swap in an RDBMS for custom locks and lock escalation
|
<p>I'm applying the Compare-and-Swap technique to a SQL database to create custom row-level locking in my dataset, allowing for safe READ UNCOMMITTED isolation at the database level.</p>

<p>The Resource table includes a LockOwner <code>GUID</code> and an IsLocked <code>BIT</code> field. To acquire a lock, a dirty-read query gets the ID, LockOwner, and LockStatus. If <code>Unlocked</code>, attempt to <code>UPDATE</code> the Resource by (ID, LockOwner) with a newly generated LockOwner and LockStatus of <code>Locked</code>. Abort and start again if no rows are updated - meaning someone else got there first. Otherwise, the Lock is held in the READ UNCOMMITTED transaction. The transaction is needed to allow rollback on client failure/abandon, but the dirty reads avoid locks.</p>

<p><strong>This seems to me to work great for resources that are independent of each other. But what must I add to account for a new kind of lock, ResourceGroup?</strong></p>

<p>ResourceGroup to Resource is a one-to-many relationship. Resources can be locked individually, but if the ResourceGroup needs to be locked, then all of the Resources must also be locked. </p>

<p>Locking a ResourceGroup is a far less frequent need than locking a Resource, so the scheme should be optimized for Resource queries, avoiding requiring joins to ResourceGroup if possible.</p>

<p>I am imagining a scenario where locking a ResourceGroup involves marking the member rows with some flag, but I'm not sure what scheme doesn't interfere with the original Resource-only scheme. Part of the problem comes from the UPDATE of a Resource while it is locked (and therefore already UPDATED in another transaction). I believe that even if the fields are different within the record, the UPDATE will place an UPDATE LOCK on the row, so any lock on ResourceGroup would introduce blocking that we are trying to avoid. Even if we could do this, how would the ResourceGroup lock acquisition mechanism know when all of the Resources (which may have had locks in process as we began locking their peers) have been released?</p>

<p>There may be differences in this locking granularity between RDBMSs; I'm on MS SQL 2005+.</p>

|
concurrency database theory
| 1 |
1,979 |
Proof that $\{⟨M⟩ ∣ L(M) \mbox{ is context-free} \}$ is not (co-)recursively enumerable
|
<p>I would like to use your help with the following problem:</p>

<p>$L=\{⟨M⟩ ∣ L(M) \mbox{ is context-free} \}$. Show that $L \notin RE \cup CoRE$.</p>

<p>I know that to prove $L\notin RE$, it is enough to find a language $L'$ such that $L'\notin RE$ and show that there is a reduction from $L'$ to $L$ $(L'\leq _M L)$.</p>

<p>I started to think of languages which I already know are not in $RE$, and I know that $Halt^* =\{⟨M⟩ ∣ M\mbox{ halts for every input} \} \notin RE$. I thought of this reduction from $Halt^*$ to $L$: $f(⟨M⟩)=⟨M'⟩$, where for every $⟨M⟩$: if $M$ halts for every input then $L(M')=\{0^n1^n\}$, otherwise it is $\{0^n1^n0^n\}$. But this is not correct, is it? How can I check that $M$ halts for every input to begin with? And is this the way to do it?</p>

|
formal languages computability context free turing machines
| 1 |
1,980 |
Optimizing a join where each table has a selection
|
<p>Consider the following query:</p>

<pre><code>SELECT Customer.Name FROM Customer
INNER JOIN Order on Order.CustomerId = Customer.Id
WHERE Customer.Preferred = True AND
 Order.Complete = False
</code></pre>

<p>Let's suppose all of the relevant attributes (Customer.Preferred, Order.Complete, Order.CustomerId and Customer.Id) are indexed. How can I evaluate this as quickly as possible?</p>

<p>Standard optimization advice would say that I should do the select on each table first, then the join using sort-merge or whatever the cardinality would imply. But this involves two passes through the data - I'm wondering if there's a better way.</p>

<hr>

<p><strong>EDIT</strong>: I think asking if there was a "better way" was too ill-defined. Suppose we are trying to find $\sigma_a(A)\bowtie_j\sigma_b(B)$. Observe that we can find this in $O(\alpha)$ (where $\alpha$ is the cardinality of $\sigma_a(A)$) with the following pseudocode:</p>

<pre><code>for each a in A:
 find foreign tuple in B // constant-time, if using hash table
 check if foreign tuple meets foreign constraint // again, constant time
</code></pre>

<p>As mentioned by some answerers, there are various minor permutations (do the for loop over B instead, etc.). But they all seem to be $O(\alpha)$ or $O(\beta)$. Is there a better way?</p>

<p>Note that if the query were a self join, we could just do the merge part of a sort-merge join (since our indexes would already be sorted), which would run in time proportional to the number of results. So I ask if a similar thing can be done here.</p>

<p>I am more than happy to accept a proof that there is no better method as an answer. I believe that there is no faster algorithm, but I'm unable to prove it.</p>

|
optimization database theory relational algebra databases
| 0 |
1,983 |
Transforming an NFA into an NFA of similar size but without $\epsilon$-transitions
|
<p>I'm studying for the exam and have problems with this task:</p>

<blockquote>
 <p>Describe an algorithm that transforms a given NFA $A = (Q, \Sigma, \delta, q_0, F)$ (which may have $\epsilon$-transitions) into an equivalent NFA without $\epsilon$-transitions that has the same number of states. Then determine the running time of the algorithm. The algorithm should have running time $O(|Q| · |\delta|)$, where
 $$|\delta| := \sum_{\substack{q\in Q\\ a\in\Sigma\cup\{\epsilon\}}} |\delta(q,a)|$$</p>
</blockquote>

|
algorithms automata finite automata
| 0 |
1,984 |
Proving the (in)tractability of this Nth prime recurrence
|
<p>As follows <a href="https://cs.stackexchange.com/questions/1828/polytime-and-polyspace-algorithm-for-determining-the-leading-intersection-of-n-d">from my previous question</a>, I've been playing with the <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis" rel="nofollow noreferrer">Riemann hypothesis</a> as a matter of recreational mathematics. In the process, I've come to a rather interesting recurrence, and I'm curious as to its name, its reductions, and its tractability towards the solvability of the gap between prime numbers.</p>

<p>Tersely speaking, we can define the <em>gap</em> between each prime number as a recurrence of preceding <em>candidate</em> primes. For example, for our base of $p_0 = 2$, the next prime would be:</p>

<p>$\qquad \displaystyle p_1 = \min \{ x > p_0 \mid -\cos(2\pi(x+1)/p_0) + 1 = 0) \}$</p>

<p>Or, as we see by <a href="http://m.wolframalpha.com/input/?i=-cos%28%28x%2b1%29*2*pi/2%29%20%2b%201%20=%200" rel="nofollow noreferrer">plotting this out</a>: $p_1 = 3$.</p>

<p>We can repeat the process for $n$ primes by evaluating each candidate prime recurring forward. Suppose we want to get the next prime, $p_2$. Our candidate function becomes:</p>

<p>$\qquad \displaystyle \begin{align}
p_2 = \min\{ x > p_1 \mid f_{p_1}(x) + (&(-\cos(2\pi(x+1)/p_1) + 1) \\
 \cdot &(-\cos(2\pi(x+2)/p_1) + 1)) = 0\}
\end{align}$</p>

<p>Where:</p>

<p>$\qquad \displaystyle f_{p_1}(x) = -\cos(2\pi(x+1)/p_0) + 1$, as above.</p>

<p>It's easy to see that each component function only becomes zero on integer values, and it's equally easy to show how this captures our AND- and XOR-shaped relationships cleverly, by exploiting the properties of addition and multiplication in the context of a system of trigonometric equations.</p>

<p>The recurrence becomes:</p>

<p>$\qquad f_{p_0} = 0\\
\qquad p_0 = 2\\
\qquad \displaystyle
 f_{p_n}(x) = f_{p_{n-1}}(x) + \prod_{k=2}^{p_{n-1}} (-\cos(2\pi(x+k-1)/p_{n-1}) + 1)\\
 \qquad \displaystyle
 p_n = \min\left\{ x > p_{n-1} \mid f_{p_n}(x) = 0\right\}$</p>

<p>... where the entire problem hinges on whether we can evaluate the $\min$ operator over this function in polynomial time. This is, in effect, a generalization of the <a href="http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="nofollow noreferrer">Sieve of Eratosthenes</a>.</p>

<p>Working Python code to demonstrate the recurrence:</p>

<pre><code>from math import cos,pi

def cosProduct(x,p):
    """ Handles the cosine product in a handy single function """
    ret = 1.0
    for k in xrange(2,p+1):
        ret *= -cos(2*pi*(x+k-1)/p)+1.0
    return ret

def nthPrime(n):
    """ Generates the nth prime, where n is a zero-based integer """

    # Preconditions: n must be an integer greater than -1
    if not isinstance(n,int) or n < 0:
        raise ValueError("n must be an integer greater than -1")

    # Base case: the 0th prime is 2, 0th function vacuous
    if n == 0:
        return 2,lambda x: 0

    # Get the preceding evaluation
    p_nMinusOne,fn_nMinusOne = nthPrime(n-1)

    # Define the function for the Nth prime
    fn_n = lambda x: fn_nMinusOne(x) + cosProduct(x,p_nMinusOne)

    # Evaluate it (I need a solver here if it's tractable!)
    for k in xrange(p_nMinusOne+1,int(p_nMinusOne**2.718281828)):
        if fn_n(k) == 0:
            p_n = k
            break

    # Return the Nth prime and its function
    return p_n,fn_n
</code></pre>

<p>A quick example:</p>

<pre><code>>>> [nthPrime(i)[0] for i in range(20)]
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71]
</code></pre>

<p>The trouble is, I'm now in way over my head, both mathematically and as a computer scientist. Specifically, I am not competent with <a href="http://en.wikipedia.org/wiki/Fourier_analysis" rel="nofollow noreferrer">Fourier analysis</a>, with defining <a href="http://en.wikipedia.org/wiki/Uniform_space#Uniform_cover_definition" rel="nofollow noreferrer">uniform covers</a>, or with the <a href="http://en.wikipedia.org/wiki/Complex_plane" rel="nofollow noreferrer">complex plane</a> in general, and I'm worried that this approach is either flat-out <em>wrong</em> or hides a lurking horror of a 3SAT problem that elevates it to NP-completeness.</p>

<p>Thus, I have three questions here:</p>

<blockquote>
 <ol>
 <li>Given my terse recurrence above, is it possible to deterministically compute or estimate the location of the zeroes in polynomial time and space?</li>
 <li>If so or if not, is it hiding <em>any other</em> subproblems that would make a polytime or polyspace solution intractable?</li>
 <li>And if by some miracle (1) and (2) hold up, what dynamic programming improvements would you make in satisfying this recurrence, from a high level? Clearly, iteration over the same integers through multiple functions is inelegant and quite wasteful.</li>
 </ol>
</blockquote>

|
complexity theory reference request recurrence relation mathematical analysis
| 0 |
1,986 |
Prove that regular languages are closed under the cycle operator
|
<p>I've got an exam in a few days and have problems solving this task.</p>

<p>Let $L$ be a regular language over the alphabet $\Sigma$. We have the operation 
$\operatorname{cycle}(L) = \{ xy \mid x,y\in \Sigma^* \text{ and } yx\in L\}.$
Now we should show that $\operatorname{cycle}(L)$ is also regular.</p>

<p>The hint is that we can construct from a DFA $D=(Q,\Sigma,\delta, q_0, F)$ with $L(D) = L$ an $\epsilon$-NFA $N$ with $L(N) = \operatorname{cycle}(L)$ and $2 · |Q|^2 + 1$ states. </p>

|
formal languages regular languages finite automata closure properties
| 1 |
1,988 |
Is open addressing with prime steps bijective?
|
<p>Can anyone help me with this topic: <a href="https://en.wikipedia.org/wiki/Open_addressing" rel="nofollow">probing</a> with a step width that is a prime number.</p>

<p>I am struggling with this question about defining a hashing function $h(k, i)$ for open addressing on a table of length m, that is, with slot numbers $0, 1, 2, \dots ,m − 1$.</p>

<p>We know that a function $h(k, i) = h_1(k) + i \cdot h_2(k) \mod m$ produces a permutation for every $k$ if $h_2(k)$ and $m$ are relatively prime, that is, if $\operatorname{gcd}(h_2(k),m) = 1$. </p>

<p>We can assume that $m, w$ are integers such that the greatest common divisor $\operatorname{gcd}(m,w) = 1$. </p>

<p>How can I prove that the following function</p>

<p>$\qquad f : \{ 0, \dots,m − 1 \} \to \{ 0, \dots,m − 1 \}\\
 \qquad f(i) = i \cdot w \mod m$</p>

<p>is a permutation, in other words, a <a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow">bijective function</a>? </p>
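
<p>The argument I am considering: if $f(i) = f(j)$ then $m \mid (i-j)\,w$, and since $\operatorname{gcd}(m,w) = 1$ this forces $m \mid (i-j)$; but $|i-j| < m$, so $i = j$. Hence $f$ is injective, and an injective function from the finite set $\{0,\dots,m-1\}$ to itself is automatically surjective, so $f$ is a bijection. Is this the intended proof?</p>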

|
algorithms hash tables hash
| 0 |
1,990 |
Is there an undecidable finite language of finite words?
|
<p>Is there <em>a need</em> for $L\subseteq \Sigma^*$ to be <em>infinite</em> to be undecidable?</p>

<p>I mean, what if we choose a language $L'$ to be a <em>bounded, finite version of</em> $L\subseteq \Sigma^*$, that is, $|L'|\leq N$ (for some $N \in \mathbb{N}$), with $L' \subset L$. Is it possible for $L'$ to be an undecidable language? </p>

<p>I see that there is the problem of how to choose the $N$ words that belong to $L'$, for which we would have to establish a rule for choosing the first $N$ elements of $L'$, a kind of "finite" Kleene star operation. The aim is to find an undecidable language without needing an infinite set, but I can't see how.</p>

<p><strong>EDIT Note:</strong> </p>

<p>Although I chose an answer, many answers <strong>and all comments</strong> are important.</p>

|
formal languages computability undecidability
| 1 |
1,993 |
Why is the absence of a surjection onto the power set not enough to prove the existence of an undecidable language?
|
<p>From this statement </p>

<blockquote>
 <p>As there is no surjection from $\mathbb{N}$ onto $\mathcal{P}(\mathbb{N})$, thus there must exist an undecidable language.</p>
</blockquote>

<p>I would like to understand why similar reasoning does not work with a <em>finite</em> set $B$ which also has no surjection onto $\mathcal{P}(B)$! (with $|B|=K$ and $K \in \mathbb{N}$)</p>

<p>Why is there a minimum need for the infinite set? </p>

<p><strong>EDIT Note:</strong> </p>

<p>Although I chose an answer, many answers <strong>and all comments</strong> are important.</p>

|
formal languages computability undecidability
| 1 |
1,998 |
Low-degree nodes in sparse graphs
|
<p>Let $G = (V,E)$ be a graph having $n$ vertices, none of which are isolated, and $n−1$ edges, where $n \geq 2$. Show that $G$ contains at least two vertices of degree one.</p>

<p>I have tried to solve this problem by using the property $\sum_{v \in V} \operatorname{deg}(v) = 2|E|$. Can this problem be solved by using <a href="https://en.wikipedia.org/wiki/Pigeon_hole_principle">pigeon hole principle</a>?</p>
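
<p>The counting argument I am considering: since no vertex is isolated, every degree is at least $1$. If at most one vertex had degree exactly $1$, the other $n-1$ vertices would each have degree at least $2$, so $\sum_{v \in V} \operatorname{deg}(v) \geq 1 + 2(n-1) = 2n-1 > 2(n-1) = 2|E|$, a contradiction. Is this essentially a pigeonhole argument, or is there a cleaner way to phrase it?</p>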

|
graphs proof techniques
| 1 |
2,002 |
Is the set of LL(*) grammars the same as the set of CFG grammars?
|
<p>Is the set of LL(*) grammars the same as the set of context-free grammars?</p>

|
formal languages formal grammars
| 1 |
2,005 |
Optimal path - best career
|
<p>I am new here, and I have to develop an algorithm that takes a 2D integer array as input and computes the best career path.
Let's consider a network with $n$ nodes, numbered from $1$ to $n$.
Every node is connected to every other node. </p>

<p>Moving from node $i$ to node $j$ can have different costs (the classical example is changing jobs):</p>

<ul>
<li>positive value indicates a benefit</li>
<li>negative value indicates a loss </li>
<li>some steps have a cost of $0$</li>
</ul>

<p>Each entry of $B[i, j]$ indicates the benefit (or the cost, if negative), of a step from node $i$ to node $j$.</p>

<p>I need to find the maximal gain of a path from $i$ to $j$:</p>

<p>$$
G(i, j) = \max \{ g(p) \mid p \text{ is a path from } i \text{ to } j \}
$$</p>

<p>It is not possible to gain from walking in a cycle.
This means, the values in B must be such that for any path p from a node i to itself, we have $g(p) \leq 0$.</p>

<p>Example of a matrix $B$: </p>

<p>$$
\begin{array}{rrrr}
 0 & 1 & 0 & 1 \\\\
−2 & 0 & 0 & −2 \\\\
 0 & 2 & 0 & 1 \\\\
−3 & −1 & −3 & 0
\end{array}
$$</p>

<p>Hints:</p>

<p>1) Cycles bring no gain. Therefore, the greatest possible benefit of moving
from $i$ to $j$ can be achieved by visiting any intermediate node at most once.</p>

<p>2) Consider the following variant of the problem. Let $G_{aux}(i, j, k)$ be the maximal gain that can be achieved by walking from $i$ to $j$ along a path that uses only the nodes $1, \ldots, k$ as intermediate points. </p>

<p>The tasks asked:</p>

<ol>
<li><p>Explain how $G_{aux}(i, j, k)$ can be used to compute $G(i, j)$. Develop a recurrence for $G_{aux}(i, j, k)$.</p></li>
<li><p>Write pseudo-code for algorithms that compute arrays with all values of
$G_{aux}$ and $G$ and explain why your algorithms are correct.</p></li>
</ol>

<p>Can anyone help me with these tasks?
Is the Floyd-Warshall algorithm the basis for all of this?</p>
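
<p>To make hint 2 concrete, here is a short Python sketch of the Floyd-Warshall-style recurrence I suspect is intended (the base case is just the direct step $B[i][j]$; since no cycle has positive gain, allowing node $k$ as an intermediate either changes nothing or routes through $k$ exactly once, so the in-place update is safe):</p>

<pre><code>def max_gain(B):
    """G[i][j] after round k equals G_aux(i, j, k): the best gain from i to j
    using only nodes 0..k-1 (0-based) as intermediate stops."""
    n = len(B)
    G = [[B[i][j] for j in range(n)] for i in range(n)]   # k = 0: direct steps only
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # either avoid k, or go i -> ... -> k -> ... -> j
                G[i][j] = max(G[i][j], G[i][k] + G[k][j])
    return G

B = [[ 0,  1,  0,  1],
     [-2,  0,  0, -2],
     [ 0,  2,  0,  1],
     [-3, -1, -3,  0]]
print(max_gain(B))
</code></pre>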

|
algorithms
| 0 |
2,006 |
Machine Learning algorithms based on "structural risk minimization"?
|
<p>Which machine learning algorithms (besides SVM's) use the principle of <a href="https://en.wikipedia.org/wiki/Structural_risk_minimization">structural risk minimization</a>?</p>

|
reference request machine learning
| 1 |
2,011 |
DLOGTIME complexity class and testing the length of the input string
|
<p>I read that testing the length of the input string is in DLOGTIME.</p>

<p>The question is how can testing the length of the input string be in DLOGTIME?</p>

<p>$\text{DLOGTIME}$ means time $O(\log n)$, so what does $n$ stand for here? (It seems that $n$ is definitely not the length of the input string... or is it?)</p>

<p>So, to summarize, can anyone show me how the algorithm works and why it is in DLOGTIME? At this point, it seems to me that $n$ is just an arbitrary number.</p>

<p>Note: I know what binary search is :) so you do not need to explain to me what that is.</p>

|
complexity theory
| 1 |
2,016 |
How to convert finite automata to regular expressions?
|
<p>Converting regular expressions into (minimal) NFA that accept the same language is easy with standard algorithms, e.g. <a href="http://en.wikipedia.org/wiki/Thompson%27s_construction_algorithm">Thompson's algorithm</a>. The other direction seems to be more tedious, though, and sometimes the resulting expressions are messy.</p>

<p>What algorithms are there for converting NFA into equivalent regular expressions? Are there advantages regarding time complexity or result size?</p>

<p><sup>This is supposed to be a reference question. Please include a general description of your method as well as a non-trivial example.</sup></p>
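
<p>For reference, a compact Python sketch of one of the classical methods (the transitive-closure / McNaughton-Yamada construction), working directly on regex strings; the representation with <code>None</code> for the empty set and <code>""</code> for $\epsilon$ is just my own convention:</p>

<pre><code>def nfa_to_regex(n, transitions, start, accepting):
    """McNaughton-Yamada: after round k, R[i][j] is a regex for all paths from
    state i to state j whose intermediate states all lie in {0, ..., k-1}.
    transitions: dict (i, j) -> regex string labelling the direct edge, if any."""
    EMPTY, EPS = None, ""                       # None = no path, "" = epsilon

    def union(r, s):
        if r is None: return s
        if s is None: return r
        # an empty alternative, as in "(z|)", stands for epsilon
        return r if r == s else "(%s|%s)" % (r, s)

    def concat(r, s):
        if r is None or s is None: return None
        return r + s

    def star(r):
        return "" if r in (None, "") else "(%s)*" % r

    R = [[transitions.get((i, j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        R[i][i] = union(R[i][i], EPS)           # staying put is always possible
    for k in range(n):
        R = [[union(R[i][j], concat(R[i][k], concat(star(R[k][k]), R[k][j])))
              for j in range(n)] for i in range(n)]
    result = EMPTY
    for f in accepting:
        result = union(result, R[start][f])
    return result

# tiny example: 0 -> 1 on 'x', 1 -> 0 on 'y', loop on 1 with 'z'; accept in state 1
print(nfa_to_regex(2, {(0, 1): "x", (1, 0): "y", (1, 1): "z"}, 0, [1]))
</code></pre>

<p>The output is correct but not pretty, which illustrates the "messy expressions" issue mentioned above.</p>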

|
algorithms formal languages finite automata regular expressions reference question
| 1 |
2,019 |
Area of the union of rectangles anchored on the x-axis
|
<p>I am trying to solve the following computational geometry problem.</p>
<blockquote>
<p>Let <span class="math-container">$S$</span> be a set of <span class="math-container">$n$</span> axis-parallel rectangles in the plane, so that the bottom edge of each rectangle in <span class="math-container">$S$</span> lies on the <span class="math-container">$x$</span>-axis.</p>
<ol>
<li>What is (an upper bound on) the combinatorial complexity of the union <span class="math-container">$K$</span> of the rectangles in <span class="math-container">$S$</span>?</li>
<li>Give an efficient algorithm for computing the union and its area.</li>
</ol>
</blockquote>
<p>I suggest using a sweep line algorithm to compute the area of the union.
First we should consider the queue of events. Events are just the leftmost and the rightmost <span class="math-container">$x$</span>-coordinates of the rectangles. As in the standard setting, all <span class="math-container">$x$</span>'s should be sorted.</p>
<p>Start iterating over the event queue (as in the standard algorithm). At every new event we can add the area we have covered since the previous event. When two or more rectangles intersect (which can be identified by the data structure), we should use the rectangle with the biggest <span class="math-container">$y$</span>-coordinate until the next event.</p>
<p>That's the general idea. The main difference from the classic sweep line algorithm is that we don't have to compute intersections and insert them into the queue. All we are interested in are the intersections of rectangles that occur on the vertical lines at the leftmost and rightmost <span class="math-container">$x$</span>'s.</p>
<p>I am not completely sure that the solution I presented is correct. This exercise was marked with a high difficulty grade. Maybe I missed something?</p>
<p>In addition, I don't know how to answer the first question.</p>
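<p>For what it is worth, here is a small Python sketch of the sweep I tried to describe (quadratic in the worst case because of the naive <code>max</code> over the active heights; a balanced multiset or a segment tree over the heights should bring it down to <span class="math-container">$O(n\log n)$</span>):</p>
<pre><code>from collections import Counter

def union_area(rects):
    """rects: list of (x1, x2, h) with the bottom edge on the x-axis.
    Sweep left to right; between consecutive event x-coordinates the union's
    height is simply the maximum height among the active rectangles."""
    events = []
    for x1, x2, h in rects:
        events.append((x1, +1, h))
        events.append((x2, -1, h))
    events.sort()
    active = Counter()
    area, prev_x = 0.0, None
    for x, kind, h in events:
        if prev_x is not None and active:
            area += (x - prev_x) * max(active)   # naive O(n) max per event
        prev_x = x
        if kind == +1:
            active[h] += 1
        else:
            active[h] -= 1
            if active[h] == 0:
                del active[h]
    return area
</code></pre>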

|
algorithms computational geometry
| 0 |
2,022 |
regular expression
|
<blockquote>
 <p><strong>Possible Duplicate:</strong><br>
 <a href="https://cs.stackexchange.com/questions/2008/construction-of-a-regular-expression">Construction of a regular expression</a> </p>
</blockquote>



<p>Can someone help me with the following exercise?
In the construction of a regular expression from a finite automaton $A$ with three states, the following partial languages have been computed, where the states are named 1, 2 and 3; 1 is the start state and 2, 3 are accepting states:<br>
$L^2 _{1,1} = b^*(aab^*)^*$<br>
$L^2 _{1,2} = b^*a(ab^*a)^*$<br>
$L^2 _{1,3} = b^*a(ab^*a)^*b$<br>
$L^2 _{3,2} = b^*ba(ab^*a)^*$<br>
$L^2 _{3,3} = |\epsilon|a|b^*ba(ab^*a)^*b$<br>
And now the question is to perform the remaining construction of a regular expression for $L(A)$</p>

<p>I already tried this exercise with the Nerode relation, but I didn't come to a solution. Please help me.<br>
Thank you.</p>

|
formal languages
| 0 |
2,024 |
construct regular expression
|
<p>I need help with the following exercise:</p>

<p>Construct an $\varepsilon$-NFA for the following regular expression $(a|\varepsilon)(ba)^*(c^*a|bc)^*$.</p>

<p>I already tried this exercise with the Nerode relation, but I didn't come to a solution. Please help me.
Thank you.</p>

<p>Source: Wikipedia, Myhill–Nerode theorem: "Given a language L, and a pair of strings x and y, define a distinguishing extension to be a string z such that exactly one of the two strings xz and yz belongs to L. Define a relation RL on strings by the rule that x RL y if there is no distinguishing extension for x and y. It is easy to show that RL is an equivalence relation on strings, and thus it divides the set of all finite strings into equivalence classes.</p>

<p>The Myhill–Nerode theorem states that L is regular if and only if RL has a finite number of equivalence classes, and moreover that the number of states in the smallest deterministic finite automaton (DFA) recognizing L is equal to the number of equivalence classes in RL. In particular, this implies that there is a unique minimal DFA with minimum number of states."</p>

|
formal languages finite automata regular expressions
| 0 |
2,028 |
Finding small node sets that can not be avoided on paths from source to sink
|
<p>In a directed graph with a starting node and an ending node, how can I find a small set S of nodes (it doesn't have to be the smallest; fewer than 10, for example) such that every possible path from the starting node to the ending node contains at least one member of S? The graph may have loops. This may be NP-hard. Is there an approximate method to find one or several such sets S in the graph? Enumerating and testing every candidate seems not to work. Thanks.</p>

|
algorithms graphs
| 1 |
2,030 |
How can encryption involve randomness?
|
<p>If an encryption algorithm is meant to convert a string to another string which can then be decrypted back to the original, how could this process involve any randomness? </p>

<p>Surely it has to be deterministic, otherwise how could the decryption function know what factors were involved in creating the encrypted string?</p>

|
cryptography encryption randomness
| 0 |
2,031 |
Can constraint satisfaction problems be solved with Prolog?
|
<p>Is <a href="http://iggyfernandez.wordpress.com/2012/05/21/sql-vs-nosql-third-international-nocoug-sql-nosql-challenge-sponsored-by-pythian/">"party attendance"</a> type of problems solvable in Prolog? For example:</p>

<blockquote>
 <p>Burdock Muldoon and Carlotta Pinkstone both said they would come if Albus Dumbledore came. Albus Dumbledore and Daisy Dodderidge both said they would come if Carlotta Pinkstone came. Albus Dumbledore, Burdock Muldoon, and Carlotta Pinkstone all said they would come if Elfrida Clagg came. Carlotta Pinkstone and Daisy Dodderidge both said they would come if Falco Aesalon came. Burdock Muldoon, Elfrida Clagg, and Falco Aesalon all said they would come if Carlotta Pinkstone and Daisy Dodderidge both came. Daisy Dodderidge said she would come if Albus Dumbledore and Burdock Muldoon both came.
 Who needs to be persuaded to attend the party in order to ensure that all the invitees attend?</p>
</blockquote>

<p>I have tried to express this in GNU Prolog:</p>

<pre><code>attend(BM) :- attend(AD).
attend(CP) :- attend(AD).
attend(AD) :- attend(CP).
attend(DD) :- attend(CP). 
attend(AD) :- attend(EC).
attend(BM) :- attend(EC).
attend(CP) :- attend(EC). 
attend(CP) :- attend(FA).
attend(DD) :- attend(FA).
attend(BM) :- attend(CP),attend(DD).
attend(EC) :- attend(CP),attend(DD).
attend(FA) :- attend(CP),attend(DD).
attend(DD) :- attend(AD),attend(BM).

attend(FA). /* try different seed invitees in order to see if all would attend*/

/* input:
write('invited:'),nl,
 attend(X),write(X),nl,
 fail.*/
</code></pre>

<p>I'm experiencing a stack overflow (no pun intended), and I have no knowledge of Prolog evaluation, which is why I'm asking.</p>

<p>Generally speaking, this problem can be cast as a Boolean CNF satisfiability formula (with 6 Boolean variables). Given that, does the Prolog perspective have any merit?</p>
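
<p>As a sanity check, independent of Prolog, here is a small Python sketch that forward-chains the implications and prints, for each single seed invitee, who ends up attending (the single-letter abbreviations A-F for the six guests are my own shorthand):</p>

<pre><code>PEOPLE = "ABCDEF"   # A=Albus, B=Burdock, C=Carlotta, D=Daisy, E=Elfrida, F=Falco
RULES = [           # (antecedents, consequent): if everyone in antecedents comes, consequent comes
    ("A", "B"), ("A", "C"),
    ("C", "A"), ("C", "D"),
    ("E", "A"), ("E", "B"), ("E", "C"),
    ("F", "C"), ("F", "D"),
    ("CD", "B"), ("CD", "E"), ("CD", "F"),
    ("AB", "D"),
]

def closure(seed):
    """Repeatedly apply the implication rules until nothing changes."""
    attending = set(seed)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if set(antecedents) <= attending and consequent not in attending:
                attending.add(consequent)
                changed = True
    return attending

for person in PEOPLE:
    print(person, "->", "".join(sorted(closure(person))))
</code></pre>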

|
logic constraint programming prolog logic programming
| 1 |
2,039 |
Bellman-Ford variation
|
<p>I have a "smarter" version of Bellman-Ford here; this version is more clever about choosing the edges to relax.</p>

<pre><code>//Queue Q; source s; vertices u, v; distance to v d(v)
Q ← s // Q holds vertices whose d(v) values have been updated recently.
While (Q !empty) {
    u ← Dequeue(Q)
    for each neighbor v of u {
        Relax(u, v)
        if d(v) was updated by Relax and v not in Q
            Enqueue(v)
    }
}
</code></pre>

<p>But, can anyone explain why this improved version correctly finds the shortest path from $s$ to every other vertex in a directed graph with no negative cycles?</p>

<p>Also, what is the <em>worst-case</em> runtime if every shortest path uses at most $v$ edges?</p>
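
<p>For concreteness, here is a direct Python transcription of the pseudocode (this variant is often called SPFA; the adjacency-dict representation, and the assumption that every vertex appears as a key of <code>adj</code>, are mine):</p>

<pre><code>from collections import deque

def spfa(adj, s):
    """adj: dict u -> list of (v, weight); every vertex must appear as a key.
    Each vertex is re-enqueued only when its distance estimate improves, so with
    no negative cycles every d(v) stabilizes; the worst case is still O(V * E)."""
    d = {u: float("inf") for u in adj}
    d[s] = 0
    in_queue = {u: False for u in adj}
    Q = deque([s]); in_queue[s] = True
    while Q:
        u = Q.popleft(); in_queue[u] = False
        for v, w in adj[u]:
            if d[u] + w < d[v]:          # Relax(u, v)
                d[v] = d[u] + w
                if not in_queue[v]:
                    Q.append(v); in_queue[v] = True
    return d
</code></pre>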

|
algorithms graphs runtime analysis shortest path
| 0 |
2,040 |
Ternary processing instead of Binary
|
<p>Most of the computers available today are designed to work with the binary system. This comes from the fact that information comes in two natural forms: <strong>true</strong> or <strong>false</strong>.</p>

<p>We humans accept another form of information called "maybe" :)</p>

<p>I know there are ternary computers, but there is not much information about them.</p>

<ol>
<li>What are the <strong>advantages</strong> / <strong>disadvantages</strong> of designing and using ternary or higher levels of data signals in computers? </li>
<li>Is it feasible? </li>
<li>In which domain can it be better than classic binary systems?</li>
<li>Can we give computers the chance to make mistakes and expect to see performance improvements in most situations this way? (I think performance gains could be observed if computers were not so strict about being absolutely correct.)</li>
</ol>

<p><strong>EDIT:</strong> Are there difficulties in differentiating between 3 levels of a signal? Would it be too hard to keep data in memory, since memory voltage is frequently released and reloaded (maybe hundreds of times a second)?</p>

|
computer architecture
| 1 |
2,046 |
How to prove every well-balanced orientation of an Eulerian graph is Eulerian?
|
<p>I'm trying to prove that every well-balanced <a href="https://en.wikipedia.org/wiki/Strong_orientation" rel="nofollow">orientation</a> of an <a href="https://en.wikipedia.org/wiki/Eulerian_graph" rel="nofollow">Eulerian graph</a> is Eulerian.</p>

<p>I want to prove it by showing that for any two vertices $u$ and $v$, their local arc connectivities coincide, that is</p>

<p>$\qquad \displaystyle P_D'(u,v)=P_D'(v,u)$ </p>

<p>for every well-balanced orientation of an Eulerian graph. How can I do this?</p>

|
graphs
| 0 |
2,047 |
Probabilistic poly-time machine always halts on all inputs?
|
<p>In the usual definition of a probabilistic poly-time machine it is said that the machine halts in polynomial time for all inputs. </p>

<p>Is the intention really to say that the machine halts for all inputs, or that if it halts it must be in polynomial time?</p>

|
complexity theory terminology turing machines probabilistic algorithms
| 0 |
2,049 |
How many layers should a neural network have?
|
<p>Are there any advantages of having more than 2 hidden layers in a Neural Network?</p>

<p>I've seen some places that recommend it, others prove that there is no advantage.</p>

<p>Which one is right?</p>

|
artificial intelligence neural networks neural computing
| 1 |
2,052 |
In s-t directed graph, how to find many small cuts?
|
<p>Solving the <a href="https://en.wikipedia.org/wiki/Maximum_flow_problem" rel="nofollow">maximum flow problem</a> yields one minimum cut. But I want several (maybe hundreds of) small cuts as candidates. The cuts don't have to be minimum cuts, as long as they are small (in weight). How do I do that?</p>

|
algorithms graphs optimization approximation
| 0 |
2,053 |
After implementing a novel encryption algorithm, how would one go about analyzing its security or get help from others in doing so?
|
<p><strong>Preface:</strong> This question was originally asked on <a href="https://cstheory.stackexchange.com/questions/11521">Theoretical Computer Science</a>, and the kind people there referred me to this web site. It is being repeated here in an attempt to find a satisfying answer.</p>

<hr>

<p>Over the years, two novel encryption techniques have come to mind and been implemented as programming libraries that could be integrated into applications. However, how to analyze their security and vulnerability characteristics has never been very clear, and their usage has been limited to mainly experimental tests. Are there tools available for automated examination of such parameters one may be interested in understanding for an encryption library? Are there bodies of people who are interested in being introduced to new encryption concepts for the purpose of executing their own personal analysis on such a process? I'm not sure where to look.</p>

<p>The first encryption algorithm is a mono-alphabetic simple substitution cipher. It requires two keys to operate and is designed to frustrate frequency analysis. The longer of the keys forms a table by which plain-text has a normal substitution cipher applied. Each encoded byte is then split into four values of two bits each. The second, shorter key is then used to allow a random selection from four groups of sixty-four unique bytes each. Each two bit value from the encoded byte is used to select which group of sixty-four bytes to use. Encoding has two disadvantages: the output is four times larger, and repeated data encoding may allow some frequency analysis.</p>

<p>The second encryption algorithm is a stream cipher like the first but internally operates on blocks of data. It utilizes two keys to operate: the first is a two-dimensional array that describes how to construct a (virtual) multidimensional grid, and the second is an initialization vector for the encoding/decoding engine. It attempts to overcome frequency analysis by encoding bytes with a window of preceding bytes (initialized from the second key). A byte with its preceding window of bytes form a multidimensional index into the aforementioned grid. Unfortunately, encoding duplicate blocks of data longer than the window size starts yielding equivalent data.</p>

|
cryptography encryption
| 0 |
2,057 |
When can I use dynamic programming to reduce the time complexity of my recursive algorithm?
|
<p>Dynamic programming can reduce the time needed to perform a recursive algorithm. What are the general conditions such that, if they are satisfied by a recursive algorithm, using dynamic programming will reduce the algorithm's time complexity? When should I use dynamic programming?</p>
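
<p>The two conditions usually quoted are overlapping subproblems and optimal substructure; a tiny Python illustration of the first one (Fibonacci is the textbook toy example, not anything specific to this question):</p>

<pre><code>from functools import lru_cache

def fib_naive(n):
    """Overlapping subproblems: fib_naive(k) is recomputed exponentially many times."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization stores each subproblem once, so the running time drops
    from exponential to linear in n."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
</code></pre>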

|
algorithms dynamic programming efficiency algorithm design
| 0 |
2,059 |
How do you check if two algorithms return the same result for any input?
|
<p>How do you check if two algorithms (say, Merge sort and Naïve sort) return the same result for any input, when the set of all inputs is infinite?</p>

<p><strong>Update:</strong> Thank you <a href="https://cs.stackexchange.com/a/2062/1638">Ben</a> for describing how this is impossible to do algorithmically in the general case. <a href="https://cs.stackexchange.com/a/2063/1638">Dave's answer</a> is a great summary of both algorithmic and manual (subject to human wit and metaphor) methods that don't always work, but are quite effective.</p>
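
<p>Given the update above (no general algorithm exists), one cheap partial method is differential random testing: it can only find counterexamples, never prove equivalence. A small Python sketch, with the two sort implementations as stand-ins:</p>

<pre><code>import random

def naive_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Differential testing: compare the two on many random inputs.
for _ in range(1000):
    xs = [random.randrange(100) for _ in range(random.randrange(20))]
    assert naive_sort(xs) == merge_sort(xs), xs
</code></pre>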

|
computability formal methods software engineering software verification
| 0 |
2,064 |
Block detection in repeated stream
|
<p>I need to recover a data block from a repeated stream of data. I'm looking to see what algorithms may already exist for this as it does not feel like a novel situation.</p>

<p>Here are the specifics:</p>

<ol>
<li>There is an N-length block of data contained in a stream</li>
<li>The block is repeated many times in the stream</li>
<li>the data is highly corrupted: some bytes could just be wrong, whereas others can be detected as missing (erasures)</li>
<li>There is a function <code>F(data)</code> which can say if a block represents valid data (the probability of a false positive is virtually zero)</li>
<li><code>F</code> can also provide a probability value indicating, even if the block is not valid data, whether the block itself is valid (but just has too much corruption to be recovered)</li>
<li>The chance of corrupted data is very low compared to missing data</li>
</ol>

<p>For example, say I have this data stream and wish to recover the 10 length sequence <code>1234567890</code>. The data is just a rough visual example (I can't guarantee recovery is actually possible from this bit). A <code>.</code> represents a missing byte, and <code><break></code> indicates an unknown block of data (no data and not length known). Note also the <code>Q</code>s as an example of corrupt data.</p>

<p><code>23.5678901.3456789<break>2345678..1..4567QQ012345678..3456</code></p>

<p>How can I take such a stream of data and recover probable blocks of N data? As the actual data includes forward error recovery, the block recovery need not be perfect. All it needs to do is give probable reconstructed blocks of data, and the <code>F</code> function will attempt to do error recovery. Thus I expect <code>F</code> will have to be called several times. </p>

<p>I'd like to find something better than simply calling <code>F</code> at each point in the stream, since the error rate could be high enough that no single run of N bytes can be recovered -- the repetitions in the stream must be used somehow.</p>

|
algorithms online algorithms communication protocols
| 1 |
2,067 |
What does this performance formula mean?
|
<p>I have to make a quick clustering program but the following formula is gibberish to me:</p>

<blockquote>
 <p>$\operatorname{Perf}(X,C) = \sum\limits_{i=1}^n\min\{||X_i-C_l||^2 \mid l = 1,...,K\}$</p>
 
 <p>where $X$ is a set of multi-dimensional data and $C$ is a set of centroids for each data cluster.</p>
</blockquote>

<p>This formula is a fitness function for an <a href="https://en.wikipedia.org/wiki/Artificial_bee_colony_algorithm" rel="nofollow">artificial bee colony clustering algorithm</a>, used as a substitute for the <a href="https://en.wikipedia.org/wiki/K-means_clustering_algorithm" rel="nofollow">k-means clustering algorithm</a>. It is described as the total
within-cluster variance, or the total mean-square quantization error (MSE).</p>

<p>Can anyone translate it to <em>pseudo-code</em>, normal human <em>English</em>, or at least enlighten me?</p>
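
<p>In pseudo-code terms, the formula seems to say the following (a Python sketch, corrections welcome): for every data point, take the squared Euclidean distance to its <em>nearest</em> centroid, and sum these minima over all points.</p>

<pre><code>def perf(X, C):
    """Total within-cluster variance: for every data point x in X, take the squared
    Euclidean distance to its nearest centroid in C, and sum these minima."""
    total = 0.0
    for x in X:                                   # x is one multi-dimensional point
        nearest = min(sum((xd - cd) ** 2 for xd, cd in zip(x, c)) for c in C)
        total += nearest
    return total

# Example: two 2-D points, two centroids
print(perf([(0, 0), (4, 0)], [(1, 0), (5, 0)]))   # 1 + 1 = 2
</code></pre>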

|
algorithms terminology evolutionary computing
| 1 |
2,069 |
Coverage problem (transmitter and receiver)
|
<p>I try to solve the following coverage problem.</p>

<blockquote>
 <p>There are $n$ transmitters with a coverage area of 1 km and $n$ receivers. Decide in $O(n\log n)$ whether all receivers are covered by some transmitter. All receivers and transmitters are represented by their $x$ and $y$ coordinates.</p>
</blockquote>

<p>The most advanced solution I can come up with takes $O(n^2\log n)$: for every receiver, sort all transmitters by their distance to the current receiver, then take the transmitter with the shortest distance; this shortest distance should be within 0.5 km.</p>

<p>But the naive approach looks much better, with time complexity $O(n^2)$: just compute the distances between all pairs of transmitters and receivers.</p>

<p>I am not sure whether I can apply range-search algorithms to this problem. For example, k-d trees allow us to find such ranges; however, I have never seen an example, and I am not sure whether there is a kind of range search for circles. </p>

<p>The given complexity $O(n\log n)$ suggests that the solution should be somehow similar to sorting.</p>
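
<p>Not a proof of the $O(n\log n)$ bound, but a practical sketch of the nearest-neighbour idea using SciPy's k-d tree (the coverage radius of 1 km is my own assumption about what "coverage area of 1 km" means):</p>

<pre><code>import numpy as np
from scipy.spatial import cKDTree

def all_covered(receivers, transmitters, radius=1.0):
    """Build a k-d tree on the transmitters and query the nearest transmitter for
    every receiver; everyone is covered iff no nearest distance exceeds the radius."""
    tree = cKDTree(np.asarray(transmitters))
    dists, _ = tree.query(np.asarray(receivers))
    return bool(np.all(dists <= radius))
</code></pre>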

|
algorithms computational geometry search problem
| 0 |
2,076 |
Could someone suggest me a good introductory book or an article on graph clustering?
|
<p>For my pet project I need to cluster some data which can easily be represented as a graph, so I want to use this as an opportunity to educate myself and play with various algorithms. I'd prefer a book on graph clustering, as it is often more self-contained, but articles are fine too. Back in the day I used to work in the field of numerical linear algebra, so I'd also prefer an algebraic view on things (books which view a graph as a matrix with specific properties are more accessible to me).</p>

<p>P.S. I've tried scholar.google.com but was overwhelmed by the vast number of results. </p>

|
algorithms graphs reference request books
| 1 |
2,079 |
Determine missing number in data stream
|
<p>We receive a stream of $n-1$ pairwise different numbers from the set $\left\{1,\dots,n\right\}$.</p>

<p>How can I determine the missing number with an algorithm that reads the stream once and uses a memory of only $O(\log_2 n)$ bits?</p>
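
<p>A sketch of the arithmetic trick I believe is intended, in Python (an XOR of all values works just as well and also fits in $O(\log_2 n)$ bits):</p>

<pre><code>def missing_number(stream, n):
    """Maintain only a running sum; it never exceeds n(n+1)/2, so it fits in
    O(log n) bits. The missing number is the gap to the full sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2 - sum(stream)

print(missing_number([5, 1, 4, 2], 5))   # prints 3
</code></pre>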

|
algorithms integers online algorithms
| 1 |
2,081 |
Is there a context free, non-regular language $L$, for which $L^*$ is regular?
|
<p>I know that there are non-regular languages $L$ such that $L^*$ is regular, but all examples I can find are context-sensitive and not context-free.</p>

<p>In case there are none, how do you prove it?</p>

|
formal languages context free regular languages
| 1 |
2,082 |
Algorithm to minimize distance variance between 2D coordinates
|
<p>I've been looking around for an algorithm that would optimize the distance between 2 lists of coordinates and choose which coordinates should go together.</p>

<p>Say I have List 1:</p>

<pre><code>205|200
220|210
200|220
200|180
</code></pre>

<p>List 2:</p>

<pre><code>210|200
207|190
230|200
234|190
</code></pre>

<p>Calculated Distance between Coords:</p>

<pre><code>205|200 to 210|200 == 5.00
205|200 to 207|190 == 10.20
205|200 to 230|200 == 25.00
205|200 to 234|190 == 30.68

220|210 to 210|200 == 14.14
220|210 to 207|190 == 23.85
220|210 to 230|200 == 14.14
220|210 to 234|190 == 24.41

200|220 to 210|200 == 22.36
200|220 to 207|190 == 30.81
200|220 to 230|200 == 36.06
200|220 to 234|190 == 45.34

200|180 to 210|200 == 22.36
200|180 to 207|190 == 12.21
200|180 to 230|200 == 36.06
200|180 to 234|190 == 35.44
</code></pre>

<p>This Algorithm would pick:</p>

<pre><code>205|200 to 230|200 == 25.00
220|210 to 207|190 == 23.85
200|220 to 210|200 == 22.36
200|180 to 234|190 == 35.44
</code></pre>

<p>The algorithm would pick these numbers as they form the group with the smallest variance between the distances.
Conditions:</p>

<ol>
<li>A coordinate may only be used once from each list</li>
<li>If List 1 or List 2 is larger, it still only uses each coordinate once, but it tries to get the smallest distance variance and does nothing with the unused coordinates.</li>
</ol>

<p>If you need more clarification please ask.</p>

<p>P.S. I've looked at the Hungarian algorithm and it seems like it will sort of do the job, but not exactly how I was expecting. The Hungarian algorithm will only try to minimize the total distance over all the coordinates, which can coincide with the smallest variance, but not every time; variance is more important here than least-distance optimization.</p>

<p><strong>Additional Information</strong></p>

<p>I will have an array of List1, List2, and then the distances:</p>

<pre><code>Distance[List1_item_0][List2_item_0] = 5;
Distance[List1_item_0][List2_item_1] = 10.20;
Distance[List1_item_0][List2_item_2] = 25.00;
Distance[List1_item_0][List2_item_3] = 30.68;

Distance[List1_item_1][List2_item_0] = 14.14;
Distance[List1_item_1][List2_item_1] = 23.85;
Distance[List1_item_1][List2_item_2] = 14.14;
Distance[List1_item_1][List2_item_3] = 24.41;

Distance[List1_item_2][List2_item_0] = 22.36;
Distance[List1_item_2][List2_item_1] = 30.81;
Distance[List1_item_2][List2_item_2] = 36.06;
Distance[List1_item_2][List2_item_3] = 45.34;

Distance[List1_item_3][List2_item_0] = 22.36;
Distance[List1_item_3][List2_item_1] = 12.21;
Distance[List1_item_3][List2_item_2] = 36.06;
Distance[List1_item_3][List2_item_3] = 35.44;
</code></pre>

<p>From Distance[List1_item_#] I would need to pick a distance. Once that distance is picked, the [List2_item_#] CANNOT be picked by a different [List1_item_#]. The distances picked for the [List1_item_#] elements would need to be chosen in a way that the variance between them all is minimal. So the distances for the [List1_item_#] elements should be as close as possible to each other without reusing a [List2_item_#] more than once.</p>
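
<p>To make the objective precise, here is a brute-force Python baseline (clearly not efficient, since it tries all n! assignments, but it pins down "minimal variance of the chosen distances"; <code>pvariance</code> is the population variance from the standard library):</p>

<pre><code>from itertools import permutations
from statistics import pvariance

def min_variance_matching(dist):
    """dist[i][j] = distance between List1 item i and List2 item j (square matrix here).
    Try every one-to-one assignment and keep the one whose chosen distances have
    the smallest variance."""
    n = len(dist)
    best, best_var = None, float("inf")
    for perm in permutations(range(n)):
        chosen = [dist[i][perm[i]] for i in range(n)]
        v = pvariance(chosen)
        if v < best_var:
            best, best_var = perm, v
    return best, best_var

dist = [[ 5.00, 10.20, 25.00, 30.68],
        [14.14, 23.85, 14.14, 24.41],
        [22.36, 30.81, 36.06, 45.34],
        [22.36, 12.21, 36.06, 35.44]]
print(min_variance_matching(dist))
</code></pre>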

|
algorithms computational geometry
| 0 |
2,088 |
Complete Problems for $DSPACE(\log(n)^k)$
|
<p>We know that the $polyL$-hierarchy doesn't have complete problems, as that would conflict with the space hierarchy theorem. But: are there complete problems for each level of this hierarchy?</p>

<p>To be precise: Does the class $DSPACE(\log(n)^k)$ have complete problems under $L$-reductions for each $k > 0$?</p>

|
complexity theory reductions space complexity
| 1 |
2,092 |
Simple explanation as to why certain computable functions cannot be represented by a typed term?
|
<p>Reading the paper <a href="ftp://ftp.cs.ru.nl/pub/CompMath.Found/lambda.pdf">An Introduction to the Lambda Calculus</a>, I came across a paragraph I didn't really understand, on page 34 (my italics):</p>

<blockquote>
 <p>Within each of the two paradigms there are several versions of typed
 lambda calculus. In many important systems, especially those a la
 Church, it is the case that terms that do have a type always possess a
 normal form. By the unsolvability of the halting problem this
 implies that not all computable functions can be represented by a
 typed term, see Barendregt (1990), Theorem 4.2.15. This is not so bad
 as it sounds, because in order to find such computable functions that
 cannot be represented, one has to stand on one's head. For example in
 λ2, the second order typed lambda calculus, only those partial
 recursive functions cannot be represented that happen to be total,
 but not provably so in mathematical analysis (second order
 arithmetic).</p>
</blockquote>

<p>I am familiar with most of these concepts, but not the concept of a partial recursive function, nor the concept of a provably total function. However, this is not what I am interested in learning.</p>

<p>I am looking for a simple explanation as to why certain computable functions cannot be represented by a typed term, as well as to why such functions can only be found 'by standing on one's head.'</p>

|
computability logic lambda calculus type theory
| 0 |
2,093 |
Efficient map data structure supporting approximate lookup
|
<p>I'm looking for a data structure that supports efficient approximate lookups of keys (e.g., Levenshtein distance for strings), returning the closest possible match for the input key. The best suited data structure I've found so far are <a href="http://en.wikipedia.org/wiki/BK-tree">Burkhard-Keller trees</a>, but I was wondering if there are other/better data structures for this purpose.</p>

<p>Edit:
Some more details of my specific case:</p>

<ul>
<li>Strings usually have a fairly large Levenshtein difference from each other.</li>
<li>Strings have a max length of around 20-30 chars, with an average closer to 10-12.</li>
<li>I'm more interested in efficient lookup than insertion as I will be building a set of mostly static data that I want to query efficiently.</li>
</ul>

|
data structures strings efficiency
| 0 |
2,100 |
Need help understanding this optimization problem on graphs
|
<p>Has anyone seen this problem before? It's supposed to be NP-complete.</p>

<blockquote>
 <p>We are given vertices $V_1,\dots ,V_n$ and possible parent sets for each vertex. Each parent set has an associated cost. Let $O$ be an ordering (a permutation) of the vertices. We say that a parent set of a vertex $V_i$ is consistent with an ordering $O$ if all of the parents come before the vertex in the ordering. Let $mcc(V_i, O)$ be the minimum cost of the parent sets of vertex $V_i$ that are consistent with ordering $O$. I need to find an ordering $O$ that minimizes the total cost $mcc(V_1, O) + \dots + mcc(V_n, O)$.</p>
</blockquote>

<p>I don't quite understand the part "...if all of the parents come before the vertex in the ordering." What does it mean?</p>

|
algorithms graphs terminology optimization
| 1 |
2,101 |
Finding the point nearest to the x-axis over some segment
|
<p>I have problem with solving the following exercise</p>

<blockquote>
 <p>Given a set $P$ of $n$ points in two dimensions, build in time $O(n\log n)$ a data structure for $P$ such that, given a horizontal segment $s$, the first point that $s$ touches when moving upwards from the x-axis can be found in time $O(\log^2n)$.</p>
</blockquote>

<p>The preprocessing time matches that of sorting, so we can at least afford to sort the points by one coordinate.</p>

<p>The query time of $O(\log^2 n)$ is a little bit confusing. I would say it corresponds to $\log n$ binary searches, but that doesn't make sense to me.</p>

|
algorithms computational geometry
| 1 |
2,103 |
Depth-2 circuits with OR and MOD gates are not universal?
|
<p>It is well-known that every boolean function $f:\{0,1\}^n\to \{0,1\}$ can be realized using a boolean circuit of depth 2 (over the variables, their negation and constant values) containing AND gates in the first level and one single OR gate in the upper level; this is simply the <a href="http://en.wikipedia.org/wiki/Disjunctive_normal_form" rel="nofollow">DNF representation</a> of $f$.</p>

<p>Another type of gate which is of great interest in circuit complexity is the $MOD_m$ gate. The usual definition is the following:</p>

<p>$$\mathrm{MOD}_m(x_1,\dots,x_k)=\begin{cases}
 1 & \text{if } \sum x_i \equiv 0 \pmod m \\
 0 & \text{if } \sum x_i \not\equiv 0 \pmod m
\end{cases}$$</p>

<p>These gates sometimes have surprising power; for example, any boolean function can be represented by a depth-2 circuit having only $\mathrm{MOD}_6$ gates (this is folklore, but I can elaborate if someone is interested).</p>

<p>However, another piece of folklore is that circuits with a single OR gate at the top and $\mathrm{MOD}_m$ gates in the bottom layer (with $m$ fixed once and for all, and in particular the same for all the gates) are not universal, i.e. for any value of $m$ there are boolean functions that cannot be computed by an $\mathrm{OR} \circ \mathrm{MOD}_m$ circuit.</p>

<p>I'm looking for a proof for this claim, or at least some direction.</p>

|
complexity theory logic circuits
| 1 |
2,110 |
Space bounded Turing Machine - clarification on Computational Complexity (book: Arora-Barak ) question 4.1
|
<p>I have the following question from <a href="http://www.cs.princeton.edu/theory/complexity/" rel="noreferrer">Computational Complexity - A modern Approach</a> by Sanjeev Arora and Boaz Barak:</p>

<blockquote>
 <p><em>[Q 4.1]</em><br>
 Prove the existence of a universal TM for space bounded computation (analogously to the deterministic universal TM of Theorem 1.9). </p>
</blockquote>

<p>That is, prove that there exists a Turing Machine $SU$ such that for every string $\alpha$ and input $x$, if the TM $M_\alpha$ -- the TM represented by $\alpha$ -- halts on $x$ before using $t$ cells of its work tape, then $SU(\alpha, t, x) = M_\alpha(x)$ and moreover, $SU$ uses at most $C\cdot t$ cells of its work tape, where $C$ is a constant depending only on $M_\alpha$.</p>

<p>After checking Theorem 1.9 and the time-bounded universal TM, I read the construct $SU(\alpha, t, x)$ as meaning that the Turing machine $SU$ stops after $t$ steps. However, if this is the case, it means we can create a Turing machine equivalent to $M_\alpha$ such that the new Turing machine stops in $t$ steps, where $t$ is the "space" used by the original.</p>

<p>However, this seems like a dubious interchange of space and time. If, on the other hand, $t$ actually means that the second machine also stops within space $t$, then the second part no longer makes sense, because it says $SU$ uses $C \cdot t$ cells, which is not $t$.</p>

<p>So my question is how do I interpret this? Is the first interpretation really possible?</p>

|
complexity theory terminology turing machines space complexity
| 1 |
2,118 |
Number of clique in random graphs
|
<p>There is a family of random graphs $G(n, p)$ with $n$ nodes (<a href="https://en.wikipedia.org/wiki/Random_graph">due to Gilbert</a>). Each possible edge is independently inserted into $G(n, p)$ with probability $p$. Let $X_k$ be the number of cliques of size $k$ in $G(n, p)$.</p>

<p>I know that $\mathbb{E}(X_k)=\tbinom{n}{k}\cdot p^{\tbinom{k}{2}}$, but how do I prove it?</p>
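<p>As a sanity check of the formula (a sketch of my own, not part of the exercise), I can compare the empirical average number of $k$-cliques in $G(n, p)$ against $\tbinom{n}{k}\cdot p^{\tbinom{k}{2}}$ for small parameters:</p>

<pre><code>import random
from itertools import combinations
from math import comb

def count_k_cliques(n, p, k):
    # sample G(n, p) and count its k-cliques by brute force
    edges = {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}
    return sum(all(frozenset(e) in edges for e in combinations(c, 2))
               for c in combinations(range(n), k))

n, p, k, trials = 12, 0.5, 3, 2000
empirical = sum(count_k_cliques(n, p, k) for _ in range(trials)) / trials
print(empirical, comb(n, k) * p ** comb(k, 2))   # the two numbers should be close
</code></pre>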

<p>How to show that $\mathbb{E}(X_{\log_2n})\ge1$ for $n\to\infty$? And how to show that $\mathbb{E}(X_{c\cdot\log_2n}) \to 0$ for $n\to\infty$ and a fixed, arbitrary constant $c>1$?</p>

|
graphs combinatorics probability theory random graphs
| 1 |
2,120 |
Forward checking vs arc consistency on 3-SAT
|
<p>If I were to let the variables be the propositions and the constraint be that all clauses are satisfied, which technique would be more effective in solving 3-SAT: <a href="http://en.wikipedia.org/wiki/Look-ahead_%28backtracking%29#Look_ahead_techniques" rel="nofollow">forward checking</a> or <a href="http://en.wikipedia.org/wiki/Arc_consistency#Arc_consistency" rel="nofollow">arc consistency</a>? From what I gathered, forward checking is $O(n)$, while arc consistency is about $O(8c)$, where $c$ is the number of constraints (according to this <a href="http://www.cs.ubc.ca/~kevinlb/teaching/cs322%20-%202006-7/Lectures/lect11.pdf" rel="nofollow">page</a>). So perhaps forward checking is faster somehow? How should I determine which to use?</p>

|
algorithms satisfiability heuristics 3 sat sat solvers
| 0 |
2,121 |
Complexity of computer algebra for systems of trigonometric equations
|
<p><a href="https://cs.stackexchange.com/questions/1984/proving-the-intractability-of-this-nth-prime-recurrence">As discussed in this question,</a> I drafted a spec algorithm that hinges on finding a specific root of a system of trigonometric equations satisfying the following recurrence:</p>

<p>$\qquad f_{p_0} = 0\\
\qquad p_0 = 2\\
\qquad \displaystyle
 f_{p_n}(x) = f_{p_{n-1}}(x) + \prod_{k=2}^{p_{n-1}} (-\cos(2\pi(x+k-1)/p_{n-1}) + 1)\\
 \qquad \displaystyle
 p_n = \min\left\{ x > p_{n-1} \mid f_{p_n}(x) = 0\right\}$</p>

<p><a href="http://www.wolframalpha.com/input/?i=%E2%88%92cos%282%CF%80%28x%2b1%29/2%29%2b1%2b%28%E2%88%92cos%282%CF%80%28x%2b1%29/3%29%2b1%29%28%E2%88%92cos%282%CF%80%28x%2b2%29/3%29%2b1%29%2b%28%E2%88%92cos%282%CF%80%28x%2b1%29/5%29%2b1%29%28%E2%88%92cos%282%CF%80%28x%2b2%29/5%29%2b1%29%28%E2%88%92cos%282%CF%80%28x%2b3%29/5%29%2b1%29%28%E2%88%92cos%282%CF%80%28x%2b4%29/5%29%2b1%29=0%20for%20x" rel="nofollow noreferrer">Playing with this system a bit over on Wolfram|Alpha</a>, it seems I can get specific answers to the recurrence from their <a href="http://en.wikipedia.org/wiki/Computer_algebra_system" rel="nofollow noreferrer">computer algebra system</a>. Unfortunately, I can find no specific documentation on the methods they're using to solve my equations.</p>

<p>My question, then: </p>

<blockquote>
 <p>What methods (and what time and space complexities) do computer algebra systems use to solve these forms of equations? I suspect the <a href="http://en.wikipedia.org/wiki/Gr%C3%B6bner_basis" rel="nofollow noreferrer">Gröbner basis</a> is commonly used, but I could be very wrong.</p>
</blockquote>

|
reference request runtime analysis mathematical analysis computer algebra mathematical software
| 0 |
2,123 |
Transform regular grammar in linear grammar
|
<p>My problem is how can I transform a <em>regular</em> grammar into a <em>linear</em> grammar?</p>

<p>I know that a linear grammar has the form</p>

<p>$$\begin{align} A &\to w_1Bw_2 \\ 
 A &\to w \end{align}$$</p>

<p>where $A,B \in N$ and $w,w_1,w_2 \in \Sigma^*$.</p>

|
terminology formal grammars
| 0 |
2,127 |
Context-free grammar for $\{ a^n b^m a^{n+m} \}$
|
<p>I've got a problem with this task. I should declare a context-free grammar for this language:</p>

<p>$\qquad \displaystyle L := \{\, a^nb^ma^{n+m} : n,m \in \mathbb{N}\,\}$</p>

<p>My idea is: we need a start symbol, for example $S$. I know that I can generate the first $a$ and the last $a$ with a rule like $S \to aa$, but I don't know how to continue from there.</p>

|
formal languages context free formal grammars
| 1 |
2,130 |
What is the branch of Computer Science that studies how Anti Virus programs work?
|
<p>It is a trivial exercise in finite automata to show that there is no algorithm that can detect all the viruses, yet there are many software companies selling Anti Virus Software.</p>

<p>Is there any part of CS that deals with viruses and anti-virus software?</p>

<p>PS: I am not asking for non-CS justifications of whether to have AV or not, but only what category/subject within CS they come under, if any. If AV is not a subject within CS then that is also an acceptable answer; are there any references within a CS context to viruses and AVs? </p>

|
reference request security
| 0 |
2,149 |
Any very user friendly resources on the Baum-Welch algorithm?
|
<p>I'd like to understand the <a href="https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm" rel="nofollow">Baum-Welch algorithm</a>. I liked <a href="http://www.youtube.com/watch?v=7zDARfKVm7s&feature=related" rel="nofollow">this video</a> on the Forward-Backward algorithm so I'd like a similar one for Baum-Welch.</p>

<p>I'm having trouble coming up with good resources for Baum-Welch. Any ideas?</p>

|
algorithms reference request hidden markov models
| 1 |
2,151 |
Proving NP is a subset of the union of exponential DTIME
|
<p>I need to prove that $\mathsf{NP}$ is a subset of the union of $\mathsf{DTIME}(2^{n^c})$ for all $c > 1$.</p>

<p>Let $L$ be a language/decision problem in $\mathsf{NP}$. Then $L$ can be decided in polynomial time by a Turing machine $M$, given a polynomial-size certificate. So we enumerate all possible certificates of polynomial size. There are $2^l$ possible certificates of length $l$. For certificates of length up to $n^c$, there are $\sum_{l=0}^{n^c} 2^l = 2^{n^c + 1} - 1$ many certificates. Each certificate can be checked in polynomial time, so we get that each problem in $\mathsf{NP}$ can be done in $\mathsf{DTIME}(2^{n^c}n^c)$. What am I doing wrong?</p>

|
complexity theory check my proof
| 0 |
2,152 |
How to prove correctness of a shuffle algorithm?
|
<p>I have two ways of producing a list of items in a random order and would like to determine if they are equally fair (unbiased).</p>

<p>The first method I use is to construct the entire list of elements and then do a shuffle on it (say a Fisher-Yates shuffle). The second method is more of an iterative method which keeps the list shuffled at every insertion. In pseudo-code the insertion function is:</p>

<pre><code>insert( list, item )
 list.append( item )
 swap( list.random_item, list.last_item )
</code></pre>

<p>I'm interested in how one goes about showing the fairness of this particular shuffling. The advantages of this algorithm, where it is used, are enough that even if slightly unfair it'd be okay. To decide I need a way to evaluate its fairness.</p>

<p>My first idea is that I need to calculate the total permutations possible this way versus the total permutations possible for a set of the final length. I'm a bit at a loss however on how to calculate the permutations resulting from this algorithm. I also can't be certain this is the best, or easiest approach.</p>
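<p>As a first, purely empirical check (a sketch of my own, not a proof), I could build a small list with the iterative insert many times and tally how often each permutation of <code>[0, 1, 2]</code> comes out; a fair shuffle should give each of the 6 permutations a frequency of about 1/6:</p>

<pre><code>import random
from collections import Counter

def insert(lst, item):
    lst.append(item)
    j = random.randrange(len(lst))        # swap with a uniformly random slot
    lst[j], lst[-1] = lst[-1], lst[j]

def build(n):
    lst = []
    for item in range(n):
        insert(lst, item)
    return tuple(lst)

trials = 600000
counts = Counter(build(3) for _ in range(trials))
for perm, c in sorted(counts.items()):
    print(perm, c / trials)               # each should be close to 1/6
</code></pre>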

|
algorithms proof techniques randomized algorithms correctness proof randomness
| 1 |
2,154 |
Swap space management during pure demand paging
|
<p>The following is a doubt that I came across while doing an OS home assignment - however, it seems more concept-based than a straightforward coding question, so IMHO I don't think the homework tag is appropriate for this.</p>

<p>In a pure demand paging scheme for multiple processes running at the same time, given a fixed amount of RAM and Swap memory, what happens in the following 2 cases w.r.t the swap space, when</p>

<ol>
<li><p>A process encounters a page-fault, and there are no free frames available in the RAM, hence requiring one of the pages from the process' chunk of Kernel Frames to be written out to swap (for simplicity, I'm not considering the copy-on-write case). Explicitly, where in the Swap space would this frame be written, and what data structures need to be updated for that?</p></li>
<li><p>When a process needs to page-in a particular page, where does it look in the Swap memory, and how would it know if that particular page be present in Swap at all ?</p></li>
</ol>

<p>As you can well imagine, I'm having difficulty understanding how to manage the swap space under a pure demand paging scheme and what data structures would be essential. It would be great if you could refer to any links in your answer (I searched in "Operating System Concepts", 8th edition, by Silberschatz, but I couldn't find an explicit answer to my question).</p>

|
operating systems memory allocation virtual memory paging memory management
| 1 |
2,155 |
How to read typing rules?
|
<p>I started reading more and more language research papers. I find it very interesting and a good way to learn more about programming in general. However, there is usually a section that I struggle with (take for instance part three of <a href="http://math.andrej.com/wp-content/uploads/2012/03/eff.pdf">this</a>) since I lack the theoretical background in computer science: type rules.</p>

<p>Are there any good books or online resources available to get started in this area? <a href="http://en.wikipedia.org/wiki/Type_rules">Wikipedia</a> is incredibly vague and doesn't really help a beginner.</p>

|
logic reference request terminology type theory
| 1 |
2,157 |
NP-Completeness of a Graph Coloring Problem
|
<p><strong>Alternative Formulation</strong></p>

<p>I came up with an alternative formulation of the problem below. The alternative formulation is actually a special case of that problem and uses a bipartite graph to describe it. However, I believe that the alternative formulation is still NP-hard. It uses disjoint sets of incoming and outgoing nodes, which simplifies the problem definition.</p>

<p>We are given $n$ outgoing and $n$ incoming nodes (the red and blue nodes in the figure, respectively) and an $n \times n$ set of edge weights $w_{ij}$ between the outgoing and incoming vertices. The goal of the problem is to color the thick edges in the figure so that a condition holds for every incoming node.</p>

<p><img src="https://i.stack.imgur.com/CsXJr.png" alt="Bipartite graph of the problem"></p>

<blockquote>
 <p>Given a set $\{ O_i \; | \; i=1 \dots n \}$ of output vertices, a set $\{ I_i\; | \; i=1 \dots n \}$ of input vertices, $n \times n$ weights
 $w_{ij} \ge 0$ between $O_i$'s and $I_j$'s for $i,j=1 \dots n$, and a positive
 constant $\beta$, find the minimum number of colors for the edges
 $e_{ii}$ (thick edges in the above figure) such that for all $j=1 \dots n$,</p>
 
 <p>$$ \frac{w_{jj}}{1+\sum_{c(i)=c(j),i \neq j} w_{ij}} \ge \beta $$</p>
 
 <p>where $c(i)$ shows the color of the edge $e_{ii}$.</p>
</blockquote>

<hr>

<p><strong>Old Formulation</strong></p>

<p>The following problem looks NP-hard to me, but I couldn't show it. Any proof/comment to show the hardness or easiness of it is appreciated.</p>

<blockquote>
 <p>Assume $K_n=\langle V,E \rangle$ is a complete weighted directed graph
 with $n$ nodes and $n(n-1)$ edges. Let $w_{ij} \ge 0$ denote the weight
 of the edge $ij$ and $c(ij)$ the color of edge $ij$. Given a subset
 of the edges $T \subseteq E$ and a positive constant $\beta$, the goal is:
 find the minimum number of colors such that for each $e_{ij} \in T$:</p>
 
 <p>$$ \frac{w_{ij}}{1+\sum_{c(kl)=c(ij),kl \neq ij} w_{kj}} \ge \beta. $$
 and
 $$ c(ij) \neq c(ik) \quad \text{for} \quad j \neq k $$</p>
</blockquote>

<p>Please note that in the above problem, only the edges in $T$ need to be colored. That is, the problem can be solved in $\mathcal{O}(|T|!)$ time.</p>

<p><strong>Update:</strong></p>

<p>After Tsuyoshi Ito's comment I updated the problem. The denominator is changed from $1+\sum_{c(kj)=c(ij),k \neq i,e_{kj} \in T} w_{kj}$ to $1+\sum_{c(kl)=c(ij),kl \neq ij} w_{kj}$. Therefore, the denominator contains the weights outside $T$ as well. That's actually why I mentioned the complete graph in the definition.</p>

<p>I also added an additional constraint, $c(ij) \neq c(ik) \quad \text{for} \quad j \neq k$. That means the outgoing edges from a node must have different colors (but the incoming colors can be the same as long as the inequality holds). This puts an intuitive lower bound on the number of colors, namely the maximum out-degree of the nodes in $T$.</p>

<p>As Tsuyoshi mentioned, $w_{ij}$'s, $T$, and $\beta$ are inputs to the problem and the edge colors are the output.</p>

<p><strong>Update 2:</strong></p>

<p>The problem does not require the edges $e_{ij}$ and $e_{ji}$ to have the same color.</p>

|
complexity theory graphs np complete
| 1 |
2,164 |
Transform in linear grammar
|
<p>I have the following regular grammar:
$$S \rightarrow aS | cS | bQ_1$$
$$Q_1 \rightarrow bQ_2$$
$$Q_2 \rightarrow aQ_3 | cQ_3 | bQ_1$$
$$Q_3 \rightarrow aQ_4 | cQ_4$$ 
$$Q_4 \rightarrow \varepsilon$$</p>

<p>The question is to transform this into a linear grammar with fewer nonterminals than the regular grammar, and my idea was:
$$S \rightarrow aSa | cSc | aSc | cSa | bQ_1a | bQ_1c$$
$$Q_1 \rightarrow b$$</p>

<p>I don't know how to handle the rest. Could you help me solve this problem?</p>

|
formal grammars
| 1 |
2,166 |
generation of linear grammar
|
<p>We have the following linear grammar:
$$E \rightarrow aO | bO | bbE | bb$$
$$O \rightarrow aE | bE | abaE | aba$$
Does this linear grammar generate a regular language? If yes, why?
Our alphabet is $\Sigma = \{a,b\}$ and our nonterminals are $E$ and $O$. The start symbol is $E$.</p>

|
formal languages formal grammars
| 1 |
2,175 |
Constructing a data structure for a computer algebra system
|
<p>In thinking about how to approach this problem, I think several things will be required, some trivial (a rough sketch follows the list):</p>

<ol>
<li>An expression tree where every non-leaf node is an operation (not sure if that part is redundant), but not every node has exactly two children.</li>
<li>All operation nodes have a defined number of children that they must have: some operators are unary (like $!$), others are binary ($*,+,-,$ etc.), and still others are n-ary ($f(a,b,d)$ and versions with different numbers of arguments).</li>
<li>All leaf nodes are some type of number</li>
</ol>
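<p>Here is the rough sketch mentioned above (Python, purely illustrative; the names <code>Num</code>, <code>Op</code> and <code>evaluate</code> are my own placeholders): operator nodes carry a fixed arity and a list of children, leaves are plain numbers, and evaluation is a post-order traversal.</p>

<pre><code>from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Num:
    value: float                      # leaf node: some type of number

@dataclass
class Op:
    symbol: str                       # e.g. "+", "*", "!", "f"
    arity: int                        # 1 for unary, 2 for binary, n for n-ary
    children: List["Expr"] = field(default_factory=list)

    def is_complete(self) -> bool:    # "the tree must be complete"
        return len(self.children) == self.arity

Expr = Union[Num, Op]

def evaluate(node: Expr) -> float:
    # post-order traversal for the "Evaluation" operation
    if isinstance(node, Num):
        return node.value
    args = [evaluate(c) for c in node.children]
    table = {"+": sum, "*": lambda xs: xs[0] * xs[1]}   # only a couple of operators shown
    return table[node.symbol](args)
</code></pre>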

<p>I am under the impression that the tree should not explicitly retain information regarding the order of operations, but rather that information should be used in the parsing stage to insert things into the tree correctly.</p>

<p>This leads to the question: how should inserting at a specific position in the tree be done? Simply passing a list of directions (from the root, take child 0, then child 1, etc., then insert) would work, but it seems overly clunky.</p>

<p>Or should I avoid that situation entirely (I'm not talking about editing an equation here, just building a representation of one) by using the fact that in some sense the tree must be complete: all binary operations MUST have two children, etc., and even operators that are seemingly ambiguous (the $-$ sign, for example) have these ambiguities resolved before this point. That would allow me to insert "in order".</p>

<p>Am I taking a reasonable approach? Does it make no sense whatsoever?</p>

<p>Additionally, are there papers or articles that I should read about CAS systems?</p>

<p><strong>Clarification:</strong> The tree will need to support three different compound operations.</p>

<ol>
<li>Creation: (from a string, but how to actually do that is beyond the scope of this question)</li>
<li>Reduction: (to some type of canonical form) so that if $a+b$ and $b+a$ are both entered and reduced, they will form identical trees.</li>
<li>Evaluation: Be able to traverse the tree</li>
</ol>

<p>These are all the operations that need to be supported. There are probably many other more basic operations that may need to be supported, but in this case it only matters that the three operations above are supported. My understanding is that search for example is not a property that will be required, but deletion will be (of a whole subtree).</p>

|
data structures computer algebra mathematical software
| 1 |
2,184 |
Massalin's Synthesis Quajects equivalent to ASM generating macros used in Game Oriented Assembly LISP?
|
<p><a href="http://valerieaurora.org/synthesis/SynthesisOS/abs.html" rel="nofollow">Alexia Massalin's Dissertation on Synthesis</a> was a Phd thesis on Operating Systems that contained a concept called 'Quajects' (see <a href="http://valerieaurora.org/synthesis/SynthesisOS/ch4.html" rel="nofollow">Chapter 4</a>). </p>

<p>This is some <a href="http://news.ycombinator.com/item?id=4030665" rel="nofollow">additional commentary on the Phd Thesis</a>. </p>

<p>Best I can work out, a Quaject is a construct that generates assembler customised for the function being used at the time (perhaps like a <a href="http://en.wikipedia.org/wiki/Just-in-time_compilation" rel="nofollow">JIT</a>). </p>

<p>The project that I've seen that came closest to this was <a href="http://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp" rel="nofollow">Game Oriented Assembly LISP</a> (GOAL), a framework used in Crash Bandicoot that used ASM-generating LISP macros to speed up the development iteration process and generate the production code. </p>

<p>Can we say that the Macros generating ASM in GOAL were quajects? (yes or no question - please explain why if yes, and reasons if no.)</p>

|
operating systems
| 1 |
2,186 |
Dependency Graph - Acyclic graph
|
<p>I have a directed acyclic graph where edge (A,B) means that vertex A depends on vertex B. </p>

<p>Vertex deletions have the following restrictions:</p>

<ol>
<li>When vertex B is removed, all vertices that depend on it should also be removed. </li>
<li>When vertex A is removed and vertex A was the only vertex that depends on B, vertex B should also be removed.</li>
</ol>

<p><img src="https://i.stack.imgur.com/qeDQd.png" alt="enter image description here"></p>

<p>I need to list the vertices which are deleted when

<ol>
<li><p>Vertex B is deleted. My solution is B, E and J because</p>

<ul>
<li>B -- deleted</li>
<li>E -- because of condition 2, B is removed and B was the only vertex that depends on E</li>
<li>J -- because of condition 2</li>
</ul></li>
<li><p>Vertex C is deleted. My solution is C, F, A, G, ... ?</p>

<ul>
<li>C -- deleted</li>
<li>F -- because of condition 2 (C is the only vertex to F)</li>
<li>A -- condition 1 (depends on C)</li>
<li>G -- condition 2 (C is the only vertex to G)</li>
<li>I think here the process goes on and cascades. Is that correct?</li>
</ul></li>
</ol>

<p>What could an algorithm look like for such a vertex dependency network that supports these deletions?</p>
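<p>Here is a rough sketch (my own, possibly incorrect) of the kind of cascading deletion I have in mind; it just mirrors the two conditions above, assuming we store for every vertex <code>v</code> its dependencies <code>deps[v]</code> and its dependants <code>rdeps[v]</code> as sets:</p>

<pre><code>def delete(start, deps, rdeps):
    deleted, work = set(), [start]
    while work:
        v = work.pop()
        if v in deleted:
            continue
        deleted.add(v)
        # condition 1: every vertex that depends on v is removed as well
        work.extend(u for u in rdeps.get(v, ()) if u not in deleted)
        # condition 2: a dependency of v is removed once its last dependant is gone
        for w in deps.get(v, ()):
            if w not in deleted and all(u in deleted for u in rdeps.get(w, ())):
                work.append(w)
    return deleted

# For the graph above, delete('B', deps, rdeps) should return {'B', 'E', 'J'}.
</code></pre>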

<p>PS: this is an old exam exercise (2008/09); I am using it to prepare for my own exam in the middle of June.</p>

|
algorithms graphs
| 0 |
2,188 |
How to use a greedy algorithm to find the non-decreasing sequence closest to the given one?
|
<p>You are given $n$ integers $a_1, \ldots, a_n$, all between $0$ and $l$. Under each integer $a_i$ you should write an integer $b_i$ between $0$ and $l$ with the requirement that the $b_i$'s form a non-decreasing sequence. Define the deviation of such a sequence to be $\max(|a_1-b_1|, \ldots, |a_n-b_n|)$. Design an algorithm that finds the $b_i$'s with the minimum deviation in runtime $O(n\sqrt[4]{l})$.</p>

<p>I honestly have no clue how to even begin to solve this question. It looks like a dynamic programming problem to me, but the professor said that it should be solved using a greedy algorithm. It would be much appreciated if someone could point me in the right direction by giving a small hint.</p>

|
algorithms optimization greedy algorithms subsequences
| 1 |
2,189 |
What is the proof for the lemma "For every iteration of the Gomory-Hu algorithm, there is a representant pair for each edge"?
|
<p>For a given undirected graph $G$, a <a href="http://en.wikipedia.org/wiki/Gomory%E2%80%93Hu_tree" rel="nofollow">Gomory-Hu tree</a> is a graph which has the same nodes as $G$, but its edges represent the minimal cut between each pair of nodes in $G$. The Gomory-Hu algorithm finds such a tree for a graph. A representant pair of nodes is defined as follows: if $R$ and $S$ are two components of the Gomory-Hu tree, and there is an edge $e$ between them, then the nodes $r \in R$ and $s \in S$ are representants if the weight of the edge $(r,s)$ is the same as the weight of $e$. </p>

<p>I have to learn not only the algorithm, but also all the lemmas needed to prove that it works. For this specific lemma, there is a proof given in my learning materials, but I am afraid I don't understand how it works. </p>

<p>It starts by picking two components of the Gomory-Hu tree, $A$ and $B$, with an edge $h$ between them, $a \in A$ and $b \in B$ being the representants. In the next iteration, nodes $x$ and $y$ in $A$ are picked, and a new minimal $(x,y)$-cut is calculated (dividing $A$ into the subsets $X$ and $Y$), such that now $h$ connects $X$ and $B$. If $a \in X$, then $a$ and $b$ are still representants. But if $a \in Y$, the proof claims that $x$ and $b$ are the new representants of $h$. </p>

<p>For this, it states that </p>

<blockquote>
 <p>The cut which created $h$ divides $x$ and $b$. From that, it follows that $f(x,b) \le f(a,b)$. </p>
</blockquote>

<p>[It uses $f(a,b)$ to denote the flow in the minimal cut between nodes $a$ and $b$.] Then it goes on to prove that also $f(x,b) \ge f(a,b)$. And then the two flows must be equal, so the flow between $x$ and $b$ is the same as the flow in the minimal $(a,b)$-cut, so $x$ and $b$ are representants. </p>

<p>But as I understand the algorithm, the cut which created $h$ was a minimal cut between the nodes $a$ and $b$. The node $x$ wasn't even a special node at the time the graph was divided into components $A$ and $B$. Yes, this cut happens to divide $x$ and $b$ too, but there is no guarantee that it is the minimal cut between $x$ and $b$ (this is exactly what we are trying to prove here). So I think that we can conclude that $f(x,b) \ge f(a,b)$, but not that $f(x,b) \le f(a,b)$. I suspect that there is an error in my reasoning and not in the reasoning of the prof who wrote the learning materials, but where is it? </p>

<p>And if there actually is an error in this proof, what is the correct proof? </p>

|
algorithms graphs algorithm analysis
| 0 |
2,192 |
Finding lambda of Master Theorem
|
<p>Suppose I have a recurrence like $T(n)=2T(n/4)+\log(n)$ with $a=2, b=4$ and $f(n)=\log(n)$.</p>

<p>That should be <a href="http://en.wikipedia.org/wiki/Master_theorem#Case_1" rel="nofollow">case 1 of the Master theorem</a> because $n^{1/2}>\log(n)$. There is also a lambda in case 1: $f(n)=O(n^{(1/2)-\lambda})$. Is this correct? And how can I find this lambda?</p>

|
proof techniques asymptotics recurrence relation master theorem
| 1 |
2,193 |
How can I prove that a complete binary tree has $\lceil n/2 \rceil$ leaves?
|
<p>Given a complete binary tree with $n$ nodes, I'm trying to prove that it has exactly $\lceil n/2 \rceil$ leaves.
I think I can do this by induction.</p>

<p>For $h(t)=0$, the tree is empty. So there are no leaves and the claim holds for an empty tree.</p>

<p>For $h(t)=1$, the tree has one node, which is also a leaf, so the claim holds.
Here I'm stuck: I don't know what to choose as the induction hypothesis or how to do the induction step.</p>

|
data structures graphs proof techniques combinatorics binary trees
| 1 |
2,197 |
How do I test if a polygon is monotone with respect to an arbitrary line?
|
<blockquote>
 <p><strong>Definition</strong>: A polygon $P$ in the plane is called monotone with respect to a straight line $L$, if every line orthogonal to $L$ intersects $P$ at most twice.</p>
</blockquote>

<p>Given a polygon $P$, is it possible to determine if there exists any line $L$ such that the polygon $P$ is monotone with respect to $L$? If yes, how?</p>

<p>Previously, I asked a <a href="https://cs.stackexchange.com/q/1577/20691">related question</a> (where I asked how to determine if a polygon is monotone with respect to a particular line), but now I am interested in the case when $L$ is <strong>not</strong> given or specified in advance.</p>

|
algorithms computational geometry
| 1 |
2,198 |
Efficient subtype testing
|
<p>Languages like Java, C#, Eiffel, and C++ have subtype hierarchies which are directed acyclic graphs, due to interfaces in Java and C# and multiple inheritance in Eiffel and C++. An obvious way to check whether type $A$ is a subtype of type $B$ is to traverse the graph of the subtype hierarchy starting at $A$ to see whether type $B$ appears 'above' it. This surely is not the most efficient way to implement subtype tests.</p>
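<p>For concreteness, the "obvious way" I mean is roughly the following sketch (Python-style pseudocode of my own; <code>supers</code> is assumed to map each type to its direct supertypes):</p>

<pre><code>def is_subtype(A, B, supers):
    # walk the supertype DAG upwards from A and look for B
    stack, seen = [A], set()
    while stack:
        t = stack.pop()
        if t == B:
            return True
        if t in seen:
            continue
        seen.add(t)
        stack.extend(supers.get(t, ()))
    return False
</code></pre>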

<blockquote>
 <p>What techniques exist to efficiently implement subtype testing for modern OO languages?</p>
</blockquote>

<p>I'm interested in efficiency both in terms of time and memory and any trade-offs between the two.</p>

|
programming languages compilers typing
| 0 |
2,199 |
Streaming algorithm and random access
|
<p>Consider an array $X$ of $n$ cells, each containing a number from $\{1,..., n\}$. There is at least
one duplicate number, i.e., a number that appears at least twice. I want to output <em>some</em> duplicate number. When streaming we may pass over $X$ more than once. The inspection of a cell generates cost $1$. The cost of a run of an algorithm is the sum of all individual costs. I can only store $\log_2 n$-bit numbers.
I tried to do that with a streaming algorithm that uses additional memory $O(1)$ with cost $O(n^2)$. Is it possible to state a random access algorithm that uses additional memory $O(\log_2 n)$ with cost $O(n)$?</p>
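<p>The $O(1)$-memory, $O(n^2)$-cost approach I tried looks roughly like this (a Python sketch, where <code>X</code> is the array): for every candidate value $v$ make one pass over $X$ and count how often $v$ occurs.</p>

<pre><code>def find_duplicate_quadratic(X):
    n = len(X)
    for v in range(1, n + 1):                  # candidate value: one log n-bit number
        count = sum(1 for x in X if x == v)    # one full pass over X, cost n
        if count >= 2:
            return v                           # some duplicate number
</code></pre>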

<p>Which algorithm solves the problem using additional memory $O(1)$ with cost $O(n^2)$?
Which algorithm solves the problem using additional memory $O(\log_2 n)$ with cost $O(n)$?</p>

<p>My problem is similar to the cycle detection problem, but I don't know how to use the cycle detection problem to solve mine. Is there maybe a simpler way that I can't see now?</p>

|
algorithms randomized algorithms randomness streaming algorithm
| 0 |
2,200 |
Weighted subset sum problem
|
<p>Given an integer sequence $\{ a_1, a_2, \ldots, a_N \}$ that has length $N$ and a fixed integer $M\leq N$, the <a href="http://opc.iarcs.org.in/public/WEIGHTED-SUM.pdf" rel="nofollow noreferrer">problem</a> is to find a subset $A =\{i_1, \dots, i_M\} \subseteq [N]$ with $1 \leq i_1 \lt i_2 \lt \dots \lt i_M \leq N$ such that</p>

<p>$\qquad \displaystyle \sum_{j=1}^M j \cdot a_{i_j}$ </p>

<p>is maximized.</p>

<hr>

<p>For instance, if the given sequence is $-50; 100; -20; 40; 30$ and $M = 2$, the best weighted sum arises when we choose positions 2 and 4. </p>

<p>So that we get a value $1 \cdot 100 + 2 \cdot 40 = 180$.</p>

<p>On the other hand, if the given sequence is $10; 50; 20$ and $M$ is again 2, the best option is to choose positions 1 and 2, so that we get a value $1 \cdot 10 + 2 \cdot 50 = 110$.</p>
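<p>Just to make the examples concrete, a brute-force reference (my own sketch, clearly not the intended efficient algorithm) would try every increasing index subset of size $M$:</p>

<pre><code>from itertools import combinations

def best_weighted_sum(a, M):
    return max(sum((j + 1) * a[i] for j, i in enumerate(idx))
               for idx in combinations(range(len(a)), M))

print(best_weighted_sum([-50, 100, -20, 40, 30], 2))  # 180
print(best_weighted_sum([10, 50, 20], 2))             # 110
</code></pre>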

<hr>

<p>To me it looks similar to the <a href="http://en.wikipedia.org/wiki/Maximum_subarray_problem" rel="nofollow noreferrer">maximum subarray problem</a>, but I can think of many examples in which the maximum subarray is not the best solution.</p>

<p>Is this problem an instance of a well studied problem? What is the best algorithm to solve it?</p>

<p>This question was inspired by <a href="https://stackoverflow.com/questions/10861642/find-maximum-weighted-sum-over-all-m-subsequences">this StackOverflow question</a>.</p>

|
algorithms optimization
| 1 |
2,201 |
Optimal myopic maze solver
|
<p>I was fooling around with <a href="http://neil.fraser.name/software/blockly/demos/maze/index.html" rel="nofollow noreferrer">Google Blocky's Maze demo</a>, and remembered the old rule that if you want to solve a maze, just keep your left hand to the wall. This works for any simple-connected maze and can be implemented by a finite transducer.</p>

<p>Let our robot be represented by a transducer with the following actions, and observables:</p>

<ul>
<li>Actions: go forward ($\uparrow$), turn left ($\leftarrow$), turn right ($\rightarrow$)</li>
<li>Observables: wall ahead ($\bot$), no wall ahead ($\top$)</li>
</ul>

<p>Then we can build the left-hand maze solver as (pardon my lazy drawing):</p>

<p><img src="https://i.stack.imgur.com/vAnFO.png" alt="transducer to solve the maze"></p>

<p>Here, seeing an observable makes us follow the appropriate edge out of the current state while executing the action associated with that edge. This automaton will solve all simply-connected mazes, although it might take its time following dead ends. We call another automaton $B$ <em>better than</em> $A$ if:</p>

<ol>
<li><p>$B$ takes strictly more steps on only a finite number of mazes, and</p></li>
<li><p>$B$ takes strictly fewer steps (on average; for probabilistic variants) on an infinite number of mazes.</p></li>
</ol>

<p>My two questions:</p>

<ol>
<li><p><strong>Is there a finite automaton <em>better than</em> the one drawn above?</strong> What if we allow probabilistic transducers?</p></li>
<li><p><strong>Is there a finite automaton for solving mazes that are not necessarily simply-connected?</strong></p></li>
</ol>

|
automata finite automata artificial intelligence
| 1 |
2,204 |
Do the minimum spanning trees of a weighted graph have the same number of edges with a given weight?
|
<p>If a weighted graph $G$ has two different minimum spanning trees $T_1 = (V_1, E_1)$ and $T_2 = (V_2, E_2)$, then is it true that for any edge $e$ in $E_1$, the number of edges in $E_1$ with the same weight as $e$ (including $e$ itself) is the same as the number of edges in $E_2$ with the same weight as $e$? If the statement is true, then how can we prove it?</p>

|
graphs spanning trees weighted graphs
| 1 |
2,206 |
Advanced placement CS A Exam
|
<p>What would be the standard topics covered in an 'Advanced Placement Computer Science A' test? Is there any good study reference someone can share? </p>

<p><strong>Edit: Answer</strong></p>

<p>An adjustment for my search term returned a <a href="http://en.wikipedia.org/wiki/Advanced_Placement_Computer_Science" rel="nofollow">Wikipedia page</a> just for this type of exam and the AB version.</p>

|
reference request education
| 1 |
2,208 |
Reducing minimum vertex cover in a bipartite graph to maximum flow
|
<p>Is it possible to show that the minimum vertex cover in a bipartite graph can be reduced to a maximum flow problem? Or to the minimum cut problem (then follow max-flow min-cut theorem, the claim holds).</p>

<p>Intuitively: for each edge carrying flow, pick one endpoint; this gives a minimum vertex cover in the bipartite graph. But can this be shown rigorously?</p>

|
complexity theory graphs reductions network flow
| 0 |
2,212 |
Show that the halting problem is decidable for one-pass Turing machines
|
<p>$L=\{\langle M,x\rangle \mid M\text{'s transition function can only move right and } M\text{ halts on } x \}$. I need to show that $L$ is recursive/decidable.</p>

<p>I thought of checking the encoding of $M$ first and determining whether its transition function moves only right (can I do that?). If so, then try to simulate $M$ on $x$ for $|Q|+1$ steps; if it stops, then $\langle M,x\rangle \in L$, otherwise it is not.</p>

<p>Is this correct?</p>

|
formal languages computability turing machines check my proof
| 1 |
2,215 |
Alpha-Beta Pruning with simultaneous moves?
|
<p>I have a game I'm building some ai for that has 2 players making simultaneous moves. In this game there is exactly one move where, if they both make it at the same time, the outcome is different than if they'd made it separately (all other moves are pretty independent).</p>

<p>Anyway, I'm trying to find a good algorithm to throw at it. Minimax with alpha-beta pruning seems like it would be a good candidate if the players were making alternating moves, but not for simultaneous ones. I found <a href="http://www.lamsade.dauphine.fr/~saffidine/Papers/2012/Alpha-Beta%20Pruning%20for%20Games%20with%20Simultaneous%20Moves.pdf">a paper (pdf)</a> on the topic, but it's a little over my head - I'm having trouble reading the pseudocode.</p>

<p>So, can someone either help clarify that approach, suggest another way to accomplish alpha-beta pruning on such a game, or suggest a better algorithm entirely?</p>

|
algorithms artificial intelligence search algorithms game theory
| 0 |
2,216 |
Show that a language is not regular using the Pumping Lemma
|
<blockquote>
 <p><strong>Possible Duplicate:</strong><br>
 <a href="https://cs.stackexchange.com/questions/1031/how-to-prove-that-a-language-is-not-regular">How to prove that a language is not regular?</a> </p>
</blockquote>



<p>Given a language $L = \{a^pb^{2p} \mid p \ge 1\}$, how could I show, using the Pumping Lemma that $L$ is not regular?</p>

|
formal languages regular languages pumping lemma
| 0 |
2,218 |
Turing machine and language decidability
|
<p>The document I am reading is here: <a href="http://www.cs.odu.edu/~toida/nerzic/390teched/tm/definitions.html" rel="nofollow noreferrer">Turing Machines</a></p>

<p>Before getting into the question, here is the notation used on the picture:</p>

<blockquote>
 <p>Here $\Delta$ denotes the blank and R, L and S denote move the head right, left
 and do not move it, respectively. A transition diagram can also be
 drawn for a Turing machine. The states are represented by vertices and
 for a transition $\delta( q, X ) = ( r, Y, D )$ , where D represents R, L 
 or S , an arc from q to r is drawn with
 label ( X/Y , D ) indicating that the state is changed from q to r,
 the symbol X currently being read is changed to Y and the tape head is
 moved as directed by D.</p>
</blockquote>

<p>According to the document:</p>

<blockquote>
 <p>A Turing machine T is said to decide a language L if and only if T
 writes "yes" and halts if a string is in L and T writes "no" and halts
 if a string is not in L</p>
</blockquote>

<p>Here is the three examples:</p>

<ul>
<li>Case 1:</li>
</ul>

<p><img src="https://i.stack.imgur.com/lGDWj.jpg" alt="Case 1"></p>

<ul>
<li>Case 2:</li>
</ul>

<p><img src="https://i.stack.imgur.com/SlYEj.jpg" alt="Case 2"></p>

<ul>
<li>Case 3:</li>
</ul>

<p><img src="https://i.stack.imgur.com/Eextt.jpg" alt="Case 3"></p>

<p>I just want to verify my understanding. According to the definition, the Turing machines in case 1 and case 2 cannot decide the language because they cannot tell whether inputs other than { a } (such as aa, aaa, aaaa, ...) are in L or not. </p>

<p>In case 2, if another a appears after the first a, or if the input is empty, the machine goes to state S and loops forever. </p>

<p>In case 3, if <code>a</code> is detected and only a single <code>a</code> exists, that <code>a</code> is replaced by <code>1</code> and the machine accepts. Otherwise, a <code>0</code> is replaced and the input is decided not in the language.</p>

<p>Am I correct on all of these? However, in case 3, what if I give an input which contains a character other than <code>a</code> (such as the string <code>ab</code> or <code>bc</code>)? Or does a TM only decide languages over the alphabet $\Sigma$ allowed by that Turing machine?</p>

<p>If the string is longer than a single <code>a</code> (like <code>aa</code>, <code>aaa</code>, <code>ab</code>, <code>bc</code>, ...), the machine may loop forever (as in case 2) or halt without accepting (in other words, it "crashes" because it has no transition rule for a symbol in the input, such as <code>b</code> for the above Turing machines). Is this correct as well?</p>

|
computability turing machines
| 1 |
2,221 |
Complexity class that properly included in DLOGTIME
|
<p>Is there any decision problem that is in a complexity class properly included in DLOGTIME? (except $O(1)$, of course)</p>

<p>If there is, can we create complete problems for DLOGTIME? That is, can there be reductions computable in $O(\log(\log n))$ time or less?</p>

|
complexity theory time complexity complexity classes
| 0 |
2,222 |
Are there complete problems for P and NP under other kinds of reductions?
|
<p>I know that the complexity class $\mathsf{P}$ has complete problems w.r.t. $\mathsf{NC}$ and $\mathsf{L}$ reductions.</p>

<p>Are these two classes the only possible classes of reductions under which $\mathsf{P}$ has complete problems? </p>

<p>Also, what classes of reduction can be used for $\mathsf{NP}$ beside polynomial-time reductions?</p>

|
complexity theory reductions
| 1 |
2,225 |
Point Location Problem in Polygon in Repetitive Mode for a Simple Polygon
|
<p>I consider <a href="http://en.wikipedia.org/wiki/Point_in_polygon" rel="nofollow">Point Location Problem in Polygon</a> in repetitive mode in the case of simple polygon.</p>

<p>In computational geometry, the point-in-polygon problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon.</p>

<p>There are a few methods that work in the single-shot setting, where the input is a polygon $P$ and a single point $q$ (no preprocessing time). The ray casting algorithm is the best-known single-shot algorithm; it takes $O(n)$ time to determine whether a point $q$ belongs to polygon $P$. </p>

<p>In addition, there is a repetitive approach, where instead of a single point $q$ we have to check a sequence of points, so preprocessing is required. The division-wedge method works in this repetitive mode: its query time is $O(\log n)$ and its preprocessing time is $O(n)$. Division wedge assumes that there is a central point in the polygon, visible from every vertex of the polygon (a point of the kernel of the polygon). The problem is that a central point can be easily determined for a convex polygon as well as for a star-shaped polygon, but what do we do in the case of a simple polygon?</p>

<p>If the division-wedge method is applied to a simple polygon, how can we determine such a central point? And if the division-wedge method cannot be applied, is there a more efficient way to solve the problem for a simple polygon than for an arbitrary planar subdivision?</p>

|
algorithms computational geometry
| 1 |
2,226 |
How to find spanning tree of a graph that minimizes the maximum edge weight?
|
<p>Suppose we have a graph $G$. How can we find a spanning tree that minimizes the maximum weight over all the edges in the tree? I am convinced that simply finding an MST of $G$ would suffice, but I am having a lot of trouble proving that my idea is actually correct. Can anyone show me a proof sketch or give me some hints as to how to construct the proof? Thanks!</p>

|
algorithms graphs optimization spanning trees
| 0 |
2,230 |
Resolution complexity versus a constrained SAT algorithm
|
<p>EDIT: ad hoc speed-ups are excluded.</p>

<p>We have the result that <a href="http://homepages.cwi.nl/~rdewolf/resolutionlowerbound.pdf" rel="nofollow">propositional resolution requires exponential time</a>. The resolution result uses the proof of the pigeonhole principle as an example of a proof that takes exponential time. </p>

<p>Let's also say we have a hypothetical algorithm M for SAT that runs in polynomial time. 
<strong>EDIT : M is correct, complete, sound, and general-purpose; it contains no ad hoc speed-up rules for the pigeonhole principle or any other theorem that requires exponential length in resolution.</strong> M takes its input in clausal form; we'll set up the input like a resolution proof where the consequent is negated to lead to unsatisfiability if the theorem is true. Now let's consider how the proof of the pigeonhole principle works in algorithm M with a strong condition C added:</p>

<p>C. We are given that M simply transforms one clause (or set of clauses) to another clause (or set of clauses). Every such transformation is logically sound.</p>

<p>Some questions; please point out the most fatal flaws:</p>

<ol>
<li>Given condition C above, and since M's rule system must be finite, correct, and complete, can we conclude that there is a translation from M's rule system to an equivalent set of expansions based on resolution?</li>
<li>Are we now in a place where we can conclude that M would produce a computation that could be mapped by the translation in point 1 above into an impossible polynomial-time resolution proof of the pigeonhole principle?</li>
</ol>

|
complexity theory logic satisfiability sat solvers
| 0 |
2,233 |
Automata that recognizes Kleene closure of permutations of three symbols
|
<p>This is an automata theory homework question.</p>

<p>I need to create DFA that meets the following criteria: </p>

<ul>
<li><p>Alphabet $\Sigma = \{ a, b, c \}$</p></li>
<li><p>Machine accepts empty string and strings of length that is a multiple of three, with every block of three containing one $a$, one $b$ and one $c$.</p></li>
</ul>

<p>So far, I came up with this machine; it is the obvious one:</p>

<p><img src="https://i.stack.imgur.com/r3rCz.jpg" alt="the machine"></p>

<p>However, I can't get it to accept the empty string. Does that mean there should be a transition q0 → q3?</p>

<p><strong>Update1:</strong> Following Dave Clarke's comments, I made some corrections.</p>

<ol>
<li>A regular expression for this machine is $(www)^*$ where $w = \{abc,acb,…\}$. Therefore, to represent a multiple of three, I need to copy the machine in the picture 3 times. The final state should have arrows pointing back to the first copy for the transitions marked 'a', 'b', 'c'. </li>
<li>As was pointed out, since this is a DFA, I need to add the missing transitions; this can be accomplished by adding "dead" states. </li>
<li>The empty string should correspond to an $\varepsilon$-transition from qStarting → qFinal.</li>
</ol>

<p><strong>Update2:</strong> 
As was pointed out, my regular expression is wrong! It should be $(w)^*$. Here is the final machine, which I think should be correct (I didn't include the "dead" state).<img src="https://i.stack.imgur.com/L3xZX.jpg" alt="enter image description here"></p>

|
formal languages automata finite automata
| 1 |
2,235 |
Can joins be parallelized?
|
<p>Suppose we want to join two relations on a predicate. Is this in NC?</p>

<p>I realize that a proof of it not being in NC would amount to a proof that $P\not=NC$, so I'd accept evidence of it being an open problem as an answer.</p>

<p>I'm interested in the general case as well as specific cases (e.g. perhaps with some specific data structure it can be parallelized). </p>

<p>EDIT: to bring some clarifications from the comments into this post:</p>

<ul>
<li>We could consider an equijoin $A.x = B.y$. On a single processor, a hash-based algorithm runs in $O(|A|+|B|)$, and this is the best we can do since we have to read each set (a sketch of this follows the list).</li>
<li>If the predicate is a "black box" where we have to check each pair, there are $|A|\cdot|B|$ pairs, and each one could be in or not, so $2^{ab}$ possibilities. Checking each pair divides the possibilities in half, so the best we can do is $O(ab)$.</li>
</ul>
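<p>The single-processor hash equijoin mentioned in the first bullet is, roughly, the following sketch (Python, my own illustration; $A$ and $B$ are lists of records/dicts and <code>x</code>, <code>y</code> are the join attributes):</p>

<pre><code>from collections import defaultdict

def hash_equijoin(A, B, x, y):
    buckets = defaultdict(list)
    for a in A:                          # build phase: O(|A|) expected
        buckets[a[x]].append(a)
    return [(a, b) for b in B            # probe phase: O(|B| + output size)
                   for a in buckets.get(b[y], [])]
</code></pre>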

<p>Could either of these (or some third type of join) be improved to $O(\log^k n)$ time on multiple processors?</p>

|
complexity theory time complexity parallel computing database theory descriptive complexity
| 1 |
2,237 |
Does Peterson's 2-process mutual exclusion algorithm account for dying processes?
|
<p>I think that in <a href="http://en.wikipedia.org/wiki/Peterson%27s_algorithm" rel="nofollow noreferrer">Peterson's algorithm</a> for <a href="http://en.wikipedia.org/wiki/Mutual_exclusion" rel="nofollow noreferrer">mutual exclusion</a>, if the process first to enter the critical section were to die or be cancelled, the other process would loop forever, waiting to enter the critical section.</p>

<p>In the picture, if process 1 is stopped, the rest of the processes behind process 1 will execute up to where process 1 is, but then loop.</p>

<p><img src="https://i.stack.imgur.com/Tz6vK.jpg" alt="enter image description here"></p>

<p>What happens if the process that reaches the critical section first dies before leaving it?</p>

|
programming languages concurrency mutual exclusion
| 0 |
2,238 |
Turing Recognisable => enumerable
|
<p>I get the proof going from an enumerator to a Turing machine (keep running the enumerator and check whether it ever outputs the input), but I don't see how the other direction works.</p>

<p>According to my notes and the book (Introduction to the Theory of Computation - Sipser), to get an enumerator from a Turing machine, we basically generate all strings over the alphabet. We then run the TM on each such input; if it accepts, print the string out, then move on to the next string, and repeat ad infinitum.</p>

<p>The problem I am having is that surely this requires the language to be decidable. Otherwise the simulation might get stuck on the third word in some infinite loop, doomed never to accept or reject, and certainly never to print out the whole language. </p>

<p>What am I missing?</p>

|
computability turing machines intuition
| 1 |