qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
1,728,477 | I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

BLOCK_SIZE = 32  # 32 = 256-bit | 16 = 128-bit
TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

class DataSplit():
    def __init__(self, fileObj, chunkSize):
        self.fileObj = fileObj
        self.chunkSize = chunkSize

    def split(self):
        while True:
            data = self.fileObj.read(self.chunkSize)
            if not data:
                break
            yield data

class encThread(threading.Thread):
    def __init__(self, seg_queue, result_queue, cipher):
        threading.Thread.__init__(self)
        self.seg_queue = seg_queue
        self.result_queue = result_queue
        self.cipher = cipher

    def run(self):
        while True:
            #Grab a data segment from the queue
            data = self.seg_queue.get()
            encSegment = []
            for lines in data:
                encSegment.append(self.cipher.encrypt(lines))
            self.result_queue.put(encSegment)
            print "Segment Encrypted"
            self.seg_queue.task_done()

start = time.time()

def main():
    seg_queue = Queue.Queue()
    result_queue = Queue.Queue()
    estSegCount = (os.path.getsize(TFILE) / CHUNK_SIZE) + 1
    cipher = AES.new(KEY, AES.MODE_CFB)
    #Spawn threads (one for each segment at the moment)
    for i in range(estSegCount):
        eT = encThread(seg_queue, result_queue, cipher)
        eT.setDaemon(True)
        eT.start()
        print ("thread spawned")
    fileObj = open(TFILE, "rb")
    splitter = DataSplit(fileObj, CHUNK_SIZE)
    for data in splitter.split():
        seg_queue.put(data)
        print ("Data sent to thread")
    seg_queue.join()
    #result_queue.join()
    print ("Seg Q: {0}".format(seg_queue.qsize()))
    print ("Res Q: {0}".format(result_queue.qsize()))

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Serial Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

class EncSeries():
    def __init(self):
        pass

    def loadFile(self, path):
        openFile = open(path, "rb")
        #fileData = openFile.readlines()
        fileData = openFile.read(CHUNK_SIZE)
        openFile.close()
        return fileData

    def encryptData(self, key, data):
        cipher = AES.new(key, AES.MODE_CFB)
        newData = []
        for lines in data:
            newData.append(cipher.encrypt(lines))
        return newData

start = time.time()

def main():
    print ("Start")
    key = os.urandom(32)
    run = EncSeries()
    fileData = run.loadFile(TFILE)
    encFileData = run.encryptData(key, fileData)
    print("Finish")

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Using readlines() instead of read seems to speed things up considerably on the serial version too, but it's already much faster than the threaded version. | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200822/"
] | 1. It seems like your second version only reads one chunk, while the first version reads the whole file - this would explain the big speedup. **Edit**: Another issue: I just noticed that you run `for lines in data` for no reason - this would actually encrypt the characters individually, which is much slower. Instead, just pass the data to `encrypt` directly.
2. There is no point in starting more CPU-heavy threads than you have processor cores.
3. The threads can only work in parallel if they call an extension module which unlocks the GIL while running. I don't think PyCrypto does this, so you won't get any parallel work done here.
4. If the bottleneck was disk performance, you wouldn't see much of an improvement here anyway - in that case it would be better to have one thread that does disk I/O and another to do the encryption. The GIL wouldn't be an issue since it is released while doing disk I/O. | Threads have a couple of different uses:
1. They only provide speedup if they allow you to get multiple pieces of hardware working at the same time on your problem, whether that hardware is CPU cores or disk heads.
2. They allow you to keep track of multiple sequences of I/O events that would be much more complicated without them, such as simultaneous conversations with multiple users.
The latter is not done for performance, but for clarity of code. |
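Point 1 in the response above says to pass each chunk to `encrypt` directly rather than looping over it character by character. A minimal sketch of that fix, hedged: it mirrors the question's PyCrypto cipher setup and file/chunk constants, and `encrypt_file` is an illustrative name, not code from the thread:

```
# Sketch of the "encrypt the chunk directly" fix from point 1 above.
# Iterating over a str in Python yields single characters, so the
# question's inner loop was encrypting one byte per call.
from Crypto.Cipher import AES
import os

TFILE = 'mytestfile.bin'      # same names as the question's scripts
TFILE_CHUNK = 2048 * 2048
KEY = os.urandom(32)

def encrypt_file(path):
    cipher = AES.new(KEY, AES.MODE_CFB)   # mirrors the question's setup
    encrypted_chunks = []
    with open(path, "rb") as f:
        while True:
            data = f.read(TFILE_CHUNK)
            if not data:
                break
            # One encrypt call per chunk -- no per-character loop.
            encrypted_chunks.append(cipher.encrypt(data))
    return encrypted_chunks
```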
1,728,477 | I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

BLOCK_SIZE = 32  # 32 = 256-bit | 16 = 128-bit
TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

class DataSplit():
    def __init__(self, fileObj, chunkSize):
        self.fileObj = fileObj
        self.chunkSize = chunkSize

    def split(self):
        while True:
            data = self.fileObj.read(self.chunkSize)
            if not data:
                break
            yield data

class encThread(threading.Thread):
    def __init__(self, seg_queue, result_queue, cipher):
        threading.Thread.__init__(self)
        self.seg_queue = seg_queue
        self.result_queue = result_queue
        self.cipher = cipher

    def run(self):
        while True:
            #Grab a data segment from the queue
            data = self.seg_queue.get()
            encSegment = []
            for lines in data:
                encSegment.append(self.cipher.encrypt(lines))
            self.result_queue.put(encSegment)
            print "Segment Encrypted"
            self.seg_queue.task_done()

start = time.time()

def main():
    seg_queue = Queue.Queue()
    result_queue = Queue.Queue()
    estSegCount = (os.path.getsize(TFILE) / CHUNK_SIZE) + 1
    cipher = AES.new(KEY, AES.MODE_CFB)
    #Spawn threads (one for each segment at the moment)
    for i in range(estSegCount):
        eT = encThread(seg_queue, result_queue, cipher)
        eT.setDaemon(True)
        eT.start()
        print ("thread spawned")
    fileObj = open(TFILE, "rb")
    splitter = DataSplit(fileObj, CHUNK_SIZE)
    for data in splitter.split():
        seg_queue.put(data)
        print ("Data sent to thread")
    seg_queue.join()
    #result_queue.join()
    print ("Seg Q: {0}".format(seg_queue.qsize()))
    print ("Res Q: {0}".format(result_queue.qsize()))

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Serial Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

class EncSeries():
    def __init(self):
        pass

    def loadFile(self, path):
        openFile = open(path, "rb")
        #fileData = openFile.readlines()
        fileData = openFile.read(CHUNK_SIZE)
        openFile.close()
        return fileData

    def encryptData(self, key, data):
        cipher = AES.new(key, AES.MODE_CFB)
        newData = []
        for lines in data:
            newData.append(cipher.encrypt(lines))
        return newData

start = time.time()

def main():
    print ("Start")
    key = os.urandom(32)
    run = EncSeries()
    fileData = run.loadFile(TFILE)
    encFileData = run.encryptData(key, fileData)
    print("Finish")

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Using readlines() instead of read seems to speed things up considerably on the serial version too, but it's already much faster than the threaded version. | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200822/"
] | 1. It seems like your second version only reads one chunk, while the first version reads the whole file - this would explain the big speedup. **Edit**: Another issue: I just noticed that you run `for lines in data` for no reason - this would actually encrypt the characters individually, which is much slower. Instead, just pass the data to `encrypt` directly.
2. There is no point in starting more CPU-heavy threads than you have processor cores.
3. The threads can only work in parallel if they call an extension module which unlocks the GIL while running. I don't think PyCrypto does this, so you won't get any parallel work done here.
4. If the bottleneck was disk performance, you wouldn't see much of an improvement here anyway - in that case it would be better to have one thread that does disk I/O and another to do the encryption. The GIL wouldn't be an issue since it is released while doing disk I/O. | I watched the presentation that Dave Kirby linked to and tried the example counter, which takes more than twice as long to run in two threads:
```
import time
from threading import Thread

countmax = 100000000

def count(n):
    while n > 0:
        n -= 1

def main1():
    count(countmax)
    count(countmax)

def main2():
    t1 = Thread(target=count, args=(countmax,))
    t2 = Thread(target=count, args=(countmax,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

def timeit(func):
    start = time.time()
    func()
    end = time.time() - start
    print ("Elapsed Time: {0}".format(end))

if __name__ == '__main__':
    timeit(main1)
    timeit(main2)
```
Outputs:
```
Elapsed Time: 21.5470001698
Elapsed Time: 55.3279998302
```
However, if I change Thread for Process:
```
from multiprocessing import Process
```
and
```
t1=Process(target ....
```
etc. I get this output:
```
Elapsed Time: 20.5
Elapsed Time: 10.4059998989
```
Now it's as if my Pentium CPU has two cores; I bet it's the hyperthreading. Can anyone try this on their two- or four-core machine and run 2 or 4 threads?
See the Python 2.6.4 documentation for [multiprocessing](http://docs.python.org/library/multiprocessing.html) |
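The answer above shows only the changed import and constructor; here is a hedged sketch of the full process-based variant it describes, with the same `count`/`timeit` structure as the counter example and `multiprocessing.Process` swapped in for `Thread`:

```
# Sketch: the counter benchmark with processes instead of threads.
import time
from multiprocessing import Process

countmax = 100000000

def count(n):
    while n > 0:
        n -= 1

def main2():
    # Each counter runs in its own process, so the GIL no longer
    # serializes the two loops.
    t1 = Process(target=count, args=(countmax,))
    t2 = Process(target=count, args=(countmax,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

def timeit(func):
    start = time.time()
    func()
    print ("Elapsed Time: {0}".format(time.time() - start))

if __name__ == '__main__':
    timeit(main2)
```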
1,728,477 | I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

BLOCK_SIZE = 32  # 32 = 256-bit | 16 = 128-bit
TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

class DataSplit():
    def __init__(self, fileObj, chunkSize):
        self.fileObj = fileObj
        self.chunkSize = chunkSize

    def split(self):
        while True:
            data = self.fileObj.read(self.chunkSize)
            if not data:
                break
            yield data

class encThread(threading.Thread):
    def __init__(self, seg_queue, result_queue, cipher):
        threading.Thread.__init__(self)
        self.seg_queue = seg_queue
        self.result_queue = result_queue
        self.cipher = cipher

    def run(self):
        while True:
            #Grab a data segment from the queue
            data = self.seg_queue.get()
            encSegment = []
            for lines in data:
                encSegment.append(self.cipher.encrypt(lines))
            self.result_queue.put(encSegment)
            print "Segment Encrypted"
            self.seg_queue.task_done()

start = time.time()

def main():
    seg_queue = Queue.Queue()
    result_queue = Queue.Queue()
    estSegCount = (os.path.getsize(TFILE) / CHUNK_SIZE) + 1
    cipher = AES.new(KEY, AES.MODE_CFB)
    #Spawn threads (one for each segment at the moment)
    for i in range(estSegCount):
        eT = encThread(seg_queue, result_queue, cipher)
        eT.setDaemon(True)
        eT.start()
        print ("thread spawned")
    fileObj = open(TFILE, "rb")
    splitter = DataSplit(fileObj, CHUNK_SIZE)
    for data in splitter.split():
        seg_queue.put(data)
        print ("Data sent to thread")
    seg_queue.join()
    #result_queue.join()
    print ("Seg Q: {0}".format(seg_queue.qsize()))
    print ("Res Q: {0}".format(result_queue.qsize()))

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Serial Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

class EncSeries():
    def __init(self):
        pass

    def loadFile(self, path):
        openFile = open(path, "rb")
        #fileData = openFile.readlines()
        fileData = openFile.read(CHUNK_SIZE)
        openFile.close()
        return fileData

    def encryptData(self, key, data):
        cipher = AES.new(key, AES.MODE_CFB)
        newData = []
        for lines in data:
            newData.append(cipher.encrypt(lines))
        return newData

start = time.time()

def main():
    print ("Start")
    key = os.urandom(32)
    run = EncSeries()
    fileData = run.loadFile(TFILE)
    encFileData = run.encryptData(key, fileData)
    print("Finish")

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Using readlines() instead of read seems to speed things up considerably on the serial version too, but it's already much faster than the threaded version. | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200822/"
] | 1. It seems like your second version only reads one chunk, while the first version reads the whole file - this would explain the big speedup. **Edit**: Another issue: I just noticed that you run `for lines in data` for no reason - this would actually encrypt the characters individually, which is much slower. Instead, just pass the data to `encrypt` directly.
2. There is no point in starting more CPU-heavy threads than you have processor cores.
3. The threads can only work in parallel if they call an extension module which unlocks the GIL while running. I don't think PyCrypto does this, so you won't get any parallel work done here.
4. If the bottleneck was disk performance, you wouldn't see much of an improvement here anyway - in that case it would be better to have one thread that does disk I/O and another to do the encryption. The GIL wouldn't be an issue since it is released while doing disk I/O. | Just a quick note to update this thread: Python 3.2 has a new implementation of the GIL which relieves a lot of the overheads associated with multithreading, but does not eliminate the locking. (i.e. it does not allow you to use more than one core, but it allows you to use multiple threads on that core efficiently). |
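Point 4 in the response above suggests one thread for disk I/O and another for encryption. A minimal sketch of that split, hedged: `read_chunks` and `encrypt_chunks` are illustrative names, and the cipher setup again mirrors the question's:

```
# Sketch: one reader thread feeding one encrypting consumer. The GIL
# is released during file reads, so the two can overlap.
import os, threading, Queue
from Crypto.Cipher import AES

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

def read_chunks(path, chunk_queue):
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunk_queue.put(data)
    chunk_queue.put(None)   # sentinel: no more chunks

def encrypt_chunks(chunk_queue, results):
    cipher = AES.new(KEY, AES.MODE_CFB)
    while True:
        data = chunk_queue.get()
        if data is None:
            break
        results.append(cipher.encrypt(data))  # whole chunk per call

chunk_queue = Queue.Queue(maxsize=4)   # bound memory to a few chunks
results = []
reader = threading.Thread(target=read_chunks, args=(TFILE, chunk_queue))
reader.start()
encrypt_chunks(chunk_queue, results)   # encrypt on the main thread
reader.join()
```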
1,728,477 | I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

BLOCK_SIZE = 32  # 32 = 256-bit | 16 = 128-bit
TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

class DataSplit():
    def __init__(self, fileObj, chunkSize):
        self.fileObj = fileObj
        self.chunkSize = chunkSize

    def split(self):
        while True:
            data = self.fileObj.read(self.chunkSize)
            if not data:
                break
            yield data

class encThread(threading.Thread):
    def __init__(self, seg_queue, result_queue, cipher):
        threading.Thread.__init__(self)
        self.seg_queue = seg_queue
        self.result_queue = result_queue
        self.cipher = cipher

    def run(self):
        while True:
            #Grab a data segment from the queue
            data = self.seg_queue.get()
            encSegment = []
            for lines in data:
                encSegment.append(self.cipher.encrypt(lines))
            self.result_queue.put(encSegment)
            print "Segment Encrypted"
            self.seg_queue.task_done()

start = time.time()

def main():
    seg_queue = Queue.Queue()
    result_queue = Queue.Queue()
    estSegCount = (os.path.getsize(TFILE) / CHUNK_SIZE) + 1
    cipher = AES.new(KEY, AES.MODE_CFB)
    #Spawn threads (one for each segment at the moment)
    for i in range(estSegCount):
        eT = encThread(seg_queue, result_queue, cipher)
        eT.setDaemon(True)
        eT.start()
        print ("thread spawned")
    fileObj = open(TFILE, "rb")
    splitter = DataSplit(fileObj, CHUNK_SIZE)
    for data in splitter.split():
        seg_queue.put(data)
        print ("Data sent to thread")
    seg_queue.join()
    #result_queue.join()
    print ("Seg Q: {0}".format(seg_queue.qsize()))
    print ("Res Q: {0}".format(result_queue.qsize()))

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Serial Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

class EncSeries():
    def __init(self):
        pass

    def loadFile(self, path):
        openFile = open(path, "rb")
        #fileData = openFile.readlines()
        fileData = openFile.read(CHUNK_SIZE)
        openFile.close()
        return fileData

    def encryptData(self, key, data):
        cipher = AES.new(key, AES.MODE_CFB)
        newData = []
        for lines in data:
            newData.append(cipher.encrypt(lines))
        return newData

start = time.time()

def main():
    print ("Start")
    key = os.urandom(32)
    run = EncSeries()
    fileData = run.loadFile(TFILE)
    encFileData = run.encryptData(key, fileData)
    print("Finish")

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Using readlines() instead of read seems to speed things up considerably on the serial version too, but it's already much faster than the threaded version. | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200822/"
] | I watched the presentation that Dave Kirby linked to and tried the example counter, which takes more than twice as long to run in two threads:
```
import time
from threading import Thread

countmax = 100000000

def count(n):
    while n > 0:
        n -= 1

def main1():
    count(countmax)
    count(countmax)

def main2():
    t1 = Thread(target=count, args=(countmax,))
    t2 = Thread(target=count, args=(countmax,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

def timeit(func):
    start = time.time()
    func()
    end = time.time() - start
    print ("Elapsed Time: {0}".format(end))

if __name__ == '__main__':
    timeit(main1)
    timeit(main2)
```
Outputs:
```
Elapsed Time: 21.5470001698
Elapsed Time: 55.3279998302
```
However, if I change Thread for Process:
```
from multiprocessing import Process
```
and
```
t1=Process(target ....
```
etc. I get this output:
```
Elapsed Time: 20.5
Elapsed Time: 10.4059998989
```
Now it's as if my Pentium CPU has two cores; I bet it's the hyperthreading. Can anyone try this on their two- or four-core machine and run 2 or 4 threads?
See the Python 2.6.4 documentation for [multiprocessing](http://docs.python.org/library/multiprocessing.html) | Threads have a couple of different uses:
1. They only provide speedup if they allow you to get multiple pieces of hardware working at the same time on your problem, whether that hardware is CPU cores or disk heads.
2. They allow you to keep track of multiple sequences of I/O events that would be much more complicated without them, such as simultaneous conversations with multiple users.
The latter is not done for performance, but for clarity of code. |
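Putting this thread's advice together (process-based parallelism, no more CPU-bound workers than cores), here is a hedged sketch of chunked encryption with `multiprocessing.Pool`. Note the caveat in the comments: encrypting chunks independently is not byte-for-byte equivalent to one streaming CFB cipher over the whole file, which is fine for a timing experiment like the question's but matters for real use.

```
# Sketch: chunked encryption across processes, sized to the CPU count.
import os
from multiprocessing import Pool, cpu_count
from Crypto.Cipher import AES

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

def encrypt_chunk(args):
    key, data = args
    # Each chunk gets a fresh cipher, so chunks are independent --
    # NOT equivalent to one streaming CFB cipher over the whole file.
    cipher = AES.new(key, AES.MODE_CFB)
    return cipher.encrypt(data)

def chunks(path):
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            yield data

if __name__ == '__main__':
    key = os.urandom(32)
    # The key travels with each job, so this also works where workers
    # don't inherit module globals (e.g. Windows). Reads all chunks up
    # front for simplicity.
    jobs = [(key, data) for data in chunks(TFILE)]
    pool = Pool(processes=cpu_count())   # one worker per core
    encrypted = pool.map(encrypt_chunk, jobs)
    pool.close()
    pool.join()
```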
1,728,477 | I thought it would be interesting to look at threads and queues, so I've written 2 scripts: one will break a file up and encrypt each chunk in a thread, the other will do it serially. I'm still very new to Python and don't really know why the threading script takes so much longer.
Threaded Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

BLOCK_SIZE = 32  # 32 = 256-bit | 16 = 128-bit
TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048
KEY = os.urandom(32)

class DataSplit():
    def __init__(self, fileObj, chunkSize):
        self.fileObj = fileObj
        self.chunkSize = chunkSize

    def split(self):
        while True:
            data = self.fileObj.read(self.chunkSize)
            if not data:
                break
            yield data

class encThread(threading.Thread):
    def __init__(self, seg_queue, result_queue, cipher):
        threading.Thread.__init__(self)
        self.seg_queue = seg_queue
        self.result_queue = result_queue
        self.cipher = cipher

    def run(self):
        while True:
            #Grab a data segment from the queue
            data = self.seg_queue.get()
            encSegment = []
            for lines in data:
                encSegment.append(self.cipher.encrypt(lines))
            self.result_queue.put(encSegment)
            print "Segment Encrypted"
            self.seg_queue.task_done()

start = time.time()

def main():
    seg_queue = Queue.Queue()
    result_queue = Queue.Queue()
    estSegCount = (os.path.getsize(TFILE) / CHUNK_SIZE) + 1
    cipher = AES.new(KEY, AES.MODE_CFB)
    #Spawn threads (one for each segment at the moment)
    for i in range(estSegCount):
        eT = encThread(seg_queue, result_queue, cipher)
        eT.setDaemon(True)
        eT.start()
        print ("thread spawned")
    fileObj = open(TFILE, "rb")
    splitter = DataSplit(fileObj, CHUNK_SIZE)
    for data in splitter.split():
        seg_queue.put(data)
        print ("Data sent to thread")
    seg_queue.join()
    #result_queue.join()
    print ("Seg Q: {0}".format(seg_queue.qsize()))
    print ("Res Q: {0}".format(result_queue.qsize()))

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Serial Script:
```
#!/usr/bin/env python
from Crypto.Cipher import AES
from optparse import OptionParser
import os, base64, time, sys, hashlib, pickle, threading, timeit, Queue

TFILE = 'mytestfile.bin'
CHUNK_SIZE = 2048 * 2048

class EncSeries():
    def __init(self):
        pass

    def loadFile(self, path):
        openFile = open(path, "rb")
        #fileData = openFile.readlines()
        fileData = openFile.read(CHUNK_SIZE)
        openFile.close()
        return fileData

    def encryptData(self, key, data):
        cipher = AES.new(key, AES.MODE_CFB)
        newData = []
        for lines in data:
            newData.append(cipher.encrypt(lines))
        return newData

start = time.time()

def main():
    print ("Start")
    key = os.urandom(32)
    run = EncSeries()
    fileData = run.loadFile(TFILE)
    encFileData = run.encryptData(key, fileData)
    print("Finish")

main()
print ("Elapsed Time: {0}".format(time.time()-start))
```
Using readlines() instead of read seems to speed things up considerably on the serial version too, but it's already much faster than the threaded version. | 2009/11/13 | [
"https://Stackoverflow.com/questions/1728477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200822/"
] | I watched the presentation that Dave Kirby linked to and tried the example counter, which takes more than twice as long to run in two threads:
```
import time
from threading import Thread

countmax = 100000000

def count(n):
    while n > 0:
        n -= 1

def main1():
    count(countmax)
    count(countmax)

def main2():
    t1 = Thread(target=count, args=(countmax,))
    t2 = Thread(target=count, args=(countmax,))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

def timeit(func):
    start = time.time()
    func()
    end = time.time() - start
    print ("Elapsed Time: {0}".format(end))

if __name__ == '__main__':
    timeit(main1)
    timeit(main2)
```
Outputs:
```
Elapsed Time: 21.5470001698
Elapsed Time: 55.3279998302
```
However, if I change Thread for Process:
```
from multiprocessing import Process
```
and
```
t1=Process(target ....
```
etc. I get this output:
```
Elapsed Time: 20.5
Elapsed Time: 10.4059998989
```
Now it's as if my Pentium CPU has two cores; I bet it's the hyperthreading. Can anyone try this on their two- or four-core machine and run 2 or 4 threads?
See the Python 2.6.4 documentation for [multiprocessing](http://docs.python.org/library/multiprocessing.html) | Just a quick note to update this thread: Python 3.2 has a new implementation of the GIL which relieves a lot of the overheads associated with multithreading, but does not eliminate the locking. (i.e. it does not allow you to use more than one core, but it allows you to use multiple threads on that core efficiently). |
585,127 | When I have a relation between two entities in my model:
[GroupMember] (\*) ----- (1) [User]
and try to select items from this relation with LINQ:
From entity in \_user.GroupMember select entity
I always get an empty result unless I load the relation first with the following statement:
\_user.GroupMember.Load()
Is there a way to avoid loading the relations like this? | 2009/02/25 | [
"https://Stackoverflow.com/questions/585127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/40361/"
] | If you have cascading relations, you can handle them with `.Include("GroupMember.AnotherTable.YetAnotherTable")` which is a little nicer than having to do chained Include calls. | I just realized that when I load the User from the database, I can use Include to load GroupMember with the User like this:
```
Users = from entity in db.User.Include("GroupMember") select entity
```
But if I have several relations and maybe want to access relations on those relations, this gets very ugly.
So I am still looking for a better/nicer solution to my issue. |
1,417,541 | I have a workorder system using SQL Express 2008. I have a table called Workorders that has several personnel that are linked to it via UserID. In the Workorder table I have TechID for the Technician, CustomerID for the Customer, and QAID for quality assurance. These are linked back to the User Table via UserID (User Table PK). I want to join the tables to return Technician Name, Customer Name, and QA Name from the User Table and other job information from the Workorder Table. I have no idea how to construct the join. | 2009/09/13 | [
"https://Stackoverflow.com/questions/1417541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | What about something a bit like this :
```
select tech.name as tech_name,
       customer.name as customer_name,
       qa.name as qa_name
from Workorders
inner join User as tech on tech.userId = Workorders.techId
inner join User as customer on customer.userId = Workorders.CustomerId
inner join User as qa on qa.userId = Workorders.QAID
```
*(Might need some tuning, but the idea should be here)*
i.e., you are:
* starting with a workorder
* inner join on its tech guy (a User),
* and then inner joining on its customer (another user)
* and so on
And this allows you to get each name, using the right alias in the select clause.
Note that I used aliases in the select clause too -- it might be useful to have "worker\_name" and "tech\_name" instead of just two columns named "name" -- especially if you are calling this query from some other programming language.
Note: if one of those userId fields can be NULL, you might want to use a left join instead of an inner join. | ```
select tus.Name as 'TechnicianName',
       cus.Name as 'CustomerName',
       qus.Name as 'QaName',
       wod.*
from WorkOrders wod
left outer join Users tus on tus.UserId = wod.TechId
left outer join Users cus on cus.UserId = wod.CustomerId
left outer join Users qus on qus.UserId = wod.QaId
``` |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | Another option is [Spec#](http://research.microsoft.com/SpecSharp/).
*Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.* | I prefer exceptions over asserts because if it's supposed to be that way and isn't, I want to know about it so I can fix it, and the coverage we get in debug mode is nowhere near real-life usage or coverage, so just using Debug.Assert doesn't do enough.
Using asserts means that you won't add bloat to your release code, but it means you only get to see when and why these contracts get broken if you catch them at it in a debug build.
Using exceptions means you get to see the contract breaking whenever it happens, debug or release, but it also means your release build contains more checks and code.
You could go with an in-between approach and use Trace to trace out your pre- and post-conditions to some kind of application log, which you could use to debug problems. However, you'd need a way of harvesting these logs to learn what issues your users are encountering. There is also the possibility of combining this with exceptions so you get exceptions for the more severe problems.
The way I see it, though, is that if the contract is worth enforcing then it's worth throwing an exception when it breaks. I think that's somewhat down to opinion and target application though. If you do throw exceptions, you probably want some form of incident reporting system that provides crash reports when raised exceptions are left unhandled. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | I prefer exceptions over asserts because if it's supposed to be that way and isn't, I want to know about it so I can fix it, and the coverage we get in debug mode is nowhere near real-life usage or coverage, so just using Debug.Assert doesn't do enough.
Using asserts means that you won't add bloat to your release code, but it means you only get to see when and why these contracts get broken if you catch them at it in a debug build.
Using exceptions means you get to see the contract breaking whenever it happens, debug or release, but it also means your release build contains more checks and code.
You could go with an in-between approach and use Trace to trace out your pre- and post-conditions to some kind of application log, which you could use to debug problems. However, you'd need a way of harvesting these logs to learn what issues your users are encountering. There is also the possibility of combining this with exceptions so you get exceptions for the more severe problems.
The way I see it, though, is that if the contract is worth enforcing then it's worth throwing an exception when it breaks. I think that's somewhat down to opinion and target application though. If you do throw exceptions, you probably want some form of incident reporting system that provides crash reports when raised exceptions are left unhandled. | Spec# is the way to do it, which is a superset of C#. Now you have "[Code Contracts](http://research.microsoft.com/en-us/projects/contracts/)", which is the language-agnostic version of Spec#, so now you can have code contracts in VB.NET, for example. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | We'll eventually use Code Contracts when .NET 4.0 ships. However, in our production code right now we have had great success with a "Guard" class along with a common way to generate exceptions.
For more details, see [my post about this](http://www.moserware.com/2008/01/borrowing-ideas-from-3-interesting.html). | I prefer exceptions over asserts because if it's supposed to be that way and isn't, I want to know about it so I can fix it, and the coverage we get in debug mode is nowhere near real-life usage or coverage, so just using Debug.Assert doesn't do enough.
Using asserts means that you won't add bloat to your release code, but it means you only get to see when and why these contracts get broken if you catch them at it in a debug build.
Using exceptions means you get to see the contract breaking whenever it happens, debug or release, but it also means your release build contains more checks and code.
You could go with an in-between approach and use Trace to trace out your pre- and post-conditions to some kind of application log, which you could use to debug problems. However, you'd need a way of harvesting these logs to learn what issues your users are encountering. There is also the possibility of combining this with exceptions so you get exceptions for the more severe problems.
The way I see it, though, is that if the contract is worth enforcing then it's worth throwing an exception when it breaks. I think that's somewhat down to opinion and target application though. If you do throw exceptions, you probably want some form of incident reporting system that provides crash reports when raised exceptions are left unhandled. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | Another option is [Spec#](http://research.microsoft.com/SpecSharp/).
*Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.* | Spec# is the way to do it, which is a superset of C#. Now you have "[Code Contracts](http://research.microsoft.com/en-us/projects/contracts/)", which is the language-agnostic version of Spec#, so now you can have code contracts in VB.NET, for example. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | We'll eventually use Code Contracts when .NET 4.0 ships. However, in our production code right now we have had great success with a "Guard" class along with a common way to generate exceptions.
For more details, see [my post about this](http://www.moserware.com/2008/01/borrowing-ideas-from-3-interesting.html). | Another option is [Spec#](http://research.microsoft.com/SpecSharp/).
*Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.* |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | Another option is [Spec#](http://research.microsoft.com/SpecSharp/).
*Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.* | You could have a look at the fluent framework at <http://conditions.codeplex.com/>
It's open source and free. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | We'll eventually use Code Contracts when .NET 4.0 ships. However, in our production code right now we have had great success with a "Guard" class along with a common way to generate exceptions.
For more details, see [my post about this](http://www.moserware.com/2008/01/borrowing-ideas-from-3-interesting.html). | Spec# is the way to do it, which is a superset of C#. Now you have "[Code Contracts](http://research.microsoft.com/en-us/projects/contracts/)", which is the language-agnostic version of Spec#, so now you can have code contracts in VB.NET, for example. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | You could have a look at the fluent framework at <http://conditions.codeplex.com/>
It's open source and free. | Spec# is the way to do it, which is a superset of C#. Now you have "[Code Contracts](http://research.microsoft.com/en-us/projects/contracts/)", which is the language-agnostic version of Spec#, so now you can have code contracts in VB.NET, for example. |
155,422 | I have been thinking about design by contract lately and I was wondering what people think is the best way to assert pre-conditions and post-conditions on values in .NET?
i.e. validating argument values to a method.
Some people recommend Debug.Assert while others talk about using an if statement plus throwing an exception. What are the pros and cons of each?
What frameworks are available that you recommend? | 2008/09/30 | [
"https://Stackoverflow.com/questions/155422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11829/"
] | We'll eventually use Code Contracts when .NET 4.0 ships. However, in our production code right now we have had great success with a "Guard" class along with a common way to generate exceptions.
For more details, see [my post about this](http://www.moserware.com/2008/01/borrowing-ideas-from-3-interesting.html). | You could have a look at the fluent framework at <http://conditions.codeplex.com/>
It's open source and free. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I got this message and I noticed that I was doing a commit while I was in the sub-directory. When I switched to the root of the tree, it committed w/o issue. | It's not entirely clear because you've inconsistently replaced them, but it looks like you're getting repository paths/URLs confused with working copy paths. If you're adding or committing files, always use the working copy paths. Try playing around with the command-line svn before trying to use SVNKit. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | If you have both a directory and its child added, but neither is committed, I believe you get this message if you try to commit just the child. You need to commit the parent directory first. | It's not entirely clear because you've inconsistently replaced them, but it looks like you're getting repository paths/URLs confused with working copy paths. If you're adding or committing files, always use the working copy paths. Try playing around with the command-line svn before trying to use SVNKit. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I think the problem is that you are committing changes to the actual SVN repository itself instead of doing an import, checking out a copy for yourself, making changes, and then doing a commit from your checked-out working copy after adding any subdirectories. So: `import`, `checkout`, *make changes*, and then finally do an `add` for each new file or directory and `commit -m "message"` from the top level.
More information in the [free online SVN "turtle" book](http://svnbook.red-bean.com/en/1.5/svn-book.html#svn.basic.in-action.wc). | It's not entirely clear because you've inconsistently replaced them, but it looks like you're getting repository paths/URLs confused with working copy paths. If you're adding or committing files, always use the working copy paths. Try playing around with the command-line svn before trying to use SVNKit. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | Move your -m "comment" to the end.
I would just change directory into your project directory. Then you just type svn commit -m "comment" and svn does the rest. | If you want to commit an entire new directory consider using [svn import](http://svnbook.red-bean.com/en/1.2/svn.ref.svn.c.import.html) instead. As it stands right now you may have to revert or take some other action to clean up the current mess. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I have it tracked down to a possible bug somewhere. If I don't add a message it works. Time for more digging. Thanks for the pointers. | It's not entirely clear because you've inconsistently replaced them, but it looks like you're getting repository paths/URLs confused with working copy paths. If you're adding or committing files, always use the working copy paths. Try playing around with the command-line svn before trying to use SVNKit. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I think the problem is that you are committing changes to the actual SVN repository itself instead of doing an import, checking out a copy for yourself, making changes, and then doing a commit from your checked-out working copy after adding any subdirectories. So: `import`, `checkout`, *make changes*, and then finally do an `add` for each new file or directory and `commit -m "message"` from the top level.
More information in the [free online SVN "turtle" book](http://svnbook.red-bean.com/en/1.5/svn-book.html#svn.basic.in-action.wc). | You have probably done some refactoring and you are trying to commit 'some.package.YourClass.java'; in that case, try committing the directory (package) 'some'.
If you want to save yourself from such headaches in the future, consider switching to GIT instead of svn. Remember svn keeps your changes in a .svn file and tries to push it to the repository. When you commit, it will push 'your changes' according to this .svn file. But what if your changes are very complex? svn can't handle it.
GIT, on the other hand: you retrieve all updates so your project is up-to-date. And when committing it just overwrites the repo, since it knows yours is the most recent - no hassle. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I got this message and I noticed that I was doing a commit while I was in the sub-directory. When I switched to the root of the tree, it committed w/o issue. | If you want to commit an entire new directory consider using [svn import](http://svnbook.red-bean.com/en/1.2/svn.ref.svn.c.import.html) instead. As it stands right now you may have to revert or take some other action to clean up the current mess. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I have it tracked down to a possible bug somewhere. If I don't add a message it works. Time for more digging. Thanks for the pointers. | If you want to commit an entire new directory consider using [svn import](http://svnbook.red-bean.com/en/1.2/svn.ref.svn.c.import.html) instead. As it stands right now you may have to revert or take some other action to clean up the current mess. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I think the problem is that you are committing changes to the actual SVN repository itself instead of doing an import, checking out a copy for yourself, making changes, and then doing a commit from your checked-out working copy after adding any subdirectories. So: `import`, `checkout`, *make changes*, and then finally do an `add` for each new file or directory and `commit -m "message"` from the top level.
More information in the [free online SVN "turtle" book](http://svnbook.red-bean.com/en/1.5/svn-book.html#svn.basic.in-action.wc). | If you want to commit an entire new directory consider using [svn import](http://svnbook.red-bean.com/en/1.2/svn.ref.svn.c.import.html) instead. As it stands right now you may have to revert or take some other action to clean up the current mess. |
139,972 | Kind of a newbie question, but I am having problems using SVNKit.
I am using SVNKit in an application to commit changes to files. I have it successfully adding the files and folders to the working copy, but I am having problems committing it to the repository.
The command I am trying to run is 'commit -m "Test Add" /svnroot/project1/' but I keep getting "svn: '/home/user' is not a working copy"
I have a structure similar to this:
* /svnroot/
* /svnroot/project1/
* /svnroot/project1/grouping1/
* /svnroot/project1/grouping1/myfilesarehere
If I try to commit the file, I get the following message: "'/svnroot/project1/grouping1' is not under version control and is not part of the commit, yet its child is part of the commit."
What might I be doing wrong?
EDIT: Fixed the directories. | 2008/09/26 | [
"https://Stackoverflow.com/questions/139972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2535/"
] | I got this message and I noticed that I was doing a commit while I was in the sub-directory. When I switched to the root of the tree, it committed w/o issue. | Sometimes, using software such as Eclipse or Versions.app produces this error. In this case, quit the SVN client and do it on the command line. |
2,290,018 | I am trying to configure my lighttpd server to use a fastcgi module. The recipe I am following (blindly) calls for the following line in lighttpd.conf:
```
$HTTP["host"] =~ "(^|\.)example\.com$" {
```
I am running on a virtual private server, and I do not have a domain name, just an IP. So I assume that I have to replace the domain name with my IP - let's say 100.101.102.103
This does not work
```
$HTTP["host"] =~ "(^|\.)100\.101\.102\.103$" {
```
Neither do several variations. | 2010/02/18 | [
"https://Stackoverflow.com/questions/2290018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16582/"
] | I found that this works:
```
$SERVER["socket"] == "0.0.0.0:8000" {
``` | Do a lookup on your IP address, is there really *no* DNS name for it? They usually provide a subdomain at the very least.
Lastly, you can just put "\*" and it will respond to everything.
Are you using fastcgi? It *really* makes a difference. |
2,290,018 | I am trying to configure my lighttpd server to use a fastcgi module. The recipe I am following (blindly) calls for the following line in lighttpd.conf:
```
$HTTP["host"] =~ "(^|\.)example\.com$" {
```
I am running on a virtual private server, and I do not have a domain name, just an IP. So I assume that I have to replace the domain name with my IP - let's say 100.101.102.103
This does not work
```
$HTTP["host"] =~ "(^|\.)100\.101\.102\.103$" {
```
Neither do several variations. | 2010/02/18 | [
"https://Stackoverflow.com/questions/2290018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16582/"
] | You can determine what the value of $HTTP["host"] is for any given request by looking in lighttpd's access log (it's the second entry on a standard log line; it'll show as "-" if the request didn't specify one).
If the log shows you have a fixed IP address, this is a much cleaner test for it:
```
$HTTP["host"] == "100.101.102.103" {
```
(though the regular expression you were using should have worked).
What's probably causing confusion here, however, is the fact that $HTTP["host"] is set to the value of the "Host:" header in the incoming request, so it's completely under the control of the client (browser, script, whatever) that's making the request. Knowing what the URL is you are testing with would clarify things a lot. | Do a lookup on your IP address, is there really *no* DNS name for it? They usually provide a subdomain at the very least.
Lastly, you can just put "\*" and it will respond to everything.
Are you using fastcgi? It *really* makes a difference. |
1,727,578 | I want to use CSS in Django templates. If I put the CSS within the templates, it works.
But I want to serve it in a static manner.
settings.py
```
DEBUG =True
MEDIA_ROOT = 'C:/WorkBase/Python/first/static/'
MEDIA_URL = '/static/'
ADMIN_MEDIA_PREFIX = '/media/'
```
```
TEMPLATE_DIRS = (
    'C:/WorkBase/Python/first/templates',
)
```
```
TEMPLATE_LOADERS = (
    'django.template.loaders.filesystem.load_template_source',
    'django.template.loaders.app_directories.load_template_source',
    'django.template.loaders.eggs.load_template_source',
)
```
urls.py
```
from django.conf import settings
if settings.DEBUG:
urlpatterns +=patterns(' ',
(r'^static/(?p<path>.*)$','django.views.static.serve',{'document_root':settings.MEDIA_ROOT}),
)
```
I got an 'unexpected end of pattern' error for the above line.
```
<link rel="stylesheet" type="text/css" href="/static/css/style.css"/>
``` | 2009/11/13 | [
"https://Stackoverflow.com/questions/1727578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/190350/"
] | I believe the 'P' to name the pattern needs to be capitalized. `r'^static/(?P<path>.*)$'`
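A quick check with the `re` module, independent of Django, shows the lowercase form being rejected (the exact error text varies by Python version; the question reports 'unexpected end of pattern' on Python 2.5):

```
import re

re.compile(r'^static/(?P<path>.*)$')  # compiles fine
re.compile(r'^static/(?p<path>.*)$')  # raises re.error
```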
All of the examples and docs show it capitalized. [Python Regex Doc](http://docs.python.org/library/re.html) | [T. Stone](https://stackoverflow.com/questions/1727578/how-to-serve-stylesheet-in-django/1727874#1727874) has hit the nail on the head with his answer. Here's what I use, as an example:
```
if settings.DEBUG:
urlpatterns += patterns('',
(r'^static/(?P<path>.*)$', 'django.views.static.serve',
{ 'document_root': os.path.join(os.path.dirname(__file__), "static")}),
)
``` |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | @Matthew Flaschen makes a great point. I would like to add that when you join any existing project, you are likely to find some design and implementation decisions that you disagree with.
We all learn new things all the time and we all make mistakes. Though I agree that this seems like a "duh" kind of problem, I'm sure the other developers were trying to optimize the code through the concept of a cache.
The point is, sometimes it takes a gentle approach to convince people, especially developers, to change their ways. This isn't a coding problem, but a people problem. You need to find a way to convince these developers that the changes you are suggesting don't imply they are incompetent.
I'd suggest agreeing with them that caching can be a great idea, but that you'd like to work on it to speed up the functions. Create a quick demo of how your (way more logical) implementation works compared with the old way. It's hard to argue with dramatic speed improvements. Just be careful about directly attacking, in conversation, the way they implemented it. You need these people to work with you.
Good luck! | I agree as well, and I do think there is an element of bad luck.
...but grasping at straws, the only use I could see for data being stored as XML is for automated unit tests, where XML provides an easy way to mock up test data. Definitely not worth it, though. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | @Matthew Flaschen makes a great point. I would like to add that when you join any existing project, you are likely to find some design and implementation decisions that you disagree with.
We all learn new things all the time and we all make mistakes. Though I agree that this seems like a "duh" kind of problem, I'm sure the other developers were trying to optimize the code through the concept of a cache.
The point is, sometimes it takes a gentle approach to convince people, especially developers, to change their ways. This isn't a coding problem, but a people problem. You need to find a way to convince these developers that the changes you are suggesting don't imply they are incompetent.
I'd suggest agreeing with them that caching can be a great idea, but that you'd like to work on it to speed up the functions. Create a quick demo of how your (way more logical) implementation works compared with the old way. It's hard to argue with dramatic speed improvements. Just be careful about directly attacking, in conversation, the way they implemented it. You need these people to work with you.
Good luck! | For high volumes of data the answer is no, there aren't good reasons to store data directly as XML strings in memory.
However, here is an interesting [presentation](http://www.xmlprague.cz/2009/presentations/Alex-Brown-High-performance-XML-theory-and-practice.pdf), by Alex Brown, on how to preserve XML in memory in a more efficient way, as a 'Frozen Stream'.
There is also a video of this, and other presentations given at XML Prague 2009, [here](http://www.xmlprague.cz/index.html). |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | No, I agree. For your first example, the database should handle almost all the caching, so storing all the data in program memory is wrong. This applies whether it's stored in-memory as XML or otherwise.
For the second, you should convert the XML into a useful representation as soon as possible, probably a database, then work with it that way. Only if it's a small amount of data would it be appropriate to do all work in-memory as an XmlDocument (e.g. using XPath). String parsing should be used very sparingly. | In general, I would try to use an internal data model that is independent of its serialization in XML.
However, in my opinion **there is one case where using XML as an internal data structure makes sense**: If your data model needs to capture hierarchical relationships whose format can be extended by 3rd parties and if your application needs to forward this data while preserving the extended information.
Take, for example, [the lumberjack logging framework](https://fedorahosted.org/lumberjack/): The idea is to have an XML-based event data model in which every application can provide hierarchical information about events (warnings, errors, etc.). The framework takes care of gathering the events and distributing them to the appropriate handlers. A 3rd party can easily define its own additions to the format, and provide appropriate generators and handlers.
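For illustration, such an extensible event might be built like this (a minimal sketch; the element and namespace names are invented here, not lumberjack's actual schema):

```
import xml.etree.ElementTree as ET

# Core event fields that the framework itself understands.
event = ET.Element('event', {'severity': 'error'})
ET.SubElement(event, 'message').text = 'disk write failed'

# A 3rd-party extension lives in its own namespace; the framework can
# forward this subtree to handlers without understanding its contents.
ext = ET.SubElement(event, '{http://example.com/acme}diagnostics')
ET.SubElement(ext, '{http://example.com/acme}sector').text = '1234'

print(ET.tostring(event))
```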
The important part here is that the framework has to forward the XML with all the XML information intact from the generator to a handler. **In this case implementing an internal data structure which captures all the necessary information results in a re-implementation of most of XML itself.** Hence, using an appropriate DOM framework for internal data representation makes sense. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | No, I agree. For your first example, the database should handle almost all the caching, so storing all the data in program memory is wrong. This applies whether it's stored in-memory as XML or otherwise.
For the second, you should convert the XML into a useful representation as soon as possible, probably a database, then work with it that way. Only if it's a small amount of data would it be appropriate to do all work in-memory as an XmlDocument (e.g. using XPath). String parsing should be used very sparingly. | Greg,
in several applications I did follow more or less exactly the pattern you describe:
Edit: no, scratch that. I never stored the XML as a string (or multiple strings). I just parsed it into a DOM and worked with that. THAT was helpful.
I've imported XML sources into the DOM (Microsoft Parser) and kept them there for all the required processing. I'm well aware of the memory overhead the DOM causes, but I found the approach quite useful nonetheless; a rough Python sketch of the workflow follows the list below.
* Some checks during processing need random access to the data. The selectPath statement works quite well for this purpose.
* DOM nodes can be handed back and forth in the application as arguments. The alternative is writing classes wrapping every single type of object, and updating them as the XML schema evolves. It's a poor (VB6/VBA) man's approach to polymorphism.
* Applying an XSLT transformation to all or parts of the DOM is a snap
* File I/O is taken care of by the DOM too (xmldoc.save...)
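As referenced above, a rough Python analogue of this keep-it-in-the-DOM workflow (a sketch only: ElementTree stands in for the Microsoft parser, and the file and element names are made up):

```
import xml.etree.ElementTree as ET

tree = ET.parse('orders.xml')        # file I/O handled by the library
root = tree.getroot()

# Random access to nodes, in the spirit of selectPath.
for order in root.findall('.//order'):
    if order.get('status') == 'open':
        # Nodes can be handed around and mutated in place.
        order.set('status', 'processed')

tree.write('orders-processed.xml')   # save the whole document back out
```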
A linked list of objects would consume a comparable amount of memory and require more code. All the search and I/O functionality I would have to code myself.
What I've perceived as the anti-pattern is actually an older version of the application, where the XML was parsed more or less manually into arrays of structures. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | Any data stored in memory should be in classes. The higher volume of data we are talking about, the more important this becomes. XML is a hugely bloated format that reduces performance. XML should be used only for transferring data between applications. IMHO. | No, I agree. For your first example, the database should handle almost all the caching, so storing all the data in program memory is wrong. This applies whether it's stored in-memory as XML or otherwise.
For the second, you should convert the XML into a useful representation as soon as possible, probably a database, then work with it that way. Only if it's a small amount of data would it be appropriate to do all work in-memory as an XmlDocument (e.g. using XPath). String parsing should be used very sparingly. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | Any data stored in memory should be in classes. The higher volume of data we are talking about, the more important this becomes. XML is a hugely bloated format that reduces performance. XML should be used only for transferring data between applications. IMHO. | What about OOP and databases? XML has its uses, but there can be issues (as you are seeing) with using it for everything.
Databases allow for indexing, transactions, etc. that will speed up your data access.
Objects are in most cases easier to work with. They give a better picture of your domain, etc.
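For instance, a minimal sketch of parsing the XML once at the boundary into plain domain objects (the element names here are invented for illustration):

```
import xml.etree.ElementTree as ET

class Order(object):
    def __init__(self, order_id, customer):
        self.order_id = order_id
        self.customer = customer

def load_orders(path):
    # One pass over the XML at the boundary; everything downstream
    # works with ordinary objects, not XML strings.
    root = ET.parse(path).getroot()
    return [Order(el.get('id'), el.findtext('customer'))
            for el in root.findall('order')]
```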
I am not against using XML, but it is like patterns: they are tools that we should understand where and when to use, not fall in love with and try to use everywhere... |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | @Matthew Flaschen makes a great point. I would like to add that when you join any existing project, you are likely to find some design and implementation decisions that you disagree with.
We all learn new things all the time and we all make mistakes. Though I agree that this seems like a "duh" kind of problem, I'm sure the other developers were trying to optimize the code through the concept of a cache.
The point is, sometimes it takes a gentle approach to convince people, especially developers, to change their ways. This isn't a coding problem, but a people problem. You need to find a way to convince these developers that the changes you are suggesting don't imply they are incompetent.
I'd suggest agreeing with them that caching can be a great idea, but that you'd like to work on it to speed up the functions. Create a quick demo of how your (way more logical) implementation works compared with the old way. It's hard to argue with dramatic speed improvements. Just be careful about directly attacking, in conversation, the way they implemented it. You need these people to work with you.
Good luck! | In general, I would try to use an internal data model that is independent of its serialization in XML.
However, in my opinion **there is one case where using XML as an internal data structure makes sense**: If your data model needs to capture hierarchical relationships whose format can be extended by 3rd parties and if your application needs to forward this data while preserving the extended information.
Take, for example, [the lumberjack logging framework](https://fedorahosted.org/lumberjack/): The idea is to have an XML-based event data model in which every application can provide hierarchical information about events (warnings, errors, etc.). The framework takes care of gathering the events and distributing them to the appropriate handlers. A 3rd party can easily define its own additions to the format, and provide appropriate generators and handlers.
The important part here is that the framework has to forward the XML with all the XML information intact from the generator to a handler. **In this case implementing an internal data structure which captures all the necessary information results in a re-implementation of most of XML itself.** Hence, using an appropriate DOM framework for internal data representation makes sense. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | Any data stored in memory should be in classes. The higher volume of data we are talking about, the more important this becomes. XML is a hugely bloated format that reduces performance. XML should be used only for transferring data between applications. IMHO. | Greg,
in several applications I did follow more or less exactly the pattern you describe:
Edit: no, scratch that. I never stored the XML as a string (or multiple strings). I just parsed it into a DOM and worked with that. THAT was helpful.
I've imported XML sources into the DOM (Microsoft Parser) and kept them there for all the required processing. I'm well aware of the memory overhead the DOM causes, but I found the approach quite useful nonetheless.
* Some checks during processing need random access to the data. The selectPath statement works quite well for this purpose.
* DOM nodes can be handed back and forth in the application as arguments. The alternative is writing classes wrapping every single type of object, and updating them as the XML schema evolves. It's a poor (VB6/VBA) man's approach to polymorphism.
* Applying an XSLT transformation to all or parts of the DOM is a snap
* File I/O is taken care of by the DOM too (xmldoc.save...)
A linked list of objects would consume a comparable amount of memory and require more code. All the search and I/O functionality I would have to code myself.
What I've perceived as the anti-pattern is actually an older version of the application, where the XML was parsed more or less manually into arrays of structures. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | No, I agree. For your first example, the database should handle almost all the caching, so storing all the data in program memory is wrong. This applies whether it's stored in-memory as XML or otherwise.
For the second, you should convert the XML into a useful representation as soon as possible, probably a database, then work with it that way. Only if it's a small amount of data would it be appropriate to do all work in-memory as an XmlDocument (e.g. using XPath). String parsing should be used very sparingly. | I've found that I've had to do it to interact with a legacy COM object. The COM object could take either XML or a class. The interop overhead to fill each member of the class was way too large, and processing XML was a much faster alternative. We could have made a C# class identical to the COM class, but it was really too difficult to do in our timeframe. So XML it was. Not that it would ever be a good design decision, but when dealing with interop for huge data structures, it was the fastest we could do.
I do have to say that we are using LINQ to XML on the C# side, so it makes it slightly easier to work with. |
1,008,564 | In the years that I've been at my place of employment, I've noticed a distinct trend towards something that I consider an anti-pattern: Maintaining internal data as big strings of XML. I've seen this done a number of different ways, though the two worst offenders were quite similar.
The Webservice
--------------
The first application, a web service, provides access to a potentially high volume of data within a SQL database. At startup, it pulls more-or-less all of that data out of the database and stores it in memory as XML. (Three times.) The owners of this application call it a cache. I call it slow, because every perf problem that's been run into while working against this has been directly traceable to this thing. (It being a corporate environment, there should be no surprise that the client gets blamed for the perf failure, not the service.) This application does use the XML DOM.
The Importer
------------
The second application reads an XML file that was generated as the result of an export from a third-party database. The goal is to import this data into a proprietary system (owned by us). The application that does it reads the entire XML file in and maintains at least two, sometimes as many as four, copies of the XML file throughout the entire importing sequence. Note that the data can be manipulated and transformed, and configuration can occur, before the import takes place, so the importer owns this data in an XML format for its entire lifetime. Unsurprisingly, this importer then explodes when a moderately sized XML file is provided. This application only uses the XML DOM for one of its copies; the rest are all raw XML strings.
My understanding of common sense suggests that XML is *not* a good format for holding data in-memory, but rather data should be translated into XML when it's being output/transferred and translated into internal data structures when being read in and imported. The thing is, I'm constantly running into production code that completely ignores the scalability issues, and goes through a *ton* of extra effort to do so. (The sheer volume of string parsing in these applications is frightening.)
Is this a common failure to apply the right tool for the job that other people run into also? Or is it just bad luck on my part? Or am I missing some blindingly obvious and *good* situations where it's Right and OK to store high volumes of data in-memory as XML? | 2009/06/17 | [
"https://Stackoverflow.com/questions/1008564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6932/"
] | No, I agree. For your first example, the database should handle almost all the caching, so storing all the data in program memory is wrong. This applies whether it's stored in-memory as XML or otherwise.
For the second, you should convert the XML into a useful representation as soon as possible, probably a database, then work with it that way. Only if it's a small amount of data would it be appropriate to do all work in-memory as an XmlDocument (e.g. using XPath). String parsing should be used very sparingly. | For high volumes of data the answer is no, there aren't good reasons to store data directly as XML strings in memory.
However, here is an interesting [presentation](http://www.xmlprague.cz/2009/presentations/Alex-Brown-High-performance-XML-theory-and-practice.pdf), by Alex Brown, on how to preserve XML in memory in a more efficient way, as a 'Frozen Stream'.
There is also a video of this, and other presentations given at XML Prague 2009, [here](http://www.xmlprague.cz/index.html). |
2,575,605 | I'm reading about Red Gate SQL Backup, and I liked the concept of creating a compressed database backup and writing it to disk directly, without an intermediate SQL Server native backup.
And I'm wondering how this type of software makes backups. Does it access the database files directly? Does it use some sort of SQL Server or Windows API? Windows Shadow Copy? | 2010/04/04 | [
"https://Stackoverflow.com/questions/2575605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179804/"
] | SQL Server has an API that lets backup providers plug elements into the backup pipeline. See [INFORMATIONAL: SHEDDING LIGHT on VSS & VDI Backups in SQL Server](http://blogs.msdn.com/sqlserverfaq/archive/2009/04/28/informational-shedding-light-on-vss-vdi-backups-in-sql-server.aspx), or have a look at the [SQL Server Compressed Backup](http://mssqlcompressed.sourceforge.net/) project on SourceForge.
More information at:
* [A Guide for SQL Server Backup Application Vendors](http://msdn.microsoft.com/en-us/library/cc966520.aspx)
* [SQL Server 2005 Virtual Backup Device Interface (VDI) Specification](http://www.microsoft.com/downloads/details.aspx?FamilyID=416f8a51-65a3-4e8e-a4c8-adfe15e850fc&DisplayLang=en) | It uses the "SQL Server [Virtual Device Interface](http://www.microsoft.com/downloads/details.aspx?FamilyID=416f8a51-65a3-4e8e-a4c8-adfe15e850fc&DisplayLang=en#Overview) (VDI)" as per the [Datasheet](http://www.red-gate.com/products/SQL_Backup/SQL_Backup_USD.pdf).
You can't shadow copy or use a Windows API to back up SQL Server files.
[CodeProject VDI wrapper](http://www.codeproject.com/KB/cs/VdiDotNet.aspx) if you want to write your own |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | ```
import unicodedata as ud
latin_letters= {}
def is_latin(uchr):
try: return latin_letters[uchr]
except KeyError:
return latin_letters.setdefault(uchr, 'LATIN' in ud.name(uchr))
def only_roman_chars(unistr):
return all(is_latin(uchr)
for uchr in unistr
if uchr.isalpha()) # isalpha suggested by John Machin
>>> only_roman_chars(u"ελληνικά means greek")
False
>>> only_roman_chars(u"frappé")
True
>>> only_roman_chars(u"hôtel lœwe")
True
>>> only_roman_chars(u"123 ångstrom ð áß")
True
>>> only_roman_chars(u"russian: гага")
False
``` | Check the code in `django.template.defaultfilters.slugify`:
```
import unicodedata
value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
```
This is what you are looking for; you can then compare the resulting string with the original. |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | The top answer to this by @tzot is great, but IMO there should really be a library for this that works for all scripts. [So, I made one](https://github.com/EliFinkelshteyn/alphabet-detector) (heavily based on that answer).
```
pip install alphabet-detector
```
and then use it directly:
```
from alphabet_detector import AlphabetDetector
ad = AlphabetDetector()
ad.only_alphabet_chars(u"ελληνικά means greek", "LATIN") #False
ad.only_alphabet_chars(u"ελληνικά", "GREEK") #True
ad.only_alphabet_chars(u'سماوي يدور', 'ARABIC')
ad.only_alphabet_chars(u'שלום', 'HEBREW')
ad.only_alphabet_chars(u"frappé", "LATIN") #True
ad.only_alphabet_chars(u"hôtel lœwe 67", "LATIN") #True
ad.only_alphabet_chars(u"det forårsaker første", "LATIN") #True
ad.only_alphabet_chars(u"Cyrillic and кириллический", "LATIN") #False
ad.only_alphabet_chars(u"кириллический", "CYRILLIC") #True
```
Also, a few convenience methods for major languages:
```
ad.is_cyrillic(u"Поиск") #True
ad.is_latin(u"howdy") #True
ad.is_cjk(u"hi") #False
ad.is_cjk(u'汉字') #True
``` | For what you say you want to do, your approach is about right. If you are running on Windows, I'd suggest using `cp1252` instead of `iso-8859-1`. You might also allow `cp1250` as well -- this would pick up eastern European countries like Poland, Czech Republic, Slovakia, Romania, Slovenia, Hungary, Croatia, etc where the alphabet is Latin-based. Other cp125x would include Turkish and Maltese ...
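A minimal sketch of that codepage test (the set of codepages below is illustrative, not exhaustive):

```
# Accept a unicode string if it fits any Latin-based Windows codepage;
# extend the tuple as needed (e.g. cp1254 for Turkish).
LATIN_CODEPAGES = ('cp1252', 'cp1250')

def fits_latin_codepage(s):
    for codec in LATIN_CODEPAGES:
        try:
            s.encode(codec)
            return True
        except UnicodeEncodeError:
            pass
    return False
```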
You may also like to consider transcription from Cyrillic to Latin; as far as I know there are several systems, one of which may be endorsed by the UPU (Universal Postal Union).
I'm a little intrigued by your comment "Our shipping department doesn't want to have to fill out labels with, e.g., Chinese addresses" ... three questions: (1) do you mean "addresses in country X" or "addresses written in X-ese characters" (2) wouldn't it be better for your system to print the labels? (3) how does the order get shipped if it fails your test? |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | ```
import unicodedata as ud
latin_letters= {}
def is_latin(uchr):
try: return latin_letters[uchr]
except KeyError:
return latin_letters.setdefault(uchr, 'LATIN' in ud.name(uchr))
def only_roman_chars(unistr):
return all(is_latin(uchr)
for uchr in unistr
if uchr.isalpha()) # isalpha suggested by John Machin
>>> only_roman_chars(u"ελληνικά means greek")
False
>>> only_roman_chars(u"frappé")
True
>>> only_roman_chars(u"hôtel lœwe")
True
>>> only_roman_chars(u"123 ångstrom ð áß")
True
>>> only_roman_chars(u"russian: гага")
False
``` | Maybe this will do if you're a Django user?
```
from django.template.defaultfilters import slugify
def justroman(s):
return len(slugify(s)) == len(s)
``` |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | The top answer to this by @tzot is great, but IMO there should really be a library for this that works for all scripts. [So, I made one](https://github.com/EliFinkelshteyn/alphabet-detector) (heavily based on that answer).
```
pip install alphabet-detector
```
and then use it directly:
```
from alphabet_detector import AlphabetDetector
ad = AlphabetDetector()
ad.only_alphabet_chars(u"ελληνικά means greek", "LATIN") #False
ad.only_alphabet_chars(u"ελληνικά", "GREEK") #True
ad.only_alphabet_chars(u'سماوي يدور', 'ARABIC')
ad.only_alphabet_chars(u'שלום', 'HEBREW')
ad.only_alphabet_chars(u"frappé", "LATIN") #True
ad.only_alphabet_chars(u"hôtel lœwe 67", "LATIN") #True
ad.only_alphabet_chars(u"det forårsaker første", "LATIN") #True
ad.only_alphabet_chars(u"Cyrillic and кириллический", "LATIN") #False
ad.only_alphabet_chars(u"кириллический", "CYRILLIC") #True
```
Also, a few convenience methods for major languages:
```
ad.is_cyrillic(u"Поиск") #True
ad.is_latin(u"howdy") #True
ad.is_cjk(u"hi") #False
ad.is_cjk(u'汉字') #True
``` | Maybe this will do if you're a Django user?
```
from django.template.defaultfilters import slugify
def justroman(s):
return len(slugify(s)) == len(s)
``` |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | ```
import unicodedata as ud
latin_letters= {}
def is_latin(uchr):
try: return latin_letters[uchr]
except KeyError:
return latin_letters.setdefault(uchr, 'LATIN' in ud.name(uchr))
def only_roman_chars(unistr):
return all(is_latin(uchr)
for uchr in unistr
if uchr.isalpha()) # isalpha suggested by John Machin
>>> only_roman_chars(u"ελληνικά means greek")
False
>>> only_roman_chars(u"frappé")
True
>>> only_roman_chars(u"hôtel lœwe")
True
>>> only_roman_chars(u"123 ångstrom ð áß")
True
>>> only_roman_chars(u"russian: гага")
False
``` | Checking for ISO-8859-1 would miss reasonable Western characters like 'œ' and '€'. The solution depends on how you define "Western" and how you want to handle non-letters. Here's one approach:
```
import unicodedata
def is_permitted_char(char):
cat = unicodedata.category(char)[0]
if cat == 'L': # Letter
return 'LATIN' in unicodedata.name(char, '').split()
elif cat == 'N': # Number
# Only DIGIT ZERO - DIGIT NINE are allowed
return '0' <= char <= '9'
elif cat in ('S', 'P', 'Z'): # Symbol, Punctuation, or Space
return True
else:
return False
def is_valid(text):
return all(is_permitted_char(c) for c in text)
``` |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | The standard `string` package contains all `Latin` letters, `numbers` and `symbols`. You can remove these values from the text, and whatever remains is non-Latin characters. I did that:
```py
In [1]: from string import printable
In [2]: def is_latin(text):
...: return not bool(set(text) - set(printable))
...:
In [3]: is_latin('Hradec Králové District,,Czech Republic,')
Out[3]: False
In [4]: is_latin('Hradec Krlov District,,Czech Republic,')
Out[4]: True
```
I have no way to check all non-Latin characters, so if anyone can do that, please let me know. Thanks. | To simplify tzot's answer using the built-in unicodedata library, this seems to work for me:
```py
import unicodedata as ud
def is_latin(word):
return all(['LATIN' in ud.name(c) for c in word])
``` |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | The standard `string` package contains all `Latin` letters, `numbers` and `symbols`. You can remove these values from the text, and whatever remains is non-Latin characters. I did that:
```py
In [1]: from string import printable
In [2]: def is_latin(text):
...: return not bool(set(text) - set(printable))
...:
In [3]: is_latin('Hradec Králové District,,Czech Republic,')
Out[3]: False
In [4]: is_latin('Hradec Krlov District,,Czech Republic,')
Out[4]: True
```
I have no way to check all non-Latin characters, so if anyone can do that, please let me know. Thanks. | Check the code in `django.template.defaultfilters.slugify`:
```
import unicodedata
value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
```
This is what you are looking for; you can then compare the resulting string with the original. |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:  # encoding a unicode string raises Encode, not Decode, errors
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | check the code in `django.template.defaultfilters.slugify`
```
import unicodedata
value = unicodedata.normalize('NFKD', value).encode('ascii', 'ignore')
```
is what you are looking for, you can then compare the resulting string with the original | Maybe this will do if you're a django user?
```
from django.template.defaultfilters import slugify
def justroman(s):
    return len(slugify(s)) == len(s)
``` |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
    try:
        s.encode("iso-8859-1")
        return True
    except UnicodeEncodeError:  # encoding a unicode string raises Encode, not Decode, errors
        return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | ```
import unicodedata as ud
latin_letters = {}

def is_latin(uchr):
    try:
        return latin_letters[uchr]
    except KeyError:
        return latin_letters.setdefault(uchr, 'LATIN' in ud.name(uchr))

def only_roman_chars(unistr):
    return all(is_latin(uchr)
               for uchr in unistr
               if uchr.isalpha())  # isalpha suggested by John Machin
>>> only_roman_chars(u"ελληνικά means greek")
False
>>> only_roman_chars(u"frappé")
True
>>> only_roman_chars(u"hôtel lœwe")
True
>>> only_roman_chars(u"123 ångstrom ð áß")
True
>>> only_roman_chars(u"russian: гага")
False
] | For what you say you want to do, your approach is about right. If you are running on Windows, I'd suggest using `cp1252` instead of `iso-8859-1`. You might also allow `cp1250` -- this would pick up eastern European countries like Poland, Czech Republic, Slovakia, Romania, Slovenia, Hungary, Croatia, etc., where the alphabet is Latin-based. Other cp125x would include Turkish and Maltese ...
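A minimal sketch of that codepage test (illustrative; the candidate codec list is an assumption to tune to the alphabets you want to accept):

```py
CANDIDATE_CODECS = ("cp1252", "cp1250")  # western and central European

def fits_a_latin_codepage(text, codecs=CANDIDATE_CODECS):
    for codec in codecs:
        try:
            text.encode(codec)
            return True  # every character fits this Latin-based codepage
        except UnicodeEncodeError:
            continue
    return False
```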
You may also like to consider transcription from Cyrillic to Latin; as far as I know there are several systems, one of which may be endorsed by the UPU (Universal Postal Union).
I'm a little intrigued by your comment "Our shipping department doesn't want to have to fill out labels with, e.g., Chinese addresses" ... three questions: (1) do you mean "addresses in country X" or "addresses written in X-ese characters" (2) wouldn't it be better for your system to print the labels? (3) how does the order get shipped if it fails your test? |
3,094,498 | I have a Python Unicode string. I want to make sure it only contains letters from the Roman alphabet (A through Z), as well as letters commonly found in European alphabets, such as ß, ü, ø, é, à, and î. It should *not* contain characters from other alphabets (Chinese, Japanese, Korean, Arabic, Cyrillic, Hebrew, etc.). What's the best way to go about doing this?
Currently I am using this bit of code, but I don't know if it's the best way:
```
def only_roman_chars(s):
try:
s.encode("iso-8859-1")
return True
except UnicodeDecodeError:
return False
```
(I am using Python 2.5. I am also doing this in Django, so if the Django framework happens to have a way to handle such strings, I can use that functionality -- I haven't come across anything like that, however.) | 2010/06/22 | [
"https://Stackoverflow.com/questions/3094498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/28804/"
] | ```
import unicodedata as ud
latin_letters = {}

def is_latin(uchr):
    try:
        return latin_letters[uchr]
    except KeyError:
        return latin_letters.setdefault(uchr, 'LATIN' in ud.name(uchr))

def only_roman_chars(unistr):
    return all(is_latin(uchr)
               for uchr in unistr
               if uchr.isalpha())  # isalpha suggested by John Machin
>>> only_roman_chars(u"ελληνικά means greek")
False
>>> only_roman_chars(u"frappé")
True
>>> only_roman_chars(u"hôtel lœwe")
True
>>> only_roman_chars(u"123 ångstrom ð áß")
True
>>> only_roman_chars(u"russian: гага")
False
``` | To simplify tzot's answer using the built-in unicodedata library, this seems to work for me:
```py
import unicodedata as ud
def is_latin(word):
    return all(['LATIN' in ud.name(c) for c in word])
``` |
721,505 | Is it possible to use appcmd to change the value of allowDefinition? Specifically, I'm trying to enable changes to the httpCompression module at the application level.
Modifying the applicationHost.config by manually changing the following line:
```
<section name="httpCompression" allowDefinition="AppHostOnly" overrideModeDefault="Deny" />
```
To
```
<section name="httpCompression" allowDefinition="MachineToApplication" overrideModeDefault="Allow" />
```
allows me to then execute the following appcmd commands:
```
appcmd set config "website name" /section:httpCompression /noCompressionForProxies:false
appcmd set config "website name" /section:httpCompression /noCompressionForHttp10:false
```
However, I need a solution that does not rely on manually editing the applicationHost.config | 2009/04/06 | [
"https://Stackoverflow.com/questions/721505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5182/"
] | Try using `%windir%\system32\inetsrv\appcmd unlock config -section:*sectionName*`. See <http://blog.donnfelker.com/2007/03/26/iis-7-this-configuration-section-cannot-be-used-at-this-path/>
I actually came across a need to do just that after posting this answer.
```
%systemroot%\System32\inetsrv\appcmd.exe unlock config /section:system.WebServer/[rest of the path to config section you need to edit]
``` | One big warning: you should NEVER change allowDefinition. It is an important setting that is usually there for a reason; for example, even if you set the section in a specific directory or app, it may not work, which is why the developers specified it that way.
So please, never modify the allowDefinition attribute in the section definitions. On the other hand, you can modify overrideModeDefault, which will allow users to define the section in a different place if the definition allows it. |
721,505 | Is it possible to use appcmd to change the value of allowDefinition? Specifically, I'm trying to enable changes to the httpCompression module at the application level.
Modifying the applicationHost.config by manually changing the following line:
```
<section name="httpCompression" allowDefinition="AppHostOnly" overrideModeDefault="Deny" />
```
To
```
<section name="httpCompression" allowDefinition="MachineToApplication" overrideModeDefault="Allow" />
```
allows me to then execute the following appcmd commands:
```
appcmd set config "website name" /section:httpCompression /noCompressionForProxies:false
appcmd set config "website name" /section:httpCompression /noCompressionForHttp10:false
```
However, I need a solution that does not rely on manually editing the applicationHost.config | 2009/04/06 | [
"https://Stackoverflow.com/questions/721505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5182/"
] | My problem was, I had to change anonymousAuthentication from False to True. When I did:
**appcmd set config websitename /section:anonymousAuthentication /enabled:True**
Error I got:
Config Error This configuration cannot be used at this path. This happens when the section is locked at the parent level. Locking is either by default(overrideModeDefault="Deny")...
To unlock, do the following:
**appcmd unlock config /section:?** This will list the section you want. Then type:
**appcmd unlock config /section:system.webserver/security/authentication/anonymousauthentication**
That's it... :) | Try using `%windir%\system32\inetsrv\appcmd unlock config -section:*sectionName*`. See <http://blog.donnfelker.com/2007/03/26/iis-7-this-configuration-section-cannot-be-used-at-this-path/>
I actually came across a need to do just that after posting this answer.
```
%systemroot%\System32\inetsrv\appcmd.exe unlock config /section:system.WebServer/[rest of the path to config section you need to edit]
``` |
721,505 | Is it possible to use appcmd to change the value of allowDefinition? Specifically, I'm trying to enable changes to the httpCompression module at the application level.
Modifying the applicationHost.config by manually changing the following line:
```
<section name="httpCompression" allowDefinition="AppHostOnly" overrideModeDefault="Deny" />
```
To
```
<section name="httpCompression" allowDefinition="MachineToApplication" overrideModeDefault="Allow" />
```
allows me to then execute the following appcmd commands:
```
appcmd set config "website name" /section:httpCompression /noCompressionForProxies:false
appcmd set config "website name" /section:httpCompression /noCompressionForHttp10:false
```
However, I need a solution that does not rely on manually editing the applicationHost.config | 2009/04/06 | [
"https://Stackoverflow.com/questions/721505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5182/"
] | My problem was, I had to change anonymousAuthentication from False to True. When I did:
**appcmd set config websitename /section:anonymousAuthentication /enabled:True**
Error I got:
Config Error This configuration cannot be used at this path. This happens when the section is locked at the parent level. Locking is either by default(overrideModeDefault="Deny")...
To unlock, do the following:
**appcmd unlock config /section:?** This will list the section you want. Then type:
**appcmd unlock config /section:system.webserver/security/authentication/anonymousauthentication**
That's it... :) | One big warning: you should NEVER change allowDefinition. It is an important setting that is usually there for a reason; for example, even if you set the section in a specific directory or app, it may not work, which is why the developers specified it that way.
So please, never modify the allowDefinition attribute in the section definitions. On the other hand, you can modify overrideModeDefault, which will allow users to define the section in a different place if the definition allows it. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Depends on how hard you want to make it. You could use javascript to rewrite the url, basic ROT-13 and hopefully people won't bother to decode it.
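A minimal sketch of the ROT-13 idea (illustrative only -- this is obfuscation, not security):

```
// Decode a ROT-13-obfuscated URL at runtime so the plain URL never
// appears verbatim in the page source.
function rot13(s) {
    return s.replace(/[a-zA-Z]/g, function (c) {
        var base = c <= 'Z' ? 65 : 97;
        return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
    });
}
// rot13("uggc://fvgr.pbz/sbb.zc3") === "http://site.com/foo.mp3"
```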
I haven't tried putting a javascript function in an html object src though. | i don't know if a Flash player solution can do it but it might be worth looking into. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Use a PHP script to mask the location so
```
http://www.site.com/files/foo.mp3
```
Becomes
```
http://www.site.com/files.php?fn=foo.mp3
```
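A hypothetical sketch of what files.php might do (the paths and names are assumptions; add whatever session/authorization check you use):

```
<?php
// files.php -- serve an mp3 that lives outside the web root
$fn = basename($_GET['fn']);        // basename() blocks ../ path traversal
$path = '/srv/private-mp3/' . $fn;  // assumed non-web-accessible directory
if (!is_file($path)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}
header('Content-Type: audio/mpeg');
header('Content-Length: ' . filesize($path));
readfile($path);
```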
Flash is probably the next step from there. Maybe have it use some form of one-time id to authorize the download using shared state. Your session\_id will come in handy here.
Remember: Flash may keep a cache in some temporary folder ... I know I used to find /tmp/aiden-sdjks/foo.mp3 on some players. There might be a better streaming solution in flash that takes another file format on the backend?
At least this stops people looking in the source and finding the URL. Unless they go to the effort of reverse engineering the player and writing their own to spit out the download.
**Security through obscurity** is a dangerous road to head down however. Someone, with enough effort, will always succeed. Look at how BBCIplayer does their DRMification, might help. | Depends on how hard you want to make it. You could use javascript to rewrite the url, basic ROT-13 and hopefully people won't bother to decode it.
I haven't tried putting a javascript function in an html object src though. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Connecting straight to a file on a server with Flash is called progressive streaming. This makes the Flash player load the entire file from the server when playing. There is another solution: streaming, which only loads a small fraction of the data onto the user's machine at any time during playback.
The most reliable option for flash is to use a streaming server for your content. Flash Media Server is one option but thats a product you can either purchase or find a hosted version (like Akamai).
If you are a smaller unit, there are open source versions of the Media server like Red5 (<http://osflash.org/red5>)
Not sure about windows media player or quicktime players but I am sure there are similar solutions there as well | Depends on how hard you want to make it. You could use javascript to rewrite the url, basic ROT-13 and hopefully people won't bother to decode it.
I haven't tried putting a javascript function in an html object src though. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Use a PHP script to mask the location so
```
http://www.site.com/files/foo.mp3
```
Becomes
```
http://www.site.com/files.php?fn=foo.mp3
```
Flash is probably the next step from there. Maybe have it use some form of one-time id to authorize the download using shared state. Your session\_id will come in handy here.
Remember: Flash may keep a cache in some temporary folder ... I know I used to find /tmp/aiden-sdjks/foo.mp3 on some players. There might be a better streaming solution in flash that takes another file format on the backend?
At least this stops people looking in the source and finding the URL. Unless they go to the effort of reverse engineering the player and writing their own to spit out the download.
**Security through obscurity** is a dangerous road to head down however. Someone, with enough effort, will always succeed. Look at how BBCIplayer does their DRMification, might help. | i don't know if a Flash player solution can do it but it might be worth looking into. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Connecting straight to a file on a server with Flash is called progressive streaming. This makes the Flash player load the entire file from the server when playing. There is another solution: streaming, which only loads a small fraction of the data onto the user's machine at any time during playback.
The most reliable option for flash is to use a streaming server for your content. Flash Media Server is one option but thats a product you can either purchase or find a hosted version (like Akamai).
If you are a smaller unit, there are open source versions of the Media server like Red5 (<http://osflash.org/red5>)
Not sure about windows media player or quicktime players but I am sure there are similar solutions there as well | i don't know if a Flash player solution can do it but it might be worth looking into. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | You should store the files in a protected directory. My hosting service (Nearly Free Speech) sets this up for you automatically, and you can retrieve the file within a CGI script (PHP for example) and either write to a temporary file or use the binary directly. For example, I store a simple list of visits to my personal website in this directory, which is outside of the scope of the web root. | i don't know if a Flash player solution can do it but it might be worth looking into. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Use a PHP script to mask the location so
```
http://www.site.com/files/foo.mp3
```
Becomes
```
http://www.site.com/files.php?fn=foo.mp3
```
Flash is probably the next step from there. Maybe have it use some form of one-time id to authorize the download using shared state. Your session\_id will come in handy here.
Remember: Flash may keep a cache in some temporary folder ... I know I used to find /tmp/aiden-sdjks/foo.mp3 on some players. There might be a better streaming solution in flash that takes another file format on the backend?
At least this stops people looking in the source and finding the URL. Unless they go to the effort of reverse engineering the player and writing their own to spit out the download.
**Security through obscurity** is a dangerous road to head down however. Someone, with enough effort, will always succeed. Look at how BBCIplayer does their DRMification, might help. | Connecting straight to a file on a server with Flash is called progressive streaming. This makes the Flash player load the entire file from the server when playing. There is another solution: streaming, which only loads a small fraction of the data onto the user's machine at any time during playback.
The most reliable option for flash is to use a streaming server for your content. Flash Media Server is one option but thats a product you can either purchase or find a hosted version (like Akamai).
If you are a smaller unit, there are open source versions of the Media server like Red5 (<http://osflash.org/red5>)
Not sure about windows media player or quicktime players but I am sure there are similar solutions there as well |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Use a PHP script to mask the location so
```
http://www.site.com/files/foo.mp3
```
Becomes
```
http://www.site.com/files.php?fn=foo.mp3
```
Flash is probably the next step from there. Maybe have it use some form of one-time id to authorize the download using shared state. Your session\_id will come in handy here.
Remember: Flash may keep a cache in some temporary folder ... I know I used to find /tmp/aiden-sdjks/foo.mp3 on some players. There might be a better streaming solution in flash that takes another file format on the backend?
At least this stops people looking in the source and finding the URL. Unless they go to the effort of reverse engineering the player and writing their own to spit out the download.
**Security through obscurity** is a dangerous road to head down however. Someone, with enough effort, will always succeed. Look at how BBCIplayer does their DRMification, might help. | You should store the files in a protected directory. My hosting service (Nearly Free Speech) sets this up for you automatically, and you can retrieve the file within a CGI script (PHP for example) and either write to a temporary file or use the binary directly. For example, I store a simple list of visits to my personal website in this directory, which is outside of the scope of the web root. |
872,877 | I want users to be able to upload mp3s and also be able to play them through a player embedded on a page. I know it's impossible to stop dedicated users from copying the audio by directly recording it from the computer's output, but I want to make it difficult or impossible for a user to just copy a URL and paste it, which would give them direct access to the data.
Currently, what I am doing is:
* Saving the mp3 files to a directory that is not accessible to my web server.
* Using headers to change the mime type to text/html instead of audio/mpeg (my swf player doesn't care; it just reads the data)
The problem is that the URL to the controller that feeds the data is accessible. So if a user looks at the source of the page and copy-pastes the URL into the address bar, the web server will happily spew the mp3 data to them.
Does anyone have any suggestions on how to make this more difficult to do? Thanks. | 2009/05/16 | [
"https://Stackoverflow.com/questions/872877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83119/"
] | Connecting straight to a file on a server with Flash is called progressive streaming. This makes the Flash player load the entire file from the server when playing. There is another solution: streaming, which only loads a small fraction of the data onto the user's machine at any time during playback.
The most reliable option for flash is to use a streaming server for your content. Flash Media Server is one option but thats a product you can either purchase or find a hosted version (like Akamai).
If you are a smaller unit, there are open source versions of the Media server like Red5 (<http://osflash.org/red5>)
Not sure about windows media player or quicktime players but I am sure there are similar solutions there as well | You should store the files in a protected directory. My hosting service (Nearly Free Speech) sets this up for you automatically, and you can retrieve the file within a CGI script (PHP for example) and either write to a temporary file or use the binary directly. For example, I store a simple list of visits to my personal website in this directory, which is outside of the scope of the web root. |
1,001,084 | I have a web flow where I need to capture data on one of the screens.
This data is stored in an object which will be held in a list in the bean.
On submitting the page I want to be able to create an object, and add it to the list in the bean.
Is this possible?
Thanks | 2009/06/16 | [
"https://Stackoverflow.com/questions/1001084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15352/"
] | You need to do a couple of things:
1. Place an object into the flow scope (or add an extra field on an existing object like your Form) to give a fixed binding path to the object you want to edit. If you don't do this, you can't take advantage of Spring's databinding.
2. Write a method on your FormAction to place this object into your list, and set this method to run on the transition followed when you submit the current page. This method can clean up the flowscope-level resources used in (1) as required.
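A hypothetical sketch of such a FormAction method (Spring Web Flow 1.x style; the form and item names are assumptions):

```
// Runs on the submit transition; moves the flow-scope object into the list.
public Event addToList(RequestContext context) {
    MyForm form = (MyForm) context.getFlowScope().get("form");
    form.getItems().add(form.getCurrentItem());
    form.setCurrentItem(new Item()); // reset the editing slot for the next entry
    return success();
}
```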
**Edit** The Webflow documentation has good examples of how to execute actions on transitions. For Webflow version 2 check out [Executing view transitions](http://static.springframework.org/spring-webflow/docs/2.0.x/reference/html/ch04s12.html) and [Executing actions](http://static.springframework.org/spring-webflow/docs/2.0.x/reference/html/ch05.html). For version 1, see [Flow definition](http://static.springframework.org/spring-webflow/docs/1.0.x/reference/flow-definition.html). | I would store the Bean (and the list) in the Session. |
1,001,084 | I have a web flow where I need to capture data on one of the screens.
This data is stored in an object which will be held in a list in the bean.
On submitting the page I want to be able to create an object, and add it to the list in the bean.
Is this possible?
Thanks | 2009/06/16 | [
"https://Stackoverflow.com/questions/1001084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15352/"
] | In the end I managed to get it working with the following flows.
I created a helper bean to hold a function for adding to the list held in the form bean.
```
<view-state id="page2" view="page2">
<transition on="save" to="addToList">
<action bean="form" method="bindAndValidate"/>
</transition>
<transition on="back" to="page1">
<action bean="formAction" method="bindAndValidate"/>
</transition>
<transition on="next" to="page3">
<action bean="formAction" method="bindAndValidate"/>
</transition>
</view-state>
<action-state id="addToList">
<bean-action bean="helperbean" method="addToList">
<method-arguments>
<argument expression="conversationScope.form"/>
</method-arguments>
</bean-action>
<transition on="success" to="page2"/>
</action-state>
```
It then displays the original page again. | I would store the Bean (and the list) in the Session. |
1,001,084 | I have a web flow where I need to capture data on one of the screens.
This data is stored in an object which will be held in a list in the bean.
On submitting the page I want to be able to create an object, and add it to the list in the bean.
Is this possible?
Thanks | 2009/06/16 | [
"https://Stackoverflow.com/questions/1001084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15352/"
] | In the end I managed to get it working with the following flows.
I created a helper bean to hold a function for adding to the list held in the form bean.
```
<view-state id="page2" view="page2">
<transition on="save" to="addToList">
<action bean="form" method="bindAndValidate"/>
</transition>
<transition on="back" to="page1">
<action bean="formAction" method="bindAndValidate"/>
</transition>
<transition on="next" to="page3">
<action bean="formAction" method="bindAndValidate"/>
</transition>
</view-state>
<action-state id="addToList">
<bean-action bean="helperbean" method="addToList">
<method-arguments>
<argument expression="conversationScope.form"/>
</method-arguments>
</bean-action>
<transition on="success" to="page2"/>
</action-state>
```
It then displays the original page again. | You need to do a couple of things:
1. Place an object into the flow scope (or add an extra field on an existing object like your Form) to give a fixed binding path to the object you want to edit. If you don't do this, you can't take advantage of Spring's databinding.
2. Write a method on your FormAction to place this object into your list, and set this method to run on the transition followed when you submit the current page. This method can clean up the flowscope-level resources used in (1) as required.
**Edit** The Webflow documentation has good examples of how to execute actions on transitions. For Webflow version 2 check out [Executing view transitions](http://static.springframework.org/spring-webflow/docs/2.0.x/reference/html/ch04s12.html) and [Executing actions](http://static.springframework.org/spring-webflow/docs/2.0.x/reference/html/ch05.html). For version 1, see [Flow definition](http://static.springframework.org/spring-webflow/docs/1.0.x/reference/flow-definition.html). |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. | In the onKeyDown event of the textbox (or another control), call a JavaScript function and add a form.submit(); statement to that function.
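A minimal sketch of that wiring in plain JavaScript (the element ids are assumptions):

```
document.getElementById("myTextbox").onkeydown = function (e) {
    var evt = e || window.event;
    if (evt.keyCode === 13) {                     // Enter key
        document.getElementById("myForm").submit();
        return false;                             // stop any default handling
    }
};
```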
Happy coding!! |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Tested/cross-browser:
```
function submitOnEnter(e) {
    var theEvent = e || window.event;
    if (theEvent.keyCode == 13) {
        this.submit(); // 'this' is the form element when invoked via call() below
    }
    return true;
}
document.getElementById("myForm").onkeypress = function(e) { return submitOnEnter.call(this, e); }
<form id="myForm">
<input type="text"/>
...
</form>
```
If there is no submit button, the form will degrade miserably if javascript is not available! | Is this what you mean?
```
document.myformname.submit();
``` |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Tested/cross-browser:
```
function submitOnEnter(e) {
    var theEvent = e || window.event;
    if (theEvent.keyCode == 13) {
        this.submit(); // 'this' is the form element when invoked via call() below
    }
    return true;
}
document.getElementById("myForm").onkeypress = function(e) { return submitOnEnter.call(this, e); }
<form id="myForm">
<input type="text"/>
...
</form>
```
If there is no submit button, the form will degrade miserably if javascript is not available! | You know that you can just put a `<button type="submit">submit</button>` there and change its position with CSS, right? `position:absolute;left:-9999px;` should do the trick. `display:none` will not work, though.
This will also work if JS is not loaded.
edit:
However, if you choose to use JS, do not forget to suppress the Enter-to-submit behavior inside a textarea. |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Tested/cross-browser:
```
function submitOnEnter(e) {
    var theEvent = e || window.event;
    if (theEvent.keyCode == 13) {
        this.submit(); // 'this' is the form element when invoked via call() below
    }
    return true;
}
document.getElementById("myForm").onkeypress = function(e) { return submitOnEnter.call(this, e); }
<form id="myForm">
<input type="text"/>
...
</form>
```
If there is no submit button, the form will degrade miserably if javascript is not available! | This may be accomplished cross-browser by keeping the submit input intact and some creative CSS. By keeping the input available, you also preserve support for screen readers.
```css
.no-submit-button {
position: relative;
}
.hidden-submit {
border: 0;
clip: rect(0, 0, 0, 0);
height: 1px;
margin: -1px;
padding: 0;
position: absolute;
width: 1px;
}
```
```html
<form class="no-submit-button">
<input class="hidden-submit" type="submit" value="Submit">
</form>
``` |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. | This may be accomplished cross-browser by keeping the submit input intact and some creative CSS. By keeping the input available, you also preserve support for screen readers.
```css
.no-submit-button {
position: relative;
}
.hidden-submit {
border: 0;
clip: rect(0, 0, 0, 0);
height: 1px;
margin: -1px;
padding: 0;
position: absolute;
width: 1px;
}
```
```html
<form class="no-submit-button">
<input class="hidden-submit" type="submit" value="Submit">
</form>
``` |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. | This will require JavaScript. The easiest way to implement this with JavaScript if you don't know the language would be to use something like jQuery (much like what inkedmn said).
```
<head>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
<script type="text/javascript" charset="utf-8">
<!--
$(document).ready(function() {
$(document).keyup(function(event){
if(event.keyCode == 13){ // This checks that it was the Enter key
$("#myForm").submit(); // #myForm should match the form's id attribute
}
});
});
//-->
</script>
</head>
<body>
<form id="myForm">
...
</form>
</body>
```
For more information on jQuery:
<http://docs.jquery.com/Tutorials:How_jQuery_Works#jQuery:_The_Basics> |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Tested/cross-browser:
```
function submitOnEnter(e) {
    var theEvent = e || window.event;
    if (theEvent.keyCode == 13) {
        this.submit(); // 'this' is the form element when invoked via call() below
    }
    return true;
}
document.getElementById("myForm").onkeypress = function(e) { return submitOnEnter.call(this, e); }
<form id="myForm">
<input type="text"/>
...
</form>
```
If there is no submit button, the form will degrade miserably if javascript is not available! | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. | Is this what you mean?
```
document.myformname.submit();
``` |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Tested/cross-browser:
```
function submitOnEnter(e) {
    var theEvent = e || window.event;
    if (theEvent.keyCode == 13) {
        this.submit(); // 'this' is the form element when invoked via call() below
    }
    return true;
}
document.getElementById("myForm").onkeypress = function(e) { return submitOnEnter.call(this, e); }
<form id="myForm">
<input type="text"/>
...
</form>
```
If there is no submit button, the form will degrade miserably if javascript is not available! | In the onKeyDown event of the textbox (or another control), call a JavaScript function and add a form.submit(); statement to that function.
Happy coding!! |
2,271,473 | I have a database which holds URL's in a table (along with many other details about the URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions about how I can search over all the links with all the patterns (n X m searches) and at the same time not cause a high load on the server and also not lose speed. I want it to operate at high speed with low resource usage. If you have any hints or suggestions in pseudo-code, they are all welcome.
Right now I don't know whether to use SQL commands to perform these searches and have some help from PHP also or completely do it in PHP. | 2010/02/16 | [
"https://Stackoverflow.com/questions/2271473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/253530/"
] | Using jQuery (naturally):
```
$("#myForm input").keyup(
function(event){
if(event.keyCode == 13){
$(this).closest('form').submit();
}
}
);
```
Give that a try. | You know that you can just put a `<button type="submit">submit</button>` there and change its position with CSS, right? `position:absolute;left:-9999px;` should do the trick. `display:none` will not work, though.
This will also work if JS is not loaded.
edit:
However, if you choose to use JS, do not forget to suppress the Enter-to-submit behavior inside a textarea. |
2,291,814 | I have a problem, maybe due to TinyMCE.
I want to put text into a markup element with jQuery.
This is my code:
```
$(".page").change(function(){
tinyMCE.triggerSave(true, true);
$(".description").val("my text");
});
```
Do you have an answer to this? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260810/"
] | Maybe appending your own constant prefix to the result of [generate-id](http://www.w3schools.com/XSL/func_generateid.asp) function will do the trick? | Case 5 of <http://www.dpawson.co.uk/xsl/sect2/N4598.html> might get you along. |
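For the generate-id suggestion above, a minimal sketch (the prefix is an arbitrary choice; note that generate-id yields an identifier that is unique within the document, not a sequential number):

```
<xsl:otherwise>
  <xsl:value-of select="concat('unnamed-', generate-id(.))"/>
</xsl:otherwise>
```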
2,291,814 | I have a problem, maybe due to TinyMCE.
I want to put text into a markup element with jQuery.
This is my code:
```
$(".page").change(function(){
tinyMCE.triggerSave(true, true);
$(".description").val("my text");
});
```
Do you have an answer to this? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260810/"
] | First off, the context node does not change when you call a template, so you don't need to pass a parameter in your situation.
```
<xsl:template match="Foo">
<xsl:variable name="varName">
<xsl:call-template name="getVarName" />
</xsl:variable>
<xsl:value-of select="$varName"/> = <xsl:value-of select="@value"/>
</xsl:template>
<xsl:template name="getVarName">
<xsl:choose>
<xsl:when test="@name != ''">
<xsl:value-of select="@name"/>
</xsl:when>
<xsl:otherwise>
<!-- position() is sequential and unique to the batch -->
<xsl:value-of select="concat('unnamed', position())" />
</xsl:otherwise>
</xsl:choose>
</xsl:template>
```
Maybe this is all you need right now. The output for unnamed nodes will not be strictly sequentially numbered (unnamed1, unnamed2, etc), though. You would get this:
```
item1 = 100
item2 = 200
unnamed3 = 300
``` | Case 5 of <http://www.dpawson.co.uk/xsl/sect2/N4598.html> might get you along. |
2,291,814 | I have a problem, maybe due to TinyMCE.
I want to put text into a markup element with jQuery.
This is my code:
```
$(".page").change(function(){
tinyMCE.triggerSave(true, true);
$(".description").val("my text");
});
```
Do you have an answer to this? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260810/"
] | Maybe appending your own constant prefix to the result of [generate-id](http://www.w3schools.com/XSL/func_generateid.asp) function will do the trick? | Try something like this instead of your templates:
```
<xsl:template match="/DocumentRootElement">
<xsl:for-each select="Foo">
<xsl:variable name="varName">
<xsl:choose>
<xsl:when test="string-length(@name) > 0">
<xsl:value-of select="@name"/>
</xsl:when>
<xsl:otherwise>unnamed<xsl:value-of select="position()"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:value-of select="$varName"/> = <xsl:value-of select="@value"/>\r\n
</xsl:for-each>
``` |
2,291,814 | I have a problem, maybe due to TinyMCE.
I want to put text into a markup element with jQuery.
This is my code:
```
$(".page").change(function(){
tinyMCE.triggerSave(true, true);
$(".description").val("my text");
});
```
Do you have an answer to this? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260810/"
] | First off, the context node does not change when you call a template, so you don't need to pass a parameter in your situation.
```
<xsl:template match="Foo">
<xsl:variable name="varName">
<xsl:call-template name="getVarName" />
</xsl:variable>
<xsl:value-of select="$varName"/> = <xsl:value-of select="@value"/>
</xsl:template>
<xsl:template name="getVarName">
<xsl:choose>
<xsl:when test="@name != ''">
<xsl:value-of select="@name"/>
</xsl:when>
<xsl:otherwise>
<!-- position() is sequential and unique to the batch -->
<xsl:value-of select="concat('unnamed', position())" />
</xsl:otherwise>
</xsl:choose>
</xsl:template>
```
Maybe this is all you need right now. The output for unnamed nodes will not be strictly sequentially numbered (unnamed1, unnamed2, etc), though. You would get this:
```
item1 = 100
item2 = 200
unnamed3 = 300
``` | Maybe appending your own constant prefix to the result of [generate-id](http://www.w3schools.com/XSL/func_generateid.asp) function will do the trick? |
2,291,814 | I have a problem, maybe due to TinyMCE.
I want to put text into a markup element with jQuery.
This is my code:
```
$(".page").change(function(){
tinyMCE.triggerSave(true, true);
$(".description").val("my text");
});
```
Do you have an answer to this? | 2010/02/18 | [
"https://Stackoverflow.com/questions/2291814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260810/"
] | First off, the context node does not change when you call a template, so you don't need to pass a parameter in your situation.
```
<xsl:template match="Foo">
<xsl:variable name="varName">
<xsl:call-template name="getVarName" />
</xsl:variable>
<xsl:value-of select="$varName"/> = <xsl:value-of select="@value"/>
</xsl:template>
<xsl:template name="getVarName">
<xsl:choose>
<xsl:when test="@name != ''">
<xsl:value-of select="@name"/>
</xsl:when>
<xsl:otherwise>
<!-- position() is sequential and unique to the batch -->
<xsl:value-of select="concat('unnamed', position())" />
</xsl:otherwise>
</xsl:choose>
</xsl:template>
```
Maybe this is all you need right now. The output for unnamed nodes will not be strictly sequentially numbered (unnamed1, unnamed2, etc), though. You would get this:
```
item1 = 100
item2 = 200
unnamed3 = 300
``` | Try something like this instead of your templates:
```
<xsl:template match="/DocumentRootElement">
<xsl:for-each select="Foo">
<xsl:variable name="varName">
<xsl:choose>
<xsl:when test="string-length(@name) > 0">
<xsl:value-of select="@name"/>
</xsl:when>
<xsl:otherwise>unnamed<xsl:value-of select="position()"/></xsl:otherwise>
</xsl:choose>
</xsl:variable>
<xsl:value-of select="$varName"/> = <xsl:value-of select="@value"/>\r\n
</xsl:for-each>
``` |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | Instead of relying on `IsValid(xx)` calls all over your application, consider taking some advice from Greg Young:
>
> Don't ever let your entities get into
> an invalid state.
>
What this basically means is that you transition from thinking of entities as pure data containers to thinking of them as objects with behaviors.
Consider the example of a person's address:
```
person.Address = "123 my street";
person.City = "Houston";
person.State = "TX";
person.Zip = 12345;
```
Between any of those calls your entity is invalid (because you would have properties that don't agree with each other). Now consider this:
```
person.ChangeAddress(.......);
```
All of the calls relating to the behavior of changing an address are now an atomic unit. Your entity is never invalid here.
If you take this idea of modeling behaviors rather than state, then you can reach a model that doesn't allow invalid entities.
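A minimal sketch of that idea (the `Person` and `Address` shapes below are illustrative assumptions, not code from the interview):
```cs
// The only way to change the address is one atomic, validating behavior,
// so the entity can never be observed half-updated.
public class Person
{
    public Address Address { get; private set; }

    public void ChangeAddress(Address newAddress)
    {
        if (newAddress == null)
            throw new ArgumentNullException("newAddress");
        Address = newAddress; // all fields change together or not at all
    }
}

// An immutable value object: invalid combinations are rejected at construction.
public class Address
{
    public string Street { get; private set; }
    public string City { get; private set; }
    public string State { get; private set; }
    public string Zip { get; private set; }

    public Address(string street, string city, string state, string zip)
    {
        if (string.IsNullOrEmpty(street) || string.IsNullOrEmpty(zip))
            throw new ArgumentException("Street and zip are required.");
        Street = street;
        City = city;
        State = state;
        Zip = zip;
    }
}
```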
For a good discussion on this, check out this InfoQ interview: <http://www.infoq.com/interviews/greg-young-ddd> | I usually use a specification class;
it provides a method (this is C#, but you can translate it into any language):
```
bool IsVerifiedBy(TEntity candidate)
```
This method performs a complete check of the candidate and its relations.
You can use arguments in the specification class to make it parameterized, like a check level...
You can also add a method to know why the candidate did not satisfy the specification:
```
IEnumerable<string> BrokenRules(TEntity candidate)
```
You can simply decide to implement the first method like this:
```
bool IsVerifiedBy(TEntity candidate)
{
return BrokenRules(candidate).IsEmpty();
}
```
For broken rules, I usually write an iterator:
```
IEnumerable<string> BrokenRules(TEntity candidate)
{
if (someComplexCondition)
yield return "Message describing cleary what is wrong...";
if (someOtherCondition)
yield return
string.Format("The amount should not be {0} when the state is {1}",
amount, state);
}
```
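A quick usage sketch of the two methods together (the `OrderSpecification` type and the `order` instance are assumptions for illustration):
```
var spec = new OrderSpecification();
if (!spec.IsVerifiedBy(order))
{
    foreach (var reason in spec.BrokenRules(order))
        Console.WriteLine(reason);
}
```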
For localization, you should use resources, and why not pass a culture to the BrokenRules method.
I place these classes in the model namespace with names that suggest their use. |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | I like Jimmy Bogard's solution to this problem. He has a post on his blog titled ["Entity validation with visitors and extension methods"](http://www.lostechies.com/blogs/jimmy_bogard/archive/2007/10/24/entity-validation-with-visitors-and-extension-methods.aspx) in which he presents a very elegant approach to entity validation that suggests the implementation of a separate class to store validation code.
```cs
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public class OrderPersistenceValidator : IValidator<Order>
{
public bool IsValid(Order entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(Order entity)
{
if (entity.Id < 0)
yield return "Id cannot be less than 0.";
if (string.IsNullOrEmpty(entity.Customer))
yield return "Must include a customer.";
yield break;
}
}
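// A minimal usage sketch (this service context is an assumption, not part of
// Bogard's post): run the validator before persisting and surface broken rules.
public class OrderService
{
    public void Save(Order order)
    {
        IValidator<Order> validator = new OrderPersistenceValidator();
        if (!validator.IsValid(order))
            throw new InvalidOperationException(
                string.Join(Environment.NewLine, validator.BrokenRules(order).ToArray()));
        // ... persist the order ...
    }
}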
``` | I usually use a specification class;
it provides a method (this is C#, but you can translate it into any language):
```
bool IsVerifiedBy(TEntity candidate)
```
This method performs a complete check of the candidate and its relations.
You can use arguments in the specification class to make it parameterized, like a check level...
You can also add a method to know why the candidate did not satisfy the specification:
```
IEnumerable<string> BrokenRules(TEntity candidate)
```
You can simply decide to implement the first method like this:
```
bool IsVerifiedBy(TEntity candidate)
{
return BrokenRules(candidate).IsEmpty();
}
```
For broken rules, I usually write an iterator:
```
IEnumerable<string> BrokenRules(TEntity candidate)
{
if (someComplexCondition)
yield return "Message describing cleary what is wrong...";
if (someOtherCondition)
yield return
string.Format("The amount should not be {0} when the state is {1}",
amount, state);
}
```
For localization, you should use resources, and why not pass a culture to the BrokenRules method.
I place these classes in the model namespace with names that suggest their use. |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | I usually use a specification class;
it provides a method (this is C#, but you can translate it into any language):
```
bool IsVerifiedBy(TEntity candidate)
```
This method performs a complete check of the candidate and its relations.
You can use arguments in the specification class to make it parameterized, like a check level...
You can also add a method to know why the candidate did not satisfy the specification:
```
IEnumerable<string> BrokenRules(TEntity candidate)
```
You can simply decide to implement the first method like this:
```
bool IsVerifiedBy(TEntity candidate)
{
return BrokenRules(candidate).IsEmpty();
}
```
For broken rules, I usually write an iterator:
```
IEnumerable<string> BrokenRules(TEntity candidate)
{
if (someComplexCondition)
yield return "Message describing cleary what is wrong...";
if (someOtherCondition)
yield return
string.Format("The amount should not be {0} when the state is {1}",
amount, state);
}
```
For localization, you should use resources, and why not pass a culture to the BrokenRules method.
I place these classes in the model namespace with names that suggest their use. | This question is a bit old now, but in case anyone is interested, here's how I implement validation in my service classes.
I have a private **Validate** method in each of my service classes that takes an entity instance and the action being performed; if validation fails, a custom exception is thrown with the details of the broken rules.
**Example DocumentService with built-in validation**
```cs
public class DocumentService : IDocumentService
{
private IRepository<Document> _documentRepository;
public DocumentService(IRepository<Document> documentRepository)
{
_documentRepository = documentRepository;
}
public void Create(Document document)
{
Validate(document, Action.Create);
document.CreatedDate = DateTime.Now;
_documentRepository.Create(document);
}
public void Update(Document document)
{
Validate(document, Action.Update);
_documentRepository.Update(document);
}
public void Delete(int id)
{
Validate(_documentRepository.GetById(id), Action.Delete);
_documentRepository.Delete(id);
}
public IList<Document> GetAll()
{
return _documentRepository
.GetAll()
.OrderByDescending(x => x.PublishDate)
.ToList();
}
public int GetAllCount()
{
return _documentRepository
.GetAll()
.Count();
}
public Document GetById(int id)
{
return _documentRepository.GetById(id);
}
// validation
private void Validate(Document document, Action action)
{
var brokenRules = new List<string>();
if (action == Action.Create || action == Action.Update)
{
if (string.IsNullOrWhiteSpace(document.Title))
brokenRules.Add("Title is required");
if (document.PublishDate == null)
brokenRules.Add("Publish Date is required");
}
if (brokenRules.Any())
throw new EntityException(string.Join("\r\n", brokenRules));
}
private enum Action
{
Create,
Update,
Delete
}
}
```
I like this approach because it allows me to put all my core validation logic in one place, which keeps things simple. |
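A short sketch of how a caller might surface those details (the `EntityException` type comes from the code above; the calling context is an assumption):
```cs
try
{
    documentService.Create(document);
}
catch (EntityException ex)
{
    // One broken rule per line, joined with "\r\n" in Validate above
    Console.WriteLine(ex.Message);
}
```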
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | I usually use a specification class;
it provides a method (this is C#, but you can translate it into any language):
```
bool IsVerifiedBy(TEntity candidate)
```
This method performs a complete check of the candidate and its relations.
You can use arguments in the specification class to make it parameterized, like a check level...
You can also add a method to know why the candidate did not satisfy the specification:
```
IEnumerable<string> BrokenRules(TEntity candidate)
```
You can simply decide to implement the first method like this:
```
bool IsVerifiedBy(TEntity candidate)
{
return BrokenRules(candidate).IsEmpty();
}
```
For broken rules, I usually write an iterator:
```
IEnumerable<string> BrokenRules(TEntity candidate)
{
if (someComplexCondition)
yield return "Message describing cleary what is wrong...";
if (someOtherCondition)
yield return
string.Format("The amount should not be {0} when the state is {1}",
amount, state);
}
```
For localization, you should use resources, and why not pass a culture to the BrokenRules method.
I place these classes in the model namespace with names that suggest their use. | Multiple model validation should go through your aggregate root. If you have to validate across aggregate roots, you probably have a design flaw.
The way I do validation for aggregates is to return a response interface that tells me whether validation passed or failed, with any messages about why it failed.
You can validate all the sub-models on the aggregate root so they remain consistent.
```cs
// Command Response class to return from public methods that change your model
public interface ICommandResponse
{
CommandResult Result { get; }
IEnumerable<string> Messages { get; }
}
// The result options
public enum CommandResult
{
Success = 0,
Fail = 1
}
// My default implementation
public class CommandResponse : ICommandResponse
{
public CommandResponse(CommandResult result)
{
Result = result;
}
public CommandResponse(CommandResult result, params string[] messages) : this(result)
{
Messages = messages;
}
public CommandResponse(CommandResult result, IEnumerable<string> messages) : this(result)
{
Messages = messages;
}
public CommandResult Result { get; private set; }
public IEnumerable<string> Messages { get; private set; }
}
// usage
public class SomeAggregateRoot
{
public string SomeProperty { get; private set; }
public ICommandResponse ChangeSomeProperty(string newProperty)
{
if(newProperty == null)
{
return new CommandResponse(CommandResult.Fail, "Some property cannot be changed to null");
}
SomeProperty = newProperty;
return new CommandResponse(CommandResult.Success);
}
}
``` |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | Instead of relying on `IsValid(xx)` calls all over your application, consider taking some advice from Greg Young:
>
> Don't ever let your entities get into
> an invalid state.
>
>
>
What this basically means is that you transition from thinking of entities as pure data containers to thinking of them as objects with behaviors.
Consider the example of a person's address:
```
person.Address = "123 my street";
person.City = "Houston";
person.State = "TX";
person.Zip = 12345;
```
Between any of those calls your entity is invalid (because you would have properties that don't agree with each other). Now consider this:
```
person.ChangeAddress(.......);
```
All of the calls relating to the behavior of changing an address are now an atomic unit. Your entity is never invalid here.
If you take this idea of modeling behaviors rather than state, then you can reach a model that doesn't allow invalid entities.
For a good discussion on this, check out this InfoQ interview: <http://www.infoq.com/interviews/greg-young-ddd> | This question is a bit old now, but in case anyone is interested, here's how I implement validation in my service classes.
I have a private **Validate** method in each of my service classes that takes an entity instance and the action being performed; if validation fails, a custom exception is thrown with the details of the broken rules.
**Example DocumentService with built-in validation**
```cs
public class DocumentService : IDocumentService
{
private IRepository<Document> _documentRepository;
public DocumentService(IRepository<Document> documentRepository)
{
_documentRepository = documentRepository;
}
public void Create(Document document)
{
Validate(document, Action.Create);
document.CreatedDate = DateTime.Now;
_documentRepository.Create(document);
}
public void Update(Document document)
{
Validate(document, Action.Update);
_documentRepository.Update(document);
}
public void Delete(int id)
{
Validate(_documentRepository.GetById(id), Action.Delete);
_documentRepository.Delete(id);
}
public IList<Document> GetAll()
{
return _documentRepository
.GetAll()
.OrderByDescending(x => x.PublishDate)
.ToList();
}
public int GetAllCount()
{
return _documentRepository
.GetAll()
.Count();
}
public Document GetById(int id)
{
return _documentRepository.GetById(id);
}
// validation
private void Validate(Document document, Action action)
{
var brokenRules = new List<string>();
if (action == Action.Create || action == Action.Update)
{
if (string.IsNullOrWhiteSpace(document.Title))
brokenRules.Add("Title is required");
if (document.PublishDate == null)
brokenRules.Add("Publish Date is required");
}
if (brokenRules.Any())
throw new EntityException(string.Join("\r\n", brokenRules));
}
private enum Action
{
Create,
Update,
Delete
}
}
```
I like this approach because it allows me to put all my core validation logic in one place, which keeps things simple. |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | Instead of relying on `IsValid(xx)` calls all over your application, consider taking some advice from Greg Young:
>
> Don't ever let your entities get into
> an invalid state.
>
>
>
What this basically means is that you transition from thinking of entities as pure data containers to thinking of them as objects with behaviors.
Consider the example of a person's address:
```
person.Address = "123 my street";
person.City = "Houston";
person.State = "TX";
person.Zip = 12345;
```
Between any of those calls your entity is invalid (because you would have properties that don't agree with each other). Now consider this:
```
person.ChangeAddress(.......);
```
All of the calls relating to the behavior of changing an address are now an atomic unit. Your entity is never invalid here.
If you take this idea of modeling behaviors rather than state, then you can reach a model that doesn't allow invalid entities.
For a good discussion on this, check out this InfoQ interview: <http://www.infoq.com/interviews/greg-young-ddd> | Multiple model validation should go through your aggregate root. If you have to validate across aggregate roots, you probably have a design flaw.
The way I do validation for aggregates is to return a response interface that tells me whether validation passed or failed, with any messages about why it failed.
You can validate all the sub-models on the aggregate root so they remain consistent.
```cs
// Command Response class to return from public methods that change your model
public interface ICommandResponse
{
CommandResult Result { get; }
IEnumerable<string> Messages { get; }
}
// The result options
public enum CommandResult
{
Success = 0,
Fail = 1
}
// My default implementation
public class CommandResponse : ICommandResponse
{
public CommandResponse(CommandResult result)
{
Result = result;
}
public CommandResponse(CommandResult result, params string[] messages) : this(result)
{
Messages = messages;
}
public CommandResponse(CommandResult result, IEnumerable<string> messages) : this(result)
{
Messages = messages;
}
public CommandResult Result { get; private set; }
public IEnumerable<string> Messages { get; private set; }
}
// usage
public class SomeAggregateRoot
{
public string SomeProperty { get; private set; }
public ICommandResponse ChangeSomeProperty(string newProperty)
{
if(newProperty == null)
{
return new CommandResponse(CommandResult.Fail, "Some property cannot be changed to null");
}
SomeProperty = newProperty;
return new CommandResponse(CommandResult.Success);
}
}
``` |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | I like Jimmy Bogard's solution to this problem. He has a post on his blog titled ["Entity validation with visitors and extension methods"](http://www.lostechies.com/blogs/jimmy_bogard/archive/2007/10/24/entity-validation-with-visitors-and-extension-methods.aspx) in which he presents a very elegant approach to entity validation that suggests the implementation of a separate class to store validation code.
```cs
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public class OrderPersistenceValidator : IValidator<Order>
{
public bool IsValid(Order entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(Order entity)
{
if (entity.Id < 0)
yield return "Id cannot be less than 0.";
if (string.IsNullOrEmpty(entity.Customer))
yield return "Must include a customer.";
yield break;
}
}
``` | This question is a bit old now, but in case anyone is interested, here's how I implement validation in my service classes.
I have a private **Validate** method in each of my service classes that takes an entity instance and the action being performed; if validation fails, a custom exception is thrown with the details of the broken rules.
**Example DocumentService with built-in validation**
```cs
public class DocumentService : IDocumentService
{
private IRepository<Document> _documentRepository;
public DocumentService(IRepository<Document> documentRepository)
{
_documentRepository = documentRepository;
}
public void Create(Document document)
{
Validate(document, Action.Create);
document.CreatedDate = DateTime.Now;
_documentRepository.Create(document);
}
public void Update(Document document)
{
Validate(document, Action.Update);
_documentRepository.Update(document);
}
public void Delete(int id)
{
Validate(_documentRepository.GetById(id), Action.Delete);
_documentRepository.Delete(id);
}
public IList<Document> GetAll()
{
return _documentRepository
.GetAll()
.OrderByDescending(x => x.PublishDate)
.ToList();
}
public int GetAllCount()
{
return _documentRepository
.GetAll()
.Count();
}
public Document GetById(int id)
{
return _documentRepository.GetById(id);
}
// validation
private void Validate(Document document, Action action)
{
var brokenRules = new List<string>();
if (action == Action.Create || action == Action.Update)
{
if (string.IsNullOrWhiteSpace(document.Title))
brokenRules.Add("Title is required");
if (document.PublishDate == null)
brokenRules.Add("Publish Date is required");
}
if (brokenRules.Any())
throw new EntityException(string.Join("\r\n", brokenRules));
}
private enum Action
{
Create,
Update,
Delete
}
}
```
I like this approach because it allows me to put all my core validation logic in one place, which keeps things simple. |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | I like Jimmy Bogard's solution to this problem. He has a post on his blog titled ["Entity validation with visitors and extension methods"](http://www.lostechies.com/blogs/jimmy_bogard/archive/2007/10/24/entity-validation-with-visitors-and-extension-methods.aspx) in which he presents a very elegant approach to entity validation that suggests the implementation of a separate class to store validation code.
```cs
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public class OrderPersistenceValidator : IValidator<Order>
{
public bool IsValid(Order entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(Order entity)
{
if (entity.Id < 0)
yield return "Id cannot be less than 0.";
if (string.IsNullOrEmpty(entity.Customer))
yield return "Must include a customer.";
yield break;
}
}
``` | Multiple model validation should go through your aggregate root. If you have to validate across aggregate roots, you probably have a design flaw.
The way I do validation for aggregates is to return a response interface that tells me whether validation passed or failed, with any messages about why it failed.
You can validate all the sub-models on the aggregate root so they remain consistent.
```cs
// Command Response class to return from public methods that change your model
public interface ICommandResponse
{
CommandResult Result { get; }
IEnumerable<string> Messages { get; }
}
// The result options
public enum CommandResult
{
Success = 0,
Fail = 1
}
// My default implementation
public class CommandResponse : ICommandResponse
{
public CommandResponse(CommandResult result)
{
Result = result;
}
public CommandResponse(CommandResult result, params string[] messages) : this(result)
{
Messages = messages;
}
public CommandResponse(CommandResult result, IEnumerable<string> messages) : this(result)
{
Messages = messages;
}
public CommandResult Result { get; private set; }
public IEnumerable<string> Messages { get; private set; }
}
// usage
public class SomeAggregateRoot
{
public string SomeProperty { get; private set; }
public ICommandResponse ChangeSomeProperty(string newProperty)
{
if(newProperty == null)
{
return new CommandResponse(CommandResult.Fail, "Some property cannot be changed to null");
}
SomeProperty = newProperty;
return new CommandResponse(CommandResult.Success);
}
}
``` |
516,615 | How do you deal with validation on complex aggregates in a domain driven design? Are you consolidating your business rules/validation logic?
I understand argument validation and I understand property validation which can be attached to the models themselves and do things like check that an email address or zipcode is valid or that a first name has a minimum and maximum length.
But what about complex validation that involves multiple models? Where do you typically place these rules & methods within your architecture? And what patterns if any do you use to implement them? | 2009/02/05 | [
"https://Stackoverflow.com/questions/516615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31624/"
] | Multiple model validation should go through your aggregate root. If you have to validate across aggregate roots, you probably have a design flaw.
The way I do validation for aggregates is to return a response interface that tells me whether validation passed or failed, with any messages about why it failed.
You can validate all the sub-models on the aggregate root so they remain consistent.
```cs
// Command Response class to return from public methods that change your model
public interface ICommandResponse
{
CommandResult Result { get; }
IEnumerable<string> Messages { get; }
}
// The result options
public enum CommandResult
{
Success = 0,
Fail = 1
}
// My default implementation
public class CommandResponse : ICommandResponse
{
public CommandResponse(CommandResult result)
{
Result = result;
}
public CommandResponse(CommandResult result, params string[] messages) : this(result)
{
Messages = messages;
}
public CommandResponse(CommandResult result, IEnumerable<string> messages) : this(result)
{
Messages = messages;
}
public CommandResult Result { get; private set; }
public IEnumerable<string> Messages { get; private set; }
}
// usage
public class SomeAggregateRoot
{
public string SomeProperty { get; private set; }
public ICommandResponse ChangeSomeProperty(string newProperty)
{
if(newProperty == null)
{
return new CommandResponse(CommandResult.Fail, "Some property cannot be changed to null");
}
SomeProperty = newProperty;
return new CommandResponse(CommandResult.Success);
}
}
``` | This question is a bit old now, but in case anyone is interested, here's how I implement validation in my service classes.
I have a private **Validate** method in each of my service classes that takes an entity instance and the action being performed; if validation fails, a custom exception is thrown with the details of the broken rules.
**Example DocumentService with built-in validation**
```cs
public class DocumentService : IDocumentService
{
private IRepository<Document> _documentRepository;
public DocumentService(IRepository<Document> documentRepository)
{
_documentRepository = documentRepository;
}
public void Create(Document document)
{
Validate(document, Action.Create);
document.CreatedDate = DateTime.Now;
_documentRepository.Create(document);
}
public void Update(Document document)
{
Validate(document, Action.Update);
_documentRepository.Update(document);
}
public void Delete(int id)
{
Validate(_documentRepository.GetById(id), Action.Delete);
_documentRepository.Delete(id);
}
public IList<Document> GetAll()
{
return _documentRepository
.GetAll()
.OrderByDescending(x => x.PublishDate)
.ToList();
}
public int GetAllCount()
{
return _documentRepository
.GetAll()
.Count();
}
public Document GetById(int id)
{
return _documentRepository.GetById(id);
}
// validation
private void Validate(Document document, Action action)
{
var brokenRules = new List<string>();
if (action == Action.Create || action == Action.Update)
{
if (string.IsNullOrWhiteSpace(document.Title))
brokenRules.Add("Title is required");
if (document.PublishDate == null)
brokenRules.Add("Publish Date is required");
}
if (brokenRules.Any())
throw new EntityException(string.Join("\r\n", brokenRules));
}
private enum Action
{
Create,
Update,
Delete
}
}
```
I like this approach because it allows me to put all my core validation logic in one place, which keeps things simple. |
1,772,764 | I have a JTable and I want to highlight a row by adding a border to the row. I have extended a `DefaultTableCellRenderer` and I figure the work needs to be done in the `getTableCellRendererComponent` method.
I'm guessing that since there doesn't seem to be a concept of a row that I need to create a custom border for the individual cells in the row. Something like a left side, top, and bottom for the first cell, a top and bottom for all the inner cells, and a top, bottom, and right side for the last cell in the row. I'm having problems finding out how to go about actually executing the thought process. I'm not sure how to use the `setBorder()` method or if that's even the direction I need to take. | 2009/11/20 | [
"https://Stackoverflow.com/questions/1772764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180516/"
] | You have the correct idea in mind: you will need to set the border on the label in the cell renderer depending on where it is in the table (i.e. edge, center, etc.).
Take a look at [matteborder](http://java.sun.com/j2se/1.4.2/docs/api/javax/swing/BorderFactory.html#createMatteBorder%28int,%20int,%20int,%20int,%20java.awt.Color%29). You can specify which areas to draw a border, along with its width and color. | I agree with camickr;
the best way to go is to override the prepareRenderer method. The following code will create a border for a row with a selected cell:
```
@Override
public Component prepareRenderer(TableCellRenderer renderer, int row, int column) {
Component c = super.prepareRenderer(renderer, row, column);
JComponent jc = (JComponent)c;
if (isRowSelected(row)){
int top = (row > 0 && isRowSelected(row-1))?1:2;
int left = column == 0?2:0;
int bottom = (row < getRowCount()-1 && isRowSelected(row + 1))?1:2;
int right = column == getColumnCount()-1?2:0;
jc.setBorder(BorderFactory.createMatteBorder(top, left, bottom, right, this.getSelectionBackground()));
}
else
jc.setBorder(null);
return c;
}
``` |
1,772,764 | I have a JTable and I want to highlight a row by adding a border to the row. I have extended a `DefaultTableCellRenderer` and I figure the work needs to be done in the `getTableCellRendererComponent` method.
I'm guessing that since there doesn't seem to be a concept of a row that I need to create a custom border for the individual cells in the row. Something like a left side, top, and bottom for the first cell, a top and bottom for all the inner cells, and a top, bottom, and right side for the last cell in the row. I'm having problems finding out how to go about actually executing the thought process. I'm not sure how to use the `setBorder()` method or if that's even the direction I need to take. | 2009/11/20 | [
"https://Stackoverflow.com/questions/1772764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180516/"
] | I would not create a custom renderer for this. Yes, it will work if all your data is of the same type. But what happens when you start to mix Strings with Dates, Integers and Booleans, which all use different renderers? Then you would need to create 4 custom renderers.
The better approach is to override the prepareRenderer(...) method of JTable so you can add the code in one place. Here is an example to get you started. In reality you would want to use a CompoundBorder that contains a MatteBorder for the top/bottom and an EmptyBorder for the left/right, and you would create a single instance of the Border (see the commented sketch inside the example).
```
import java.awt.*;
import java.util.*;
import javax.swing.*;
import javax.swing.table.*;
import javax.swing.text.*;
import javax.swing.border.*;
public class TablePrepareRenderer extends JFrame
{
JTable table;
public TablePrepareRenderer()
{
Object[] columnNames = {"Type", "Company", "Shares", "Price", "Boolean"};
Object[][] data =
{
{"Buy", "IBM", new Double(1000), new Double(80.5), Boolean.TRUE},
{"Sell", "MicroSoft", new Double(2000), new Double(6.25), Boolean.TRUE},
{"RSell", "Apple", new Double(3000), new Double(7.35), Boolean.TRUE},
{"Buy", "Nortel", new Double(4000), new Double(20), Boolean.TRUE}
};
DefaultTableModel model = new DefaultTableModel(data, columnNames);
table = new JTable( model )
{
// Returning the Class of each column will allow different
// renderers to be used based on Class
public Class getColumnClass(int column)
{
return getValueAt(0, column).getClass();
}
public Component prepareRenderer(
TableCellRenderer renderer, int row, int column)
{
Component c = super.prepareRenderer(renderer, row, column);
JComponent jc = (JComponent)c;
// Color row based on a cell value
// Alternate row color
if (!isRowSelected(row))
c.setBackground(row % 2 == 0 ? getBackground() : Color.LIGHT_GRAY);
else
jc.setBorder(new MatteBorder(1, 0, 1, 0, Color.RED) );
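// A sketch of the single-instance CompoundBorder mentioned above (the
// SELECTED_BORDER field name is an assumption, not from the original post):
// private static final Border SELECTED_BORDER = BorderFactory.createCompoundBorder(
//     BorderFactory.createMatteBorder(1, 0, 1, 0, Color.RED),
//     BorderFactory.createEmptyBorder(0, 1, 0, 1));
// jc.setBorder(SELECTED_BORDER);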
// Use bold font on selected row
return c;
}
};
table.setPreferredScrollableViewportSize(table.getPreferredSize());
table.changeSelection(0, 0, false, false);
JScrollPane scrollPane = new JScrollPane( table );
getContentPane().add( scrollPane );
}
public static void main(String[] args)
{
TablePrepareRenderer frame = new TablePrepareRenderer();
frame.setDefaultCloseOperation( EXIT_ON_CLOSE );
frame.pack();
frame.setLocationRelativeTo( null );
frame.setVisible(true);
}
}
``` | I agree with camickr;
the best way to go is to override the prepareRenderer method. The following code will create a border for a row with a selected cell:
```
@Override
public Component prepareRenderer(TableCellRenderer renderer, int row, int column) {
Component c = super.prepareRenderer(renderer, row, column);
JComponent jc = (JComponent)c;
if (isRowSelected(row)){
int top = (row > 0 && isRowSelected(row-1))?1:2;
int left = column == 0?2:0;
int bottom = (row < getRowCount()-1 && isRowSelected(row + 1))?1:2;
int right = column == getColumnCount()-1?2:0;
jc.setBorder(BorderFactory.createMatteBorder(top, left, bottom, right, this.getSelectionBackground()));
}
else
jc.setBorder(null);
return c;
}
``` |
1,775,700 | In my project, we allow customers to write customer-specific logic in JSP pages and attach it to our product. Right now, after deploying the .ear file, customers copy the custom files under the /WebContent/custom directory so that we can refer to those JSPs. This is a tedious installation process, and I would like to simplify it.
I tried the following solutions:
1) extendedDocumentRoot - IBM WebSphere
It works fine when I keep the JSPs outside the EAR deployment directory
2) OC4J - This solution also works fine in OracleAS.
```
<virtual-directory virtual-path="/img" real-path="/e:/pictures/" />
```
I am looking for a generic solution for all J2EE containers. | 2009/11/21 | [
"https://Stackoverflow.com/questions/1775700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/100647/"
] | No. There is no standard way to accomplish this. This is part of server deployment/configuration and it's out of scope for the Servlet spec.
If you use Tomcat, you have to use yet another mechanism,
```
<Context aliases="/img=/e:/pictures/" ...>
``` | Very similar question here - [Loading JSP pages from custom sources](https://stackoverflow.com/questions/195437/loading-jsp-pages-from-custom-sources)
I don't think this is possible with JSP because of the potential security risks. However, other view technologies, such as [Velocity](http://velocity.apache.org/), do support this. |
157,249 | Which Computer-aided Software Engineering tools do you use and why? In what ways do they increase your productivity or help you design your programs? Or, in case you do not use CASE tools, what are your reasons for this? | 2008/10/01 | [
"https://Stackoverflow.com/questions/157249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23163/"
] | The best CASE tool I had to work with is the [Enterprise Architect](http://www.sparxsystems.com.au/products/ea/index.html) from [Sparx](http://www.sparxsystems.com.au/).
It's lightweight compared to Rose (easier to buy and cheaper too) but extremely powerful. You can create great UML diagrams, database models, or anything else you want, all in a nice and organised way.
It greatly helps in the initial stages of the elaboration process, as you can create a domain model, do some preliminary use cases, map them to the requirements, and present all of it in a nice way to the customer. It helps me think, and I refactor my design with it until I am satisfied enough to start proper documentation.
It is also very good for database models, as it can reverse-engineer most databases very neatly.
The only (but quite serious) drawback it has in my eyes is that its documentation generator is, to put it mildly, crap. Getting a proper document from it is almost impossible unless you invest a significant amount of work in the templates and then it would be only OK. | I have used Rational Rose and a few other similar packages in the past. Mostly I have used them for the UML diagram elements and have not gone into the more detailed functionality such as code generation etc.
I mostly use them for aiding the design process and clarifying my own ideas. Often I find that, in trying to come up with a design for a component, I end up needing to write down / draw what I want to happen so I can get a clear overview in my mind of what needs to happen and why. I have found that in a lot of cases, what I end up trying to draw is essentially the same as a predefined kind of diagram in UML, such as a Use Case Diagram, and by then adopting that style, it becomes easier to get my ideas on paper as I have some framework to work within.
So, I use CASE tools principally for their UML / design tools at a highish, semi-abstract level. |
157,249 | Which Computer-aided Software Engineering tools do you use and why? In what ways do they increase your productivity or help you design your programs? Or, in case you do not use CASE tools, what are your reasons for this? | 2008/10/01 | [
"https://Stackoverflow.com/questions/157249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23163/"
] | The best CASE tool I had to work with is the [Enterprise Architect](http://www.sparxsystems.com.au/products/ea/index.html) from [Sparx](http://www.sparxsystems.com.au/).
It's lightweight compared to Rose (easier to buy and cheaper too) but extremely powerful. You can create great UML diagrams, database models, or anything else you want, all in a nice and organised way.
It greatly helps in the initial stages of the elaboration process, as you can create a domain model, do some preliminary use cases, map them to the requirements, and present all of it in a nice way to the customer. It helps me think, and I refactor my design with it until I am satisfied enough to start proper documentation.
It is also very good for database models, as it can reverse-engineer most databases very neatly.
The only (but quite serious) drawback it has in my eyes is that its documentation generator is, to put it mildly, crap. Getting a proper document from it is almost impossible unless you invest a significant amount of work in the templates and then it would be only OK. | Oracle Designer |
157,249 | Which Computer-aided Software Engineering tools do you use and why? In what ways do they increase your productivity or help you design your programs? Or, in case you do not use CASE tools, what are your reasons for this? | 2008/10/01 | [
"https://Stackoverflow.com/questions/157249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23163/"
] | The best CASE tool I had to work with is the [Enterprise Architect](http://www.sparxsystems.com.au/products/ea/index.html) from [Sparx](http://www.sparxsystems.com.au/).
It's lightweight compared to Rose (easier to buy and cheaper too) but extremely powerful. You can create great UML diagrams, database models, or anything else you want, all in a nice and organised way.
It greatly helps in the initial stages of the elaboration process, as you can create a domain model, do some preliminary use cases, map them to the requirements, and present all of it in a nice way to the customer. It helps me think, and I refactor my design with it until I am satisfied enough to start proper documentation.
It is also very good for database models, as it can reverse-engineer most databases very neatly.
The only (but quite serious) drawback it has in my eyes is that its documentation generator is, to put it mildly, crap. Getting a proper document from it is almost impossible unless you invest a significant amount of work in the templates and then it would be only OK. | Not using any. No money for them. |
2,649,194 | I am updating this post with what I think I now know about getting this configuration; HOWEVER, there is more to know, as I am still having a problem in one crucial area.
I use SQLite for unit testing, which now works fine, using the configuration steps below. I also use it when I want a test run of the UI with more data than in-memory test data but without the overhead of SQLServer - this configuration fails with the following:
```
{"Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4."}
```
Here is updated info on configs that DO work:
1) Which SQLite dll?? **There are some bad links out there that look helpful but that have build errors in them**. The *only* good download as of this date is [here at Source Forge](http://sourceforge.net/projects/sqlite-dotnet2/files/SQLite%20for%20ADO.NET%202.0/1.0.66.0/SQLite-1.0.66.0-binaries.zip/download): v1.0.66, which was released today, 4-18-2010.
2) Must you use the GAC? No, as answered by Mauricio.
3) x64 builds - as answered by Mauricio.
4) NHib driver - SQLite20Driver, as answered by Mauricio
5) FNH as a potential conflict - no, as answered by Mauricio
Cheers,
Berryl
== ADD'L DEBUG INFO ===
When the exception is hit and I call up the SQLite20Driver assembly, I get the following, which suggests to me that the driver *should* be available. I am wondering, though, as the configuration code is in a different assembly.
-- assembly when error ----
```
?typeof(SQLite20Driver).Assembly
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.WpfPresentation\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
--- assembly when unit testing (NO ERROR)
```
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.Tests\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
Here is the bootstrapper for a SQLite session:
```
/// <summary>SQLite-NHibernate bootstrapper for general use.</summary>
public class SQLiteBoot : IDisposable
{
public readonly ISessionFactory SessionFactory;
private readonly ISession _session;
private static Configuration _config;
private static string _persistenceModelGeneratorName;
public SQLiteBoot(IAutoPersistenceModelGenerator persistenceModelGenerator) {
if (_isSessionFactoryBuildRequired(persistenceModelGenerator)) {
_config = new Configuration()
.SetProperty(ENV.ReleaseConnections, "on_close")
.SetProperty(ENV.Dialect, typeof (SQLiteDialect).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionDriver, typeof (SQLite20Driver).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionString, "data source=:memory:")
.SetProperty(ENV.ProxyFactoryFactoryClass, typeof (ProxyFactoryFactory).AssemblyQualifiedName)
.SetProperty(ENV.CurrentSessionContextClass, typeof (ThreadStaticSessionContext).AssemblyQualifiedName);
_persistenceModelGeneratorName = persistenceModelGenerator.Name;
var persistenceModel = persistenceModelGenerator.Generate();
var fluentCfg = Fluently.Configure(_config).Mappings(m => m.AutoMappings.Add(persistenceModel));
SessionFactory = fluentCfg.BuildSessionFactory();
Check.Require(SessionFactory.GetAllClassMetadata().Count > 0, "No mapped classes - check your AutoPersistenceModel!");
}
_session = SessionFactory.OpenSession();
CurrentSessionContext.Bind(_session);
new SchemaExport(_config).Execute(true, true, false, _session.Connection, Console.Out);
}
private bool _isSessionFactoryBuildRequired(IAutoPersistenceModelGenerator persistenceModelGenerator)
{
return
_config == null
|| SessionFactory == null
|| !persistenceModelGenerator.Name.Equals(_persistenceModelGeneratorName);
}
public void Dispose()
{
_session.Dispose();
}
}
```
| 2010/04/15 | [
"https://Stackoverflow.com/questions/2649194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/95245/"
] | 1. Sure. You can also use previous versions if you configure [mixed mode loading](https://stackoverflow.com/questions/2455654/net-4-0-project-reference-2-0-assembly).
2. No need to be in the GAC. You can use gacutil to remove the assemblies from the GAC.
3. Use the x64 DLL to target Windows x64 and x86 for Windows x86
4. Please post the full exception stack trace. Also if you're using a 3.5 assembly use [mixed mode loading](https://stackoverflow.com/questions/2455654/net-4-0-project-reference-2-0-assembly).
5. FNH has no reference to SQLite. | I want this to stand out so it will help someone else; the full reason this happens is explained [here](https://stackoverflow.com/questions/2697795/using-fluentnhibernate-with-net4); so adjust your config to use BOTH the redirect there in combo with the mixed loading mode referenced here by Mauricio. |
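For reference, a minimal sketch of the mixed-mode part of such a config (the element values are assumptions for a .NET 4.0 app; combine it with the binding redirect described in the linked question):
```
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- lets a .NET 4.0 process load the 2.0-era mixed-mode System.Data.SQLite assembly -->
    <supportedRuntime version="v4.0" />
  </startup>
</configuration>
```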
2,649,194 | I am updating this post with what I think I now know about getting this configuration; HOWEVER, there is more to know, as I am still having a problem in one crucial area.
I use SQLite for unit testing, which now works fine, using the configuration steps below. I also use it when I want a test run of the UI with more data than in-memory test data but without the overhead of SQLServer - this configuration fails with the following:
```
{"Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4."}
```
Here is updated info on configs that DO work:
1) Which SQLite dll?? **There are some bad links out there that look helpful but that have build errors in them**. The *only* good download as of this date is [here at Source Forge](http://sourceforge.net/projects/sqlite-dotnet2/files/SQLite%20for%20ADO.NET%202.0/1.0.66.0/SQLite-1.0.66.0-binaries.zip/download): v1.0.66, which was released today, 4-18-2010.
2) Must you use the GAC? No, as answered by Mauricio.
3) x64 builds - as answered by Mauricio.
4) NHib driver - SQLite20Driver, as answered by Mauricio
5) FNH as a potential conflict - no, as answered by Mauricio
Cheers,
Berryl
== ADD'L DEBUG INFO ===
When the exception is hit and I call up the SQLite20Driver assembly, I get the following, which suggests to me that the driver *should* be available. I am wondering, though, as the configuration code is in a different assembly.
-- assembly when error ----
```
?typeof(SQLite20Driver).Assembly
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.WpfPresentation\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
--- assembly when unit testing (NO ERROR)
```
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.Tests\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
Here is the bootstrapper for a SQLite session:
```
/// <summary>SQLite-NHibernate bootstrapper for general use.</summary>
public class SQLiteBoot : IDisposable
{
public readonly ISessionFactory SessionFactory;
private readonly ISession _session;
private static Configuration _config;
private static string _persistenceModelGeneratorName;
public SQLiteBoot(IAutoPersistenceModelGenerator persistenceModelGenerator) {
if (_isSessionFactoryBuildRequired(persistenceModelGenerator)) {
_config = new Configuration()
.SetProperty(ENV.ReleaseConnections, "on_close")
.SetProperty(ENV.Dialect, typeof (SQLiteDialect).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionDriver, typeof (SQLite20Driver).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionString, "data source=:memory:")
.SetProperty(ENV.ProxyFactoryFactoryClass, typeof (ProxyFactoryFactory).AssemblyQualifiedName)
.SetProperty(ENV.CurrentSessionContextClass, typeof (ThreadStaticSessionContext).AssemblyQualifiedName);
_persistenceModelGeneratorName = persistenceModelGenerator.Name;
var persistenceModel = persistenceModelGenerator.Generate();
var fluentCfg = Fluently.Configure(_config).Mappings(m => m.AutoMappings.Add(persistenceModel));
SessionFactory = fluentCfg.BuildSessionFactory();
Check.Require(SessionFactory.GetAllClassMetadata().Count > 0, "No mapped classes - check your AutoPersistenceModel!");
}
_session = SessionFactory.OpenSession();
CurrentSessionContext.Bind(_session);
new SchemaExport(_config).Execute(true, true, false, _session.Connection, Console.Out);
}
private bool _isSessionFactoryBuildRequired(IAutoPersistenceModelGenerator persistenceModelGenerator)
{
return
_config == null
|| SessionFactory == null
|| !persistenceModelGenerator.Name.Equals(_persistenceModelGeneratorName);
}
public void Dispose()
{
_session.Dispose();
}
}
```
| 2010/04/15 | [
"https://Stackoverflow.com/questions/2649194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/95245/"
] | 1. Sure. You can also use previous versions if you configure [mixed mode loading](https://stackoverflow.com/questions/2455654/net-4-0-project-reference-2-0-assembly).
2. No need to be in the GAC. You can use gacutil to remove the assemblies from the GAC.
3. Use the x64 DLL to target Windows x64 and x86 for Windows x86
4. Please post the full exception stack trace. Also if you're using a 3.5 assembly use [mixed mode loading](https://stackoverflow.com/questions/2455654/net-4-0-project-reference-2-0-assembly).
5. FNH has no reference to SQLite. | I had the same problem, and found little or no help in all the forum and blog posts.
Note that this problem is specific to a case meeting *all* of the following criteria:
- using SQLite
- with System.Data.SqlLite
- on an x64 machine
- and NHibernate (2.1.2.4 in my case)
That chunk of config in my web.config (or app.config for my unit tests) got it to work. I had to qualify the assembly to be sure it loads correctly.
```
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<qualifyAssembly
partialName="System.Data.SQLite"
fullName="System.Data.SQLite, Version=1.0.66.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64" />
</assemblyBinding>
</runtime>
</configuration>
```
Somewhere in its inner plumbing, during the mapping using scanned assemblies, NHibernate creates an Assembly object using its partial name, as a string, "System.Data.SQLite". Somehow, the x86 version of the assembly got loaded.
The above configuration made sure that using the partial name to load an assembly would provide the x64 version.
EDIT: I use version 1.0.66.0 and took the DLL under the bin\x64 folder in the file SQLite-1.0.66.0-binaries.zip available on sourceforge [here](http://sourceforge.net/projects/sqlite-dotnet2/files/ "here"). |
2,649,194 | I am updating this post with what I think I now know about getting this configuration; HOWEVER, there is more to know, as I am still having a problem in one crucial area.
I use SQLite for unit testing, which now works fine, using the configuration steps below. I also use it when I want a test run of the UI with more data than in-memory test data but without the overhead of SQLServer - this configuration fails with the following:
```
{"Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4."}
```
Here is updated info on configs that DO work:
1) Which SQLite dll?? **There are some bad links out there that look helpful but that have build errors in them**. The *only* good download as of this date is [here at Source Forge](http://sourceforge.net/projects/sqlite-dotnet2/files/SQLite%20for%20ADO.NET%202.0/1.0.66.0/SQLite-1.0.66.0-binaries.zip/download): v1.0.66, which was released today, 4-18-2010.
2) Must you use the GAC? No, as answered by Mauricio.
3) x64 builds - as answered by Mauricio.
4) NHib driver - SQLite20Driver, as answered by Mauricio
5) FNH as a potential conflict - no, as answered by Mauricio
Cheers,
Berryl
== ADD'L DEBUG INFO ===
When the exception is hit and I call up the SQLite20Driver assembly, I get the following, which suggests to me that the driver *should* be available. I am wondering, though, as the configuration code is in a different assembly.
-- assembly when error ----
```
?typeof(SQLite20Driver).Assembly
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.WpfPresentation/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.WpfPresentation\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
--- assembly when unit testing (NO ERROR)
```
{NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
[System.Reflection.RuntimeAssembly]: {NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4}
CodeBase: "file:///C:/Users/Lord & Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
EntryPoint: null
EscapedCodeBase: "file:///C:/Users/Lord%20%26%20Master/Documents/Projects/Smack/trunk/src/ConstructionAdmin.Tests/bin/Debug/NHibernate.DLL"
Evidence: {System.Security.Policy.Evidence}
FullName: "NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4"
GlobalAssemblyCache: false
HostContext: 0
ImageRuntimeVersion: "v2.0.50727"
IsDynamic: false
IsFullyTrusted: true
Location: "C:\\Users\\Lord & Master\\Documents\\Projects\\Smack\\trunk\\src\\ConstructionAdmin.Tests\\bin\\Debug\\NHibernate.dll"
ManifestModule: {NHibernate.dll}
PermissionSet: {<PermissionSet class="System.Security.PermissionSet"
version="1"
Unrestricted="true"/>
}
ReflectionOnly: false
SecurityRuleSet: Level1
```
Here is the bootstrapper for a SQLite session:
```
/// <summary>SQLite-NHibernate bootstrapper for general use.</summary>
public class SQLiteBoot : IDisposable
{
public readonly ISessionFactory SessionFactory;
private readonly ISession _session;
private static Configuration _config;
private static string _persistenceModelGeneratorName;
public SQLiteBoot(IAutoPersistenceModelGenerator persistenceModelGenerator) {
if (_isSessionFactoryBuildRequired(persistenceModelGenerator)) {
_config = new Configuration()
.SetProperty(ENV.ReleaseConnections, "on_close")
.SetProperty(ENV.Dialect, typeof (SQLiteDialect).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionDriver, typeof (SQLite20Driver).AssemblyQualifiedName)
.SetProperty(ENV.ConnectionString, "data source=:memory:")
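                // NB: a SQLite ":memory:" database lives only as long as its
                // connection; ReleaseConnections=on_close (set above) keeps the
                // connection open for the session's lifetime so the exported
                // schema is not lost mid-test.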
.SetProperty(ENV.ProxyFactoryFactoryClass, typeof (ProxyFactoryFactory).AssemblyQualifiedName)
.SetProperty(ENV.CurrentSessionContextClass, typeof (ThreadStaticSessionContext).AssemblyQualifiedName);
_persistenceModelGeneratorName = persistenceModelGenerator.Name;
var persistenceModel = persistenceModelGenerator.Generate();
var fluentCfg = Fluently.Configure(_config).Mappings(m => m.AutoMappings.Add(persistenceModel));
SessionFactory = fluentCfg.BuildSessionFactory();
Check.Require(SessionFactory.GetAllClassMetadata().Count > 0, "No mapped classes - check your AutoPersistenceModel!");
}
_session = SessionFactory.OpenSession();
CurrentSessionContext.Bind(_session);
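        // Export the schema over this session's own connection so the
        // tables are created in the same in-memory database the bound
        // session will use.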
new SchemaExport(_config).Execute(true, true, false, _session.Connection, Console.Out);
}
private bool _isSessionFactoryBuildRequired(IAutoPersistenceModelGenerator persistenceModelGenerator)
{
return
_config == null
|| SessionFactory == null
|| !persistenceModelGenerator.Name.Equals(_persistenceModelGeneratorName);
}
public void Dispose()
{
_session.Dispose();
}
}
```
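For reference, a typical consumption of the bootstrapper looks roughly like this (a sketch only; `TestModelGenerator` is a placeholder name for an IAutoPersistenceModelGenerator implementation):
```
using (var boot = new SQLiteBoot(new TestModelGenerator())) // placeholder generator
{
    // The constructor has already bound a session to the thread-static
    // context, so the factory's current session is ready to use.
    var session = boot.SessionFactory.GetCurrentSession();
    // ... save and reload mapped entities against the in-memory schema ...
}
```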
} | 2010/04/15 | [
"https://Stackoverflow.com/questions/2649194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/95245/"
] | I want this to stand out so it will help someone else; the full reason this happens is explained [here](https://stackoverflow.com/questions/2697795/using-fluentnhibernate-with-net4), so adjust your config to use BOTH the redirect there in combination with the mixed loading mode referenced here by Mauricio. | I had the same problem, and found little or no help in any of the forum and blog posts.
Note that this problem is specific to a case that meets *all* of the following criteria:
- using SQLite
- with System.Data.SQLite
- on an x64 machine
- and NHibernate (2.1.2.4 in my case)
This chunk of config in my web.config (or app.config for my unit tests) got it to work. I had to fully qualify the assembly to be sure the right one loads.
```
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<qualifyAssembly
partialName="System.Data.SQLite"
fullName="System.Data.SQLite, Version=1.0.66.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64" />
</assemblyBinding>
</runtime>
</configuration>
```
Somewhere in its inner plumbing, while mapping the scanned assemblies, NHibernate creates an Assembly object from its partial name as a string, "System.Data.SQLite". Somehow, the x86 version of the assembly got loaded.
The above configuration made sure that using the partial name to load an assembly would provide the x64 version.
EDIT: I use version 1.0.66.0 and took the DLL under the bin\x64 folder in the file SQLite-1.0.66.0-binaries.zip available on sourceforge [here](http://sourceforge.net/projects/sqlite-dotnet2/files/ "here"). |
2,714,404 | I have a VBA form (in Excel if that matters) that contains text boxes.
On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box.
The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically.
It has shown up on Office 2003 as well as Office 2007 on two different computers.
Has anyone else encountered this problem and, if so, how did you fix it? | 2010/04/26 | [
"https://Stackoverflow.com/questions/2714404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/250385/"
] | I created a form with three text boxes. I entered characters and tabbed to the next box for some time without being able to duplicate your problem.
The only way I can get a tab into the text box is by entering Ctrl+Tab. This might be embarrassing, but Backspace removes it, so it is not a major issue. Is it possible that you are accidentally pressing Ctrl at the same time?
I occasionally find that if I mispress a key, the cursor jumps somewhere else on the screen. I am not quite sure what I mean by "mispress"; it seems to have something to do with pressing two keys at once. This seems to be a feature of modern keyboards and how they detect which key has been pressed, because I have encountered it on many different computers. The implication is that by mispressing a key, a control character (perhaps Tab or Ctrl+Tab) is generated.
I also tried the following, which worked; it conceals the problem by removing the tab and moving on to the next control.
```
Private Sub TextBox1_Change()
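    ' Chr(9) is a literal tab character; if one slipped into the box,
    ' strip it and move focus on, as the Tab key should have done.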
If InStr(1, TextBox1.Text, Chr(9)) <> 0 Then
TextBox1.Text = Replace(TextBox1.Text, Chr(9), "")
TextBox2.SetFocus
End If
End Sub
``` | This might solve the problem:
```
Public Sub MoveFocusToNextControl(xfrmFormName As UserForm, _
xctlCurrentControl As control)
Dim xctl As control
Dim lngTab As Long, lngNewTab As Long
On Error Resume Next
' Move focus to the next control in the tab order
lngTab = xctlCurrentControl.TabIndex + 1
For Each xctl In xfrmFormName.Controls
lngNewTab = xctl.TabIndex
' An error will occur if the control does not have a TabIndex property;
' skip over those controls.
If Err.Number = 0 Then
If lngNewTab = lngTab Then
xctl.SetFocus
Exit For
End If
Else
Err.Clear
End If
Next xctl
Set xctl = Nothing
Err.Clear
End Sub
``` |
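A possible way to wire this helper up - assuming a text box named `TextBox1` on the form - is to intercept the Tab key in its KeyDown event and route it through the sub (a sketch, not a tested fix for the sporadic gremlin itself):
```
Private Sub TextBox1_KeyDown(ByVal KeyCode As MSForms.ReturnInteger, _
                             ByVal Shift As Integer)
    If KeyCode = vbKeyTab Then
        KeyCode = 0                          ' swallow the stray tab
        MoveFocusToNextControl Me, TextBox1  ' hand focus to the next control
    End If
End Sub
```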
2,714,404 | I have a VBA form (in Excel if that matters) that contains text boxes.
On three occasions, I have found myself pressing the tab key to navigate to the next control, but instead an actual TAB is being put in the text box.
The form normally acts as it should, but it does concern me that this gremlin is showing up sporadically.
It has shown up on Office 2003 as well as Office 2007 on two different computers.
Has anyone else encountered this problem and, if so, how did you fix it? | 2010/04/26 | [
"https://Stackoverflow.com/questions/2714404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/250385/"
] | I have also had this behaviour on my coworker's computer for several years now, while mine works fine. I have set the TabStop property of all the checkboxes to False. It seems to work fine now. | Set the `TabKeyBehavior` property of the text boxes to `False` to get "Tab jumps to the next field" behavior.
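For completeness, a sketch of applying that property to every text box on a form (assuming standard MSForms controls and the stock Initialize event):
```
Private Sub UserForm_Initialize()
    Dim ctl As Object
    For Each ctl In Me.Controls
        If TypeName(ctl) = "TextBox" Then
            ' False = Tab moves focus; True = Tab types a tab character
            ctl.TabKeyBehavior = False
        End If
    Next ctl
End Sub
```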