doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1505.00521 | 33 | We trained our model using SGD with a fixed learning rate of 0.05 and a fixed momentum of 0.9. We used a batch of size 200, which we found to work better than smaller batch sizes (such as 50 or 20). We normalized the gradient by batch size but not by sequence length. We independently clip the norm of the gradients w.r.t. the RL-NTM parameters to 5, and the gradient w.r.t. the baseline network to 2. We initialize the RL-NTM controller and the baseline model using a Gaussian with standard deviation 0.1. We used an inverse temperature of 0.01 for the different action distributions. Doing so reduced the effective learning rate of the Reinforce derivatives. The memory consists of 35 real values through which we backpropagate. The initial memory state and the controller's initial hidden states were set to the zero vector. | 1505.00521#33 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
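The training details in the row above map onto a standard SGD configuration. Below is a minimal sketch of how they fit together, assuming a PyTorch-style training loop; `controller` and `baseline_net` are hypothetical stand-in modules, since the RL-NTM architecture itself is not reproduced here.

```python
# Sketch of the optimization setup described above (PyTorch-style).
# `controller` and `baseline_net` are hypothetical stand-ins, not the RL-NTM.
import torch

controller = torch.nn.Linear(35, 35)    # stand-in for the RL-NTM controller
baseline_net = torch.nn.Linear(35, 1)   # stand-in for the baseline network

# Gaussian initialization with standard deviation 0.1, as stated above.
for module in (controller, baseline_net):
    for p in module.parameters():
        torch.nn.init.normal_(p, mean=0.0, std=0.1)

opt = torch.optim.SGD(
    list(controller.parameters()) + list(baseline_net.parameters()),
    lr=0.05, momentum=0.9)               # fixed learning rate and momentum

BATCH_SIZE = 200                          # gradients normalized by batch size only
INV_TEMPERATURE = 0.01                    # scales action logits before the softmax

def clip_and_step(loss):
    opt.zero_grad()
    (loss / BATCH_SIZE).backward()        # normalize by batch, not sequence length
    # Separate clipping thresholds for the two parameter groups.
    torch.nn.utils.clip_grad_norm_(controller.parameters(), max_norm=5.0)
    torch.nn.utils.clip_grad_norm_(baseline_net.parameters(), max_norm=2.0)
    opt.step()
```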
1505.00521 | 34 | [Execution-trace figures for the ForwardReverse and RepeatInput tasks; rows show the input-tape, memory, and output-tape pointers at each time step (see Figure 7).] | 1505.00521#34 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 35 | [Figure 7: execution-trace panels; the vertical axis is time.]
Figure 7: (Left) Trace of the ForwardReverse solution; (right) trace of RepeatInput. The vertical axis depicts execution time. The rows show the input pointer, output pointer, and memory pointer (with the ∗ symbol) at each step of the RL-NTM's execution. Note that we represent the set {1, . . . , 30} with 30 distinct symbols, and lack of prediction with #.
The ForwardReverse task is particularly interesting. In order to solve the problem, the RL-NTM has to move to the end of the sequence without making any predictions. While doing so, it has to store the input sequence in its memory (encoded in real values), and use its memory when reversing the sequence (Fig. 7). | 1505.00521#35 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 36 | We have also experimented with a number of additional tasks but with less empirical success. Tasks we found to be too difficult include sorting and long integer addition (in base 3 for simplicity), and RepeatCopy when the input tape is forced to only move forward. While we were able to achieve reasonable performance on the sorting task, the RL-NTM learned an ad-hoc algorithm and made excessive use of its controller memory in order to sort the sequence.
Empirically, we found all the components of the RL-NTM essential to successfully solving these problems. All our tasks are either solvable in under 20,000 parameter updates or fail in an arbitrary number of updates. We were completely unable to solve RepeatCopy, Reverse, and ForwardReverse with the LSTM controller, but we succeeded with the direct access controller. Moreover, we were unable to solve any of these problems at all without a curriculum (except for short sequences of length 5). We present more traces for our tasks in the supplementary material (together with failure traces).
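The curriculum schedule itself is not specified in this excerpt; the sketch below shows one common pattern: grow the maximum task length once recent accuracy clears a threshold. Every constant (start length, window size, threshold) is invented for illustration.

```python
import random

class LengthCurriculum:
    """Illustrative curriculum: grow the task length as recent success improves."""
    def __init__(self, start_len=5, max_len=30, advance_at=0.9, window=100):
        self.cur_len, self.max_len = start_len, max_len
        self.advance_at, self.window = advance_at, window
        self.recent = []

    def sample_length(self):
        # Also replay shorter instances so earlier skills are retained.
        return random.randint(1, self.cur_len)

    def report(self, solved: bool):
        self.recent.append(solved)
        if len(self.recent) >= self.window:
            if sum(self.recent) / len(self.recent) >= self.advance_at:
                self.cur_len = min(self.cur_len + 1, self.max_len)
            self.recent = []
```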
# 10 CONCLUSIONS
We have shown that the Reinforce algorithm is capable of training an NTM-style model to solve very simple algorithmic problems. While the Reinforce algorithm is very general and is easily applicable to a wide range of problems, it seems that learning memory access patterns with Reinforce is difficult. | 1505.00521#36 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 37 | Our gradient checking procedure for Reinforce can be applied to a wide variety of implementations. We also found it extremely useful: without it, we had no way of being sure that our gradient was correct, which made debugging and tuning much more difficult.
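The paper's own checking procedure is not reproduced in this excerpt, but the idea can be illustrated on a problem small enough to enumerate: compute the exact expected reward by summing over all action sequences, differentiate it by finite differences, and compare against the exact expectation of the Reinforce gradient. Everything below (the two-step decision problem, the reward table, the tolerance) is invented for the illustration.

```python
# Gradient check for Reinforce on a toy problem small enough to enumerate.
import itertools
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=(2, 3))          # logits for 3 actions at 2 time steps
REWARD = rng.normal(size=(3, 3))         # fixed reward R(a1, a2)

def log_probs(th):
    z = th - th.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def expected_reward(th):
    lp = log_probs(th)
    return sum(np.exp(lp[0, a1] + lp[1, a2]) * REWARD[a1, a2]
               for a1, a2 in itertools.product(range(3), repeat=2))

def reinforce_grad(th):
    # Exact expectation of the estimator: sum over all action sequences of
    # p(a) * grad log p(a) * R(a).
    lp = log_probs(th)
    g = np.zeros_like(th)
    for a1, a2 in itertools.product(range(3), repeat=2):
        p = np.exp(lp[0, a1] + lp[1, a2])
        glogp = np.zeros_like(th)
        glogp[0] = -np.exp(lp[0]); glogp[0, a1] += 1.0   # onehot - softmax
        glogp[1] = -np.exp(lp[1]); glogp[1, a2] += 1.0
        g += p * glogp * REWARD[a1, a2]
    return g

# Central finite differences of the exact expected reward.
numeric = np.zeros_like(theta)
eps = 1e-5
for idx in np.ndindex(*theta.shape):
    d = np.zeros_like(theta); d[idx] = eps
    numeric[idx] = (expected_reward(theta + d) - expected_reward(theta - d)) / (2 * eps)

assert np.allclose(reinforce_grad(theta), numeric, atol=1e-6)
```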
# 11 ACKNOWLEDGMENTS
We thank Christopher Olah for the LSTM figure used in this paper, and Tencia Lee for revising the paper.
# REFERENCES
Aberdeen, Douglas and Baxter, Jonathan. Scaling internal-state policy-gradient methods for POMDPs. In Proceedings of the International Conference on Machine Learning, pp. 3-10, 2002.
Ba, Jimmy, Mnih, Volodymyr, and Kavukcuoglu, Koray. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
Bengio, Yoshua, Louradour, Jérôme, Collobert, Ronan, and Weston, Jason. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48. ACM, 2009.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014a. | 1505.00521#37 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 38 | Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014a.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014b.
Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. Learning to transduce with unbounded memory. arXiv preprint arXiv:1506.02516, 2015.
Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015.
Kohl, Nate and Stone, Peter. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on, volume 3, pp. 2619-2624. IEEE, 2004.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. | 1505.00521#38 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 39 | Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.
Peters, Jan and Schaal, Stefan. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 2219-2225. IEEE, 2006.
Schmidhuber, Juergen. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012.
Schmidhuber, Jürgen. Optimal ordered problem solver. Machine Learning, 54(3):211-254, 2004.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. Weakly supervised memory networks. arXiv preprint arXiv:1503.08895, 2015. | 1505.00521#39 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 40 | Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
# APPENDIX A: DETAILED REINFORCE EXPLANATION
We present here several techniques to decrease the variance of the gradient estimate for Reinforce. We employed all of these tricks in our RL-NTM implementation. We expand the notation introduced in Sec. 4. Let $A^\ddagger$ denote all valid subsequences of actions (i.e., $A^\dagger \subseteq A^\ddagger \subseteq A^*$). Moreover, we define the set of action sequences that are valid after executing a sequence $a_{1:t}$ and that terminate an episode; we denote this set by $A^\ddagger_{a_{1:t}}$.
# CAUSALITY OF ACTIONS
Actions at time t cannot possibly influence rewards obtained in the past, because past rewards are caused by actions prior to them. This idea lets us derive an unbiased estimator of $\partial_\theta J(\theta)$ with lower variance. Here, we formalize it: | 1505.00521#40 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 42 | $$\begin{aligned} \partial_\theta J(\theta) &= \sum_{a_{1:T} \in A^\dagger} p_\theta(a)\,[\partial_\theta \log p_\theta(a)]\, R(a) \\ &= \sum_{a_{1:T} \in A^\dagger} p_\theta(a)\,[\partial_\theta \log p_\theta(a)] \Big[\sum_{t=1}^{T} r(a_{1:t})\Big] \\ &= \sum_{a_{1:T} \in A^\dagger} p_\theta(a) \sum_{t=1}^{T} \big[\partial_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + \partial_\theta \log p_\theta(a_{(t+1):T}\,|\,a_{1:t})\, r(a_{1:t})\big] \\ &= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \big[p_\theta(a_{1:t})\, \partial_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + p_\theta(a)\, \partial_\theta \log p_\theta(a_{(t+1):T}\,|\,a_{1:t})\, r(a_{1:t})\big] \\ &= \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \big[p_\theta(a_{1:t})\, \partial_\theta \log p_\theta(a_{1:t})\, r(a_{1:t}) + p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta p_\theta(a_{(t+1):T}\,|\,a_{1:t})\big] \\ &= \sum_{a_{1:T} \in A^\dagger} \Big[\sum_{t=1}^{T} p_\theta(a_{1:t})\, \partial_\theta \log p_\theta(a_{1:t})\, r(a_{1:t})\Big] + \sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \big[p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta p_\theta(a_{(t+1):T}\,|\,a_{1:t})\big] \end{aligned}$$ | 1505.00521#42 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 43 | $$\sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \big[p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta p_\theta(a_{(t+1):T}\,|\,a_{1:t})\big]$$
We will show that this term (the right side of the equation above) is equal to zero: the future actions $a_{(t+1):T}$ do not influence past rewards $r(a_{1:t})$. We formalize this using the identity $\sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T}\,|\,a_{1:t}) = 1$:
$$\sum_{a_{1:T} \in A^\dagger} \sum_{t=1}^{T} \big[p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta p_\theta(a_{(t+1):T}\,|\,a_{1:t})\big] = \sum_{a_{1:t} \in A^\ddagger} \Big[p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta \sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T}\,|\,a_{1:t})\Big] = \sum_{a_{1:t} \in A^\ddagger} p_\theta(a_{1:t})\, r(a_{1:t})\, \partial_\theta 1 = 0$$
We can therefore drop the right side of the equation for $\partial_\theta J(\theta)$:
$$\partial_\theta J(\theta) = \sum_{a_{1:T} \in A^\dagger} \Big[\sum_{t=1}^{T} p_\theta(a_{1:t})\, \partial_\theta \log p_\theta(a_{1:t})\, r(a_{1:t})\Big] = \mathbb{E}_{a_1 \sim p_\theta(a)} \mathbb{E}_{a_2 \sim p_\theta(a|a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a|a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)}) \sum_{i=t}^{T} r(a_{1:i})\Big]$$ | 1505.00521#43 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 44 | $$\partial_\theta J(\theta) = \mathbb{E}_{a_1 \sim p_\theta(a)} \mathbb{E}_{a_2 \sim p_\theta(a|a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a|a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)}) \sum_{i=t}^{T} r(a_{1:i})\Big]$$
The last line of the derivation describes the learning algorithm, which can be implemented as follows. A neural network outputs $l_t = \log p_\theta(a_t\,|\,a_{1:(t-1)})$. We sequentially sample an action $a_t$ from the distribution $e^{l_t}$ and execute it, experiencing a reward $r(a_{1:t})$. We backpropagate to the node $\partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)})$ the sum of rewards starting from time step $t$, namely $\sum_{i=t}^{T} r(a_{1:i})$. The only difference from the initial algorithm is that we backpropagate the sum of rewards starting from the current time step, instead of the sum of rewards over the entire episode.
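A minimal sketch of this sampling-and-backpropagation loop, assuming a PyTorch policy that maps the current observation to action logits; `policy`, `env`, and the reward interface are placeholders, not the paper's actual RL-NTM code.

```python
import torch

def reinforce_episode(policy, env):
    obs = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(obs)                          # l_t over the action set
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))       # log p_theta(a_t | a_1:t-1)
        obs, r, done = env.step(action.item())
        rewards.append(r)

    # Reward-to-go: the coefficient on grad log p(a_t | .) is sum_{i>=t} r_i,
    # not the whole-episode return.
    T = len(rewards)
    to_go = [sum(rewards[t:]) for t in range(T)]
    loss = -sum(lp * rg for lp, rg in zip(log_probs, to_go))
    return loss   # caller runs loss.backward() and an optimizer step
```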
# ONLINE BASELINE PREDICTION
Online baseline prediction rests on the idea that the importance of a reward is determined by its relation to other rewards. Shifting all rewards by a constant does not change these relations, so it should not affect the expected gradient; it can, however, decrease the variance of the gradient estimate.
This shift is called the baseline, and it can be estimated separately for every time step. We have that: | 1505.00521#44 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 45 | This shift is called the baseline, and it can be estimated separately for every time step. We have that:
$$\sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T}\,|\,a_{1:t}) = 1 \qquad\Longrightarrow\qquad \partial_\theta \sum_{a_{(t+1):T} \in A^\ddagger_{a_{1:t}}} p_\theta(a_{(t+1):T}\,|\,a_{1:t}) = 0$$
We are allowed to subtract the above quantity (multiplied by $b_t$) from our estimate of the gradient without changing its expected value:
$$\partial_\theta J(\theta) = \mathbb{E}_{a_1 \sim p_\theta(a)} \mathbb{E}_{a_2 \sim p_\theta(a|a_1)} \cdots \mathbb{E}_{a_T \sim p_\theta(a|a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big]$$
The above statement holds for any sequence of $b_t$; we aim to find the sequence $b_t$ that yields the lowest-variance estimator of $\partial_\theta J(\theta)$. The variance of our estimator is: | 1505.00521#45 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 46 | $$\mathrm{Var} = \mathbb{E}_{a_1 \sim p_\theta(a)} \cdots \mathbb{E}_{a_T \sim p_\theta(a|a_{1:(T-1)})} \Big[\Big(\sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big)^2\Big] - \Big(\mathbb{E}_{a_1 \sim p_\theta(a)} \cdots \mathbb{E}_{a_T \sim p_\theta(a|a_{1:(T-1)})} \Big[\sum_{t=1}^{T} \partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)}) \sum_{i=t}^{T} \big(r(a_{1:i}) - b_t\big)\Big]\Big)^2$$
The second term does not depend on $b_t$, and the variance is always positive, so it suffices to minimize the first term. The first term is minimal when its derivative with respect to $b_t$ is zero. This implies | 1505.00521#46 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 47 | $$\mathbb{E}\Big[\big(\partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)})\big)^2 \Big(\sum_{i=t}^{T} r(a_{1:i}) - b_t\Big)\Big] = 0 \qquad\Longrightarrow\qquad b_t = \frac{\mathbb{E}\big[\big(\partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)})\big)^2 \sum_{i=t}^{T} r(a_{1:i})\big]}{\mathbb{E}\big[\big(\partial_\theta \log p_\theta(a_t\,|\,a_{1:(t-1)})\big)^2\big]}$$
This gives an estimate of a vector $b_t \in \mathbb{R}^{\#\theta}$. However, it is common to use a single scalar $b_t \in \mathbb{R}$, and to estimate it as $\mathbb{E}_{p_\theta(a_{t:T}|a_{1:(t-1)})} R(a_{t:T})$.
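A small numeric check of the variance-reduction claim, on a toy three-armed bandit with invented reward means: subtracting the scalar baseline b = E[R] leaves the Monte Carlo mean of the Reinforce gradient essentially unchanged while shrinking its variance.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(3)
mean_reward = np.array([1.0, 2.0, 3.0])   # invented for the illustration

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_samples(baseline, n=200_000):
    p = softmax(theta)
    a = rng.choice(3, size=n, p=p)
    r = mean_reward[a] + rng.normal(size=n)          # noisy rewards
    glogp = -p + np.eye(3)[a]                        # grad log p(a) per sample
    return glogp * (r - baseline)[:, None]

g_plain = grad_samples(baseline=0.0)
g_base = grad_samples(baseline=mean_reward @ softmax(theta))
print(g_plain.mean(axis=0), g_base.mean(axis=0))            # nearly identical means
print(g_plain.var(axis=0).sum(), g_base.var(axis=0).sum())  # baseline variance is smaller
```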
# OFFLINE BASELINE PREDICTION
The Reinforce algorithm works much better whenever it has accurate baselines, and a separate LSTM can help with baseline estimation. First, run the baseline LSTM on the entire input tape to produce a vector summarizing the input. Next, continue running the baseline LSTM in tandem with the controller LSTM,
| 1505.00521#47 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 48 | [Figure 8 diagram: the baseline LSTM first reads the input tape, then runs in tandem with the RL-NTM, emitting baselines $b_{t=1}, b_{t=2}, b_{t=3}, \ldots$]
Figure 8: The baseline LSTM computes a baseline bt for every computational step t of the RL-NTM. The baseline LSTM receives the same inputs as the RL-NTM, and it computes a baseline bt for time t before observing the chosen actions of time t. However, it is important to first provide the baseline LSTM with the entire input tape as a preliminary input, because doing so allows the baseline LSTM to accurately estimate the true difficulty of a given problem instance and therefore compute better baselines. For example, if a problem instance is unusually difficult, then we expect R1 to be large and negative. If the baseline LSTM is given the entire input tape as an auxiliary input, it can compute an appropriately large and negative b1.
so that the baseline LSTM receives precisely the same inputs as the controller LSTM, and outputs a baseline $b_t$ at each timestep $t$. The baseline LSTM is trained to minimize $\sum_{t=1}^{T} \big[R(a_{t:T}) - b_t\big]^2$ (Fig. 8). This technique introduces a biased estimator; however, it works well in practice. | 1505.00521#48 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 49 | We found it important to first have the baseline LSTM go over the entire input before computing the baselines $b_t$. This is especially beneficial whenever there is considerable variation in the difficulty of the examples. For example, if the baseline LSTM can recognize that the current instance is unusually difficult, it can output a large negative value for $b_{t=1}$ in anticipation of a large and negative $R_1$. In general, it is cheap and therefore worthwhile to provide the baseline network with all of the available information, even if this information would not be available at test time, because the baseline network is not needed at test time.
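A minimal PyTorch sketch of this two-phase scheme; the tensor shapes, hidden size, and the reuse of one LSTM for both phases are illustrative assumptions, not the paper's exact architecture.

```python
import torch
from torch import nn

class BaselineLSTM(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, input_tape, step_inputs):
        # Phase 1: read the entire input tape so the baseline can gauge the
        # difficulty of this particular instance before any action is taken.
        _, state = self.lstm(input_tape)
        # Phase 2: run in tandem with the controller, one output b_t per step,
        # conditioned only on inputs available before step t's action.
        out, _ = self.lstm(step_inputs, state)
        return self.head(out).squeeze(-1)      # (batch, T) baselines

# Training target: cumulative future reward R_t at each step.
def baseline_loss(b, returns):                 # both shaped (batch, T)
    return ((returns - b) ** 2).mean()
```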
# APPENDIX B: EXECUTION TRACES
We present several execution traces of the RL-NTM. Each figure shows execution traces of the trained RL-NTM on each of the tasks. The first row shows the input tape and the desired output, while each subsequent row shows the RL-NTM's position on the input tape and its prediction for the output tape. In these examples, the RL-NTM solved each task perfectly, so the predictions made in the output tape perfectly match the desired outputs listed in the first row.
| 1505.00521#49 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00521 | 50 | [Execution-trace figures: the Reverse task (left) and the ForwardReverse task (right).]
An RL-NTM successfully solving a small instance of the Reverse problem (where the external memory is not used).
An RL-NTM successfully solving a small instance of the ForwardReverse problem, where the external memory is used.
[Execution-trace figures: the RepeatCopy task, without and with the memory tape shown.]
An RL-NTM successfully solving an instance of the RepeatCopy problem where the input is to be repeated three times. | 1505.00521#50 | Reinforcement Learning Neural Turing Machines - Revised | The Neural Turing Machine (NTM) is more expressive than all previously
considered models because of its external memory. It can be viewed as a broader
effort to use abstract external Interfaces and to learn a parametric model that
interacts with them.
The capabilities of a model can be extended by providing it with proper
Interfaces that interact with the world. These external Interfaces include
memory, a database, a search engine, or a piece of software such as a theorem
verifier. Some of these Interfaces are provided by the developers of the model.
However, many important existing Interfaces, such as databases and search
engines, are discrete.
We examine feasibility of learning models to interact with discrete
Interfaces. We investigate the following discrete Interfaces: a memory Tape, an
input Tape, and an output Tape. We use a Reinforcement Learning algorithm to
train a neural network that interacts with such Interfaces to solve simple
algorithmic tasks. Our Interfaces are expressive enough to make our model
Turing complete. | http://arxiv.org/pdf/1505.00521 | Wojciech Zaremba, Ilya Sutskever | cs.LG | null | null | cs.LG | 20150504 | 20160112 | [
{
"id": "1503.01007"
},
{
"id": "1506.02516"
},
{
"id": "1504.00702"
},
{
"id": "1503.08895"
}
] |
1505.00387 | 0 | # Highway Networks
# Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber
[email protected] [email protected] [email protected]
The Swiss AI Lab IDSIA, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Università della Svizzera italiana (USI), Scuola universitaria professionale della Svizzera italiana (SUPSI), Galleria 2, 6928 Manno-Lugano, Switzerland
# Abstract | 1505.00387#0 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 1 | # Abstract
There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures.
instance, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ~84% (Krizhevsky et al., 2012) to ~95% (Szegedy et al., 2014; Simonyan & Zisserman, 2014) through the use of ensembles of deeper architectures and smaller receptive fields (Ciresan et al., 2011a;b; 2012) in just a few years. | 1505.00387#1 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 2 | On the theoretical side, it is well known that deep networks can represent certain function classes exponentially more efficiently than shallow ones (e.g. the work of Håstad (1987); Håstad & Goldmann (1991) and recently of Montufar et al. (2014)). As argued by Bengio et al. (2013), the use of deep networks can offer both computational and statistical efficiency for complex tasks.
However, training deeper networks is not as straightforward as simply adding layers. Optimization of deep networks has proven to be considerably more difficult, leading to research on initialization schemes (Glorot & Bengio, 2010; Saxe et al., 2013; He et al., 2015), techniques of training networks in multiple stages (Simonyan & Zisserman, 2014; Romero et al., 2014) or with temporary companion loss functions attached to some of the layers (Szegedy et al., 2014; Lee et al., 2015).
Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.
# 1. Introduction | 1505.00387#2 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 3 | # 1. Introduction
Many recent empirical breakthroughs in supervised machine learning have been achieved through the application of deep neural networks. Network depth (referring to the number of successive computation layers) has played perhaps the most important role in these successes. For
In this extended abstract, we present a novel architecture that enables the optimization of networks with virtually arbitrary depth. This is accomplished through the use of a learned gating mechanism for regulating information flow which is inspired by Long Short Term Memory recurrent neural networks (Hochreiter & Schmidhuber, 1995). Due to this gating mechanism, a neural network can have paths along which information can flow across several layers without attenuation. We call such paths information highways, and such networks highway networks.
In preliminary experiments, we found that highway net- works as deep as 900 layers can be optimized using simple Stochastic Gradient Descent (SGD) with momentum. For
| 1505.00387#3 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 4 | In preliminary experiments, we found that highway networks as deep as 900 layers can be optimized using simple Stochastic Gradient Descent (SGD) with momentum. For up to 100 layers we compare their training behavior to that of traditional networks with normalized initialization (Glorot & Bengio, 2010; He et al., 2015). We show that optimization of highway networks is virtually independent of depth, while for traditional networks it suffers significantly as the number of layers increases. We also show that architectures comparable to those recently presented by Romero et al. (2014) can be directly trained to obtain similar test set accuracy on the CIFAR-10 dataset without the need for a pre-trained teacher network.
# 1.1. Notation
We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 0 and 1 denote vectors of zeros and ones respectively, and I denotes an identity matrix. The function $\sigma(x)$ is defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$, $x \in \mathbb{R}$.
Similarly, for the Jacobian of the layer transform,
$$\frac{dy}{dx} = \begin{cases} I, & \text{if } T(x, W_T) = 0, \\ H'(x, W_H), & \text{if } T(x, W_T) = 1. \end{cases} \quad (5)$$ | 1505.00387#4 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 5 | $$\frac{dy}{dx} = \begin{cases} I, & \text{if } T(x, W_T) = 0, \\ H'(x, W_H), & \text{if } T(x, W_T) = 1. \end{cases} \quad (5)$$
Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of a plain layer and that of a layer which simply passes its inputs through. Just as a plain layer consists of multiple computing units such that the ith unit computes $y_i = H_i(x)$, a highway network consists of multiple blocks such that the ith block computes a block state $H_i(x)$ and transform gate output $T_i(x)$. Finally, it produces the block output $y_i = H_i(x) \cdot T_i(x) + x_i \cdot (1 - T_i(x))$, which is connected to the next layer.
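A PyTorch sketch of one fully connected highway layer implementing this block computation; the tanh block nonlinearity and the negative transform-gate bias (discussed later in this excerpt) are illustrative choices, not a reproduction of the authors' code.

```python
import torch
from torch import nn

class HighwayLayer(nn.Module):
    def __init__(self, dim, bias_init=-1.0):
        super().__init__()
        self.H = nn.Linear(dim, dim)
        self.T = nn.Linear(dim, dim)
        # Negative transform-gate bias biases the block toward carry behavior.
        nn.init.constant_(self.T.bias, bias_init)

    def forward(self, x):
        h = torch.tanh(self.H(x))          # block state H(x, W_H)
        t = torch.sigmoid(self.T(x))       # transform gate T(x, W_T)
        return h * t + x * (1.0 - t)       # y = H*T + x*(1 - T)
```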
# 2.1. Constructing Highway Networks
# 2. Highway Networks
A plain feedforward neural network typically consists of L layers where the lth layer (l ∈ {1, 2, ..., L}) applies a non-linear transform H (parameterized by W_{H,l}) on its input x_l to produce its output y_l. Thus, x_1 is the input to the network and y_L is the network's output. Omitting the layer index and biases for clarity, | 1505.00387#5 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 6 | As mentioned earlier, Equation (3) requires that the dimensionality of x, y, H(x, W_H) and T(x, W_T) be the same. In cases when it is desirable to change the size of the representation, one can replace x with x̂ obtained by suitably sub-sampling or zero-padding x. Another alternative is to use a plain layer (without highways) to change dimensionality and then continue with stacking highway layers. This is the alternative we use in this study.
y = H(x, WH). (1)
H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms.
Convolutional highway layers are constructed similar to fully connected layers. Weight-sharing and local receptive fields are utilized for both H and T transforms. We use zero-padding to ensure that the block state and transform gate feature maps are the same size as the input.
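A convolutional counterpart of the highway layer sketched earlier, again as an illustration; the kernel size and the ReLU block nonlinearity are assumptions, while the zero-padding follows the description above.

```python
import torch
from torch import nn

class ConvHighwayLayer(nn.Module):
    def __init__(self, channels, kernel_size=3, bias_init=-1.0):
        super().__init__()
        pad = kernel_size // 2              # zero-padding preserves spatial size
        self.H = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.T = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        nn.init.constant_(self.T.bias, bias_init)

    def forward(self, x):
        h = torch.relu(self.H(x))           # block state feature maps
        t = torch.sigmoid(self.T(x))        # transform gate feature maps
        return h * t + x * (1.0 - t)
```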
For a highway network, we additionally define two non-linear transforms T(x, W_T) and C(x, W_C) such that
y = H(x, W_H) · T(x, W_T) + x · C(x, W_C). (2)
# 2.2. Training Deep Highway Networks | 1505.00387#6 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 7 | y = H(x, W_H) · T(x, W_T) + x · C(x, W_C). (2)
We refer to T as the transform gate and C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. For simplicity, in this paper we set C = 1 − T, giving
y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T)). (3)
The dimensionality of x, y, H(x, W_H) and T(x, W_T) must be the same for Equation (3) to be valid. Note that this re-parametrization of the layer transformation is much more flexible than Equation (1). In particular, observe that
# 2.2. Training Deep Highway Networks
For plain deep networks, training with SGD stalls at the beginning unless a specific weight initialization scheme is used such that the variance of the signals during forward and backward propagation is preserved initially (Glorot & Bengio, 2010; He et al., 2015). This initialization depends on the exact functional form of H. | 1505.00387#7 | Highway Networks | There is plenty of theoretical and empirical evidence that depth of neural
networks is a crucial ingredient for their success. However, network training
becomes more difficult with increasing depth and training of very deep networks
remains an open problem. In this extended abstract, we introduce a new
architecture designed to ease gradient-based training of very deep networks. We
refer to networks with this architecture as highway networks, since they allow
unimpeded information flow across several layers on "information highways". The
architecture is characterized by the use of gating units which learn to
regulate the flow of information through a network. Highway networks with
hundreds of layers can be trained directly using stochastic gradient descent
and with a variety of activation functions, opening up the possibility of
studying extremely deep and efficient architectures. | http://arxiv.org/pdf/1505.00387 | Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber | cs.LG, cs.NE, 68T01, I.2.6; G.1.6 | 6 pages, 2 figures. Presented at ICML 2015 Deep Learning workshop.
Full paper is at arXiv:1507.06228 | null | cs.LG | 20150503 | 20151103 | [
{
"id": "1502.01852"
}
] |
1505.00387 | 8 | For highway layers, we use the transform gate defined as $T(x) = \sigma(W_T^{\mathsf{T}} x + b_T)$, where $W_T$ is the weight matrix and $b_T$ the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H: $b_T$ can be initialized with a negative value (e.g. -1, -3 etc.) such that the network is initially biased towards carry behavior. This scheme is strongly inspired by the proposal of Gers et al. (1999) to initially bias the gates in a Long Short-Term Memory recurrent network to help bridge long-term temporal dependencies early in learning. Note that $\sigma(x) \in (0, 1)$ for all $x \in \mathbb{R}$, so the conditions in Equation (4) can never be exactly true.
y = { x,            if T(x, W_T) = 0,
      H(x, W_H),    if T(x, W_T) = 1.    (4)
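A sketch of the initialization scheme just described, continuing the snippet above; the negative transform-gate bias comes from the text, while the Gaussian weight scale of 0.05 is an arbitrary illustrative choice (the experiments instead initialize weights following He et al., 2015).

```python
def init_highway_params(dim, gate_bias=-2.0, seed=0):
    """Initialize one highway layer of width `dim`, biased towards carrying."""
    rng = np.random.default_rng(seed)
    W_H = rng.normal(0.0, 0.05, size=(dim, dim))  # zero-mean weights for H
    W_T = rng.normal(0.0, 0.05, size=(dim, dim))  # zero-mean weights for T
    b_H = np.zeros(dim)
    b_T = np.full(dim, gate_bias)  # negative bias: T(x) starts close to 0
    return W_H, b_H, W_T, b_T
```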
In our experiments, we found that a negative bias initialization was sufficient for learning to proceed in very deep networks for various zero-mean initial distributions of W_H and different activation functions used by H. This is a significant property, since in general it may not be possible to find effective initialization schemes for many choices of H.
1505.00387 | 9 | address this question, we compared highway networks to the thin and deep architectures termed Fitnets proposed recently by Romero et al. (2014) on the CIFAR-10 dataset augmented with random translations. Results are summarized in Table 1.
# 3. Experiments
# 3.1. Optimization
Very deep plain networks become difficult to optimize even if using the variance-preserving initialization scheme from (He et al., 2015). To show that highway networks do not suffer from depth in the same way, we run a series of experiments on the MNIST digit classification dataset. We measure the cross-entropy error on the training set to investigate optimization, without conflating it with generalization issues.
1505.00387 | 10 | We train both plain networks and highway networks with the same architecture and varying depth. The first layer is always a regular fully-connected layer followed by 9, 19, 49, or 99 fully-connected plain or highway layers and a single softmax output layer. The number of units in each layer is kept constant: 50 for highway networks and 71 for plain networks, so that the number of parameters is roughly the same for both. To make the comparison fair we ran a random search of 40 runs for both plain and highway networks to find good settings for the hyperparameters. We optimized the initial learning rate, momentum, learning rate decay rate, activation function for H (either ReLU or tanh) and, for highway networks, the value for the transform gate bias (between -1 and -10). All other weights were initialized following the scheme introduced by (He et al., 2015).
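A quick back-of-the-envelope check (our arithmetic, assuming fully-connected layers with biases) shows why 50 highway units roughly match 71 plain units in parameter count: a highway layer carries two weight matrices and two bias vectors (for H and for the gate T) instead of one.

```python
plain_units, highway_units = 71, 50

# One plain layer: a weight matrix plus a bias vector.
plain_params = plain_units * plain_units + plain_units                 # 5112

# One highway layer: separate W, b for the H block and the T gate.
highway_params = 2 * (highway_units * highway_units + highway_units)  # 5100

print(plain_params, highway_params)  # 5112 5100 -- nearly identical
```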
1505.00387 | 11 | The convergence plots for the best performing networks for each depth can be seen in Figure 1. While 10-layer plain networks show very good performance, their performance degrades significantly as depth increases. Highway networks, on the other hand, do not seem to suffer from an increase in depth at all. The final result of the 100-layer highway network is about one order of magnitude better than that of the 10-layer one, and is on par with the 10-layer plain network. In fact, we started training a similar 900-layer highway network on CIFAR-100, which is only at 80 epochs as of now, but so far it has shown no signs of optimization difficulties. It is also worth pointing out that the highway networks always converge significantly faster than the plain ones.
1505.00387 | 12 | # 3.2. Comparison to Fitnets
Romero et al. (2014) reported that training using plain backpropagation was only possible for maxout networks with depth up to 5 layers when the number of parameters was limited to ~250K and the number of multiplications to ~30M. Training of deeper networks was only possible through the use of a two-stage training procedure and the addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). Similarly, it was only possible to train 19-layer networks with a budget of 2.5M parameters using hint-based training.
We found that it was easy to train highway networks with numbers of parameters and operations comparable to Fitnets directly using backpropagation. As shown in Table 1, Highway 1 and Highway 2, which are based on the architectures of Fitnet 1 and Fitnet 4 respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: a 19-layer highway network with ~1.4M parameters and a 32-layer highway network with ~1.25M parameters both perform similarly to the teacher network of Romero et al. (2014).
1505.00387 | 13 | # 4. Analysis
In Figure 2 we show some inspections of the inner workings of the best[1] 50-hidden-layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show, for each transform gate, the bias, the mean activity over 10K random samples, and the activity for a single random sample respectively. The block outputs for the same single sample are displayed in the last column.
The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that contrary to our expectations most biases actually decreased further during training. For the CIFAR-100 network the biases increase with depth, forming a gradient. Curiously this gradient is inversely correlated with the average activity of the transform gates as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. This effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network.
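To give a flavour of how such inspections can be computed, here is an illustrative sketch (not the authors' analysis code) that forwards a batch through a stack of highway layers, using the `sigmoid` and parameter tuples from the earlier snippets, and records the mean transform-gate output per layer, i.e. the quantity shown in the second column of Figure 2.

```python
def mean_gate_activity(X, layers):
    """Mean transform-gate output per layer for a batch X of shape (n, dim).

    `layers` is a list of (W_H, b_H, W_T, b_T) tuples as produced by
    init_highway_params above.
    """
    means = []
    for W_H, b_H, W_T, b_T in layers:
        T = sigmoid(X @ W_T.T + b_T)                      # gates, shape (n, dim)
        means.append(float(T.mean()))                     # layer-wise average
        X = np.tanh(X @ W_H.T + b_H) * T + X * (1.0 - T)  # Equation (3), batched
    return np.array(means)
```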
1505.00387 | 14 | Deep highway networks are easy to optimize, but are they also beneficial for supervised learning where we are interested in generalization performance on a test set? To
[1] Obtained via random search over hyperparameters to minimize the best training set error achieved using each configuration.
[Figure 1 panels: Depth 10, Depth 20, Depth 50, Depth 100; y-axis: mean cross-entropy error (log scale); x-axis: number of epochs (0-400); curves compare plain vs. highway networks.]
Figure 1. Comparison of optimization of plain networks and highway networks of various depths. All networks were optimized using SGD with momentum. The curves shown are for the best hyperparameter settings obtained for each configuration using a random search. Plain networks become much harder to optimize with increasing depth, while highway networks with up to 100 layers can still be optimized well.
1505.00387 | 15 |

| Network | Number of Layers | Number of Parameters | Accuracy |
| --- | --- | --- | --- |
| *Results reported by Romero et al. (2014)* | | | |
| Teacher | 5 | ~9M | 90.18% |
| Fitnet 1 | 11 | ~250K | 89.01% |
| Fitnet 2 | 11 | ~862K | 91.06% |
| Fitnet 3 | 13 | ~1.6M | 91.10% |
| Fitnet 4 | 19 | ~2.5M | 91.61% |
| *Highway networks* | | | |
| Highway 1 (Fitnet 1) | 11 | ~236K | 89.18% |
| Highway 2 (Fitnet 4) | 19 | ~2.3M | 92.24% |
| Highway 3* | 19 | ~1.4M | 90.68% |
| Highway 4* | 32 | ~1.25M | 90.34% |
Table 1. CIFAR-10 test set accuracy of convolutional highway networks with rectified linear activation and sigmoid gates. For comparison, results reported by Romero et al. (2014) using maxout networks are also shown. Fitnets were trained using a two-step training procedure using soft targets from the trained Teacher network, which was trained using backpropagation. We trained all highway networks directly using backpropagation. * indicates networks which were trained only on a set of 40K out of 50K examples in the training set.
1505.00387 | 16 | The last column of Figure 2 displays the block outputs and clearly visualizes the concept of "information highways". Most of the outputs stay constant over many layers, forming a pattern of stripes. Most of the change in outputs happens in the early layers (≈ 10 for MNIST and ≈ 30 for CIFAR-100). We hypothesize that this difference is due to the higher complexity of the CIFAR-100 dataset.
# 5. Conclusion
Learning to route information through neural networks has helped to scale up their application to challenging problems by improving credit assignment and making training easier (Srivastava et al., 2015). Even so, training very deep networks has remained difficult, especially without considerably increasing total network size.
In summary, it is clear that highway networks actually utilize the gating mechanism to pass information almost unchanged through many layers. This mechanism serves not just as a means for easier training, but is also heavily used to route information in a trained network. We observe very selective activity of the transform gates, varying strongly in reaction to the current input patterns.
1505.00387 | 17 | Highway networks are novel neural network architectures which enable the training of extremely deep networks using simple SGD. While the traditional plain neural architectures become increasingly difficult to train with increasing network depth (even with variance-preserving initialization), our experiments show that optimization of highway networks is not hampered even as network depth increases to a hundred layers.
The ability to train extremely deep networks opens up the possibility of studying the impact of depth on complex
[Figure 2 panels, left to right: Transform Gate Biases, Mean Transform Gate Outputs, Transform Gate Outputs, Block Outputs; rows: MNIST (top) and CIFAR-100 (bottom); x-axis: Block, y-axis: Depth.]
1505.00387 | 18 | Figure 2. Visualization of certain internals of the blocks in the best 50 hidden layer highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first hidden layer is a plain layer which changes the dimensionality of the representation to 50. Each of the 49 highway layers (y-axis) consists of 50 blocks (x-axis). The first column shows the transform gate biases, which were initialized to -2 and -4 respectively. In the second column the mean output of the transform gate over 10,000 training examples is depicted. The third and fourth columns show the output of the transform gates and the block outputs for a single random training sample.
problems without restrictions. Various activation functions which may be more suitable for particular problems but for which robust initialization schemes are unavailable can be used in deep highway networks. Future work will also attempt to improve the understanding of learning in highway networks.
# Acknowledgments
1505.00387 | 19 | Ciresan, Dan, Meier, Ueli, and Schmidhuber, Jürgen. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
This research was supported by the EU project "NASCENCE" (FP7-ICT-317662). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research.
# References
Ciresan, DC, Meier, Ueli, Masci, Jonathan, Gambardella, Luca M, and Schmidhuber, Jürgen. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011b. URL http://www.aaai.org/ocs/index.php/IJCAI/IJCAI11/paper/download/3098/3425%0020http://dl.acm.org/citation.cfm?id=2283603.
1505.00387 | 20 | Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798–1828, 2013. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6472238.
Gers, Felix A., Schmidhuber, Jürgen, and Cummins, Fred. Learning to forget: Continual prediction with LSTM. In ICANN, volume 2, pp. 850–855, 1999. URL http://ieeexplore.ieee.org/
Ciresan, Dan, Meier, Ueli, Masci, Jonathan, and Schmidhuber, Jürgen. A committee of neural networks for traffic sign classification. In Neural Networks (IJCNN), The 2011 International Joint Conference on, pp. 1918–1921. IEEE, 2011a. URL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6033458.

Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks.
In International Conference on Artificial Intelligence and Statistics, pp. 249–256, 2010. URL http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_GlorotB10.pdf.
1505.00387 | 21 | Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs], September 2014. URL http://arxiv.org/abs/1409.1556.
Håstad, Johan. Computational limitations of small-depth circuits. MIT Press, 1987. URL http://dl.acm.org/citation.cfm?id=SERIES9056.27031.
Håstad, Johan and Goldmann, Mikael. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113–129, 1991. URL http://link.springer.com/article/10.1007/BF01272517.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852 [cs], February 2015. URL http://arxiv.org/abs/1502.01852.
1505.00387 | 22 | Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Understanding locally competitive networks. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1410.1165.
Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. arXiv:1409.4842 [cs], September 2014. URL http://arxiv.org/abs/1409.4842.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short term memory. Technical Report FKI-207-95, Technische Universität München, München, August 1995. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.3117.
1505.00387 | 23 | Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012. URL http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf.
Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick, Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. pp. 562–570, 2015. URL http://jmlr.org/proceedings/papers/v38/lee15a.html.
Montufar, Guido F., Pascanu, Razvan, Cho, Kyunghyun, and Bengio, Yoshua. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, 2014. URL http://papers.nips.cc/paper/5422-on-the-number-of-linear-regions-of-deep-neural-networks.pdf.
Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. FitNets: Hints for thin deep nets. arXiv:1412.6550 [cs], December 2014. URL http://arxiv.org/abs/1412.6550.
1504.03410 | 0 | arXiv:1504.03410v1 [cs.CV] 14 Apr 2015
# Simultaneous Feature Learning and Hash Coding with Deep Neural Networks
Hanjiang Lai†, Yan Pan∗‡, Ye Liu§, and Shuicheng Yan†

†Department of Electronic and Computer Engineering, National University of Singapore, Singapore ‡School of Software, Sun Yat-sen University, China §School of Information Science and Technology, Sun Yat-sen University, China
# Abstract | 1504.03410#0 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 0 | arXiv:1504.03592v1 [cs.AI] 14 Apr 2015
# Towards Verifiably Ethical Robot Behaviour
Louise A. Dennis & Michael Fisher Department of Computer Science University of Liverpool, UK {L.A.Dennis,MFisher}@liverpool.ac.uk & Alan F. T. Winfield Bristol Robotics Laboratory University of the West of England, UK [email protected]
# February 10, 2022
# Abstract
Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional "governor" that assesses options the system has, and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a "consequence engine" that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.
# 1 Introduction | 1504.03592#0 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 1 | # Abstract
Similarity-preserving hashing is a widely-used method for nearest neighbor search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.
1504.03592 | 1 | # 1 Introduction
It is widely recognised that autonomous systems will need to conform to legal, practical and ethical specifications. For instance, during normal operation, we expect such systems to fulfill their goals within a prescribed legal or professional framework of rules and protocols. However, in exceptional circumstances, the autonomous system may choose to ignore its basic goals or break legal or professional rules in order to act in an ethical fashion, e.g., to save a human life. But, we need to be sure that the system will only break rules for justifiably ethical reasons and so we are here concerned with the verification of autonomous systems and, more broadly, with the development of verifiable autonomous systems.
1504.03410 | 2 | # 1. Introduction
With the ever-growing large-scale image data on the Web, much attention has been devoted to nearest neighbor search via hashing methods. In this paper, we focus on learning-based hashing, an emerging stream of hash methods that learn similarity-preserving hash functions to encode input data points (e.g., images) into binary codes.
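To make the retrieval side concrete, here is a small hypothetical helper (ours, not from the paper) showing why similarity-preserving binary codes are attractive: nearest-neighbor lookup reduces to cheap Hamming-distance comparisons over short codes.

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two binary codes stored as 0/1 numpy arrays."""
    return int(np.sum(a != b))

def k_nearest(query_code, db_codes, k=5):
    """Indices of the k database codes closest to the query in Hamming distance."""
    dists = np.array([hamming_distance(query_code, c) for c in db_codes])
    return np.argsort(dists)[:k]
```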
∗Corresponding author: Yan Pan, email: [email protected].

Many learning-based hashing methods have been proposed, e.g., [8, 9, 4, 12, 16, 27, 14, 25, 3]. The existing learning-based hashing methods can be categorized into unsupervised and supervised methods, based on whether supervised information (e.g., similarities or dissimilarities on data points) is involved. Compact bitwise representations are advantageous for improving the efficiency in both storage and search speed, particularly in big data applications. Compared to unsupervised methods, supervised methods usually embed the input data points into compact hash codes with fewer bits, with the help of supervised information.
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 2 | This paper considers a technique for developing veriï¬able ethical components for autonomous systems, and we speciï¬cally consider the consequence engine proposed by [23]. This consequence engine is a discrete component of an autonomous system that integrates together with methods for action selection in the robotic controller. It evaluates the outcomes of actions using simulation and prediction, and selects the most ethical option using a safety/ethical logic. In Winï¬eld et al. [23], an example of such a system is implemented using a high-level Python program to control the operation of an e-puck robot [17] tracked with a VICON system. This approach tightly couples the ethical reasoning with the use of standard criteria for action selection and the implementation was validated using empirical testing.
In addition, given the move towards configurable, component-based plug-and-play platforms for robotics and autonomous systems, e.g. [20, 11, 19], we are interested in decoupling the ethical reasoning from the rest of the robot control so it appears as a distinct component. We would like to do this in a way that allows the consequence engine to be verifiable in a straightforward manner.
This paper describes the first steps towards this. It develops a declarative language for specifying such consequence engines as agents implemented within the agent infrastructure layer toolkit (AIL). Systems developed using
1504.03410 | 3 | In the pipelines of most existing hashing methods for images, each input image is firstly represented by a vector of traditional hand-crafted visual descriptors (e.g., GIST [18], HOG [1]), followed by separate projection and quantization steps to encode this vector into a binary code. However, such fixed hand-crafted visual features may not be optimally compatible with the coding process. In other words, a pair of semantically similar/dissimilar images may not have feature vectors with relatively small/large Euclidean distance. Ideally, it is expected that an image feature representation can sufficiently preserve the image similarities, which can be learned during the hash learning process. Very recently, Xia et al. [27] proposed CNNH, a supervised hashing method in which the learning process is decomposed into a stage of learning approximate hash codes from the supervised information, followed by a stage of simultaneously learning hash functions and image representations based on the learned approximate hash codes. However, in this two-stage method, the learned approximate hash codes are used to guide the learning of the image representation, but the learned image representation cannot give feedback for learning better approximate hash codes. This one-way interaction thus still has limitations.
1504.03592 | 3 | [Figure 1 labels: Sense data; loop through next possible actions; Consequence Engine; Actuator demands.]
Figure 1: Internal-model based architecture. Robot control data flows are shown in red (darker shaded); the Internal Model data flows in blue (lighter shaded).
the AIL are verifiable in the AJPF model-checker [13] and can integrate with external systems such as MatLab simulations [16], and Robotic Operating System (ROS) nodes [9]. Having developed the language, we then reimplement a version of the case study reported in Winfield et al. [23] as an agent and show how the operation of the consequence engine can be verified in the AJPF model checker. We also use recently developed techniques to show how further investigations of the system behaviour can be undertaken by exporting a model from AJPF to the PRISM probabilistic model checker.
1504.03410 | 4 | In this paper, we propose a "one-stage" supervised hashing method via a deep architecture that maps input images to binary codes. As shown in Figure 1, the proposed deep architecture has three building blocks: 1) shared stacked
[Figure 1 caption fragments (garbled in extraction): an image triplet (I, I+, I−); image feature; ranking; "...pairs and maximize those on dissimilar pairs"; "i.e., the image triplet".]
convolution layers to capture a useful image representation, 2) divide-and-encode modules to divide intermediate image features into multiple branches, with each branch corresponding to one hash bit, and 3) a triplet ranking loss [17] designed to preserve relative similarities. Extensive evaluations on several benchmarks show that the proposed deep-networks-based hashing method has substantially superior search accuracies over the state-of-the-art supervised or unsupervised hashing methods.
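For intuition, a generic triplet ranking loss of the kind described here can be sketched as follows; this is a common hinge form with a hypothetical margin parameter, and the exact loss used in the paper follows [17] and may differ in its details.

```python
import numpy as np

def triplet_ranking_loss(f_query, f_similar, f_dissimilar, margin=1.0):
    """Hinge-style triplet loss: the query embedding should be closer to the
    similar image than to the dissimilar one by at least `margin`."""
    d_pos = float(np.sum((f_query - f_similar) ** 2))     # distance to similar
    d_neg = float(np.sum((f_query - f_dissimilar) ** 2))  # distance to dissimilar
    return max(0.0, margin + d_pos - d_neg)
```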
# 2. Related Work
Learning-based hashing methods can be divided into two categories: unsupervised methods and supervised methods. Unsupervised methods only use the training data to learn hash functions that can encode input data points to binary codes. Notable examples in this category include Kernelized Locality-Sensitive Hashing [9], Semantic Hashing [19], graph-based hashing methods [26, 13], and Iterative Quantization [4].
1504.03592 | 4 | # 2 Background
# 2.1 An Internal Model Based Architecture
Winfield et al. [23] describe both the abstract architecture and concrete implementation of a robot that contains a consequence engine. The engine utilises an internal model of the robot itself and its environment in order to predict the outcomes of actions and make ethical and safety choices based on those predictions. The architecture for this system is shown in Figure 1. In this architecture, the consequence engine intercepts the robot's own action selection mechanism. It runs a simulation of all available actions and notes the outcomes of the simulation. These outcomes are evaluated and selected using a Safety/Ethical Logic layer (SEL).
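The control flow of this architecture can be summarised in a short skeleton; this is our illustrative sketch of the loop in Figure 1, with a hypothetical per-actor harm score (0 = safe, larger = worse), not the implementation of [23].

```python
def select_action(candidate_actions, simulate, harm_score):
    """Consequence-engine loop: simulate each action with the internal model,
    score the predicted outcome for every actor, and pick the action whose
    worst predicted consequence is least bad."""
    best_action, best_worst = None, float("inf")
    for action in candidate_actions:
        outcomes = simulate(action)  # {actor: predicted outcome}
        worst = max(harm_score(outcome) for outcome in outcomes.values())
        if worst < best_worst:
            best_action, best_worst = action, worst
    return best_action
```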
Winfield et al. [23] consider a simple scenario in which a human is approaching a hole. In normal operation the robot should select actions which avoid colliding with the human but, if the robot's inference suggests the human will fall in the hole, then it may opt to collide with the human. While this is "against the rules", it is a more ethical option as it avoids the greater harm of the human falling into the hole. In order to do this, the paper suggests scoring the outcomes of the actions for each of the actors (the human and the robot), e.g., 0 if the actor is safe, 4 if the actor
| 1504.03592#4 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 5 | In most of the existing supervised hashing methods for images, input images are represented by some hand-crafted visual features (e.g. GIST [18]), before the projection and quantization steps to generate hash codes.
On the other hand, we are witnessing dramatic progress in deep convolution networks in the last few years. Approaches based on deep networks have achieved state-of-the-art performance on image classification [7, 21, 23], object detection [7, 23] and other recognition tasks [24]. The recent trend in convolution networks has been to increase the depth of the networks [11, 21, 23] and the layer size [20, 23]. The success of deep-networks-based methods for images is mainly due to their power of automatically learning effective image representations. In this paper, we focus on a deep architecture tailored for learning-based hashing. Some parts of the proposed architecture are designed on the basis of [11] that uses additional 1 × 1 convolution layers to increase the representational power of the networks. | 1504.03410#5 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 5 | is involved in a collision and 10 if the actor falls in the hole. It then recommends a simple if-then logic for selecting actions based on these values.
IF for all robot actions, the human is equally safe
THEN ('default safe actions') output safe actions
ELSE ('ethical action') output action(s) for least unsafe human outcome(s) | 1504.03592#5 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
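The if-then safety/ethical logic above maps directly onto a small selection function. Below is a minimal Python sketch; the action names are illustrative, and the outcome scores (0 safe, 4 collision, 10 falls in the hole) follow the scoring suggested earlier.

```python
# Minimal sketch of the if-then safety/ethical logic above (illustrative names).
# Lower outcome scores are better: 0 = safe, 4 = collision, 10 = falls in hole.

def select_ethical_actions(actions, human_outcome):
    """actions: list of action names; human_outcome: action -> score for the human."""
    scores = {a: human_outcome(a) for a in actions}
    if len(set(scores.values())) == 1:
        # Default safe actions: the human is equally safe whatever the robot does.
        return actions
    # Ethical action(s): those with the least unsafe outcome for the human.
    best = min(scores.values())
    return [a for a in actions if scores[a] == best]

# Example: moving left leaves the human heading for the hole (10), while
# intercepting causes a collision (4); the rule picks the lesser harm.
outcome = {"move_left": 10, "intercept_human": 4}.get
print(select_ethical_actions(["move_left", "intercept_human"], outcome))
# -> ['intercept_human']
```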
1504.03410 | 6 | Supervised methods try to leverage supervised information (e.g., class labels, pairwise similarities, or relative similarities of data points) to learn compact bitwise representations. Here are some representative examples in this category. Binary Reconstruction Embedding (BRE) [8] learns hash functions by minimizing the reconstruction errors between the distances of data points and those of the corresponding hash codes. Minimal Loss Hashing (MLH) [16] and its extension [17] learn hash codes by minimizing hinge-like loss functions based on similarities or relative similarities of data points. Supervised Hashing with Kernels (KSH) [12] is a kernel-based method that pursues compact binary codes to minimize the Hamming distances on similar | 1504.03410#6 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 6 | A version of this architecture was implemented on e-pucks (small, relatively simple, robots). The basic activity is for the robot to attempt to reach some target location while avoiding a hole in the ground. Two humans (actually simulated by additional e-pucks in this experiment) were also programmed to move towards the hole, and the robot could then choose to move towards these in an attempt to get them to divert, using their own inbuilt avoidance mechanisms, in order to prevent them falling into the hole. A number of experiments were carried out. In all situations the robot avoided falling into the hole itself. When there was a single other robot (representing the human that is in danger) it successfully managed to divert the "human" on all occasions. When a third robot (representing a second human) was introduced into the problem, the robot rescued at least one "human" in about 58% of runs and rescued both in 9% of runs. These outcomes depended upon both noise and the starting conditions affecting which additional robot moved first and whether the e-puck had time to reach both of them. | 1504.03592#6 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 7 | Without using hand-crafted image features, the recently proposed CNNH [27] decomposes the hash learning process into a stage of learning approximate hash codes, followed by a deep-networks-based stage of simultaneously learning image features and hash functions, with the raw image pixels as input. However, a limitation in CNNH is that the learned image representation (in Stage 2) cannot be used to improve the learning of approximate hash codes, although the learned approximate hash codes can be used to guide the learning of image representation. In the proposed method, we learn the image representation and the hash codes in one stage, so that the two tasks can interact with and reinforce each other.
# 3. The Proposed Approach | 1504.03410#7 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 7 | The actual implementation of the basic ethical action selection in this system was based on potential functions. Each action was assigned a score based upon the weighted sum of how close it took the e-puck to the goal position, whether the e-puck was placed in danger, whether one of the other "humans" was placed in danger, and how close the action would take the e-puck to the "humans". The system then simply selected the action with the highest score.
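A hedged Python sketch of this weighted-sum scoring follows; the weights and feature values are assumptions for illustration, since the original implementation's exact values are not given in this excerpt.

```python
# Illustrative potential-function scoring: weights are assumed, not the paper's.

def score_action(action, w_goal=1.0, w_self=10.0, w_human=5.0, w_near=0.5):
    return (- w_goal * action["dist_to_goal"]       # closer to the goal is better
            - w_self * action["robot_in_danger"]    # penalise endangering the e-puck
            - w_human * action["humans_in_danger"]  # penalise endangering 'humans'
            - w_near * action["dist_to_humans"])    # prefer moving towards 'humans'

def select_action(actions):
    # The system simply selects the action with the highest score.
    return max(actions, key=score_action)

candidates = [
    {"name": "towards_goal", "dist_to_goal": 1.0, "robot_in_danger": 0,
     "humans_in_danger": 1, "dist_to_humans": 3.0},
    {"name": "towards_human", "dist_to_goal": 3.0, "robot_in_danger": 0,
     "humans_in_danger": 0, "dist_to_humans": 0.5},
]
print(select_action(candidates)["name"])  # -> 'towards_human'
```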
# 2.2 Verifying Autonomous Systems using AJPF
Formal verification is essentially the process of assessing whether a specification, given in formal logic, is true of the system in question. For a specific logical property, φ, there are many different approaches to achieving this [1, 4], ranging from deductive verification against a logical description of the system ψ_S (i.e., ⊢ ψ_S ⇒ φ) to the algorithmic verification of the property against a formal model of the system, M (i.e., M ⊨ φ). The latter has been extremely successful in Computer Science and Artificial Intelligence, primarily through the model checking approach [5]. This takes an executable model of the system in question, defining all the system's possible executions, and then checks a logical property against this model (and, hence, against all possible executions). | 1504.03592#7 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 8 | # 3. The Proposed Approach
We assume I to be the image space. The goal of hash learning for images is to learn a mapping F : I → {0, 1}^q (footnote 1), such that an input image I can be encoded into a q-bit binary code F(I), with the similarities of images being preserved. In this paper, we propose an architecture of deep convolution networks designed for hash learning, as shown in Figure 1. This architecture accepts input images in a triplet form. Given triplets of input images, the pipeline of the proposed architecture contains three parts: 1) a sub-network with multiple convolution-pooling layers to capture a representation of images; 2) a divide-and-encode module designed to generate bitwise hash codes; 3) a triplet ranking loss layer for learning good similarity measures. In the following, we will present the details of these parts, respectively.
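To make the retrieval goal concrete, here is a minimal sketch of how such q-bit codes are used: similar images should end up close in Hamming distance. F below is a stand-in lookup; in the paper it is the learned deep network.

```python
# Illustrative retrieval with q-bit codes (F is a stand-in, not the learned net).
import numpy as np

F = {"query":    np.array([1, 0, 1, 0], dtype=np.uint8),
     "similar":  np.array([1, 0, 1, 1], dtype=np.uint8),   # Hamming distance 1
     "distinct": np.array([0, 1, 0, 1], dtype=np.uint8)}   # Hamming distance 4

def hamming(a, b):
    return int(np.count_nonzero(a != b))

ranked = sorted(["similar", "distinct"], key=lambda k: hamming(F["query"], F[k]))
print(ranked)  # -> ['similar', 'distinct']: the similar image ranks first.
```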
# 3.1. Triplet Ranking Loss and Optimization | 1504.03410#8 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 8 | Whereas model checking involves assessing a logical specification against all executions of a model of the system, an alternative approach is to check a logical property directly against all actual executions of the system. This is termed the model checking of programs [21] and crucially depends on being able to determine all executions of the actual program. In the case of Java, this is feasible since a modified virtual machine can be used to manipulate the program executions. The Java Pathfinder (JPF) system carries out formal verification of Java programs in this way by analysing all the possible execution paths [21]. This avoids the need for an extra level of modelling and ensures that the verification results truly apply to the real system.
In the examples discussed later in this paper we use the MCAPL framework which includes a model checker for agent programs built on top of JPF. As this framework is described in detail in [13], we only provide a brief overview here. MCAPL has two main sub-components: the AIL-toolkit for implementing interpreters for belief-desire-intention (BDI) agent programming languages and the AJPF model checker. | 1504.03592#8 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 9 | # 3.1. Triplet Ranking Loss and Optimization
In most of the existing supervised hashing methods, the side information is in the form of pairwise labels that indicate the semantic similarities/dissimilarities on image pairs. The loss functions in these methods are thus designed to preserve the pairwise similarities of images. Recently, some efforts [17, 10] have been made to learn hash functions that preserve relative similarities of the form "image I is more similar to image I+ than to image I-". Such a form of triplet-based relative similarities can be more easily obtained than pairwise similarities (e.g., the click-through data from image retrieval systems). Furthermore, given the side information of pairwise similarities, one can easily generate a set of triplet constraints (footnote 2). | 1504.03410#9 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 9 | Interpreters for BDI languages are programmed by instantiating the Java-based AIL toolkit [10]. Here, an agent system can be programmed in the normal way for the programming language but then runs in the AIL interpreter which in turn runs on top of the Java Pathfinder (JPF) virtual machine.
Agent JPF (AJPF) is a customisation of JPF that is specifically optimised for AIL-based language interpreters. Agents programmed in languages that are implemented using the AIL-toolkit can thus be formally verified via AJPF. Furthermore, if they run in an environment programmed in Java, then the whole agent-environment system can be model checked. Common to all language interpreters implemented using the AIL are the AIL-agent data structures
for beliefs, intentions, goals, etc., which are subsequently accessed by the model checker and on which the logical modalities of a property specification language are defined. | 1504.03592#9 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 10 | In the proposed deep architecture, we propose to use a variant of the triplet ranking loss in [17] to preserve the relative similarities of images. Specifically, given the training triplets of images in the form of (I, I+, I-) in which I is more similar to I+ than to I-, the goal is to find a mapping F(.) such that the binary code F(I) is closer to F(I+) than to F(I-). Accordingly, the triplet ranking hinge loss is defined by
l_triplet(F(I), F(I+), F(I-)) = max(0, 1 - (||F(I) - F(I-)||_H - ||F(I) - F(I+)||_H))
s.t. F(I), F(I+), F(I-) ∈ {0, 1}^q,    (1)
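For intuition, the hinge loss in (1) can be transcribed directly in NumPy over bit vectors; it is not differentiable in this form, which motivates the relaxation introduced later in the text.

```python
# Direct NumPy transcription of the Hamming-distance hinge loss in Eq. (1).
import numpy as np

def triplet_hamming_loss(b, b_pos, b_neg):
    d_pos = np.sum(b != b_pos)   # Hamming distance to the similar code
    d_neg = np.sum(b != b_neg)   # Hamming distance to the dissimilar code
    return max(0, 1 - (d_neg - d_pos))

b     = np.array([1, 0, 1, 0])
b_pos = np.array([1, 0, 1, 1])   # distance 1
b_neg = np.array([0, 1, 0, 0])   # distance 3
print(triplet_hamming_loss(b, b_pos, b_neg))  # -> 0 (ranking already satisfied)
```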
Footnote 1: In some of the existing hash methods, e.g., [27, 12], this mapping (or the set of hash functions) is defined as F : I → {-1, 1}^q, which is essentially the same as the definition used here. | 1504.03410#10 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 10 | The system described in Winfield et al. [23] is not explicitly a BDI system or even an agent system, yet it is based on the concept of a software system that forms some component in a wider environment, and there was a moderately clear, if informal, semantics describing its operation, both of which are key assumptions underlying the MCAPL framework. We therefore targeted AJPF as a preliminary tool for exploring how such a consequence engine might be built in a verifiable fashion, especially as simple decision-making within the safety/ethical logic could be straightforwardly captured within an agent.
# 3 Modelling a Consequence Engine for AJPF
Since AJPF is specifically designed to model check systems implemented using Java, it was necessary to re-implement the consequence engine and case study described in Winfield et al. [23].
We implemented a declarative consequence engine in the AIL as a simple language governed by two operational semantic rules, called Model Applicable Actions and Evaluating Outcomes. Semantically, a consequence engine is represented as a tuple (ce, ag, ξ, A, An, SA, EP, f_ES) where:
• ce and ag are the names of the consequence engine and the agent it is linked to;
• ξ is an external environment (either the real world, a simulation or a combination of the two); | 1504.03592#10 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 11 | Footnote 2: For example, for a pair of similar images (I1, I2) and a pair of dissimilar images (I1, I3), one can generate a triplet (I1, I2, I3) that represents "image I1 is more similar to image I2 than to image I3".
where ||.||_H represents the Hamming distance. For ease of optimization, natural relaxation tricks on (1) are to replace the Hamming norm with the l2 norm and to replace the integer constraints on F(.) with range constraints. The modified loss function is
l_triplet(F(I), F(I+), F(I-)) = max(0, ||F(I) - F(I+)||_2^2 - ||F(I) - F(I-)||_2^2 + 1)
s.t. F(I), F(I+), F(I-) ∈ [0, 1]^q.    (2)
This variant of triplet ranking loss is convex. Its (sub)gradients with respect to F(I), F(I+) or F(I-) are | 1504.03410#11 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 11 | • ξ is an external environment (either the real world, a simulation or a combination of the two);
• A is a list of ag's actions that are currently applicable;
• An is a list of such actions annotated with outcomes;
• SA is a filtered list of the applicable actions, indicating the ones the engine believes to be the most ethical in the current situation;
• EP is a precedence order over the actors in the environment dictating which one gets priority in terms of ethical outcomes; and
• f_ES is a map from outcomes to an integer indicating their ethical severity.
An' = {(a, os) | a ∈ A ∧ os = ξ.model(a)}
(ce, ag, ξ, A, An, SA, EP, f_ES) → (ce, ag, ξ, A, An', SA, EP, f_ES)    (1)

The operational rule for Model Applicable Actions is shown in (1). This invokes some model or simulation in the environment (ξ.model(a)) that simulates the effects of ag taking each applicable action a and returns these as a list of tuples, os, indicating the outcome for each actor, e.g., (human, hole) to indicate that a human has fallen into a hole. The consequence engine replaces its set of annotated actions with this new information. | 1504.03592#11 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 12 | This variant of triplet ranking loss is convex. Its (sub)gradients with respect to F(I), F(I+) or F(I-) are
∂l/∂b  = (2b- - 2b+) × I_{||b - b+||_2^2 - ||b - b-||_2^2 + 1 > 0}
∂l/∂b+ = (2b+ - 2b) × I_{||b - b+||_2^2 - ||b - b-||_2^2 + 1 > 0}    (3)
∂l/∂b- = (2b - 2b-) × I_{||b - b+||_2^2 - ||b - b-||_2^2 + 1 > 0}
where we denote F(I), F(I+), F(I-) as b, b+, b-. The indicator function I_condition = 1 if condition is true; otherwise I_condition = 0. Hence, the loss function in (2) can be easily integrated into back-propagation in neural networks.
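The relaxed loss (2) and its (sub)gradients (3) translate directly into NumPy; a minimal sketch, using the b / b+ / b- notation above (written b_pos / b_neg here):

```python
# NumPy sketch of the relaxed triplet loss (2) and its (sub)gradients (3).
import numpy as np

def triplet_loss_and_grads(b, b_pos, b_neg):
    margin = np.sum((b - b_pos) ** 2) - np.sum((b - b_neg) ** 2) + 1
    active = float(margin > 0)                   # the indicator function in (3)
    loss = max(0.0, margin)
    grad_b     = (2 * b_neg - 2 * b_pos) * active
    grad_b_pos = (2 * b_pos - 2 * b) * active
    grad_b_neg = (2 * b - 2 * b_neg) * active
    return loss, grad_b, grad_b_pos, grad_b_neg

b     = np.array([0.9, 0.1])
b_pos = np.array([0.2, 0.8])   # loss is active: b is currently too far from b+
b_neg = np.array([0.8, 0.2])
loss, grad_b, _, _ = triplet_loss_and_grads(b, b_pos, b_neg)
print(round(loss, 2), grad_b)  # -> 1.96 [ 1.2 -1.2]
```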
# 3.2. Shared Sub-Network with Stacked Convolution Layers | 1504.03410#12 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 12 | SA' = f_EP(EP, An, f_ES, A)
(ce, ag, ξ, A, An, SA, EP, f_ES) → (ce, ag, ξ, A, An, SA', EP, f_ES)    (2)

The operational rule for Evaluating Outcomes, specifically of the ethical actions, is shown in (2). It uses the function f_EP to select a subset of the agent's applicable actions using the annotated actions, the precedence order and an evaluation map as follows:
f_EP([], An, f_ES, SA) = SA    (3)
f_EP(h :: T, An, f_ES, SA) = f_EP(T, An, f_ES, f_ME(h, An, f_ES, SA))    (4)
f_EP recurses over a precedence list of actors (where [] indicates the empty list and h :: T is element h in front of a list T). Its purpose is to filter the set of actions down just to those that are best, ethically, for the first actor in the list (i.e., the one whose well-being has the highest priority) and then further filter the actions for the next actor in the list and so on. The filtering of actions for each individual actor is performed by f_ME.
| 1504.03592#12 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 13 | # 3.2. Shared Sub-Network with Stacked Convolution Layers
With this modified triplet ranking loss function (2), the inputs to the proposed deep architecture are triplets of images, i.e., {(I_i, I_i+, I_i-)}, i = 1, ..., n, in which I_i is more similar to I_i+ than to I_i-. As shown in Figure 1, we propose to use a shared sub-network with a stack of convolution layers to automatically learn a unified representation of the input images. Through this sub-network, an input triplet (I, I+, I-) is encoded to a triplet of intermediate image features (x, x+, x-), where x, x+, x- are vectors with the same dimension. | 1504.03410#13 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 13 | Figure 2: Architecture for testing the AIL Version of the Consequence Engine
f_ME(h, An, f_ES, A) = {a | a ∈ A ∧ ∀a' ≠ a ∈ A. Σ_{(a,(h,out)) ∈ An} f_ES(out) ≤ Σ_{(a',(h,out')) ∈ An} f_ES(out')}    (5)
f_ME sums the outcomes for actor h given some action a ∈ A and returns the set of those where the sum has the lowest value. For instance, if all actions are safe for actor h we can assume that f_ES maps them all to some equal (low) value (say 0) and so f_ME will return all actions. If some are unsafe for h then f_ES will map them to a higher value (say 4) and these will be excluded from the return set.
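Equations (3)-(5) translate almost line for line into executable form. Below is an illustrative Python rendering under an assumed representation: An is a dictionary mapping each action to its list of (actor, outcome) pairs, and f_ES is a dictionary of severities.

```python
# Illustrative rendering of f_EP (Eqs. 3-4) and f_ME (Eq. 5); data layout assumed.

def f_me(actor, annotated, f_es, actions):
    """Keep the actions whose summed severity for `actor` is minimal (Eq. 5)."""
    def severity(a):
        return sum(f_es[out] for (who, out) in annotated[a] if who == actor)
    best = min(severity(a) for a in actions)
    return [a for a in actions if severity(a) == best]

def f_ep(precedence, annotated, f_es, actions):
    """Recurse over the precedence list, filtering for each actor (Eqs. 3-4)."""
    if not precedence:
        return actions
    head, tail = precedence[0], precedence[1:]
    return f_ep(tail, annotated, f_es, f_me(head, annotated, f_es, actions))

# Toy run: severities are assumptions (0 safe, 4 collision, 10 hole).
f_es = {"safe": 0, "collision": 4, "hole": 10}
annotated = {
    "go_left":   [("human", "hole"), ("robot", "safe")],
    "intercept": [("human", "collision"), ("robot", "collision")],
}
print(f_ep(["human", "robot"], annotated, f_es, list(annotated)))
# -> ['intercept']: a collision beats letting the human fall in the hole.
```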
We sum over the outcomes for a given actor because either there may be multiple unethical outcomes and we may wish to account for all of them, or there may be multiple actors of a given type in the precedence order (e.g., several humans) and we want to minimize the number of people harmed by the robot's actions. | 1504.03592#13 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 14 | In this sub-network, we adopt the architecture of Network in Network [11] as our basic framework, where we insert convolution layers with 1 × 1 filters after some convolution layers with filters of a larger receptive field. These 1 × 1 convolution filters can be regarded as a linear transformation of their input channels (followed by rectification non-linearity). As suggested in [11], we use an average-pooling layer as the output layer of this sub-network, to replace the fully-connected layer(s) used in traditional architectures (e.g., [7]). As an example, Table 1 shows the configurations of the sub-network for images of size 256 × 256. Note that all the convolution layers use rectification activation, which is omitted in Table 1.
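A hedged PyTorch sketch of one NIN-style stage of such a sub-network appears below: a convolution with a larger receptive field followed by a 1 × 1 convolution, each with ReLU, with average pooling replacing fully-connected layers at the output. The filter counts and sizes are illustrative assumptions, not the exact configuration of Table 1.

```python
# Hedged sketch of a NIN-style shared sub-network (layer sizes are assumptions).
import torch
import torch.nn as nn

def nin_stage(in_ch, out_ch, kernel_size):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=1),  # 1 x 1 "network in network"
        nn.ReLU(inplace=True),
    )

shared_subnet = nn.Sequential(
    nin_stage(3, 96, 11), nn.MaxPool2d(3, stride=2),
    nin_stage(96, 256, 5), nn.MaxPool2d(3, stride=2),
    nin_stage(256, 384, 3), nn.MaxPool2d(3, stride=2),
    nin_stage(384, 1024, 3),
    nn.AdaptiveAvgPool2d(1),  # average pooling replaces fully-connected layers
    nn.Flatten(),
)
features = shared_subnet(torch.randn(1, 3, 256, 256))
print(features.shape)  # -> torch.Size([1, 1024])
```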
This sub-network is shared by the three images in each input triplet. Such a way of parameter sharing can significantly reduce the number of parameters in the whole architecture. A possible alternative is that, for (I, I+, I-) | 1504.03410#14 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 14 | It should be noted that this implementation of a consequence engine is closer in nature to the abstract description from Winfield et al. [23] than to the implementation where potential functions are used to evaluate and order the outcomes of actions. This allows certain actions to be vetoed simply because they are bad for some agent high in the precedence order even if they have very good outcomes lower in the order. However, this behaviour can also be reproduced by choosing suitable weights for the sum of the potential functions (and, indeed, this is what was done in the implementation in [23]).
It should also be noted (as hinted in our discussion of f_ME) that we assume a precedence order over types of agents, rather than individual agents, and that our model outputs outcomes for types of agents rather than individuals. In our case study we consider only outcomes for humans and robots rather than distinguishing between the two humans. Importantly, nothing in the implementation prevents an individual being treated as a type that contains only one object.
Our consequence engine language can be used to filter a set of actions in any environment that can provide a suitable modelling capability. | 1504.03592#14 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 15 | Table 1. Configurations of the shared sub-network for input images of size 256 × 256 (layer types only). Layers in order: convolution, convolution, max pool, convolution, convolution, max pool, convolution, convolution, max pool, convolution, convolution, ave pool.
in a triplet, the query I has an independent sub-network P, while I+ and I- have a shared sub-network Q, where P/Q maps I/(I+, I-) into the corresponding image feature vector(s) (i.e., x, x+ and x-, respectively) (footnote 3). The scheme of such an alternative is similar to the idea of "asymmetric hashing" methods [15], which use two distinct hash coding maps on a pair of images. In our experiments, we empirically show that a shared sub-network capturing a unified image representation performs better than the alternative with two independent sub-networks.
# 3.3. Divide-and-Encode Module | 1504.03410#15 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
1504.03592 | 15 | Our consequence engine language can be used to ï¬lter a set of actions in any environment that can provide a suitable modelling capability.
Implementing a Robot. In order to test the operation of consequence engines such as this, we also created a very simple agent language in which agents can have beliefs, a single goal and a number of actions. Each agent invokes an external selectAction function to pick an action from the set of those applicable (given the agent's beliefs). Once the goal is achieved then the agent stops. In our case we embedded the consequence engine within the call to selectAction. First, the consequence engine would filter the available actions down to those it considered most ethical and then selectAction would use a metric (in this example, distance) to choose the action which would bring the agent closest to its goal.
This simple architecture is shown in Figure 2. Here, arrows are used to indicate flow of control. In the simple agent, first an action is selected and then it is executed. Selecting this action invokes the selectAction method in the environment, which invokes first the consequence engine and then a metric-based selection before returning an action to the agent. (The two rules in the consequence engine are shown.) Execution of the action by the agent also invokes the environment, which computes the appropriate changes to the agent's perceptions.
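A minimal sketch of this selectAction flow follows; the names and the trivial filter are illustrative assumptions, not the actual implementation.

```python
# Sketch of selectAction: ethical filtering first, then a distance metric.

def select_action(applicable, ethical_filter, dist_to_goal):
    candidates = ethical_filter(applicable)   # consequence engine filters actions
    return min(candidates, key=dist_to_goal)  # metric-based choice among survivors

# Example with a trivial filter that vetoes one unsafe action.
actions = ["north", "south", "east"]
print(select_action(actions,
                    lambda acts: [a for a in acts if a != "east"],  # 'east' vetoed
                    {"north": 2, "south": 4, "east": 1}.get))
# -> 'north'
```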
| 1504.03592#15 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 16 | # 3.3. Divide-and-Encode Module
After obtaining intermediate image features from the shared sub-network with stacked convolution layers, we propose a divide-and-encode module to map these image features to approximate hash codes. We assume each target hash code has q bits. Then the outputs of the shared sub-network are designed to be 50q (see the output size of the average-pooling layer in Table 1). As can be seen in Figure 2(a), the proposed divide-and-encode module first divides the input intermediate features into q slices with equal length (footnote 4). Then each slice is mapped to one dimension by a fully-connected layer, followed by a sigmoid activation function that restricts the output value in the range [0, 1], | 1504.03410#16 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
search in large-scale image retrieval tasks. For most existing hashing methods,
an image is first encoded as a vector of hand-engineering visual features,
followed by another separate projection or quantization step that generates
binary codes. However, such visual feature vectors may not be optimally
compatible with the coding process, thus producing sub-optimal hashing codes.
In this paper, we propose a deep architecture for supervised hashing, in which
images are mapped into binary codes via carefully designed deep neural
networks. The pipeline of the proposed deep architecture consists of three
building blocks: 1) a sub-network with a stack of convolution layers to produce
the effective intermediate image features; 2) a divide-and-encode module to
divide the intermediate image features into multiple branches, each encoded
into one hash bit; and 3) a triplet ranking loss designed to characterize that
one image is more similar to the second image than to the third one. Extensive
evaluations on several benchmark image datasets show that the proposed
simultaneous feature learning and hash coding pipeline brings substantial
improvements over other state-of-the-art supervised or unsupervised hashing
methods. | http://arxiv.org/pdf/1504.03410 | Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan | cs.CV | This paper has been accepted to IEEE International Conference on
Pattern Recognition and Computer Vision (CVPR), 2015 | null | cs.CV | 20150414 | 20150414 | [] |
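The divide-and-encode module described above slices the 50q-dimensional feature and maps each slice to one bit through its own fully-connected layer and a sigmoid. A hedged PyTorch sketch, with the module structure following that description and everything else assumed:

```python
# Hedged sketch of a divide-and-encode module: q slices -> q sigmoid outputs.
import torch
import torch.nn as nn

class DivideAndEncode(nn.Module):
    def __init__(self, q, slice_dim=50):
        super().__init__()
        self.slice_dim = slice_dim
        # One small fully-connected layer per hash bit (one per slice).
        self.encoders = nn.ModuleList([nn.Linear(slice_dim, 1) for _ in range(q)])

    def forward(self, x):                       # x: (batch, 50 * q)
        slices = x.split(self.slice_dim, dim=1)
        bits = [torch.sigmoid(enc(s)) for enc, s in zip(self.encoders, slices)]
        return torch.cat(bits, dim=1)           # (batch, q), values in [0, 1]

codes = DivideAndEncode(q=48)(torch.randn(2, 50 * 48))
print(codes.shape)  # -> torch.Size([2, 48])
```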
1504.03592 | 16 | Figure 3: Initial State of the Case Study Environment
Note that our implementation of the consequence engine is independent of this particular architecture. In fact it would be desirable to have the consequence engine as a sub-component of some agent rather than as a separate entity interacting with the environment, as is the case in Winfield et al. [23]. However this simple architecture allowed for quick and easy prototyping of our ideas.
# 4 Reproducing the Case Study | 1504.03592#16 | Towards Verifiably Ethical Robot Behaviour | Ensuring that autonomous systems work ethically is both complex and
difficult. However, the idea of having an additional `governor' that assesses
options the system has, and prunes them to select the most ethical choices is
well understood. Recent work has produced such a governor consisting of a
`consequence engine' that assesses the likely future outcomes of actions then
applies a Safety/Ethical logic to select actions. Although this is appealing,
it is impossible to be certain that the most ethical options are actually
taken. In this paper we extend and apply a well-known agent verification
approach to our consequence engine, allowing us to verify the correctness of
its ethical decision-making. | http://arxiv.org/pdf/1504.03592 | Louise A. Dennis, Michael Fisher, Alan F. T. Winfield | cs.AI | Presented at the 1st International Workshop on AI and Ethics, Sunday
25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the
workshop proceedings published by AAAI | null | cs.AI | 20150414 | 20150414 | [] |
1504.03410 | 17 | Footnote 3: Another possible alternative is that each of I, I+ and I- in a triplet has an independent sub-network (i.e., a sub-network P/Q/R corresponds to I/I+/I-, respectively), which maps it into corresponding intermediate image features. However, such an alternative tends to get bad solutions. An extreme example is, for any input triplets, the sub-network P outputs hash codes with all zeros; the sub-network Q also outputs hash codes with all zeros; the sub-network R outputs hash codes with all ones. Such solutions may have zero loss on training data, but their generalization performances (on test data) can be very bad. Hence, in order to avoid such bad solutions, we consider the alternative that uses a shared sub-network for I+ and I- (i.e., let Q = R).
⁴For ease of presentation, here we assume that the dimension d of the input intermediate image features is a multiple of q. In practice, if d = q × s + c with 0 < c < q, we can set the first c slices to have length s + 1 and the remaining q − c slices to have length s. | 1504.03410#17 |
1504.03592 | 17 | # 4 Reproducing the Case Study
We reproduced the case study described in Winfield et al. [23]. Since all parts of the system involved in the verification needed to exist as Java code, we created a very simple simulated environment consisting of a 5×5 grid. Note that we could not reproduce the case study with full fidelity, since we required a finite state space and the original case study took place in the potentially infinite state space of the physical world. The grid had a hole in its centre and a robot and two humans represented in a column along one side. At each time step the robot could move to any square, while there was a 50% chance that each of the humans would move towards the hole. The initial state is shown in Figure 3. The robot, R, cannot reach the goal, G, in a single move and so will move to one side or the other. At the same time the humans, H1 and H2, may move towards the hole (central square).
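A minimal Python sketch of one tick of this simulated environment (ours; the coordinate convention and start positions are illustrative assumptions, while the hole-at-centre, the 50% chance of human movement, and the move-anywhere robot come from the description above):

```python
import random

HOLE = (2, 2)  # the hole occupies the central square of the 5x5 grid

def step_human(pos):
    """With probability 0.5 the human moves one square towards the hole."""
    if random.random() < 0.5:
        dx = (HOLE[0] > pos[0]) - (HOLE[0] < pos[0])
        dy = (HOLE[1] > pos[1]) - (HOLE[1] < pos[1])
        return (pos[0] + dx, pos[1] + dy)
    return pos

def step_robot(pos, target):
    """The robot may move to any square in a single time step."""
    return target

# one time step from illustrative start positions in a column along one side
h1, h2, robot = (0, 0), (0, 4), (0, 2)
h1, h2 = step_human(h1), step_human(h2)
robot = step_robot(robot, (4, 2))
```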
The agent representing the consequence engine is shown in code listing 1. Lines 6-7 define the map of outcomes to values fES and line 12 gives the precedence ordering. | 1504.03592#17 |
1504.03410 | 18 |
Figure 2. (a) A divide-and-encode module. (b) An alternative that consists of a fully-connected layer, followed by a sigmoid layer.
and a piece-wise threshold function to encourage the output of binary hash bits. After that, the q output hash bits are concatenated to form a q-bit (approximate) code. | 1504.03410#18 |
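A PyTorch-style sketch of this module (ours; the feature dimension and the β and ε values are illustrative, and the sigmoid and piece-wise threshold follow eqs. (4)-(6) given later in this section):

```python
import torch
import torch.nn as nn

class DivideAndEncode(nn.Module):
    """Split a d-dimensional feature into q slices; each slice is mapped to
    one hash bit by its own small fully-connected layer, a sigmoid, and a
    piece-wise threshold."""
    def __init__(self, feature_dim=2400, num_bits=48, beta=2.0, eps=0.1):
        super().__init__()
        assert feature_dim % num_bits == 0
        self.slice_len = feature_dim // num_bits  # e.g. the 50-to-1 case
        self.proj = nn.ModuleList(
            nn.Linear(self.slice_len, 1, bias=False) for _ in range(num_bits))
        self.beta, self.eps = beta, eps

    def piecewise(self, s):  # eq. (6): saturate outside the (0.5 +/- eps) band
        out = s.clone()
        out[s < 0.5 - self.eps] = 0.0
        out[s > 0.5 + self.eps] = 1.0
        return out

    def forward(self, x):  # x: (batch, feature_dim)
        slices = x.split(self.slice_len, dim=1)
        bits = [torch.sigmoid(self.beta * p(sl))  # eqs. (4)-(5), one bit each
                for p, sl in zip(self.proj, slices)]
        return self.piecewise(torch.cat(bits, dim=1))  # (batch, num_bits)
```

Each bit sees only its own 1/q-th of the feature vector, which is the mechanism the text credits with reducing redundancy between bits.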
1504.03592 | 18 | The agent representing the consequence engine is shown in code listing 1. Lines 6-7 define the map of outcomes to values fES and line 12 gives the precedence ordering.
1   :name: ethicalg
2   :agent: robot
3
4
5   :Outcome Scores:
6     safe = 0   collision = 4
7     hole = 10
8
9
10
11  :Ethical Precedence:
12    human > robot
¹Indeed, the entire prototype system took less than a week to produce.
Figure 4: Situation where the Robot cannot save the Human
The actions available to the simple agent were all of the form moveTo(X, Y), where X and Y were coordinates on the grid. A Bresenham-based super-cover line algorithm [6] was used to calculate all the grid squares that would be traversed between the robot's current position and the new one. If these included either the hole or one of the 'humans', then the outcomes (robot, hole) and (robot, collision), together with (human, collision), were generated as appropriate. If either of the 'humans' occupied a square adjacent to the hole then the outcome (human, hole) was also generated.
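A minimal sketch (ours, not the authors' Java implementation) of how the outcome scores and precedence ordering of listing 1 could drive selection among candidate moves; here `predict_outcomes` is an assumed helper standing in for the super-cover line traversal just described:

```python
OUTCOME_SCORES = {"safe": 0, "collision": 4, "hole": 10}  # from listing 1
PRECEDENCE = ["human", "robot"]  # human outcomes outrank robot outcomes

def badness(outcomes):
    """Score a set of (actor, outcome) pairs; the tuple ordering makes
    higher-precedence actors dominate lexicographically."""
    return tuple(sum(OUTCOME_SCORES[o] for a, o in outcomes if a == actor)
                 for actor in PRECEDENCE)

def select_action(candidate_moves, predict_outcomes):
    # predict_outcomes(move) -> set of (actor, outcome) pairs
    return min(candidate_moves, key=lambda m: badness(predict_outcomes(m)))
```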
# 4.1 Results | 1504.03592#18 |
1504.03410 | 19 | As shown in Figure 2(b), a possible alternative to the divide-and-encode module is a simple fully-connected layer that maps the input intermediate image features into q-dimensional vectors, followed by sigmoid activation functions that transform these vectors into [0, 1]^q. Compared to this alternative, the key idea of the overall divide-and-encode strategy is to reduce the redundancy among the hash bits. Specifically, in the fully-connected alternative in Figure 2(b), each hash bit is generated on the basis of the whole (and the same) input image feature vector, which may inevitably result in redundancy among the hash bits. On the other hand, since each hash bit is generated from a separate slice of features, the hash codes output by the proposed divide-and-encode module may be less redundant. Hash codes with fewer redundant bits are advocated by some recent research. | 1504.03410#19 |
1504.03592 | 19 | # 4.1 Results
We were able to model check the combined program in AJPF and so formally verify that the agent always reached its target. However, we were not able to verify that the 'humans' never fell into the hole, because in several situations the hole came between the agent and the human. One such situation is shown in Figure 4. Here, human H2 will fall into the hole when it takes its next step, but the robot R cannot reach it in a single straight line without itself falling into the hole before it reaches the human. | 1504.03592#19 |
1504.03410 | 20 | For example, the recently proposed Batch-Orthogonal Locality Sensitive Hashing [5] theoretically and empirically shows that hash codes generated by batch-orthogonalized random projections are superior to those generated by simple random projections, since batch-orthogonalized projections generate fewer redundant hash bits than random projections. In the experiments section, we empirically show that the proposed divide-and-encode module leads to superior performance over the fully-connected alternative. | 1504.03410#20 |
1504.03592 | 20 | Since we were particularly interested in verifying the performance of the consequence engine, we adapted the modelling method in the environment to assert percepts (declarative statements the robot could perceive) whenever one of the humans was in danger and whenever there was a safe path for the robot to reach a human. These percepts had no effect on the execution of the program, but their existence could be checked by AJPF's property specification language. Using these percepts we were able to verify (6), where □ is the linear temporal operator meaning "always" and B(r, p) means that "robot r believes p to be true". So (6) reads that it is always the case that if the robot believes h1 is in danger and it can find a safe path to h1, then it will always be the case that the robot never believes h1 has fallen in the hole. We also proved the equivalent property for h2. | 1504.03592#20 |
1504.03410 | 21 | In order to encourage the output of a divide-and-encode module to be binary codes, we use a sigmoid activation function followed by a piece-wise threshold function. Given a 50-dimensional slice x^{(i)} (i = 1, 2, ..., q), the output of the 50-to-1 fully-connected layer is defined by
fc_i(x^{(i)}) = W_i x^{(i)},    (4)
with W_i being the weight matrix.
Figure 3. The piece-wise threshold function.
Given c = fc_i(x^{(i)}), the sigmoid function is defined by
sigmoid(c) = 1 / (1 + e^{-βc}),    (5)
where β is a hyper-parameter.
The piece-wise threshold function, as shown in Figure 3, is designed to encourage binary outputs. Specifically, for an input variable s = sigmoid(c) ∈ [0, 1], this piece-wise function is defined by
g(s) = 0,  if s < 0.5 − ε;
g(s) = s,  if 0.5 − ε ≤ s ≤ 0.5 + ε;    (6)
g(s) = 1,  if s > 0.5 + ε,
where ε is a small positive hyper-parameter. | 1504.03410#21 |
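A small numeric check of eqs. (5)-(6) (a sketch; the β and ε values here are illustrative, not the paper's tuned settings):

```python
import numpy as np

def sigmoid(c, beta=2.0):  # eq. (5)
    return 1.0 / (1.0 + np.exp(-beta * c))

def piecewise_threshold(s, eps=0.1):  # eq. (6)
    return np.where(s < 0.5 - eps, 0.0, np.where(s > 0.5 + eps, 1.0, s))

c = np.array([-2.0, -0.05, 0.0, 0.05, 2.0])
print(piecewise_threshold(sigmoid(c)))
# saturated inputs become exact 0s and 1s; values near 0.5 pass through
```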
1504.03592 | 21 | It should be noted that we would not necessarily expect both the above to be the case because, in the situation where both H1 and H2 move simultaneously towards the hole, the robot would have to choose which to rescue and leave one at risk. In reality it turned out that whenever this occurred the hole was between the robot and human 2 (as in Figure 4). This was an artifact of the fact that the humans had to make at least one move before the robot could tell they were in danger. The robot's first move was always to the far corner, since this represented a point on the grid closest to the goal that the robot could safely reach. The outcome would have been different if action selection had been set up to pick at random from all the points the robot could safely reach that were equidistant from the hole.
We were also able to export our program model to the probabilistic PRISM model checker, as described in [14], in order to obtain probabilistic results. These tell us that human 1 never falls in the hole while human 2 falls in the hole
□(B(r, danger(h1)) ∧ B(r, path_to(h1))) → □¬B(r, h1(hole))    (6)
| 1504.03592#21 |
1504.03410 | 22 | where ε is a small positive hyper-parameter.
This piece-wise threshold function approximates the behavior of hard binary coding and encourages binary outputs during training. Specifically, if the outputs of the sigmoid function fall in [0, 0.5 − ε) or (0.5 + ε, 1], they are truncated to 0 or 1, respectively. Note that in prediction, the proposed deep architecture only generates approximate (real-valued) hash codes for input images; these approximate codes are converted to binary codes by quantization (see Section 3.4 for details). With the proposed piece-wise threshold function, some of the values in the approximate hash codes produced by the deep architecture are already exact zeros or ones, so fewer errors may be introduced by the quantization step.
# 3.4. Hash Coding for New Images | 1504.03410#22 |
1504.03592 | 22 | □(B(ce, sel(a1)) ∧ B(ce, outcome(a1, human(hole)))) →
B(ce, outcome(a2, human(hole))) ∧ B(ce, outcome(a3, human(hole))) ∧ B(ce, outcome(a4, human(hole)))    (7)

□(B(ce, sel(a1)) ∧ B(ce, outcome(a1, robot(hole)))) →
(B(ce, outcome(a2, human(hole))) ∨ B(ce, outcome(a2, robot(hole))) ∨ B(ce, outcome(a2, human(collision)))) ∧
(B(ce, outcome(a3, human(hole))) ∨ B(ce, outcome(a3, robot(hole))) ∨ B(ce, outcome(a3, human(collision)))) ∧
(B(ce, outcome(a4, human(hole))) ∨ B(ce, outcome(a4, robot(hole))) ∨ B(ce, outcome(a4, human(collision))))    (8)
| 1504.03592#22 |
1504.03410 | 23 | # 3.4. Hash Coding for New Images
After the deep architecture is trained, one can use it to generate a q-bit hash code for an input image. As shown in Figure 4, in prediction an input image I is first encoded into a q-dimensional feature vector F(I). Then one can obtain a q-bit binary code by simple quantization, b = sign(F(I) − 0.5), where sign(v) is the element-wise sign function on vectors: for i = 1, 2, ..., q, sign(v_i) = 1 if v_i > 0, and sign(v_i) = 0 otherwise.
Figure 4. The architecture of prediction.
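The quantization step amounts to thresholding each approximate code at 0.5; a minimal sketch:

```python
import numpy as np

def quantize(approx_codes):
    """b = sign(F(I) - 0.5): bit i is 1 iff the approximate code exceeds 0.5."""
    return (np.asarray(approx_codes) > 0.5).astype(np.uint8)

approx = np.array([0.03, 0.97, 0.46, 1.00, 0.00])  # illustrative F(I) output
print(quantize(approx))  # -> [0 1 0 1 0]
```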
# 4. Experiments
# 4.1. Experimental Settings
In this section, we conduct extensive evaluations of the proposed method on three benchmark datasets:
⢠The Stree View House Numbers (SVHN)5 dataset is a real-world image dataset for recognizing digits and numbers in natural scene images. SVHN consists of over 600,000 32 à 32 color images in 10 classes (with digits from 0 to 9).
⢠The CIFAR-106 dataset consists of 60,000 color im- ages in 10 classes. Each class has 6,000 images in size 32 à 32. | 1504.03410#23 | Simultaneous Feature Learning and Hash Coding with Deep Neural Networks | Similarity-preserving hashing is a widely-used method for nearest neighbour
1504.03592 | 23 | with a probability of 0.71875 (71.8% of the time). The high chance of human 2 falling in the hole is caused by the robot's behaviour of moving into the far corner, as described above. These probabilities are very different from those reported in Winfield et al.'s experimental setup. This is primarily because the environment used here is considerably cruder, with the robot able to reach any point in the grid in a single time step. The behaviour of the humans is also different to that implemented in [23], where the H robots proceeded steadily towards the hole and the differences in behaviour were determined by small variations in the precise start-up time and direction of each robot.
# 5 Verifying the Consequence Engine in Isolation
Following the methodology from [12] we also investigated verifying the consequence engine in isolation, without any specific environment. To do this we had to extend the implementation of our declarative language to allow the consequence engine to have mental states which could be examined by AJPF's property specification language. In particular, we extended the operational semantics so that information about the outcomes of all actions was stored as beliefs in the consequence engine, and so that the final set of selected actions was also stored as beliefs in the consequence engine. We were then able to prove theorems about these beliefs. | 1504.03592#23 |
1504.03410 | 24 | • The CIFAR-10⁶ dataset consists of 60,000 color images in 10 classes. Each class has 6,000 images of size 32 × 32.
⢠The NUS-WIDE7 dataset contains nearly 270,000 im- ages collected from Flickr. Each of these images is associated with one or multiple labels in 81 semantic concepts. For a fair comparison, we follow the set- tings in [27, 13] to use the subset of images associated with the 21 most frequent labels, where each label as- sociates with at least 5,000 images. We resize images of this subset into 256 à 256.
We test and compare the search accuracies of the proposed method with eight state-of-the-art hashing methods, including three unsupervised methods (LSH [2], SH [26] and ITQ [4]) and five supervised methods (CNNH [27], KSH [12], MLH [16], BRE [8] and ITQ-CCA [4]). | 1504.03410#24 |
1504.03592 | 24 | We developed a special verification environment for the engine. This environment called the engine to select from four abstract actions, a1, a2, a3 and a4. When the consequence engine invoked the environment to model the outcomes of these four actions, the four possible outcomes that could be returned were (human, hole), (robot, hole), (human, collision) and (robot, collision). Each of these four outcomes was chosen independently and at random, i.e., each action was returned with a random selection of outcomes attached. AJPF then explored all possible combinations of the four outcomes for each of the four actions.
# 5.1 Results
Model-checking the consequence engine from listing 1, with the addition of beliefs and placed in this new environment, we were able to prove (7): it is always the case that if a1 is a selected action and its outcome is predicted to be that the human has fallen in the hole, then all the other actions are also predicted to result in the human in the hole, i.e., all other actions are equally bad.
We could prove similar theorems for the other outcomes, e.g. (8), which states that if a1 is the selected action and it results in the robot falling in the hole, then each of the other actions results in the human in the hole, the robot in the hole, or the human colliding with something. | 1504.03592#24 |
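To illustrate what the verification environment exhausts, the following sketch (ours; it replaces AJPF's model checking with a brute-force loop and assumes the scores and precedence of listing 1 as the selection rule) enumerates every combination of outcomes for the four actions and checks the reading of property (7):

```python
from itertools import product

OUTCOMES = [("human", "hole"), ("robot", "hole"),
            ("human", "collision"), ("robot", "collision")]
SCORES = {"hole": 10, "collision": 4}
ACTIONS = ["a1", "a2", "a3", "a4"]

def badness(outcome_set):
    human = sum(SCORES[o] for a, o in outcome_set if a == "human")
    robot = sum(SCORES[o] for a, o in outcome_set if a == "robot")
    return (human, robot)  # human outcomes take precedence

def select(env):  # env: dict action -> frozenset of predicted outcomes
    return min(env, key=lambda act: badness(env[act]))

subsets = list(product([False, True], repeat=len(OUTCOMES)))
for combo in product(subsets, repeat=len(ACTIONS)):
    env = {a: frozenset(o for o, keep in zip(OUTCOMES, flags) if keep)
           for a, flags in zip(ACTIONS, combo)}
    sel = select(env)
    if ("human", "hole") in env[sel]:
        # property (7): if the selected action drowns the human,
        # every other action must be predicted to do so as well
        assert all(("human", "hole") in env[a] for a in ACTIONS)
print("property (7) holds for all", len(subsets) ** len(ACTIONS), "combinations")
```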
1504.03410 | 25 | In SVHN and CIFAR-10, we randomly select 1,000 images (100 images per class) as the test query set. For the unsupervised methods, we use the remaining images as training samples. For the supervised methods, we randomly select 5,000 images (500 images per class) from the remaining images as the training set. The triplets of images for training are randomly constructed based on the image class labels.
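A sketch of label-based triplet construction (ours; helper and variable names are illustrative):

```python
import random
from collections import defaultdict

def sample_triplets(labels, num_triplets):
    """Build (query, positive, negative) index triplets from class labels:
    the positive shares the query's class, the negative does not."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = list(by_class)
    triplets = []
    for _ in range(num_triplets):
        y = random.choice(classes)
        q, p = random.sample(by_class[y], 2)  # assumes >= 2 images per class
        n = random.choice(by_class[random.choice([c for c in classes if c != y])])
        triplets.append((q, p, n))
    return triplets
```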
In NUS-WIDE, we randomly select 100 images from each of the selected 21 classes to form a test query set of 2,100 images. For the unsupervised methods, the remaining images in the selected 21 classes are used as the training set. For the supervised methods, we uniformly sample 500 images from each of the selected 21 classes to form a training set.
⁵ http://ufldl.stanford.edu/housenumbers/
⁶ http://www.cs.toronto.edu/~kriz/cifar.html
⁷ http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm | 1504.03410#25 |
1504.03592 | 25 | In this way we could verify that the consequence engine indeed captured the order of priorities that we intended.
# 6 Related Work
The idea of a distinct entity, be it software or hardware, that can be attached to an existing autonomous machine in order to constrain its behaviour is very appealing, particularly so if the constraints ensure that the machine conforms to recognised ethical principles. Arkin [3] introduced this idea of an ethical governor for autonomous systems, using it to evaluate the 'ethical appropriateness' of a plan of the system prior to its execution. The ethical governor prohibits plans it finds to be in violation of some prescribed ethical constraint.
Also of relevance is Anderson and Anderson's approach, where machine learning is used to 'discover' ethical principles, which then guide the system's behaviour, as exhibited by their humanoid robot that 'takes ethical concerns into consideration when reminding a patient when to take medication' [1]. A range of other work, for example [2, 18], also aims to construct software entities ('agents') able to form ethical rules of behaviour and solve ethical dilemmas based on them. The work of [22] provides a logical framework for moral reasoning, though it is not clear whether this is used to modify practical system behaviour. | 1504.03592#25 |
1504.03410 | 26 | Table 2. MAP of Hamming ranking w.r.t. different numbers of bits on the three datasets. For NUS-WIDE, we calculate the MAP values within the top 5,000 returned neighbors. The results of CNNH are directly cited from [27]. CNNH* is our implementation of the CNNH method in [27] using Caffe, with a network configuration comparable to that of the proposed method (see the text in Section 4.1 for implementation details). | 1504.03410#26 |
1504.03592 | 26 | Work by one of the authors of this paper (Winfield) has involved developing and extending a generalised methodology for safe and ethical robotic interaction, comprising both physical and ethical behaviours. To address the former, a safety protection system serves as a high-level safety enforcer, governing the actions of the robot and preventing it from performing unsafe operations [24]. To address the latter, the ethical consequence engine studied here has been developed [23].
There has been little direct work on the formal verification of ethical principles in practical autonomous systems. Work of the first two authors has considered the formal verification of ethical principles in autonomous systems, in particular autonomous vehicles [8]. In that paper, we propose and implement a framework for constraining the plan selection of the rational agent controlling the autonomous vehicle with respect to ethical principles. We then formally verify the ethical extent of the agent, proving that the agent never executes a plan that it knows is 'unethical' unless it does not have any ethical plan to choose. If all plan options are such that some ethical principles are violated, it was also proved that the agent chooses to execute the 'least unethical' plan it had available.
# 7 Further Work | 1504.03592#26 |
1504.03410 | 27 |
Method      | SVHN (MAP): 12 / 24 / 32 / 48 bits | CIFAR-10 (MAP): 12 / 24 / 32 / 48 bits | NUS-WIDE (MAP): 12 / 24 / 32 / 48 bits
Ours        | 0.899 / 0.914 / 0.925 / 0.923      | 0.552 / 0.566 / 0.558 / 0.581          | 0.674 / 0.697 / 0.713 / 0.715
CNNH*       | 0.897 / 0.903 / 0.904 / 0.896      | 0.484 / 0.476 / 0.472 / 0.489          | 0.617 / 0.663 / 0.657 / 0.688
CNNH [27]   | -                                  | 0.439 / 0.511 / 0.509 / 0.522          | 0.611 / 0.618 / 0.625 / 0.608
KSH [12]    | 0.469 / 0.539 / 0.563 / 0.581      | 0.303 / 0.337 / 0.346 / 0.356          | 0.556 / 0.572 / 0.581 / 0.588
ITQ-CCA [4] | 0.428 / 0.488 / 0.489 / 0.509      | 0.264 / 0.282 / 0.288 / 0.295          | 0.435 / 0.435 / 0.435 / 0.435
| 1504.03410#27 |
1504.03592 | 27 | # 7 Further Work
We believe that there is value in the existence of a declarative language for describing consequence engines, and that the AIL-based implementation used in this verification lays the groundwork for such a language. We would be interested in combining this language, which is structured towards the ethical evaluation of actions, with a language geared towards the ethical evaluation of plans for BDI systems, such as is discussed in [8].
While using AJPF allowed us to very rapidly implement and verify a consequence engine in a scenario broadly similar to that reported in Winfield et al. [23], there were obvious issues in adapting an approach intended for use with BDI agent languages to this new setting. | 1504.03592#27 |
1504.03410 | 28 |
MLH [16]    | 0.147 / 0.247 / 0.261 / 0.273      | 0.182 / 0.195 / 0.207 / 0.211          | 0.500 / 0.514 / 0.520 / 0.522
BRE [8]     | 0.165 / 0.206 / 0.230 / 0.237      | 0.159 / 0.181 / 0.193 / 0.196          | 0.485 / 0.525 / 0.530 / 0.544
SH [26]     | 0.140 / 0.138 / 0.141 / 0.140      | 0.131 / 0.135 / 0.133 / 0.130          | 0.433 / 0.426 / 0.426 / 0.423
ITQ [4]     | 0.127 / 0.132 / 0.135 / 0.139      | 0.162 / 0.169 / 0.172 / 0.175          | 0.452 / 0.468 / 0.472 / 0.477
LSH [2]     | 0.110 / 0.122 / 0.120 / 0.128      | 0.121 / 0.126 / 0.120 / 0.120          | 0.403 / 0.421 / 0.426 / 0.441
| 1504.03410#28 |
1504.03592 | 28 | In order to verify the consequence engine in a more general, abstract scenario, we had to endow it with mental states, and it may be appropriate to pursue this direction in order to move our declarative consequence engine language into the sphere of BDI languages. An alternative would have been to equip AJPF with a richer property specification language able to detect features of interest in the ethical selection of actions. At present it is unclear what such an extended property specification language should include, but it is likely that as the work on extending the declarative consequence engine language progresses the nature of the declarative properties to be checked will become clearer. It may be that ultimately we will need to add BDI-like features to the declarative consequence engine and extend the property specification language.
We would also like to incorporate the experimental validation approach into our system, using the MCAPL framework's ability to integrate with the Robot Operating System [9] so that our new ethical consequence engine can govern actual physical robots, letting us explore how formal verification and experimental validation can complement each other.
# 8 Conclusion | 1504.03592#28 |
1504.03410 | 29 | The triplets for training are also randomly constructed based on the image class labels.
For the proposed method and CNNH, we directly use the image pixels as input. For the other baseline methods, we follow [27, 12] to represent each image in SVHN and CIFAR-10 by a 512-dimensional GIST vector; we represent each image in NUS-WIDE by a 500-dimensional bag-of-words vector⁸.
To evaluate the quality of hashing, we use four evaluation metrics: Mean Average Precision (MAP), Precision-Recall curves, Precision curves within Hamming distance 2, and Precision curves w.r.t. different numbers of top returned samples. For a fair comparison, all of the methods use identical training and test sets.
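For reference, MAP over a query set can be computed as below (a sketch; in the Hamming-ranking setting, `ranked` would be the database items sorted by Hamming distance to the query code):

```python
import numpy as np

def average_precision(relevant, ranked):
    """AP for one query: 'ranked' is the retrieval order, 'relevant' the set
    of ground-truth neighbours of the query."""
    hits, precisions = 0, []
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(queries):
    """queries: iterable of (relevant_set, ranked_list) pairs."""
    return float(np.mean([average_precision(r, k) for r, k in queries]))

# toy example: one query whose true neighbours are items 1 and 3
print(mean_average_precision([({1, 3}, [1, 2, 3, 4])]))  # 0.8333...
```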
# 4.2. Results of Search Accuracies
Table 2 and Figures 2∼4 show the comparison results of search accuracies on all three datasets. Two observations can be made from these results: | 1504.03410#29 |